Level of Assurance: are we approaching a limit?

Tuesday, March 11, 2014 - 14:03

I've had several conversations this week that related to what's commonly referred to as "level of assurance": how confident we can be that an account or other information about an on-line user actually relates to the person currently sitting at the keyboard. Governments may be concerned with multiple forms of documentary proof, but I suspect that for most common uses in the education sector that may be over-complicating things. So long as the link between a human and their account is made by a traditional, static password, and provided that password achieves a pretty basic (though still by no means universal) level of non-guessability, it seems to me that the main factor affecting level of assurance may well be how the user behaves with their password.
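
That "basic level of non-guessability" bar could be sketched as a couple of simple checks: reject anything short and anything on a blocklist of common choices. The blocklist and length threshold here are illustrative only, not a recommendation; real deployments would check against a large breached-password list.

```python
# Illustrative sketch of a basic non-guessability check.
# The blocklist and threshold are made up for this example.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "monkey"}

def meets_basic_bar(password: str, min_length: int = 10) -> bool:
    """Return True if the password clears a very basic guessability bar."""
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True

print(meets_basic_bar("password"))                      # common choice -> False
print(meets_basic_bar("correct horse battery staple"))  # long passphrase -> True
```

Note that this only addresses guessing attacks; as the rest of the post argues, it does nothing about sharing or phishing.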

Once we improve beyond passwords that are open to simple or brute-force guessing, significant threats to the integrity of the link between the human and their account are password sharing and phishing. The risk from both of these depends almost entirely on the user's behaviour: additional password complexity, stronger proof of real-world identity, etc. don't help. There are plenty of anecdotes of students sharing passwords with one another, of researchers sharing certificates, and of both falling for phishing attacks. Nonetheless static passwords seem to be good enough for most purposes in both our professional and personal lives: the general level of behaviour, backed up by measures to detect and recover from account compromises, seems to produce a level of risk that both users and service providers find acceptable. The only sector that seems to have changed its standard form of authentication is banking – to access my bank account on-line I need to know a password and be in possession of a particular physical device. Using two factors to authenticate reduces the risk of compromise, even without a change in behaviour, but at some cost to users in convenience and considerable costs to the bank in providing and supporting all those hardware tokens.

If we want to reduce risk without moving to two-factor authentication, can we change users' behaviour, or is this a limit we have to live with? The Anti-Phishing Working Group has an excellent campaign to raise awareness of phishing and help users avoid falling for it. But the prevalence of account sharing probably depends on our instinctive perception of how valuable sole use of the account is. That value may not be what we (as service providers) expect: a long time ago I was surprised to discover that what persuaded students that sharing passwords was a bad idea wasn't giving away the ability to read files or access computers, it was handing a "friend" the ability to send a perfectly-forged e-mail to a tutor or boy/girlfriend! Nowadays we are moving towards single-sign-on, where one password gives access to all our on-line services and accounts. That's more convenient for the user and allows service providers to quickly secure all the affected accounts once the user realises they have made a mistake. By analogy with the real world, it seems to me that increasing the number of things accessible ought to increase the perceived value of the password and make us more careful: I'm happier to lend someone the key to just my house or just my car than I would be a master key that gave access to both. I'm not aware of any recent surveys that might confirm that idea. If it turns out to be wrong, it may be that we've reached the human limit of what can be done with static passwords. And if that limit isn't sufficient for your application, you may need to look at the cost and acceptability of two-factor authentication.
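
The recovery advantage of single-sign-on mentioned above can be shown as a toy model: several services all defer to one identity provider session, so a single revocation (after the user reports a compromise) cuts off every service at once. All class and method names here are illustrative, not any real SSO API.

```python
# Toy model: one IdP session underlies access at several services,
# so revoking it secures all of them in one step.
class IdentityProvider:
    def __init__(self):
        self._active = set()   # live session tokens
        self._next = 0

    def login(self) -> int:
        self._next += 1
        self._active.add(self._next)
        return self._next

    def is_valid(self, token: int) -> bool:
        return token in self._active

    def revoke(self, token: int) -> None:
        self._active.discard(token)   # one call secures every service

class Service:
    def __init__(self, idp: IdentityProvider):
        self.idp = idp

    def access(self, token: int) -> bool:
        # The service trusts the IdP's word on whether the session lives.
        return self.idp.is_valid(token)

idp = IdentityProvider()
mail, files = Service(idp), Service(idp)
token = idp.login()
print(mail.access(token), files.access(token))   # True True
idp.revoke(token)                                # user reports a compromise
print(mail.access(token), files.access(token))   # False False
```

The flip side, of course, is the master-key effect described above: one phished password now opens everything until that revocation happens.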


Twitter has directed me to a recent ENISA report on how e-banking and e-payment services are doing authentication. There seems to be a wide variety of both single-factor and two-factor systems in use, with banks apparently distinguishing between low-risk access such as seeing the balance in an account, where a static password is still common, and higher-risk activities such as transfers, where a second factor is more likely to be required. About 10% seem still to be accepting username/password for even "high-risk" transfers, though it's more common for those to require one-time passwords generated by TAN lists, time-based tokens or SMS.
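
The time-based tokens mentioned in the report generally follow the TOTP scheme (RFC 6238): token and server share a secret and both derive a short code from it and the current 30-second window. A minimal sketch, assuming a pre-shared secret (the provisioning and replay-protection machinery a real deployment needs is omitted):

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Time-based one-time password in the style of RFC 6238 (sketch only)."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)                 # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-secret"
# Token and server agree while inside the same 30-second window...
print(totp(secret, now=1_000_000) == totp(secret, now=1_000_010))  # True
# ...and the next window yields a fresh code, which is what limits the
# value of a phished or shoulder-surfed one.
print(totp(secret, now=1_000_060))
```

This is why, as the post notes, a second factor reduces risk even without any change in user behaviour: a disclosed code goes stale within seconds.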

Andrew, don't blame the users for falling for phishing attacks. Blame the developers who devised a federated identity management system that was open to phishing attacks.

You probably know that we have already re-designed federated identity management to remove the possibility of phishing attacks (as well as to integrate attribute aggregation from the ground up, and provide users with full consent and control). So perhaps now it is time to call the current UKAMF a legacy system, a good first attempt, and let's do it better now.

I don't see anyone as to blame for phishing (except the phishers, of course). I'm just noting that it's an external factor that places an upper bound on confidence in identity unless you move to a different authentication technology.

And one rather obvious point, though I've only recently got it this clear in my own mind, is that you can only federate identity management systems and policies that already exist. At the moment the user authentication systems that are used in universities generally are static username/password, so vulnerable to phishing. That's apparently an acceptable risk for universities for their own internal use: I've come across a few instances of people starting to issue 2FA to sysadmins, but none to general users. So federations that link existing university authentication systems are inevitably going to be limited by that upper bound at the moment. As and when stronger authentication systems start to be deployed, I'd hope that the existing federation policies will incorporate those.

But at the moment, if a service provider says the phishing bound isn't acceptable for them, then it seems to me that their current alternatives are either to find phishing-proof authentication systems that its users already have and federate those; or else issue a stronger authentication system itself (also taking on the costs of doing all the identity proofing that it could otherwise have borrowed from the federation members); or else try to persuade the relevant IdP members of the federation to introduce a multi-factor authentication system before they need it for their own internal purposes. All of those are non-trivial, of course, which is why I think service providers need to be really sure they can't live with what's currently available.

Any system that automatically redirects a user to their login home page, without the user explicitly going there himself (e.g. by typing in the URL or having it stored in a local bookmark), is open to phishing attacks. So our federation infrastructure was designed to support phishing attacks. When designing security systems (and federated login is a security system) it is incumbent on the designers to identify the risks and put in mitigating features to counteract them. This was not done for federated login in the case of phishing attacks, either because phishing was not identified as a risk at design time (most likely) or was thought to be a low risk and not worth counteracting (unlikely). So this is why I blame the system designers.

An alternative anti-phishing design is this. The user goes to the IDP, logs in, then goes to the SP and is asked "already logged in, then click here", is redirected to his IDP and it all works fine due to SSO, since the user is already logged in. The SP also says "if not already logged in, go to your IDP and login first". In this model the user will never be redirected to his IDP to login so can never be phished.
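
The difference between the two flows can be caricatured in a few lines (URLs and function names here are hypothetical, purely to illustrate the argument): in an SP-initiated flow the browser follows whatever login URL the current page supplies, so a malicious page can substitute its own; in the IdP-first flow the browser only ever goes where the user explicitly navigates.

```python
# Toy comparison of the two login flows discussed above.
REAL_IDP = "https://idp.example.ac.uk/login"

def sp_initiated_flow(page_supplied_redirect: str) -> str:
    """Browser follows the 'log in here' URL supplied by the current page,
    which a phishing page controls."""
    return page_supplied_redirect

def idp_first_flow(user_bookmark: str) -> str:
    """Browser goes only to the URL the user typed or bookmarked."""
    return user_bookmark

print(sp_initiated_flow(REAL_IDP))                      # honest SP: real IdP
print(sp_initiated_flow("https://evil.example/login"))  # phishing page wins
print(idp_first_flow(REAL_IDP))                         # always the bookmark
```

The point being made in the comment is that the second function's argument never comes from an untrusted page, so the phishing surface simply isn't there.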

Another design is this. The SP displays its authz policy. If the user sees he can match it, he clicks on the Login/Go/Continue/whatever icon and this sends the policy to the browser which causes the right plugin to be called. This asks the user for his IDP, which he enters, then the policy is sent to the IDP along with a login request. The user logs in, the IDP sees which attributes are needed by the SP and returns them in the response to the SP.

This is why I am saying it is time to move to Federation Design 2, to counteract phishing attacks by design (rather than relying on users to do the right thing).

Sadly users find very many other ways of handing their passwords to phishers :( No one has ever tried to phish my password by faking a federation site; they try pretty much every day by faking webmail :(

As far as password sharing is concerned, I have for a while thought that every authentication sits somewhere between two extremes:

  1. Authentications where it's strongly in the user's interest not to disclose their authentication credentials, but doing so has little impact on the corresponding service provider.  For example I'm probably going to be careful about my credentials for electronic banking (because I don't want you to get my money) and for Facebook (because I may not want you to start saying things to my friends that appear to come from me).
  2. Authentications where it's mainly in the service provider's interest that the user doesn't disclose their authentication credentials but it's of little importance to the user. For example authentication to gain access to institution-subscribed electronic journals, or credentials giving access to personal subscription services such as Spotify.

Problems arise when a service wants to be in case 1 but is actually in case 2. I wrote a blog post on this a while ago that includes a few further thoughts.

Interesting analysis, thanks. Sounds plausible to me, though I'm not sure banking is the best example of case 1: account holders seem to expect the bank to resolve the problem even when it's the holder's fault that money has gone astray!

At least if it is a continuum there's a possibility of services where non-sharing is equally important to both user and service provider ;-)