Limits to Anti-Phishing

Status: Draft (as of 25 Jan 2006)
Authors: Jeffrey Nelson, David Jeske, Google, Inc.
Submitted to: W3C Security & Usability Workshop

 

Abstract

A growing number of solutions have been put forward to prevent or minimize phishing.  These solutions utilize a wide variety of techniques, such as introducing new protocols, new trust indicators, new user models, or additional hardware tokens.  We make some observations about the barriers to adoption and propose five key principles for any anti-phishing solution.

 

Assumptions

To frame the problem, we make the following assumptions about phishing attacks.

The unmotivated user is not willing or not able to put in the effort to distinguish trusted services from untrusted services. Trust indicators in browsers are currently subtle, requiring users to parse URI syntax. Many users mistake the presence of HTTPS as a sign that a web site is legitimate. Also, the trust indicators in the browser are easily spoofed.

The adversarial attacker is capable of creatively countering any static security measure. Phishing attackers can forge links, impersonate domains, spoof browser chrome, and create simulated browsers. Attackers can also implement or spoof HTTPS.

Further, in a successful phishing attack, the user trusts a phishing site and is willing to pass authenticating credentials to the phishing site.  The attacker can replay these credentials to the server.  HTTPS may be used by the attacker and/or service provider, but since the user trusts the attacker, HTTPS does not protect against this man-in-the-middle attack.

We assume phishing attackers haven't compromised the OS or browser. If the attacker has already compromised the OS or browser, phishing attacks, which gain the cooperation of the user, are unnecessary.

Any anti-phishing technology faces two adoption challenges: the users and the service providers.

By definition, the unmotivated user won't expend effort on anti-phishing. Various anti-phishing proposals require some action by the user, such as setting a site-specific secret or carrying a hardware token. Some solutions ask users to memorize longer passwords or secondary passwords. Users have to learn to use the new devices correctly and be willing to expend the effort. Users also want to roam between computers with no extra effort. Unmotivated users will not adopt complex anti-phishing solutions.

Further, service providers are cost-sensitive. Service providers recognize the financial impact of phishing and are motivated, but significant barriers exist to the adoption of any new authentication technology. They have collectively invested billions of dollars in stateless HTTP infrastructure, and solutions that introduce new stateful protocols require large investments to upgrade that infrastructure. Software development is also expensive: solutions which employ new popup windows or otherwise modify the existing login process face tremendous challenges, and any solution which does not seamlessly integrate with the existing HTML FORM tag further requires UI redesign and carries product implications. The costs to service providers must not be understated.

 

Five key principles

We propose five key principles that must be met by any comprehensive anti-phishing solution.  Any anti-phishing solution which does not satisfy these principles will ultimately leave major gaps that can be exploited by adversaries.

1.      Trusted user interface for authentication must be based on a secret, since all user interface is spoofable.

Threat: Suppose a new trust logo is added to browser chrome to indicate safety.  Alice comes to a web site such as paypal.com and sees the Paypal logo in browser chrome.  The attacker simply screen-captures the logo and can spoof the browser chrome, Paypal logo included, using existing phishing techniques.

Adding more trust indicators, or more obvious ones, misses the point that an attacker can spoof every part of a user interface, including browser chrome, and copy any new trust indicator.  Some proposals include new UI elements such as new anti-phishing trust icons, company logos in browser chrome, or new authentication popup windows.  All of these can be replicated by an attacker capable of spoofing the entire user interface.

We differentiate static trust indicators from trusted UI. Static trust indicators not based on some secret known only to the user can be trivially spoofed as part of a simulated browser. Trusted UI is based on some unspoofable secret known only to the user. Some examples of trusted UI include personalized images and skins. Security Skins [Dhamija] is an example implementation of a trusted UI.
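As a concrete illustration of a trusted UI based on a secret, consider a per-user phrase chosen at enrollment and displayed before the password prompt. The sketch below is hypothetical (the `SECRET_PHRASES` store, `signin_page` function, and phrase are illustrative, not from any cited system): a spoofed site does not know the user's phrase, so a missing or wrong phrase warns the user before any credential is typed.

```python
# Hypothetical sketch of a secret-based trust indicator. The per-user
# secret phrase is chosen by the user at enrollment and stored server-side.
SECRET_PHRASES = {"alice": "purple elephant"}

def signin_page(username: str) -> str:
    # The legitimate server renders the user's secret phrase above the
    # password field, so the *user* can authenticate the *page* before
    # typing anything. A phishing site cannot reproduce the phrase.
    phrase = SECRET_PHRASES.get(username, "(no phrase on file -- stop!)")
    return f"Your secret phrase: {phrase}\nPassword: ____"

print(signin_page("alice"))
```

A static logo fails here precisely because it is the same for every user; the phrase works only because it is a secret the attacker cannot screen-capture in advance.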

2.      A trusted channel can’t be trusted, since an attacker can use a trusted channel.

Threat:  Alice receives a notice regarding her Paypal account.  She clicks a link, sees the HTTPS lock icon, and believes her data is safe.  She willingly provides her password.  Meanwhile, the attacker spoofs Paypal, implements HTTPS, and receives the cleartext password over HTTPS from Alice.

When a user is misled by a phishing site, she willingly submits confidential information to the site.  HTTPS protects against man-in-the-middle attacks on the cryptographic protocol.  However, when a user is misled into using HTTPS to connect to a spoofed web site, the secure HTTPS channel can be used for a human man-in-the-middle attack.  The attacker receives the authentication credential and any other confidential information submitted by the user despite the use of a secure channel.

In order to protect the authenticating credentials against human man-in-the-middle attacks, strong cryptography must be an element of any solution.  The SRP protocol [Wu] is an example of a protocol that protects the authentication secret from theft even if an attacker is a man-in-the-middle.

3.      The client must also authenticate the server, since an unauthenticated server can easily ask for more confidential information.

Threat: Alice uses an anti-phishing technology to log in to a Paypal.com phishing site.  The attacker ignores the authentication credentials, and then asks for the real objective: credit card numbers and identity theft information.

Mutual authentication is another key element of an anti-phishing technology. The password is not the objective of many phishing attacks. Attackers display the login dialog as a way to make the spoof site appear legitimate. The attacker never checks the password provided during login. The login box is effectively a misleading trust indicator.

The server must prove its identity to the client.  Without a proof of mutual authentication, the client cannot know that it is authenticating to the legitimate Paypal.com.
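One way such a proof can work, assuming the client and server already share a secret key (for example, a session key derived during strong authentication), is a challenge-response. This is an illustrative sketch, not a protocol from the text: the server answers the client's challenge before the client submits anything confidential, and a phishing site that does not hold the key cannot produce a valid answer.

```python
import hashlib
import hmac
import os

# Assumed: both parties already share a secret key, e.g. a session key
# derived during a strong mutual-authentication exchange.
shared_key = os.urandom(32)

def prove(key: bytes, nonce: bytes, role: bytes) -> bytes:
    # Binding the proof to a role string prevents it from being
    # reflected back at the party that generated it.
    return hmac.new(key, role + nonce, hashlib.sha256).digest()

# The client challenges the server with a fresh nonce; the server must
# prove knowledge of the key *before* the client reveals anything.
client_nonce = os.urandom(16)
server_proof = prove(shared_key, client_nonce, b"server")
server_ok = hmac.compare_digest(
    server_proof, prove(shared_key, client_nonce, b"server"))

# A phishing site without the key can only guess at the proof.
fake_proof = os.urandom(32)
phisher_ok = hmac.compare_digest(
    fake_proof, prove(shared_key, client_nonce, b"server"))

print(server_ok, phisher_ok)  # -> True False
```

The asymmetry matters: a fake login box can always accept a password, but it can never answer a fresh challenge keyed on a secret it does not hold.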

4.      A cleartext password must not be revealed during any phase of authorization, since an attacker will fool the user into completing any standard process.

Threat:  Suppose that Paypal rolls out SRP to protect against theft of authentication credentials but uses a cleartext password during account setup.  The adversary responds by spoofing account setup instead of account sign-in.

Authentication mechanisms may be strong during one phase of authentication but have weaknesses in other phases.  For example, Secure Remote Password [Wu] depends on a shared secret agreed upon during account setup.  The shared secret, called the verifier, is a cleartext-equivalent credential, meaning that a small number of possible password inputs results in a small number of possible verifier outputs.  If an attacker steals a verifier, the attacker can mount a dictionary attack to compute the cleartext password over the small set of possible password values.

Many studies have found that users do not choose strong passwords.  Password hashes are easily dictionary attacked.  At Google, we’ve found that 50% of password hashes can be dictionary attacked in under 1 CPU-second [Google].  Further, if the attacker can construct a pre-computed dictionary of hashes, the incremental cost of retrieving cleartext passwords from hashes is near zero.  So, we must assume that an attacker who steals a cleartext equivalent credential can trivially obtain a cleartext password.
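The economics are easy to demonstrate. In the sketch below (illustrative Python; the `credential` function stands in for any deterministic cleartext-equivalent credential, whether a salted hash or an SRP verifier), a weak password is recovered from a stolen credential in a single pass over a wordlist.

```python
import hashlib

def credential(password: str, salt: bytes = b"s") -> bytes:
    # Stand-in for any cleartext-equivalent credential: a deterministic
    # function of the password, so one dictionary pass recovers it.
    return hashlib.sha256(salt + password.encode()).digest()

# The attacker steals this value from the server's credential store.
stolen = credential("letmein")

# A tiny illustrative dictionary; real attacks use millions of entries
# drawn from leaked password lists.
wordlist = ["123456", "password", "letmein", "qwerty"]
recovered = next((w for w in wordlist if credential(w) == stolen), None)
print(recovered)  # -> letmein
```

Salting forces the attacker to rerun the dictionary per account rather than use a precomputed table, but against a weak password the per-account cost remains trivially small, which is why the credential must never be cleartext-equivalent in the first place.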

We propose that the server must never receive a cleartext equivalent credential during any phase of authentication.

5.      The anti-phishing solution must integrate with existing password-based authentication, since users are trained to use passwords.

Threat:  Suppose that a strong authentication technology is rolled out that requires the user to carry a USB key on her keychain.  Alice installs the key and learns to use it on her computer.  When using a computer without her keychain, she can't log in.

A previously under-accounted-for element of any solution is its cost to the service provider and to the user.  The solution must be capable of re-using existing UI to minimize adoption costs, and it must be cheap for the service provider to deploy.

The solution must be seamless to the user, fitting into the existing expectations of how authentication is performed and integrating with existing sign-in HTML FORMs.  The user expects to use a small password for authentication and type it into a web page form on any computer at any time.

 

Conclusions

We have discussed some barriers to adoption of existing anti-phishing solutions and have proposed five key principles for an anti-phishing solution.

References

  • [Dhamija] R. Dhamija and J. D. Tygar, "The Battle Against Phishing: Dynamic Security Skins", in Proceedings of the Symposium on Usable Privacy and Security (SOUPS), July 2005.
  • [Google] Internal statistical analysis on uniqueness of user-chosen passwords for a sample of Google accounts.
  • [Wu] T. Wu, "The Secure Remote Password Protocol", in Proceedings of the 1998 Internet Society Network and Distributed System Security Symposium, San Diego, CA, Mar 1998, pp. 97-111.