Shared Bookmarks

This page serves as a shared bibliography for working group members. Feel free to add, classify, reorganize, and annotate!

See also: ACTION-20

General Usability Design Principles

  1. The {Design, Psychology, Psychopathology} of Everyday Things, Don Norman. Chapter 1. This text is not available online; the following concepts are the ones applicable to the working group.

    affordance
      an aspect of an object that makes it obvious how the object is to be used
    conceptual model
      a user's understanding of what something does and how it works
  2. Ten Usability Heuristics, by Jakob Nielsen. Here are the ones Mez thinks will be particularly useful for us:
    • Match between system and the real world. The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
    • Flexibility and efficiency of use. Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
    • Aesthetic and minimalist design. Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
    • Help users recognize, diagnose, and recover from errors. Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

  3. "Humane Interface, The: New Directions for Designing Interactive Systems", Chapter 2-3, by Jef Raskin
    • At any given moment, a user has only a single locus of attention: a feature or an object in the physical world, or an idea, about which they are intently and actively thinking.

    • The user does not have complete control over their locus of attention; the environment can shift it (a loud noise, for example).
    • Things outside the user's locus of attention go unnoticed. Humans are wired to ignore things that aren't their current locus of attention.
    • A detail in the user's locus of attention is held only in short-term memory and will be forgotten quickly once it is no longer the locus of attention.
    • Persistent use of any interface will cause the user to develop habits. Reactions to things outside the user's locus of attention are largely determined by these habits. User interfaces should leverage habit formation to shape the user's workflow.

  4. "Designing the User Interface", Chapter 2, by Ben Shneiderman
    • Know thy user. All design should begin with an understanding of the intended users, including population profiles that reflect age, gender, physical abilities, education, cultural or ethnic background, training, motivation, goals, and personality. An example of creating profiles would be the division of users into novice or first-time users, knowledgeable intermittent users, and expert frequent users.

    • Create task profiles. After drawing up user profiles, designers should formally write down user tasks.

    • Strive for consistency. Consistency should be a goal for the appearance of data displayed, the language used, and the sequences of actions required.
    • Offer informative feedback. For every user action, there should be system feedback.
    • Design dialogs that yield closure. When an action has been completed it should be clear to the user that the action was successful and that they can consider their task completed and can move on to the next one.
    • Offer error prevention and simple error handling. Whenever possible, design the system so the user can not make serious errors.
    • Reduce short-term memory load. Users should not be expected to remember information across a number of screens.
    • When displaying data to the user, the format should be familiar to the operator and should be related to the tasks required of the user.
    • Present data only if they assist the operator.
    • Techniques to get the user's attention
      • intensity: only use two levels of color intensity
      • marking: underline, enclose in a box, point to with an arrow
      • size: use up to 4 sizes
      • inverse video: use inverse coloring
      • blinking: use blinking displays in limited areas with great care
      • color blinking: use changes in color with great care and in limited areas.
      • audio: use soft tones for regular positive feedback, and harsh sounds for rare emergency conditions

Usable Security Design Guidelines

  1. Why Johnny Can't Encrypt, Alma Whitten and J. D. Tygar

    • The user must be aware of the task they are to perform.

      • We are focusing on presenting the user with information they can use to make decisions regarding the security of a web site. The user must be aware of what information they should use to make this decision; this includes being aware of where to look for it.
    • The user must be able to figure out how to perform the task.

      • They must know what the information presented to them means, and know what the relevance of the information is.
    • Security is always a secondary goal; it is never the main focus of a user.
      • The user can't be expected to actively seek information about security; the information should be presented in a way that is easily accessible and understood at a glance.
    • The user should be given feedback when the state of the security of a page is changed.

      • The feedback should be relevant, and should be given in a way that won't annoy the user.
  2. Alma Whitten and J. D. Tygar, “Safe Security Staging”, CHI 2003 Workshop on Human-Computer Interaction and Security Systems, Ft. Lauderdale, Florida.
    • Safe staging: “a user interface design that allows the user freedom to decide when to progress to the next stage, and encourages progression by establishing a context in which it is a conceptually attractive path of least resistance.”
    • Metaphor tailoring: starts with a conceptual model specification of the security-related functionality, enumerates the risks of usability failures in that model, and uses those risks to explicitly drive visual metaphors.

  3. Mary Ellen Zurko, Charlie Kaufman, Katherine Spanbauer, Chuck Bassett, Did You Ever Have To Make Up Your Mind? What Notes Users Do When Faced With A Security Decision, Proceedings of the 18th Annual Computer Security Applications Conference, Las Vegas, Nevada, December 2002.
    • False positive warnings rapidly dilute warning usability. "The common software practice of warning users of danger but letting them click on something to proceed anyway" does not provide security for a large portion of the user population. The same overall issue comes up in Wu, Miller and Garfinkel, and in Gutmann, below.

  4. Paul DiGioia, Paul Dourish, “Social Navigation as a Model for Usable Security”, Proceedings of the 2005 Symposium On Usable Privacy and Security, Pittsburgh, Pennsylvania, pp. 101 – 108.
    • Integrated security aligns security with user actions and tasks so that the most common tasks and repetitive actions are secure by default. It provides information about security state and context in a form that is useful and understandable to the user, in a non-obtrusive fashion.
    • Although actually about privacy, which I (Mez) contend is easier than security, since it's information the user understands in the first place.

  5. Andrew S. Patrick, Pamela Briggs, and Stephen Marsh, "Designing Systems That People Will Trust", Security and Usability: Designing Secure Systems that People Can Use, ed. Lorrie Faith Cranor and Simson Garfinkel.
    • The trust design guidelines tie directly into the kinds of things phishing attacks do or can do to successfully get users to trust them (and overlook, ignore, or explain away security context information that contradicts their trustworthiness). Some examples from the chart on page 95 (including at least one that would be hard for a phisher) [wiki formatting issue; I attempted to number them with the numbers from the table but so far have failed]:
    • Maximize the consistency, familiarity, or predictability of an interaction both in terms of process and visually.
    • Include seals of approval such as TRUSTe.
    • Provide explanations, justifying the advice or information given.
    • Provide independent peer evaluation such as references from past and current users and independent message boards.
    • Provide clearly stated security and privacy statements, and also rights to compensation and return.
    • Offer a personalized service that takes account of each client's needs and preferences and reflects their social identity.

  6. Phishing Tips and Techniques, Peter Gutmann

    • Humans evaluate alternatives serially, generating options one at a time and accepting the first that works (Singular Evaluation Approach)
    • Habituation and user conditioning mean that people don't read dialogs or think twice about having to enter their passwords
    • People are very susceptible to confirmation bias, and are likely to accept an invalid yet plausible conclusion that supports their hypothesis
    • Having a user change their behaviour due to the absence of a stimulus is hard, and goes against evolutionary training

    • Users don't understand how easy it is to forge web content, because they relate it to the difficulty of forging real-world content

Usability Studies about Internet Security

Browser Security and Phishing Studies

  1. Why Phishing Works, Rachna Dhamija, J.D. Tygar, Marti Hearst

    • User characteristics that are commonly exploited by phishers:
      • Lack of knowledge.
        • Users don't know what the URL format is; they don't know ebay-security.com has nothing to do with ebay.com
        • They don't realize things can be spoofed (like the sender of an email)
        • Don't know what the different versions of the lock icon mean
        • Don't know the difference between the browser chrome and the content, and what controls the info in each
      • Visual Deception
        • URL deception. e.g. paypal with a one instead of an L
        • Images of a hyperlink to one site that actually link to another site
        • Images that mimic windows
        • Borderless pop-ups that belong to another site
        • Recreating look and feel of the legitimate site
      • Bounded attention
        • Users don't pay attention to security or security indicators
        • Users don't notice the absence of security indicators
    • Participants were asked to distinguish legitimate websites from spoofed phishing websites. They were asked to assume the role of someone who had clicked on a link in email and arrived at the website in question.
    • Results: despite participants' heightened security awareness and education, the study found that some phishing websites were able to fool a large fraction of participants.
      • 23% of participants in the study did not look at the address bar, status bar, or any SSL security indicators (HTTPS, lock icon)
      • Popup warnings about fraudulent certificates were ineffective: 15 out of 22 participants proceeded without hesitation when presented with warnings.
      • Participants proved vulnerable across the board to phishing attacks: neither education, age, sex, previous experience, nor hours of computer use showed a statistically significant correlation with vulnerability to phishing. (This doesn't mean differences don't exist, just that they were not found in this sample.)
      • The study showed that users don't have a clear understanding of what the security cues mean, or where they should be located (chrome or content). It also showed users could easily be tricked by pages with a professional feel and animations: users relied more on the content of a page than on the security indicators, and don't pay attention to indicators in the periphery.
  2. Do Security Toolbars Actually Prevent Phishing Attacks? by Wu, Miller and Garfinkel

    • Methodology: Wu created dummy accounts in the name of John Smith at various e-commerce websites. The participants were asked to play the role of John Smith's personal assistant, to process email messages on his behalf, and to protect his passwords. Wu's study design also featured a tutorial, where users were trained on how to use the anti-phishing toolbar.

    • The study found that participants were fooled 34% of the time. Even when asked to focus on the toolbars, many participants ignored them when webpages looked convincing enough.
    • Users do not understand phishing attacks, or don't realize how sophisticated phishers can be.
    • Users entered information at spoofed sites where the content looked professional, or similar to what they had seen before. A user may assume a phishing site doesn't have the resources to recreate aspects of the real site (videos, images), or wouldn't bother to look professional. Some users think they should only be wary of sites with spelling errors or simple content.
    • Users ignore warning signs, or reason them away. When a URL contained the real site's name but wasn't in the right format, users gave a number of reasons why this might be the case. Example: maybe www.mytarget.com is the real site, and Target is using it because www.target.com was taken.
    • Users don't pay attention to toolbars, or don't look at them at all. Users from the study admitted to not looking at the toolbar 25% of the time.
    • The toolbars were used more effectively when a tutorial was given on the toolbar's use.
    • Users rely on the content of a web page to make security decisions. Buttressed by Why Phishing Works and Patrick, Briggs, and Marsh, above, and Whalen and Inkpen below.

    • Even when a user initially checks for security information, they don't check the information continually throughout a session.
    • A few common phishing attacks (the first two are illustrated by the sketch after this list):
      1. Using a similar name. Example: paypal with a one instead of an L
      2. Using an IP address instead of the domain name in a link.
      3. Hijacking a server of a legitimate company and hosting a phishing site.
      4. Displaying the real site in a browser, with a borderless pop-up window to the phishing site requesting personal information
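
    The first two attacks above lend themselves to simple heuristic checks. A minimal sketch in Python; the KNOWN_DOMAINS set and the edit-distance threshold are illustrative assumptions, not any real toolbar's logic:

      import ipaddress
      from urllib.parse import urlparse

      # Hypothetical set of sites the user has a relationship with.
      KNOWN_DOMAINS = {"paypal.com", "target.com", "ebay.com"}

      def edit_distance(a, b):
          """Levenshtein distance via the classic dynamic program."""
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1,                # deletion
                                 cur[j - 1] + 1,             # insertion
                                 prev[j - 1] + (ca != cb)))  # substitution
              prev = cur
          return prev[-1]

      def suspicious(url):
          """Return human-readable reasons a link looks phishy."""
          host = urlparse(url).hostname or ""
          reasons = []
          try:
              ipaddress.ip_address(host)  # attack 2: raw IP instead of a name
              reasons.append("link uses an IP address instead of a domain name")
          except ValueError:
              pass
          for known in KNOWN_DOMAINS:     # attack 1: near-miss of a known name
              if host != known and edit_distance(host, known) <= 2:
                  reasons.append("'%s' is suspiciously close to '%s'" % (host, known))
          return reasons

      print(suspicious("http://paypa1.com/login"))  # a one instead of an L
      print(suspicious("http://192.0.2.7/signin"))  # raw IP address

    Real toolbars combine many more signals (blacklists, page analysis, site reputation); this only shows that near-miss names and raw-IP links are mechanically detectable.
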
  3. Gathering Evidence: Use of Visual Security Cues in Web Browsers, Tara Whalen and Kori M. Inkpen

    • The study asked participants to perform common online browsing tasks, some of which required participants to log in to an account and make purchases. An eyetracker was used to reveal whether participants checked security indicators: looking for the lock, looking for https, and clicking on the lock to check the certificate.
    • Methodology: Participants used login and credit card information created for the study; they were asked to treat the data as their own and to keep it confidential. In the second half of the study, participants were specifically instructed to behave securely.

    • They found that unless instructed to behave securely, many participants did not check whether a page was secure.
    • Results suggested people also use the security statements made by a company on their web page to judge whether a site is secure.
    • Most of the users weren't aware the lock could be clicked for more information. Of those who knew the lock could be clicked, only 2 of 16 knew what a certificate was, and only 1 of them was able to extract useful information from the certificate.
    • Using the eyetracker: when asked to behave securely, users did look for the lock and did look for https. They even did this at about the same time, suggesting users know the two are linked in some way. The eyetracker data also showed users looking in both the lower left and lower right corners of the browser for the lock icon, which suggests a standard placement is needed across browsers.
    • Users don't continue to look for security cues after log-in is complete.
  4. "Decision Strategies and Susceptibility to Phishing" Julie Downs, Mandy Holbrook, Lorrie Faith Cranor

    • Methodology: users were drawn from a random cross-section of the Pittsburgh, PA population. Users were questioned and observed while responding to various simulated possible phishing scenarios. Browsers used were MSIE, Firefox, Netscape, and Safari. Their report will be very valuable to the work of WSC.

    • Most participants [85%] had seen lock images on a web site, and knew that this was meant to signify security, but most had only a limited understanding of how to interpret locks, e.g., “I think that it means secured, it symbolizes some kind of security, somehow.” Few knew that the lock icon in the chrome (i.e., in the browser’s border rather than the page content) indicated that the web site was using encryption or that they could click on the lock to examine the certificate. Indeed, only 40% of those who were aware of the lock realized that the lock had to be within the chrome of the browser.
    • Only about a third [35%] had noticed a distinction between "http://" and "https://" URLs. Of those, some did not think that the “s” indicated anything. But those who were aware of the security connotation of this cue tended to take it as a fairly reliable indication that it is safe to enter information. For those people this extra security was often enough to get them beyond their initial trepidations about sharing sensitive information, e.g., “I feel funny about putting my credit card number in, but they say it is a secure server and some of them say ‘https’ and someone said that it means it’s a secure server.”

    • About half [55%] had noticed a URL that was not what they expected or looked strange. For some, this was a reason to be wary of the website. For others, it was an annoyance, but no cause for suspicion. The other half [45%] appeared to completely ignore the address bar and never noticed even the most suspicious URLs.
    • Participants appeared to be especially uncertain what to make of certificates. Many respondents specifically said that they did not know what certificates were, and made inferences about how to respond to any "mysterious message" mentioning certificates. Some inferred that certificates were "just a formality". Some used previous experience as their basis for ignoring it, e.g., “I have no idea [what it means], because it’s saying something about a trusted website or the certificate hasn’t, but I think I’ve seen it on websites that I thought were trustworthy.”
    • Almost half [42%] recognized the self-signed certificate warning message as one they'd seen before. A third [32%] always ignored this warning, a fourth [26%] consistently avoided entering sites when this warning was displayed, and the rest responded inconsistently.
    • When asked about warnings generally, only about half of participants recalled ever having seen a warning before trying to visit a web site. Their recollections of what they were warned about were sometimes vague, e.g., “sometimes they say cookies and all that,” or uncertain, e.g., “Yeah, like the certificate has expired. I don’t actually know what that means.” When they remembered warnings about security, they often dismissed them with logical reasoning, e.g., “Oh yeah, I have [seen warnings], but funny thing is I get them when I visit my [school] websites, so I get told that this may not be secure or something, but it’s my school website so I feel pretty good about it.”
    • Only half of participants had heard the term "phishing". The other half couldn't guess what it meant. Most participants had heard the term "spyware" but a number of those believed it was something good that protects one's computer from spies.
  5. What Do They Indicate? Evaluating Security and Privacy Indicators, Lorrie Faith Cranor

    • Criteria for evaluating indicators:

    • Does the indicator behave correctly when not under attack? Does the correct indicator appear at the correct time without false positives or false negatives?
    • Does the indicator behave correctly when under attack? Is the indicator resistant to attacks designed to deceive the software into displaying an inappropriate indicator?
    • Can the indicator be spoofed, obscured, or otherwise manipulated so that users are deceived into relying on an indicator provided by an attacker rather than one provided by their system?
    • Do users notice the indicator?
    • Do the users know what the indicator means?
    • Do users know what they are supposed to do when they see the indicator?
    • Do they actually do it?
    • Do they keep doing it?
    • How does the indicator interact with other indicators that may be installed on a user's computer?
  6. The Emperor's New Security Indicators, Stuart Schechter, Rachna Dhamija, Andy Ozment, and Ian Fischer.

    • Measures the efficacy of HTTPS indicators and site-authentication images
    • Reveals problems with the use of role playing in security usability studies
    • Provides a study design in which participants appear to put their own credentials at risk
  7. An Evaluation of Extended Validation and Picture-in-Picture Phishing Attacks, Collin Jackson (Stanford University), Dan Simon (Microsoft Research), Desney Tan (Microsoft Research), Adam Barth (Stanford University)

    • Evaluates the ability of IE7 with Extended Validation certificates to help users detect phishing attacks (homograph attacks, where the URL is obscured, and picture-in-picture attacks, where images of chrome are copied into the webpage content).
    • Result: picture-in-picture attacks were very effective in fooling users with or without EV certs. Training with IE7 help documentation actually reduced users' ability to detect attacks.
  8. Social Phishing, Tom Jagatic, Nathaniel Johnson, Markus Jakobsson, and Filippo Menczer, to appear in Communications of the ACM.

    • A social network was used to extract information about social relationships. The researchers used this information to send phishing email to students on a university campus that appeared to come from a close friend.
    • Results: 72% of users responded to phishing email that came from a friend's address, while only 16% of users in the control group responded to phishing email from an unknown address.
  9. Designing Ethical Phishing Experiments: A study of (ROT13) rOnl auction query features, M. Jakobsson and J. Ratkiewicz, WWW2006.

    • Jakobsson and Ratkiewicz used the features of an online auction website to send simulated phishing emails to that site's members. The phishing email only appeared to be a phishing attempt; to respond to the message the recipient had to provide their credentials to the real auction site. Researchers could learn that users logged into the auction site if they received a response to their message, without having to collect user credentials.
    • Experiments revealed that, on average, 11% of users logged into the auction site to respond to the illegitimate messages.
  10. The Human Factor in Phishing, Markus Jakobsson, To appear in Privacy & Security of Consumer Information '07

    • Discusses the importance of understanding psychological aspects of phishing, and reviews some recent findings.
    • Critiques some commonly used security practices, and suggests and reviews alternatives, including educational approaches.
  11. Designing and Conducting Phishing Experiments, P. Finn and M. Jakobsson. To appear in IEEE Technology and Society Magazine, Special Issue on Usability and Security

    • This paper talks about ethical issues and IRB approval for phishing studies.
    • The authors argue that the "informed consent" requirements should be waived for phishing studies.
  12. Do consumers understand the role of privacy seals in e-commerce?, Trevor T. Moores and Gurpreet Dhillon. Communications of The ACM, Volume 48, Issue 3 (March 2005).

    • This paper discusses a user study where participants were supposed to recognize web seals (e.g. TRUSTe, BBBOnline, etc.). The study found that 42% recognized TRUSTe, 29% recognized BBBOnline, 15% (incorrectly) recognized the "Web Shield" logo which was made up for this study, and 8% recognized CPA WebTrust.

    • "This finding suggests that any official-looking graphic placed on a Web site has an equal chance of persuading the consumer that the site is trustworthy, regardless of any relation between the graphic and the actual Web assurance seals."
    • Most users are unaware of how a site gets a seal.

Empirical Studies about Password Behavior, Phishing and Privacy Concerns

  1. A Large Scale Study of Web Password Habits, D. Florencio and C. Herley, WWW 2007, Banff.

  2. An Empirical Analysis of the Current State of Phishing Attack and Defense, Tyler Moore and Richard Clayton, Computer Laboratory, University of Cambridge

  3. An Empirical Approach to Understanding Privacy Valuation, Luc Wathieu and Allan Friedman, Harvard Business Review

    • Contrary to some research, the chief privacy concern appears to be based on data use, not the data itself.
    • There is consumer demand for social control that focuses on data use.
    • Sophisticated consumers care about economic context and indirect economic effects.

Email Encryption Studies

  1. Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0, Alma Whitten and J.D. Tygar

    • An early usability study that asked participants to use email encryption software.
    • Participants were asked to assume the role of a political campaign coordinator who was tasked with sending sensitive email messages to other campaign volunteers.
    • The study concluded that a majority of participants could not successfully sign and encrypt a message using PGP 5.0.
  2. Johnny 2: A User Test of Key Continuity Management with S/MIME and Outlook Express, Garfinkel and Miller

    • They used the same scenario as Whitten and Tygar to test a Key Continuity Management (KCM) email encryption interface.
    • In addition, they also simulated an escalating series of attacks (spoofed messages that appeared to come from other campaign members)
    • Results: they found that the interface was successful in preventing some attacks and not others; users were confused by security messages that appeared in the chrome versus those that appeared in the content of the email messages.
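
    Key Continuity Management is essentially the trust-on-first-use model familiar from SSH: remember the key first seen for each correspondent and warn when it changes. A minimal sketch of that underlying check; the flat-file store and SHA-256 fingerprint are illustrative assumptions, not Garfinkel and Miller's implementation:

      import hashlib
      import json
      from pathlib import Path

      STORE = Path("seen_keys.json")  # hypothetical store: sender -> key fingerprint

      def fingerprint(public_key):
          return hashlib.sha256(public_key).hexdigest()

      def check_continuity(sender, public_key):
          """Classify an incoming signed message the way a KCM client might."""
          seen = json.loads(STORE.read_text()) if STORE.exists() else {}
          fp = fingerprint(public_key)
          if sender not in seen:
              seen[sender] = fp  # first contact: remember this key
              STORE.write_text(json.dumps(seen))
              return "new identity: no history for this sender"
          if seen[sender] == fp:
              return "continuity OK: same key as previous messages"
          return "WARNING: key changed; possible spoofed message"

      print(check_continuity("treasurer@campaign.example", b"key-bytes-1"))
      print(check_continuity("treasurer@campaign.example", b"key-bytes-1"))
      print(check_continuity("treasurer@campaign.example", b"key-bytes-2"))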

Studies about Education and Training

Studies about Warning Messages

  1. Empirical Studies on Software Notices to Inform Policy Makers and Usability Designers by Jens Grossklags and Nathan Good, and Stopping Spyware at the Gate: A User Study of Privacy, Notice and Spyware by Nathan Good, Rachna Dhamija, Jens Grossklags and Deirdre Mulligan

    • Both of these studies show that users click through EULAs and security warning dialogs
  2. You've Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings by Serge Egelman, Lorrie Faith Cranor, and Jason Hong.

    • A study of web browser phishing warning messages. The researchers found that 97% of participants fell for at least one spear phishing attack, but 79% were protected by warnings that interrupted them, whereas those shown warnings that did not interrupt them were no better off than those who did not see any warnings. They also found that none of the 60 participants noticed the EV SSL indicators, and that several participants noticed suspicious URLs, but only after seeing the warning messages.

Browser Security and Anti-phishing Proposals

  1. Petname Tool Firefox extension, Tyler Close

    • Need to distinguish sites the user has a relationship with from sites that are strangers. The user types in a petname to mark the creation of a new relationship.
    • Need to give each relationship a name, so as to distinguish the relationships from one another. It's not enough to know there is a relationship, the user needs to know which one is currently relevant. For example, don't want the "joke of the day" site impersonating the online banking site.
    • The display of the currently relevant relationship needs to be always on and always in the same place, to prevent spoofing and to provide consistency.
    • A user's set of petnames is a local namespace for their online relationships. This local namespace is under the exclusive control of the user. This user control ensures the namespace meets the user's requirements for distinct names. By exclusively using this local namespace, the petname tool denies the spoofer the opportunity to inject confusing names into the user's namespace.
    • The petname for a relationship is always readily updated by the user, to enable the easy integration of new experience or new recommendations. For example, if a friend refers the user to a site the user has already petnamed, the user may update their petname to reflect this recommendation.
    • By allowing the user to choose their own names, the petname tool maximizes the ability of a name to act as a memory aid. Upon seeing their chosen petname, a user is better able to remember the nature of their relationship with the site. This feature is similar to the way users are allowed to choose their own filenames for files stored on their computer, or their own tag names for categories of email.
    • By only allowing the user to create petnames, the petname tool helps the user better understand the origins of trust and bolsters the user's memory of trust decisions. For example, "I decided to trust this site when I gave it a petname based on my friend's recommendation."
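
    A minimal sketch of the data structure these bullets describe: a user-controlled local namespace mapping a site's identity (here simply its origin) to the user's chosen name. The class and its method names are illustrative assumptions, not the extension's actual code:

      from urllib.parse import urlparse

      class PetnameStore:
          """User-controlled mapping from site identity to petname."""
          def __init__(self):
              self._names = {}  # origin -> petname; only the user may write

          @staticmethod
          def _origin(url):
              p = urlparse(url)
              return "%s://%s" % (p.scheme, p.hostname)

          def assign(self, url, petname):
              """The user marks a new relationship, or updates an existing one."""
              self._names[self._origin(url)] = petname

          def lookup(self, url):
              """What the always-on indicator would display for this page."""
              return self._names.get(self._origin(url))

      store = PetnameStore()
      store.assign("https://www.mybank.example/login", "my bank")
      print(store.lookup("https://www.mybank.example/accounts"))  # 'my bank'
      print(store.lookup("https://mybank-security.example/"))     # None: a stranger

    Because only the user writes to the store, a spoof site can at best show up as having no petname; it can never display itself as "my bank".
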
  2. Passpet: Convenient Password Management and Phishing Protection, Ka-Ping Yee (University of California, Berkeley) and Kragen Sitaker.

    • Builds on petnames; does password management and has its own password input (for generating per-site passwords). A sketch of the general derivation idea follows.
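
    The general derivation idea: one master secret plus a per-site label yields a distinct password for each site, so a password captured at one site is useless at another. A sketch; the KDF, iteration count, and encoding are arbitrary illustrative choices, not Passpet's actual scheme:

      import base64
      import hashlib

      def site_password(master_secret, label, length=12):
          # A slow KDF makes offline guessing of the master secret harder
          # if one derived password leaks; 2**15 iterations is arbitrary.
          raw = hashlib.pbkdf2_hmac("sha256",
                                    master_secret.encode(),
                                    label.encode(),  # per-site label as salt
                                    2 ** 15)
          return base64.urlsafe_b64encode(raw)[:length].decode()

      print(site_password("correct horse battery staple", "my bank"))
      print(site_password("correct horse battery staple", "auction site"))
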
  3. Web Wallet: Preventing Phishing Attacks by Revealing User Intentions, Min Wu, Robert C. Miller, and Greg Little (Massachusetts Institute of Technology)
    • Specific interaction for passwords to train users, plus in-task integration.

  4. Haidong Xia and Jose Carlos Brustoloni, “Hardening Web Browsers Against Man-In-The-Middle Eavesdropping Attacks”, The 14th International World Wide Web Conference, ACM Press, Japan, 2005, pp. 489 – 498.

    • Example of redoing all browser SSL dialogs to give users actionable steps to take. Includes blocking certain actions as well, which I (Mez) am less certain is a good idea. But it's a thorough reworking to try to ensure each action and reaction is motivated.
  5. Ye, Smith, Anthony: "Trusted Paths for Browsers", ACM Transactions on Information and System Security 8(2) (2005) 153-186

  6. SpoofGuard, Dan Boneh, John Mitchell, Robert Ledesma, Neil Chou, Yuka Teraguchi

Ethical Hacking

The tools of a hacker's trade can either be used to exploit a computer resource, as in malicious hacking, or be used to support ethical hacking, helping designers, developers, and IA staff figure out what exposure a computer resource has and how to protect it.

The tools presented are broken out into two categories. The first is browser privacy testing, where a client connects to a server that analyses the browser to determine what software and versions are running. The second category is exploit tools used by a server: once the server queries the user agent to learn what software versions are running, it can select from its toolbox the most appropriate application, the one that will provide the best results.

Browser (User Agent) Privacy Testing

User agents can give away information and privacy details that enable programmatic hacking scripts to run whenever a user enters a malicious site. The links below are to sites that can run tests on browsers; a minimal sketch of the server side of such a test follows the links. The use of corporate or local firewalls can alter results.

PC flank http://www.pcflank.com/

Browser Hawk http://www.cyscape.com/showbrow.aspx

Scanit Browser test http://bcheck.scanit.be/bcheck/
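
A minimal sketch of the server side of such a test: every request already carries a User-Agent header that the server can log and branch on. The handler below is an illustrative assumption, not code from any of the linked sites, which also probe plugins, JavaScript, and other headers:

  from http.server import BaseHTTPRequestHandler, HTTPServer

  class FingerprintHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          ua = self.headers.get("User-Agent", "unknown")
          # A real testing tool would parse the browser family and version
          # out of `ua` and choose its most appropriate test from there.
          body = ("Your browser announced itself as:\n%s\n" % ua).encode()
          self.send_response(200)
          self.send_header("Content-Type", "text/plain")
          self.send_header("Content-Length", str(len(body)))
          self.end_headers()
          self.wfile.write(body)

  if __name__ == "__main__":
      HTTPServer(("127.0.0.1", 8000), FingerprintHandler).serve_forever()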

Exploit and Testing Tools

Metasploit http://www.metasploit.com/

Secure Web Development Practices

  1. Best Practices for Secure Web Development by Razvan Peteanu

    • Similar to our own document - several practical recommendations with some motivation for each
  2. The World Wide Web Security FAQ by Lincoln Stein

    • Dated, having been written in 2002, but many sections are still relevant

Other

  1. Standards, Usable Security, and Accessibility: Can we constrain the problem any further? Mary Ellen Zurko, Kenny Johar

    • Overview of accessibility approaches and challenges, around the time of the wsc-ui last call.