(A sub-section of the NoteIndex) [Maritza is drafting this section. See discussion in email. Inputs from others welcome.]
This section will list usability and security principles we will aim to conform to in our presentation recommendations. Mez has boldfaced an initial set in the SharedBookmarks.
Design Principles
This list of design principles includes ideas from the more general area of human-computer interaction as well as ideas that have been suggested specifically for designing usable security.
- Know who your user is. Create user profiles according to level of experience, education, cultural differences, etc. [Shneiderman]
- Create task profiles (use cases) representative of the tasks the user will complete [Shneiderman].
- Aesthetics and minimalist design. Dialogs should not display information that is irrelevant or rarely needed. However, the user should be able to conveniently access more information as required by their level of experience; an example would be accessing more information for debugging purposes [Nielsen].
- Flexibility and efficiency. Accelerators should be available for advanced users, allowing them to speed up interaction [Nielsen].
- UI designs should leverage the habits users have already formed [Raskin].
- Design dialogs that yield closure. When an action has been completed it should be clear to the user that the action was successful and that they can consider their task completed and can move on to the next one [Shneiderman].
- Present data only when it assists the operator [Shneiderman].
- To draw the user's attention, use at most two colors; contrast, bold type, or arrows may also be used; blinking should be kept to a minimum [Shneiderman].
- If sounds are used, soft and pleasant tones should provide regular positive feedback; harsh sounds should be reserved for rare emergencies [Shneiderman].
- Consistency. Cues should be displayed in consistent locations across sites and browsers to help prevent spoofing and user confusion [Shneiderman].
- False positives and negatives should be kept to a minimum to avoid degrading the user's level of confidence in the security cue [Zurko].
- False positive warnings rapidly dilute warning usability. An example would be presenting the user with too many pop-ups and allowing them to get in the habit of clicking through without reading.
- The system should speak the user's language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order [Nielsen].
- Provide explanations, justifying the advice or information given [Patrick].
- Integrated security aligns security with user actions and tasks so that the most common tasks and repetitive actions are secure by default, and provides information about security state and context in a form that is useful and understandable to the user, in a non-obtrusive fashion [DiGioia].
- When possible, safe staging should be used. Safe staging is “a user interface design that allows the user freedom to decide when to progress to the next stage, and encourages progression by establishing a context in which it is a conceptually attractive path of least resistance” [Whitten].
- Metaphor tailoring starts with a conceptual model specification of the security related functionality, enumerates the risks of usability failures in that model, and uses those risks to explicitly drive visual metaphors [Whitten].
- If a feature or cue is included in the design with the intention of improving some aspect of usability (learnability, better functionality through more information ...), it must be clear to the user that the feature is available, and the way the feature is presented must make clear the action or process expected of the user [Norman].
- The visual cues presented to the user must represent the state/action of the system in a way that is consistent with the actual state/action of the system to allow the user to create an accurate conceptual model [Norman].
- The user must be aware of the task they are to perform [Whitten].
- The user must be able to figure out how to perform the task [Whitten].
- The user should be given feedback when the security state of a page is changed [Whitten].
- Absent any understandable scoping indicators, users will assume that any and all security context information displayed applies to everything in the browser.
Characteristics of the Typical User that Affect Design
The following list includes characteristics observed in prior user studies. These studies were conducted with limited user groups, and the data gathered may not be exactly representative of the average user; however, until a more widely deployed user study is conducted, this is the data we have available.
(Tyler: let me know if you want this in full sentences instead of bullets. If you think we should include stats from the studies I might be able to extract some numbers from the paper where applicable)
- Security is a secondary goal; it is usually not the user's main focus [Whitten]
- Most users lack the knowledge that would help them make security decisions on the internet.
- Users may be capable of mapping security concepts to real-world concepts but may be unaware of the details behind security protocols and concepts [Dhamija]
- Some users do not understand the meaning of current security cues or the difference between web content and the browser chrome [Dhamija]
- A user has only a single locus of attention: a feature or object in the physical world, or an idea, about which they are actively thinking [Raskin]
- Users can be visually deceived easily [Dhamija]
- Users have bounded attention [Dhamija]
- Some users ignore warning signs, or reason them away [Wu]
- Some users rely on the content of a web page to make security decisions [Wu]
- Users are not empowered to request that a site fix its security problems and are therefore forced to decide whether to accept the risk in order to complete the task
Suggested User Study to Better Identify the Typical User
[This subsection is from Michael McCormick's email. This seems like a good place for this info if we plan on including it in the note. If anyone knows of a better place for it ... ]
Given the limited amount of data available regarding the typical user's knowledge of security as it relates to the internet, it may be helpful to conduct a user study with a larger and more diverse participant pool with the following objectives:
- Validate or refine our understanding of who the average web user is and how much she understands about security
- Validate or refine our assumptions about how current security cues are understood and used, and how well that works today
- Test user reaction to proposed WSC solutions using UI prototypes of new security indicators, messages, etc.