(A sub-section of the NoteIndex) [Mez drafted this section; inputs from others welcome.]

Assumptions

The Working Group's charter, goals, and scope center on making existing security context information from servers, protocols, authorities, and web interaction history usable and reliable for common human web tasks. Changes outside the scope of this working group would counter some subset of attacks and make the job of robustly presenting security context information easier. Obvious examples include cryptographic protocols that do not make the user's password available to the server the user is authenticating with. As efforts progress towards a more secure infrastructure, our recommendations may need modification. We will note the places in our recommendations that would be impacted by such infrastructure progress.

[perhaps some should be moved to Out of Scope section, or some introduction.]

Making security usable is still a nascent area for research (see some references, including the O'Reilly book). There are a limited number of worked examples in deployed products to learn from, a larger number of attempts with unclear results, and no worked examples of usable security standards to emulate. Thus it is incumbent upon us to make clear how we will support and validate our recommendations. Traditional standards efforts do so through a combination of previous deployment experience, applied engineering design expertise, implementation, and (interoperability) testing. Our recommendations will be validated through a similar combination, in the areas of both security and usability. Any one of these verification methods can be argued with, based on the expertise and assumptions embodied in the others. By necessity, the process of verifying such recommendations is best effort, continually informed by experience, research, and feedback from both the working group and the wider community at every stage of progress.

[what about security testing/verification/assumptions? Tyler was going to touch on it in Problems, but probably belongs here.]

Results from real world use

Successful deployment and use of techniques to render security context information robust and usable is the most traditional form of verification, and the hardest to come by ("the proof is in the pudding"). The most secure solution possible is often ironically referred to by security experts as the "secure brick": since it does nothing, it cannot be used for ill (although some wag has noted that it can be lifted and thrown through a window). Current user interfaces, both web and non-web, provide some level of usability and security. We will document and integrate what works well in deployed solutions today.

Research and expertise

Experience, skills, and best practices are often embodied in published papers (research or otherwise) and in recognized domain experts. The membership of this working group includes experts in web user agents (particularly browsers), web applications, security, and usability. There is, of course, a much wider world of experts as well.

Early on, we are identifying foundational principles that will drive, apply to, or verify our recommendations (see NoteDesignPrinciples; to include all items bolded in SharedBookmarks). As part of forming our recommendations, we will use accepted design techniques from usability and security. One example is the use of personas [need ref]. Personas identify the categories of users targeted by creating a rich but fictional representative for each category. Explicit tasks, understanding, and reactions can then be posited for each category of users as part of the story of its persona.

As part of discussing our potential recommendations, we will subject them to expert review. This will take the form of explicit expert review passes on both security and usability by working group experts, as well as the application of documented and accepted expert review techniques from those domains. During formal review cycles we will explicitly solicit review from external experts in both domains.

It is clear that users do learn from exposure to new forms of security context information. For example, users are generally aware of the padlock icon and that it somehow relates to security (see the CMU study at SOUPS 06, for example). On the other hand, it is also clear that users do not learn the specifics and nuances of the meaning of security context information, particularly when those specifics are tied more to the technology than to the user's model. Our baseline assumption on user education is that it happens over time, through both direct experience and general awareness, but that a straight line cannot be assumed from what we attempt to teach users, to what they understand, to what they do (see the ECL study).

[do we have any foundational security principles that are useful at this stage? defense in depth? What about accepted secure design techniques? Need to identify usability and security design techniques as part of the Note. Also need references for accepted techniques for expert review of usability and/or security. Security - include potential attacks as expert review.]

[Mez todo - go through our reference list to find what we have.]

Implementation and testing

For the traditional implementation and interoperability testing cycle of a standards working group, we will implement our recommendations and test them on users. Testing may be structured lab testing (see Johnny2) or contextual testing (ref to the infamous Indiana social phishing experiment needed). Our user tests will include some level of attacks or "tiger teaming" explicitly targeted at subverting the recommendations themselves.