RE: ACTION-335 logotypes and ISSUE-96 discussion

The question in my mind is which scenario we are testing for. Do we consider interactions over a few hours in a testing lab, or interactions over several months in the field?
 
The problem is that it is much easier to test in artificial conditions than in the field. That creates a bias in favor of disruptive user interfaces. I can guarantee that a UI which gives the user an electric shock through the keyboard when they attempt to visit a phishing site will be (1) vastly more effective at deterring visits to phishing sites than all current techniques (including PI bar) and (2) completely unacceptable to end users.
 
We have to break the problem down:
 
1) Process security - can we establish processes (CA, black/white listing, etc.) that generate sufficiently reliable indicata?
 
2) Technical security - can we distribute the indicata in a form that cannot be spoofed?
 
3) Presentation security - can we display the indicata in a form that cannot be spoofed?
 
4) Actional security - can we cause the user to notice and act on the indicata?
 
 
We do not need to solve all of these to 100% in order to create a system that causes a measurable reduction in risk.
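To make that concrete, here is a rough sketch of why partial coverage at each layer can still block a meaningful fraction of attacks. It treats the four layers as independent filters and simply multiplies the per-layer sufficiency figures; the independence assumption, the multiplicative model, and the function name are all illustrative, not anything the group has agreed on.

```python
# Illustrative model only: assume the four layers act as independent
# filters, so the fraction of phishing attempts the whole chain stops
# is the product of the per-layer sufficiency figures.

def chain_effectiveness(process, technical, presentation, actional):
    """Estimated fraction of attacks stopped if every layer must succeed."""
    return process * technical * presentation * actional

# Using the Phase 1 figures proposed below (90/95/80/25):
phase1 = chain_effectiveness(0.90, 0.95, 0.80, 0.25)  # roughly 0.17
# Using the Phase 2 figures (95/99/90/50):
phase2 = chain_effectiveness(0.95, 0.99, 0.90, 0.50)  # roughly 0.42

print(f"Phase 1 stops roughly {phase1:.0%} of attacks")
print(f"Phase 2 stops roughly {phase2:.0%} of attacks")
```

Even with Actional security at only 25%, the chain stops a measurable share of attacks, which is the point: no single layer needs to reach 100% before the system has value.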
 
The reason that we tend to obsess over 100% is that cryptography allows us to be pretty good at some aspects of technical security. In fact, with any cryptographic scheme we might adopt, I can be pretty well 100% confident that the cryptography is sufficiently secure as to not be worth even thinking about attacking. And even if flaws are found in the cryptographic layer, the chances of an exploit are very small. SSL 1.0 and WEP were abysmal; neither withstood serious examination. But today SSL 3.0 and WPA are the best security we have in deployment.
 
That is not to excuse sloppy crypto; there are real costs to bad crypto even if you never see an attack. Explaining to your auditors why you use a broken crypto scheme costs money year after year.
 
 
Clearly to provide value we have to hit all the bases, but not necessarily for 100% of the user population. I think we have to expect to phase in better security as follows:
 
 
Phase 1: Secure the internet for the suspicious, informed user.
 
Sufficiency: 90% Process, 95% Technical, 80% Presentation, 25% Actional
 
We don't even need to inform the user 100% of the time; all we really need to do is inform the user when it counts, when they are following a phishing email lure. We are already seeing this with EV for the small number of sites that deploy it. Often these are companies that have to use email as part of their business even though they are a phishing target.
 
At this point the fact that there might be weaknesses in Process or Presentation is not going to have a major effect on effectiveness; at this scale the system is not worth attacking.
 
Phase 2: Inform more users
 
Sufficiency: 95% Process, 99% Technical, 90% Presentation, 50% Actional
 
The biggest shortcoming of the current security situation, in my view, is that it cannot be taught: I don't know how to tell people to take care in a manner that they can apply reliably. Informing and educating the user has to be part of the solution, but first we have to get to a point where we can educate them.
 
One of the main reasons I want secure logos is that we already have logos on the Web today; they are not secure, and they drown out the security indicata we do provide.
 
 
I don't just want to foil attacks here, I want to provide an environment where confidence is justified.
 
________________________________

From: public-wsc-wg-request@w3.org on behalf of Ian Fette
Sent: Mon 12/11/2007 7:50 PM
To: Serge Egelman
Cc: W3C WSC Public
Subject: Re: ACTION-335 logotypes and ISSUE-96 discussion


I worry about over-reliance on prior user studies without delving into the details. I know I've seen studies showing that particular implementations had particular flaws that rendered those implementations ineffective. I don't know that I've seen studies that say the same thing about the entire concept.

I would say that if we agreed on a set of conditions that we wanted to test for "success", given another set of controls (including implementations), and that experimental setup had already been tested, then we should not waste time re-running that same experiment. However, if there are sufficient changes (either in implementations, concepts, base assumptions, or overall experimental design) then I think we have to take that into consideration. 

In short, I don't want to waste people's time by re-running experiments needlessly, but if we see problems with prior experiments I don't want to rely on their results in an overly-broad manner.

-Ian


On Nov 12, 2007 3:55 PM, Serge Egelman <egelman@cs.cmu.edu> wrote:


	I agree completely.  However, if testing or prior work show that the
	perfect world scenario (where all good sites use EV certs) is completely
	flawed (i.e. EV logos will never be noticed by users, be susceptible to
	spoofing, etc.), then there's little value in testing the realistic
	scenario.  Agreed?
	
	serge
	

	Ian Fette wrote:
	> This action (ACTION-335) was to provide discussion topics for ISSUE-96. 
	> I only really have one point, and I will try to state it more clearly
	> than at the meeting.
	>
	> To me, the effectiveness of any of the logotype proposals (or the EV
	> proposals, for that matter) depends greatly upon the adoption of these 
	> technologies by sites. We can do really cool flashy things when we get
	> an EV cert, or an EV-cert with a logo, but right now the only two sites
	> I can find using an EV cert are PayPal and VeriSign. Therefore, I wonder 
	> how habituated people would become in practice, if they never (or
	> rarely) saw the EV/logotype interface stuff in use.
	>
	> My proposal is that any usability testing of the EV and/or logotype
	> things in the spec not only reflect how users would behave in a land
	> where everyone is using EV-certs and life is happy, but rather also test
	> a more realistic case. That is, look at what the adoption is presently 
	> and/or what we can reasonably expect it to be at time of last call, and
	> do usability testing in an environment that reflects that adoption rate
	> - i.e. some percentage of sites using EV certs, some percentage also 
	> using logos, and another percentage still using "normal" SSL certs. My
	> worry is that we may be thinking "EV certs will solve X,Y, and Z", but
	> that may only be the case if users are used to seeing them on the 
	> majority of sites, and should that not end up being the case, we need to
	> look at the usability and benefit in that scenario as well.
	>
	> I think this is what the ACTION wanted, i.e. for me to state this point 
	> more explicitly. I am going to therefore assume that my work on this
	> action is complete, unless I hear otherwise.
	>
	> -Ian
	
	
	--
	/*
	PhD Candidate
	Vice President for External Affairs, Graduate Student Assembly 
	Carnegie Mellon University
	
	Legislative Concerns Chair
	National Association of Graduate-Professional Students
	*/
	

Received on Tuesday, 13 November 2007 15:58:52 UTC