This is an archived snapshot of W3C's public bugzilla bug tracker, decommissioned in April 2019.
I think the Crypto API is useless if a cross-site attack can overwrite its functions, e.g.:

window.crypto.getRandomValues = function (buf) {
  for (var index = 0; index < buf.length; index++) buf[index] = 4;
  return buf;
};

That is not the randomness we want. It is not a problem if someone can add new properties to window.crypto, but existing properties should not be overwritable.
Franz, I suspect that if you have cross-site code injection like that you're screwed even if window.crypto and window.crypto.getRandomValues() are marked unforgeable. To make this concrete, can you cite some actual example code that uses crypto.getRandomValues()? I will bet it's vulnerable to this sort of attack even if getRandomValues is guaranteed to be doing the right thing.
To take another step back, are you trying to protect against malice or against incompetence?
Hi Boris, I want to protect against malice. For example, https://github.com/openpgpjs/openpgpjs uses crypto.getRandomValues() for key generation (RSA and AES) and for encryption. If an attacker can manipulate window.crypto.getRandomValues(), e.g. via cross-site code injection, all generated keys and encrypted documents are broken, even if openpgpjs puts all of its own code in a closure that cannot be manipulated. I tested all current browsers. Only IE11 prevents window.crypto (there: window.msCrypto) from being overwritten, but even in IE11 all of its properties (including getRandomValues()) can be overwritten. PS: You cannot protect against incompetence. "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." - Albert Einstein
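The attack Franz describes can be sketched as follows. This is a minimal illustration on a stand-in object (in a real page the target would be window.crypto itself, which current browsers leave writable); "cryptoLike" and the victim key generation are assumptions for the example, not code from any of the linked libraries:

```javascript
// Stand-in for window.crypto with a writable getRandomValues.
const cryptoLike = {
  getRandomValues(buf) {
    for (let i = 0; i < buf.length; i++) buf[i] = (Math.random() * 256) | 0;
    return buf;
  },
};

// Injected attacker code: replace the RNG with a constant stream.
cryptoLike.getRandomValues = function (buf) {
  buf.fill(4); // every "random" byte is now 4
  return buf;
};

// Victim key generation now yields the same key every time.
const key = cryptoLike.getRandomValues(new Uint8Array(16));
```

Once the replacement has happened, every key derived from this source is fully predictable to the attacker.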
Another very popular example is https://github.com/digitalbazaar/forge. It is used, for instance, by the German Federal Ministry of Finance for online tax declarations.
Franz, thank you for the code link. Protecting against incompetence (i.e. someone accidentally setting crypto.getRandomValues to a crappy algorithm) is easier than protecting against malice (someone deliberately messing with your execution environment to screw you over). Your proposal would do the former, but not the latter.

Looking at the linked code, here are some ways to attack it that work even if crypto.getRandomValues cannot be tampered with:

1) Redefine String.fromCharCode to ignore the passed-in value and instead return whatever the attacker wants. This allows the attacker to control the return values of random.getRandomBytes().

2) Redefine window.DataView to create objects that the attacker controls. This allows the attacker to control the return values of getSecureRandomUint() and hence getSecureRandom() without affecting getRandomBytes.

3) Redefine Math.pow or Math.abs. This allows the attacker to control the return values of getSecureRandom() without affecting getRandomBytes or getSecureRandomUint.

4) Redefine Math.floor or Math.pow to control getRandomBigInteger.

I didn't look at any of the code outside random.js, but I'm fairly certain that there are similar issues in that code too. The only way to make this stuff safe is to either make sure any other injected code runs in a caja-like sandbox or to explicitly freeze all the properties you depend on, which include a lot more than just window.crypto and window.crypto.getRandomValues. So if we just made the latter unforgeable, that would mostly give people a false sense of security, as far as I can tell.
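Attack (1) above can be sketched concretely. In this example the RNG object is frozen, standing in for an unforgeable window.crypto, yet the attacker still controls the output the library consumes; "rng" and "randomString" are hypothetical stand-ins for the library code, not actual forge or openpgpjs functions:

```javascript
// Frozen stand-in for window.crypto: the attacker cannot replace
// getRandomValues (the assignment below would fail).
const rng = Object.freeze({
  getRandomValues(buf) {
    for (let i = 0; i < buf.length; i++) buf[i] = (Math.random() * 256) | 0;
    return buf;
  },
});

// Victim code, modeled loosely on a byte-to-string conversion as found
// in crypto libraries:
function randomString(n) {
  const bytes = rng.getRandomValues(new Array(n).fill(0));
  let s = "";
  for (const b of bytes) s += String.fromCharCode(b);
  return s;
}

// Attacker: rng.getRandomValues is untouchable, but this is not.
const saved = String.fromCharCode;
String.fromCharCode = () => "A";

const leaked = randomString(8); // fully attacker-controlled: "AAAAAAAA"

String.fromCharCode = saved; // restore the original
```

This is why freezing only getRandomValues gives a false sense of security: the trusted computing base includes every built-in the library touches on the way from random bytes to usable output.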
Comment 5 applied to the link from comment 3. Looking at the link from comment 4, overriding String.fromCharCode lets the attacker control util.ByteBuffer.prototype.put*, and hence the entropy pool.
As Boris has explained, this does not provide any meaningful security - but worse, it gives the appearance of security. If an attacker can inject script, they can alter the operating environment in any number of ways - from manipulating objects (like String.prototype) to altering the code that calls getRandomValues in the first place. Mitigations such as CSP provide meaningful security and should therefore be encouraged (in previous discussions there were attempts to require them, to no effect). The argument for protecting against 'accidental' overwriting is equally troubling, as it presupposes an author capable of implementing secure cryptography but not secure JavaScript - a scenario that is doomed to failure regardless of the spec action here. There's no question that we want to remove things that will obviously cause harm, but at the same time the security boundaries should be crisp, and we should only attempt to normatively specify things that will meaningfully improve security (e.g. requiring SSL or CSP).
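The kind of CSP mitigation Ryan refers to is delivered as an HTTP response header. A minimal sketch (the exact policy depends on the site; this example policy blocks inline scripts and scripts from other origins, which is what makes the injection scenarios above hard to mount in the first place):

```http
Content-Security-Policy: script-src 'self'
```

A policy like this does not stop a same-origin script that is already compromised, but it raises the bar for getting attacker script into the page at all.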
Ok, I see. So the WONTFIX is ok. If a code injection can occur, everything can happen. Bad news...
Yes, I can understand the arguments for doing nothing ... this little topic wouldn't save the crypto world. On the other hand, I'm pestered by many maddening restrictions when trying to use a Java applet as a surrogate for browser crypto. Many things seem strange or useless to me. But in the applet domain the mantra is 'every little helps!' ... OK then ... Now I'm a bit astounded that just a few processor cycles beyond the applet, the approach is so completely different. Is this just a clash of cultures, or am I simply not getting the full picture?
Note that dealing with XSS attacks is within the domain of the WebAppSec WG, as pointed out in https://www.w3.org/Bugs/Public/show_bug.cgi?id=25721.