CSS Shaders Security

From Effects Task Force
Revision as of 15:01, 19 April 2012 by Vhardy (Talk | contribs)


Introduction: Why do CSS shaders introduce new security concerns?

This page summarizes the issues that have been raised about CSS shaders security and collects the ideas (or lack thereof) for addressing them.

Issue description:

  • CSS shaders get access to the rendering of web content. That content may come from a different domain than the shader and may display user agent data (e.g., a different color on a visited link).
  • It follows that CSS shaders have access to two types of sensitive rendering information: user agent data (e.g., visited links) and cross-domain data (e.g., a bank statement).
  • It follows that if CSS shaders are able to 'communicate' with the attacker, information leaks.

Since CSS shaders can communicate with the attacker by taking more or less time to execute (a timing channel), information can leak.

The various solutions that have been discussed fall into four categories:

  1. restrict the rendering to remove all sensitive information. Propositions in that domain include CORS and anonymous rendering (i.e., rendering 'disconnected' from the user agent's data [e.g., dictionary, visited links, user agent stylesheet]).
  2. remove access to rendered content altogether. The approach here is to completely cut off access to the rendering while preserving (most of) the functionality shaders provide.
  3. restrict the timing channel to prevent the shader from communicating with the page scripts (the only known way for a shader to leak information). In this category: remove conditionals and other operations that would make the shader's execution time vary depending on texture values, or simply make the shader run in a constant amount of time.
  4. restrict the shaders so that they cannot change behavior depending on the protected information value. In this category: remove conditionals, loops and texture lookups in shader code.

Following the mailing list feedback / discussions (see [6]) and meetings with Adam Barth, Vincent Hardy and Dean Jackson (see [2]), the editors are proposing the following measures to address the security issues.

Proposed Method: disallow access to rendered content and combine with blending

The security issues described here all arise from shader access to rendered content in the form of a texture. If shaders did not have access to the rendered content, the set of security issues would become the same as those encountered by WebGL shaders (see [8] as well for concerns shared between CSS shaders and WebGL shaders).

The proposed method is to disallow access to rendered content from shaders. Let's examine the consequences for each type of shader.

No access to rendered content texture in vertex shader

Texture access is optional in OpenGL ES 2.0 for vertex shaders, and the proposed method explicitly disallows access to the rendered content texture in the context of CSS shaders. In other words, no texture uniform for the rendered content should be allowed by the implementation in the vertex shader.

Vertex shaders retain a lot of their usefulness even if they do not have access to the rendered texture. For example, the code samples illustrating shaders in the original proposal (https://dvcs.w3.org/hg/FXTF/raw-file/tip/custom/index.html#recommendation) do not make use of textures in the vertex shader code.
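As an illustration, here is a minimal vertex shader sketch (the attribute and uniform names are hypothetical, not taken from the proposal) that displaces vertices and computes a lighting factor for the fragment shader without ever sampling a texture:

<pre<noinclude></noinclude>>
// Sketch only: hypothetical attribute/uniform names.
// Ripples the element and feeds a lighting factor to the
// fragment shader, with no texture access at all.
precision mediump float;
attribute vec4 a_position;
attribute vec2 a_texCoord;
uniform mat4 u_projectionMatrix;
uniform float u_phase; // animation parameter, e.g. driven from CSS

varying vec2 v_texCoord;
varying float v_lighting;

void main() {
    vec4 pos = a_position;
    pos.z = 20.0 * sin(u_phase + 0.05 * pos.x); // ripple displacement
    v_lighting = 0.5 + 0.5 * cos(u_phase + 0.05 * pos.x);
    v_texCoord = a_texCoord;
    gl_Position = u_projectionMatrix * pos;
}
</pre<noinclude></noinclude>>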

No access to rendered content texture in fragment shader

Most fragment shaders involve modifying the input texture's color values (e.g., sepia toning, blur, shadow effect). However, a lot of common use cases involve computing a value that is combined with the texture color in a uniform way.

The proposal is to remove direct texture access from fragment shaders but require that the implementation combine the output of the fragment shader with the rendered content in a predefined or controllable way. This way, a fragment shader can still compute the lighting that should be applied to an object, and the implementation performs the multiply operation with the original texture (see the note below on possible methods for combining the result of a fragment shader with the rendered texture).

For example, if we did not have security constraints, a simple lighting effect could be implemented like so:

<pre<noinclude></noinclude>>
// INSECURE SHADER WITH DIRECT TEXTURE ACCESS
precision mediump float;
varying vec2 v_texCoord;
varying float v_lighting;
uniform sampler2D u_texture;

void main() {
    vec4 color = texture2D(u_texture, v_texCoord);
    color = vec4((color.xyz * v_lighting), color.w);
    color = clamp(color, 0.0, 1.0);
    gl_FragColor = color;
}
</pre<noinclude></noinclude>>

In this example, the original color value is multiplied by the v_lighting value to produce a new value.

Now that we have removed texture access, this shader becomes:

<pre<noinclude></noinclude>>
// NO INSECURE TEXTURE ACCESS IN SHADER
precision mediump float;
varying float v_lighting;

void main() {
    vec3 color = vec3(clamp(v_lighting, 0.0, 1.0));
    gl_FragColor = vec4(color, 1.0);
}
</pre<noinclude></noinclude>>

The implementation then combines the result of the fragment shader with the rendered texture. Implementations can do this in different ways. A possible option is to use shader rewriting with a technology such as ANGLE to produce the following shader:

<pre<noinclude></noinclude>>
// RE-WRITTEN SHADER WITH MULTIPLY INSERTED
precision mediump float;
varying vec2 v_texCoord;
varying float v_lighting;
uniform sampler2D u_textureXYZ; // possibly randomly generated texture uniform name

void main() {
    vec3 color = vec3(clamp(v_lighting, 0.0, 1.0));
    vec4 tc = texture2D(u_textureXYZ, v_texCoord);
    vec4 orig_gl_FragColor = vec4(color, 1.0);
    gl_FragColor = orig_gl_FragColor * tc;
}
</pre<noinclude></noinclude>>

Options for combining the result of the fragment shader with the rendered texture

The above suggests that, by default, the result of the fragment shader should be multiplied with the fragment's texture value. It might be interesting to allow other options, for example a blend mode selection for combining the shader output and the original texture, such as 'overlay' or 'screen'.
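To make the combination step concrete, here is a sketch (the names S and T are hypothetical) of the per-channel formulas an implementation could apply between the fragment shader output and the original texture color:

<pre<noinclude></noinclude>>
// Sketch only: S = fragment shader output, T = original rendered
// texture color (both vec4). Standard separable blend formulas.
vec3 multiplied = S.rgb * T.rgb;                        // 'multiply' (default)
vec3 screened   = 1.0 - (1.0 - S.rgb) * (1.0 - T.rgb); // 'screen'
vec4 result     = vec4(multiplied, S.a * T.a);
</pre<noinclude></noinclude>>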

Access to other textures

Access to other textures from CSS shaders should be considered. The security measures should be the same as for WebGL access to textures and the same usage of CORS should apply in that context.

Alternate Methods that were considered

Method A: Implementations should make shaders run in a fixed amount of time

For each shader on a page, the user agent would determine a maximum time allocation for each run of the shader (Maximum Shader Execution Time, MSET). This could be done, for example, by first applying the shaders to non-sensitive data (e.g., blank textures), measuring the processing time over several iterations, computing an average, and allocating, for example, 120% of the average.

Once the maximum time allocation for the shader has been determined, shaders are applied to content. If a shader executes in less than MSET, then the implementation must wait until MSET has elapsed. If a shader has not completed once MSET has elapsed, then the rendering thread/process must still proceed and behave as if the shader had completed its operation. In this last case, the user agent could apply measures identical to the ones applied for denial of service attacks to terminate the shader's execution (see [5]).

Important Note

When using this method, it is very important that the MSET be computed on neutral, non-sensitive data. Otherwise, an attack could still be mounted. For example, let's imagine that shaderA executes in a fixed amount of time, no matter what the input is. Then shaderB executes in the same amount of time as shaderA for all pixel values, except full red; for full red, shaderB takes 10x more time to complete. With these two shaders, an attacker can create a page with a link to a bank web site. The visited link style applies a red background; the non-visited background is white. First, the page applies shaderA and measures its (constant) execution time. Then, the page changes the shader to shaderB and measures its (constant) execution time. If the execution time is (roughly) 10x, then the bank web site was visited. If the execution time is roughly the same as for shaderA, then the site was not visited. In both cases, protected information is leaked. This is because the MSET was computed on sensitive content rather than on neutral content.
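A sketch of what 'shaderB' in the example above could look like (hypothetical uniform names, for illustration only):

<pre<noinclude></noinclude>>
// Sketch only: performs roughly 10x the work when the
// sampled pixel is full red.
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_texture;

void main() {
    vec4 c = texture2D(u_texture, v_texCoord);
    float acc = 0.0;
    if (c.r == 1.0 && c.g == 0.0 && c.b == 0.0) {
        for (int i = 0; i < 9000; i++) { // extra work only on full red
            acc += sin(float(i) * 0.001);
        }
    }
    // keep 'acc' observable so the loop is not optimized away
    gl_FragColor = c + vec4(acc) * 1e-9;
}
</pre<noinclude></noinclude>>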

Issues with Method A

This method has several limitations. First, it is not possible to interrupt shader execution on all GPU/OS combinations. There is work underway to improve driver features to allow watchdogs, but this is not widely deployed.

The second issue with this method is that even though it may prevent the shader from communicating through the timing channel, the shader could still produce a result that varies, visually, depending on the input texture. This can be a security issue as well, which the proposed method addresses by removing access to the rendered texture completely.

Method B: Cross Origin Domains Restrictions

Even if an attacker cannot use the so-called 'time channel', Method A does not prevent the attacker from applying effects to a cross domain resource with malevolent intentions. For example, by altering the rendering of a bank statement, an attacker could mislead the victim. Note that this threat is already present in regular Web content. For example, it is possible to overlay content on top of a bank statement and equally mislead a victim.

User agents should apply cross domain restrictions to content that is subject to a shader. There are two options that should be discussed by the working group:

  1. use Cross Origin Resource Sharing to allow/disallow a shader to apply to content. If there is any mismatch between the origin(s) of a shader (there might be multiple because the fragment and vertex shaders can be fetched separately) and the origin of any of the shaded content, shading will be disabled unless the mismatch happens on elements whose domain allows queries from the shader's domain(s) through CORS.
  2. simply disallow shading on content if there is any domain mismatch between the shader domain(s) and the shaded content domains.

Issues with Method B

This method by itself only addresses the issue of cross origin access and does not address the issue of user agent information leaks (such as visited links). Note that CORS is part of the proposed security measures (see above, "Proposed Method"), but in the context of access to images coming from a different domain.

Method C (Fallback): Anonymous Rendering and CORS

In case a user agent is not able to implement method A, then it should at a minimum do the following:

  1. Implement method B. This will prevent undesired access to cross domain content by malicious shaders.
  2. Implement anonymous rendering for the following:
    1. visited links. When an element is rendered prior to shading, the 'visited' link style should not be applied.
    2. text fields, editable content, text areas. When elements displaying editable content are rendered prior to shading, they should be rendered with no spelling or other user agent hints regarding the element's content.
    3. file input fields. When file input fields are rendered prior to shading, any text content revealing file paths and names must be removed.

Note that this likely provides only partial protection against attacks, as there may be other user agent information leaked through rendering that has not yet been identified. Therefore, implementors are strongly encouraged to implement methods A and B instead of method C.

For this method, it is important to note that there are two types of user agent information that can normally leak through rendering:

  1. browser agent information, such as browser history.
  2. platform / OS information: for example, if the dictionary is implemented using the operating system's support, then the content of the dictionary is exposed, to some extent, through browser rendering.

Some implementations have isolated the browser agent information so that it is possible to isolate or control the amount of user browser agent information exposed in the rendering (e.g., WebKit has a notion of 'empty client'). However, it seems less common for user agents to also isolate their platform dependencies so that they can also be controlled. Ideally, implementations should control all the information exposed through rendering and assess whether or not exposing that information to a shader is creating a security threat (in which case shading would be disabled).

Issues with Method C

This method has at least two issues.

First, it is a 'whack-a-mole' exercise. The number of things that would need to be changed to make rendering anonymous is not clearly identified, and it is unclear whether a finite and definite set can be identified. Second, it makes for an odd user experience. For example, from an end-user's perspective, the color of visited links would change depending on whether or not a shader is applied. This is not a good user experience.

Method D: Limiting Shader Functionality

There have been discussions about limiting the shader syntax to prevent attackers from modifying the shader behavior based on the content (e.g., make the shader take a lot longer to execute for particular color values in the input content).

Issues with Method D

While this method may prove to work, it raises concerns:

  • the specification effort to further restrict the shader constructs or runtime behavior seems significant and a topic suitable for research.
  • as with method C, it is possible that not all aspects leading to timing variations in a shader's execution would be covered.

Method E: rendering on a separate thread

It has been suggested that user agents could implement their rendering on a separate thread and not reflect shader execution changes in requestAnimationFrame.

Issues with Method E

This method raises concerns:

  1. requestAnimationFrame is now disconnected from the actual rendering, which defeats its purpose
  2. There's a risk that the user agent would be unable to prevent the timing signal from leaking out of that thread. For example, if attacker-supplied scripts (e.g., Web Workers) were scheduled on the same CPU core, the attacker could measure how much CPU time the rendering thread took to execute. Similarly, the attacker could measure cache contention, which will be higher when the rendering thread is consuming more resources. Examples of both kinds of attacks have been shown to be practical in related settings.

Method F: user permission and other methods to acquire trust

There are precedents (such as the Calendar API) where a given functionality is both useful and a security concern. It is possible (as is the case with the Calendar API) to limit the use of the functionality through different mechanisms:

  1. require user permission. It would be possible to explicitly ask the user to allow (or not) the use of shaders on a web site.
  2. allow pre-arranged trust (for example, in a Widget runtime).

See the Calendar API security considerations section [7] for more details on how this could apply.

Issues with Method F

While possible in theory, most browsers are reluctant to prompt the user for allowing particular features. While the calendar precedent is something most users understand, the threat of shaders will likely be obscure to most users. In addition, pre-arranged trust is limited to a subset of use cases and does not help with general web content deployment.

Method G: re-writing shaders

It might be possible to automatically rewrite shaders to run with a constant number of instructions. This method requires research and validation.

Issues with Method G

This method has not been detailed enough to be considered a real contender to solve the security issues.

However, even with a method to rewrite shaders to run in a constant number of instructions, hardware implementations do not guarantee that each instruction takes the same number of cycles for all input values. For example, multiplying by a NaN value may take more cycles on some hardware implementations. Texture cache access time may also vary depending on the texture coordinates.

Method H: Tainting shader code branches

The timing attacks on shaders arise from the combination of:

  1. texture access. This gives the shader access to the color value of the rendering.
  2. ability to modify the shader behavior based on color values derived from texture access.

Method H proposes to analyze the shader's source code (with a technology such as ANGLE) to detect all texture accesses and taint all code branches that derive from texture access. If any conditional statement that uses a 'tainted' value as an argument is found, the shader would not be allowed to run.

Specifically, the result of a texture2D() function call, or any value derived from it, cannot be used as a condition in an 'if', 'while' or 'for' statement, or as a parameter of any function that can have a longer execution path based on the input tainted variables. For built-in library functions (such as math functions), this means that if an implementation of the library has functions whose execution time depends on a parameter value, the call should cause an error if invoked with a tainted parameter value. For user functions, it means that it is an error to use a 'tainted' value in 'if', 'while' and 'for' statements.

Examples of invalid shader code fragments:

<pre<noinclude></noinclude>>
vec4 texColor = texture2D(u_texture, v_texCoord); // texColor is tainted
if (texColor.r > -0.5) {...} // Error: cannot 'if' on an expression using a tainted value
</pre<noinclude></noinclude>>

<pre<noinclude></noinclude>>
vec4 texColor = texture2D(u_texture, v_texCoord); // texColor is tainted
float r = texColor.r; // r is tainted, derived from a tainted value
if (r > 0.5) {...} // Error: cannot 'if' on an expression using a tainted value
</pre<noinclude></noinclude>>

<pre<noinclude></noinclude>>
float r = texture2D(u_texture, v_texCoord).r; // r is tainted
float v = pow(2.0, 60.0); // No error, the input values are not tainted
float pr = pow(r, 20.0);  // Error if the execution time of pow depends
                          // on the value of its 'x' argument
float pr2 = pow(20.0, r); // same thing with pow's 'y' argument
</pre<noinclude></noinclude>>

<pre<noinclude></noinclude>>
vec4 texColor = texture2D(u_texture, v_texCoord); // texColor is tainted
float r = texColor.r; // r is tainted, derived from a tainted value
for (int i = 0; i < 200; i++) { // ok, no tainted value in the for expressions
    if (r > 0.5) {...} // Error: cannot 'if' on an expression using a tainted value
}
</pre<noinclude></noinclude>>

<pre<noinclude></noinclude>>
float myFunction(float v) {
    if (v > 0.5) {
        // ... this takes a long time
    } else {
        // ... this does _not_ take a long time
    }
    return result;
}

void main() {
    float r = texture2D(u_texture, v_texCoord).r; // r is tainted
    float v = myFunction(30.0); // v is not tainted
    // Even though the execution time of myFunction varies depending
    // on the input, the call is made with an untainted value.
    float v2 = myFunction(r);
    // Error: myFunction has conditional branches that make its execution time
    // depend on its input value. Since the 'r' input is tainted, this yields an error.
}
</pre<noinclude></noinclude>>

In addition, vertex shaders, in this solution, would not be allowed access to texture data. This is an optional feature in OpenGL ES 2.0, and in this method, it would be totally disallowed.

Issues with Method H

While very promising (see the work done on [http://code.google.com/p/mvujovic/wiki/ShaderControlFlowAnalysis shader control flow analysis]), further research has shown that the execution time of simple arithmetic operations or built-in functions may vary dramatically depending on input values. For example, arithmetic operations on Inf and NaN values can be significantly slower than on other values on Intel CPUs. As a consequence, it would be trivial to implement a shader that does not use conditional branches and still varies its execution time depending on the input texture color values.
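To illustrate, here is a branchless fragment shader sketch (hypothetical uniform names) whose running time can still depend on the sampled color, on hardware where arithmetic on Inf/NaN values is slower:

<pre<noinclude></noinclude>>
// Sketch only: no conditional branches, yet the division typically
// produces Inf when c.r == 1.0, and repeated arithmetic on Inf/NaN
// can be slower on some hardware.
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_texture;

void main() {
    vec4 c = texture2D(u_texture, v_texCoord);
    float x = 1.0 / (1.0 - c.r); // Inf for full red on typical hardware
    float acc = 0.0;
    for (int i = 0; i < 1000; i++) {
        acc = acc * 0.5 + x; // repeated arithmetic on a possibly-Inf value
    }
    gl_FragColor = vec4(fract(acc * 1e-6), c.gba);
}
</pre<noinclude></noinclude>>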


[1] Context Article "WebGL - A New Dimension for Browser Exploitation" - http://www.contextis.com/resources/blog/webgl/
[2] Adam Barth Article "Timing Attacks on CSS Shaders" - http://www.schemehostport.com/2011/12/timing-attacks-on-css-shaders.html
[3] WebGL Timing Attack Proof of Concept - http://www.contextis.co.uk/resources/blog/webgl/poc/index.html
[4] Cross Origin Resource Sharing Specification - http://www.w3.org/TR/cors/
[5] WebGL's defense against Denial of Service attacks - http://www.khronos.org/registry/webgl/specs/latest/#4.4
[6] CSS Shaders security discussion threads:
[7] Calendar API security considerations - http://dev.w3.org/2009/dap/calendar/#security-and-privacy-considerations
[8] Discussion on Denial of Service - http://lists.w3.org/Archives/Public/public-fx/2012JanMar/0143.html