[whatwg] Canvas v5 API additions

I just added a bunch of things to the <canvas> 2D API:

 - Path primitives:
     var p = new Path();
     p.rect(0,0,100,100);
     context.fill(p);

 - Ellipses:
     // elliptical arcs from a center point, similar to arc()
     context.beginPath();
     context.ellipse(x, y, width/2, height/2, angle, 0, Math.PI*2);
     context.stroke();
     // elliptical corners via the extended arcTo()
     context.arcTo(x1, y1, x2, y2, width/2, height/2, angle);

  - SVG path description syntax
     context.stroke(new Path('M 100,100 h 50 v 50 h 50'));

  - Dashed lines
     context.setLineDash([3,1,0,1]); // --- . --- . --- .
     context.lineCap = 'round'; // so the zero-length dashes show up as dots
     context.beginPath();
     context.moveTo(100,100);
     context.lineTo(200,300);
     context.stroke();

 - Text on a path:
     var p1 = new Path('M 100 350 q 150 -300 300 0');
     var p2 = new Path();
     var styles = new DrawingStyle();
     styles.font = '20px sans-serif';
     p2.addText('Hello World', styles, null, p1);
     context.fill(p2);

  - Hit testing:
     context.beginPath();
     context.rect(10,10,100,100);
     context.fill();
     context.addHitRegion({ id: 'The First Button' });
     context.beginPath();
     context.rect(120,10,100,100);
     context.fill();
     context.addHitRegion({ id: 'The Second Button' });
     canvas.onclick = function (event) {
       if (event.region)
         alert('You clicked ' + event.region);
      };

  - Region discovery for AT users
     context.beginPath();
     context.rect(10,100,100,50);
     context.fill();
     context.addHitRegion({
       id: 'button',
       control: canvas.getElementsByTagName('button')[0],
     });
     context.beginPath();
     context.rect(0,0,100,50);
     context.textAlign = 'center';
     context.textBaseline = 'top';
     context.fillText('My Game', 50, 0, 100);
     context.addHitRegion({
       id: 'header',
       label: 'My Game',
       role: 'heading',
     });
     // now the user can discover (via touch on a touch device, or via the
     // virtual cursor in a traditional desktop AT) that there's a heading
     // at the top of the canvas that says "My Game" and a button lower
     // down that looks (to the AT) exactly like the canvas element's
     // first child <button>.

  - Automatic cursor control
     context.addHitRegion({
       path: new Path('M 10 10 h 20 v 20 h -20 z'),
       cursor: 'url(fight.png)',
     });
     context.addHitRegion({
       path: new Path('M 50 30 h 20 v 20 h -20 z'),
       cursor: 'url(quaff.png)',
     });

  - APIs that take SVGMatrix objects for transforms
     // transform a path when you add it to another path
     var duck = new Path('M 0 0 c 40 48 120 -32 160 -6 c 0 0 5 4 10 '+ 
                         '-3 c 10 -103 50 -83 90 -42 c 0 0 20 12 30 7 c '+ 
                         '-2 12 -18 17 -40 17 c -55 -2 -40 25 -20 35 c '+
                         '30 20 35 65 -30 71 c -50 4 -170 4 -200 -79z');
     var identity = new SVGMatrix(); // constructor will be in SVG2 DOM
     var threeDucks = new Path();
     threeDucks.addPath(duck, identity.translate(0,0));
     threeDucks.addPath(duck, identity.translate(100,0));
     threeDucks.addPath(duck, identity.translate(200,0));

     // set the transform using an SVGMatrix
     context.currentTransform = context.currentTransform.flipX();

  - many more metrics from measureText()
     var metrics = context.measureText('Hello World');
     var bL = metrics.actualBoundingBoxLeft;
     var bR = metrics.actualBoundingBoxRight;
     var bU = metrics.actualBoundingBoxAscent;
     var bD = metrics.actualBoundingBoxDescent;
     context.fillStyle = 'black';
     context.fillRect(x-bL, y-bU, bL+bR, bU+bD);
     context.fillStyle = 'white';
     context.fillText('Hello World', x, y);
     // there are also values for all the baselines

  - ability to transform a pattern
     var identity = new SVGMatrix();
     var pattern = context.createPattern(img, 'repeat');
     pattern.setTransform(identity.rotate(angle));
     context.fillStyle = pattern;
     context.fillRect(0, 0, canvas.width, canvas.height);

  - various other additions
     // reset the clip region
     context.resetClip();
     // reset the transform
     context.resetTransform();


I include below responses to relevant feedback, and to proposals that I 
have not added because I either do not intend to add them or do not 
understand them well enough to add them. If I haven't replied to your 
feedback on canvas, that means it's about a feature I haven't added yet 
but see no immediate problems with. (If you have sent feedback regarding 
bugs in canvas, or regarding canvas features already in the spec rather 
than asking for a new feature, but have not received a reply, then I may 
have missed it. Let me know and I'll see if it's been misfiled.)

On Sun, 4 Apr 2010, Saurabh Jain wrote:
> 
> I have a proposal for adding a generic image collision API in Canvas. 
> Given two HTMLImageElement objects and their respective x, y, clip 
> width, and clip height, the API call will let you know if there's any 
> non-transparent pixel (any opaque or translucent pixel) in one image's 
> clipping region that overlaps a non-transparent pixel in another image's 
> clipping region. The clipping region defined by this API call is for 
> local use only for this purpose.
> 
> This API will be useful for game programmers. I am the author of India's 
> first book on J2ME and have been developing mobile games since 2002. I 
> have observed through personal experience that this kind of pixel-level 
> collision at the native level is required for game developers to build 
> games easily. What I am referring to is a kind of generic pixel-level 
> collision. People are free to develop their own complex collision 
> mechanisms for complex games, but small teams composed of new game 
> developers find image collision detection the most difficult concept in 
> the whole game development process to grasp.
> 
> Also, pixel-level checking for two 100 pixel x 100 pixel images will 
> involve a lot of execution time if done at the JavaScript level, since 
> up to 10000 checks may have to be performed. Native browser support can 
> speed things up a lot. A similar thing happened in J2ME, where before 
> MIDP 2.0 people had a hard time doing pixel-level collision, due to both 
> programming complexity and execution issues.

If you mean path collision detection, I haven't added this, but the path 
objects we have now have laid the groundwork for this in the future: one 
can easily imagine adding a feature to this API that takes two Path 
objects and tells you if they intersect. If this is a feature that other 
people ask for, then I imagine we will add this in the next canvas feature 
update.

On the other hand, if you specifically meant collision detection for 
alpha-channel sprites, then I haven't added that yet either, but am 
unsure whether it makes much sense to add. Does anyone else (especially 
implementors) have any opinions on this matter?
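
For what it's worth, alpha-based sprite collision can already be 
approximated in script today by compositing the two sprites into a 
scratch canvas and scanning the remaining alpha. A rough, untested sketch 
(the image arguments and their top-left positions are placeholders):

   function spritesCollide(imgA, xA, yA, imgB, xB, yB) {
     // intersect the two bounding boxes first
     var left = Math.max(xA, xB);
     var top = Math.max(yA, yB);
     var right = Math.min(xA + imgA.width, xB + imgB.width);
     var bottom = Math.min(yA + imgA.height, yB + imgB.height);
     if (left >= right || top >= bottom)
       return false;
     var scratch = document.createElement('canvas');
     scratch.width = right - left;
     scratch.height = bottom - top;
     var c = scratch.getContext('2d');
     // draw A, then keep only the pixels that B also covers
     c.drawImage(imgA, xA - left, yA - top);
     c.globalCompositeOperation = 'destination-in';
     c.drawImage(imgB, xB - left, yB - top);
     // any alpha left in the overlap means the sprites touch
     var data = c.getImageData(0, 0, scratch.width, scratch.height).data;
     for (var i = 3; i < data.length; i += 4) {
       if (data[i] > 0)
         return true;
     }
     return false;
   }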


On Thu, 8 Apr 2010, Saurabh Jain wrote:
> 
> As of now, 3D is not in Canvas in the official WHATWG specification. 
> When 3D comes, the 3D game developer can be told that 2D collisions and 
> 3D collisions are different. We can have ray tracking in 3D when it 
> comes, similar to the 3D API in J2ME (JSR 184).

3D canvas graphics is now handled as part of WebGL.
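
At the time of writing you get at it with something like:

   var gl = canvas.getContext('webgl') ||
            canvas.getContext('experimental-webgl'); // vendor-prefixed name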


On Thu, 8 Apr 2010, Stefan Haustein wrote:
> 
> Why not add a general BitBlt library on top of typed arrays (1) instead?
> 
> This could also be useful for other applications, e.g. web-based photo 
> editors...

getImageData() is now based on typed arrays.
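
So, for example, the returned pixel data is a Uint8ClampedArray that you 
can index directly:

   var imageData = context.getImageData(0, 0, canvas.width, canvas.height);
   var pixels = imageData.data;  // a Uint8ClampedArray of RGBA bytes
   pixels[0] = 255;              // red channel of the top-left pixel
   context.putImageData(imageData, 0, 0);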


On Tue, 6 Apr 2010, Mathieu 'p01' Henri wrote:
> 
> However, an extra readonly float attribute "height" on the "TextMetrics" 
> interface should be fairly trivial to implement for browser vendors and 
> would greatly help web developers lay out text in Canvas.

On Tue, 6 Apr 2010, Greg Brown wrote:
> 
> It would certainly help - might it also be possible to add a similar 
> "baseline" (or "ascent") attribute? If so, I would be very happy, since 
> this would cover most (if not all) of my use cases.  :-)

I have now extended TextMetrics to cover a range of vertical metrics.
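
For the "height" and "ascent" requests specifically, something along 
these lines should now work (a sketch using the new font-wide metrics):

   var metrics = context.measureText('Hello World');
   var ascent = metrics.fontBoundingBoxAscent;    // from the baseline up
   var descent = metrics.fontBoundingBoxDescent;  // from the baseline down
   var lineHeight = ascent + descent;             // a usable "height" for layout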



On Sun, 2 May 2010, Charles Pritchard wrote:
> 
> At present, it's quite difficult to get the binary code for a jpg image; 
> you must first draw it to a Canvas and export it as a png.

Canvas supports toDataURL('image/jpeg') and toBlob(..., 'image/jpeg').
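
For example (the quality argument is optional):

   var url = canvas.toDataURL('image/jpeg', 0.8);
   canvas.toBlob(function (blob) {
     // blob holds the JPEG-encoded bytes
   }, 'image/jpeg', 0.8);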


On Wed, 28 Jul 2010, David Flanagan wrote:
>
> The Canvas API has a setTransform() method to set an arbitrary 
> transformation matrix, but has no corresponding getTransform() method to 
> query the current transformation matrix or even to use the CTM to 
> transform a point in the current coordinate system to the default 
> coordinate system.

You can now get a (live) SVGMatrix object representing the current 
transformation matrix.
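
So, for instance, a point (x, y) in the current coordinate system can be 
mapped to canvas coordinates by multiplying the matrix out by hand (a 
sketch, since there's no transformPoint() convenience yet):

   var m = context.currentTransform;
   var deviceX = m.a * x + m.c * y + m.e;
   var deviceY = m.b * x + m.d * y + m.f;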


> [snip great description of use cases]

Thanks for those! They are very helpful in examining proposals.


> From a programmer usability perspective, perhaps adding methods like 
> transformPoints(), transformBoundingBox(), and transformDimensions() 
> would be more helpful.  But I'm not sure what the optimal set of such 
> methods would be.

I haven't added these yet, since as with you I'm not really sure what the 
right API would be here.


On Fri, 29 Jul 2011, Charles Pritchard wrote:
>
> Having spoken to several developers, I think that we're clear to add a 
> new method to the canvas 2d api; though there is some question about how 
> it should be added.
> 
> The method would take a DOMString using SVG path semantics.

I have added support for this.


On Fri, 5 Aug 2011, Charles Pritchard wrote:
>
> The current description of draw*FocusRing does not have "inform the 
> user" in the correct priority. It currently follows cases where the user 
> should be informed, but the steps have been aborted.

Fixed.


On Thu, 18 Aug 2011, Charles Pritchard wrote:
>
> Following is a change proposal for the canvas 2d spec

For the record, it's more productive to explain use cases that are not 
handled by the existing API than to propose changes to the API.

Please see this FAQ entry and the one that follows it:
http://wiki.whatwg.org/wiki/FAQ#Is_there_a_process_for_adding_new_features_to_a_specification.3F


On Sat, 27 Aug 2011, Charles Pritchard wrote:
>
> Currently, Canvas 2d has a drawFocusRing API, enabling authors (at least 
> in IE9, currently) to create elements inside of the Canvas subtree, set 
> onfocus handlers, and let the OS know that a path/rectangle within the 
> canvas image currently has focus (and that focus event, is of course, on 
> the selected element).
> 
> The API only works for tracking a single element at one time. We need a 
> more broad method for notifying the OS about multiple regions within an 
> image, as cycling through drawFocusRing is both inefficient and not 
> quite the appropriate use of the method semantics.
>
> Something like: setClickableElement(elementInSubtree), which would use 
> the current path, and the element passed as the first argument, and 
> share that information with the OS (more specifically, the UAs 
> accessibility tree).

I've added something along these lines (though more flexible) to the spec.


On Wed, 31 Aug 2011, David Geary wrote:
>
> I've implemented some polygon objects for my book that I can drag around in
> a canvas. I detect mouse clicks in the polygons with the isPointInPath()
> method. Here's a simple code snippet that detects mouse clicks in a set of
> polygons (dnd code is too lengthy for this purpose):
> 
> context.canvas.onmousedown = function (e) {
>    var loc = windowToCanvas(context.canvas, e);
> 
>    // polygons is an array of polygon objects
>    polygons.forEach( function (polygon) {
>       polygon.createPath();  // my polygons have a createPath() method
> 
>       if (context.isPointInPath(loc.x, loc.y)) {
>          alert('mouse clicked in polygon');
>       }
>    });
> }
> 
> Notice that I have to recreate each path for every polygon. 
> Polygon.createPath() is implemented with 
> beginPath()...moveTo()...repeated calls to lineTo()...and 
> finally...closePath().
> 
> It seems to me that it would be great if the Canvas context provided two 
> new methods: Path getPath(), which would return a path object 
> representing the context's current path, and setPath(Path), which would 
> set the current path.

I haven't quite added what you suggest, but I have added Path primitives, 
which does more or less the same thing and can be passed to 
isPointInPath(), and also added a mechanism that more directly addresses 
your underlying request, namely a way to define hit regions and have them 
be directly reported in the mousedown event rather than having to go 
through each path in turn.
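
So the example above could be written along these lines (a sketch; 
getPath() here is a hypothetical method on your polygon objects that 
returns a Path instead of setting the context's current path):

   context.canvas.onmousedown = function (e) {
     var loc = windowToCanvas(context.canvas, e);
     polygons.forEach(function (polygon) {
       if (context.isPointInPath(polygon.getPath(), loc.x, loc.y))
         alert('mouse clicked in polygon');
     });
   };

   // or, with hit regions, the event itself says which region was hit:
   polygons.forEach(function (polygon, i) {
     context.addHitRegion({ path: polygon.getPath(), id: 'polygon' + i });
   });
   context.canvas.onmousedown = function (e) {
     if (e.region)
       alert('mouse clicked in ' + e.region);
   };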


On Thu, 6 Oct 2011, Brian Ernesto wrote:
> 
> We have been using a script called html2canvas that utilizes the CANVAS 
> tag to render screen captures of HTML blocks via the toDataURL() method. 
> It does this by recreating and drawing each element to a CANVAS tag, but 
> it can be very slow on large pages.
> 
> It occurred to us today that it would really be useful to have every 
> element in the DOM have the method toDataURL() (e.g. 
> document.getElementById('myDiv').toDataURL() ). This would be a powerful 
> way of capturing image data of any element from the whole HTML level 
> down to a lone DIV. This would allow use in things such as animations, 
> graphical annotations, and various UI effects.

A mechanism to draw parts of the page to a canvas is one that a lot of 
people would very much like to have. Unfortunately it's an incredibly 
difficult problem to solve in a way that doesn't introduce security flaws.


On Fri, 28 Oct 2011, Ashley Gullen wrote:
> 
> The retro (pixellated) style is still very popular in 2D gaming, and 
> many new games are still being made with this style.  (Have a quick 
> scroll through http://www.tigsource.com/ and notice how many retro games 
> are still hitting headlines.)  I've noticed most browsers today are 
> using bilinear filtering for scaling.  Compared to point filtering (aka 
> nearest neighbour) this makes retro style games look blurry.  See here: 
> http://www.scirra.com/images/point-vs-linear.png

I've added a feature that lets you disable image smoothing in drawImage().
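
For example, a sketch mixing smoothed and unsmoothed drawing (background, 
sprite, x, and y are placeholders here):

   context.imageSmoothingEnabled = true;   // default; scaled images get filtered
   context.drawImage(background, 0, 0, canvas.width, canvas.height);
   context.imageSmoothingEnabled = false;  // nearest-neighbour for retro sprites
   context.drawImage(sprite, x, y, sprite.width * 4, sprite.height * 4);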


On Fri, 28 Oct 2011, Tab Atkins Jr. wrote:
> 
> The CSS Working Group is interested in this topic more generally for all 
> images or videos.  There was previously a property in the CSS3 Image 
> Values spec for this; it's currently punted to the level 4 spec, but you 
> can find it in an older WD like 
> <http://www.w3.org/TR/2011/WD-css3-images-20110712/#image-rendering>.
> 
> Currently there are two values defined: 'auto' means "do what you want, 
> preserving image quality", which will generally make pixel art blurry; 
> and 'crisp-edges' which means "do what you want, but don't blur edges or 
> colors", which is expected to often use nearest-neighbor, but is allowed 
> to use more complex pixel-art scaling algorithms.  There's been a 
> persistent request to explicitly ask for nearest-neighbor, so I expect 
> that will be addressed at some point with a value named 'pixelated' or 
> something.
> 
> Once this property is implemented, browsers should respect it for 
> scaling <canvas>, so you won't need anything explicit in the API.

I added the control for drawImage() anyway, so that you don't have to rely 
on scaling canvas (which might result in a higher-res backing store or 
might not) to get the effect, and so that you can have some images (e.g. 
backgrounds) smoothed and others (e.g. foreground sprites) not.


On Fri, 28 Oct 2011, Boris Zbarsky wrote:
> 
> There is experimental support for this in Gecko at the moment via the 
> boolean mozImageSmoothingEnabled property on the 2d context.  At the 
> time this was added there was talk about getting more feedback from this 
> group; not sure whether that ever happened.

This is what I modelled the spec's proposal on.


On Sat, 29 Oct 2011, Ashley Gullen wrote:
> 
> I had a quick go with setting ctx.mozImageSmoothingEnabled = false.  It 
> works great with drawImage.  However it does not appear to affect 
> repeated patterns. This makes for quite a strange effect where the 
> game's sprites are pixellated but the tiled backgrounds are bilinear 
> filtered.  So it's half way there.

This matches what the spec has done; you can make it also affect patterns 
if you like by first drawing the image scaled to another canvas and then 
using that as the pattern (so the pattern doesn't have to be scaled).
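
Roughly like this (a sketch, with 'tile' and 'scale' standing in for your 
tile image and zoom factor):

   var scaled = document.createElement('canvas');
   scaled.width = tile.width * scale;
   scaled.height = tile.height * scale;
   var sc = scaled.getContext('2d');
   sc.imageSmoothingEnabled = false;       // pre-scale with nearest-neighbour
   sc.drawImage(tile, 0, 0, scaled.width, scaled.height);
   context.fillStyle = context.createPattern(scaled, 'repeat');
   context.fillRect(0, 0, canvas.width, canvas.height);  // pattern tiles 1:1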

If everything is scaled, though, you might be better off drawing 
everything to the canvas without any scaling and then scaling the whole 
thing once per frame, either by stretching the canvas and relying on CSS 
to turn off smoothing, or by using drawImage().


On Wed, 2 Nov 2011, Chris Jones wrote:
>
> Most 2d graphics libraries support stroking paths with a "dash pattern" 
> of on/off strokes.  Canvas should have it for completeness.

Completeness is definitely not a compelling reason to add something to the 
Web platform. :-)


> [proposal...]
> Note: a similar spec has been implemented in Gecko as 
> |mozDash/mozDashOffset|[2], however with mozDash as a sequence 
> attribute, which Cameron McCormack pointed out is forbidden by 
> WebIDL[3].  In the API above, the attribute is converted into explicit 
> setDash()/getDash() methods.

I have taken this into account in the design.
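
The dash list is set and read through methods, with a separate attribute 
for the offset; e.g.:

   context.setLineDash([5, 15]);
   var dashes = context.getLineDash();  // returns a copy of the list
   context.lineDashOffset = 2.5;        // where in the pattern the stroke starts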


On Wed, 2 Nov 2011, Chris Jones wrote:
>
> An important canvas use case that arose during the development of 
> pdf.js[1] is implementing a "fill-to-current-clip" operation.  In PDF, 
> the shading fill operator, "sh", requires this.  In the cairo library, 
> cairo_paint() is this operation.  The operation can be implemented by 
> tracking the current transform matrix (CTM) in script, then at paint 
> time, inverting the CTM and filling the canvas bounds transformed by the 
> inverse CTM.  However, tracking the CTM from script is cumbersome and 
> not likely to be performant.  It's also not always possible for script 
> to track the CTM; for example, if an external library is passed a canvas 
> to draw to, the library doesn't know the initial CTM.  Another use case 
> that requires the CTM is creating a minimal temporary surface for a fill 
> operation that's bounded by user-space coordinates, for which canvas 
> doesn't have native support for creating the contents of what's to be 
> filled.  This case also arose in pdf.js.
> 
> To that end, we propose a canvas interface that allows querying the CTM.  
> The concrete proposal is
> 
>   interface CanvasRenderingContext2D {
>     //...
>     attribute SVGMatrix currentTransform;  // default is the identity matrix
>     //...
>   };
> 
> The first use case above is satisfied by |inverseCtm = 
> context.currentTransform.inverse();|.  This attribute also serves as a 
> convenience attribute for updating the CTM.  The |currentTransform| 
> attribute tracks updates made to the CTM through 
> |setTransform()/scale()/translate()|.

I've added this to the spec as proposed here.
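
With that in place, the fill-to-current-clip operation can be written 
roughly as follows (a sketch: map the canvas corners back through the 
inverse CTM and fill the resulting quad, letting the current clip do the 
rest):

   function fillToClip(context) {
     var inv = context.currentTransform.inverse();
     function map(x, y) {
       return { x: inv.a * x + inv.c * y + inv.e,
                y: inv.b * x + inv.d * y + inv.f };
     }
     var w = context.canvas.width, h = context.canvas.height;
     var p0 = map(0, 0), p1 = map(w, 0), p2 = map(w, h), p3 = map(0, h);
     context.beginPath();
     context.moveTo(p0.x, p0.y);
     context.lineTo(p1.x, p1.y);
     context.lineTo(p2.x, p2.y);
     context.lineTo(p3.x, p3.y);
     context.closePath();
     context.fill();
   }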


On Thu, 22 Dec 2011, David Karger wrote:
> 
> Firefox and Chrome inconsistently handle "destination-in" compositing; I 
> suspect this may be due to a missing specification in the standard. 
> The inconsistency happens when I use the drawImage method to draw one 
> canvas onto another while the globalCompositeOperation is set to 
> "destination-in". Under destination-in, pixels in the destination 
> canvas should be left alone where the source canvas has a set pixel and 
> cleared where the source canvas has a cleared/transparent pixel.
> 
> Both browsers do this properly inside the range of the source canvas. 
> But if the source canvas has smaller dimensions than the destination 
> canvas, they inconsistently handle parts of the destination canvas 
> _outside_ the source canvas: firefox clears those pixels while chrome 
> leaves them alone.  I believe the standard isn't clear on what should 
> happen in this case.  I'd say that firefox's behavior is more consistent 
> with the intent of "destination-in", but obviously cross-platform 
> consistency is the most important consideration.

On Thu, 22 Dec 2011, Darin Adler wrote:
> 
> It sounds like the Chrome behavior you are describing is a symptom of 
> this WebKit bug, fixed in the WebKit source code on 2011-10-26

I agree with Darin's assessment here. Let me know if you still think the 
spec should be changed.


On Fri, 13 Jan 2012, Jeremy Apthorp wrote:
> 
> I'd like to draw non-antialiased lines in a <canvas>. Currently it seems 
> that the only way to do this is to directly access the pixel data.
> 
> Is there a reason there's no way to turn off antialiasing?

What's the use case?


On Sat, 28 Jan 2012, Bronislav Klučka wrote:
> On 27.1.2012 20:02, Ian Hickson wrote:
> > On Thu, 20 Oct 2011, Bronislav Klučka wrote:
> > > Would it be possible to extend the canvas specification to include 
> > > scroll bar functionality? To add a scroll bar, to manage the scroll 
> > > bar (total size, page size). Creating a control based on canvas that 
> > > needs a scrollbar is unnecessarily difficult at this point.
> >
> > It is expected that the component model feature being discussed in the 
> > public-webapps at w3.org mailing list will be how you make widgets on the 
> > platform.
> > 
> > You wouldn't want to put the scrollbar in the canvas itself, since 
> > then it wouldn't follow platform native conventions, for example. 
> > Instead, you would create a widget which uses overflow:scroll with an 
> > element of the right height or width to create scrollbars, and then 
> > you would react to scroll events to repaint the canvas.
>
> How does a scrollbar on div, p, textarea, etc. follow platform native 
> conventions, but a scrollbar on canvas would not?

I presumed you meant a scrollbar drawn by the author. If that's not what 
you mean, I don't understand your proposal. Could you elaborate?


> > On Thu, 20 Oct 2011, Bronislav Klučka wrote:
> > > b) how about creating user controls using canvas? (rich controls are 
> > > better done this way: one has pixel-perfect control and full browser 
> > > compatibility) like a document viewer, or a rich listview/treeview control...
> >
> > Generally speaking, canvas isn't intended for anything but the 
> > simplest of interactive controls.
>
> Generally speaking, widgets are intended to be HTML containers, which 
> makes them far inferior to the possibilities of canvas... simpler, but 
> inferior (as is any HTML container at this point).

I'm not sure I follow. What is the use case you had in mind?


On Tue, 31 Jan 2012, Charles Pritchard wrote:
> 
> I'd like to see scrollTop and scrollLeft supported for the Canvas 
> element. They would simply fire an onscroll event on the element, and do 
> nothing else. Many Canvas UIs use only one visible canvas layer, or 
> otherwise, one main canvas layer.
> 
> It'd be quite handy to be able to use the scrollTop and scrollLeft 
> setters, and as an author, hook into canvas.onscroll to identify when 
> updates ought to be painted.
> 
> Currently, authors can create a large canvas, and place it in a div:
> <div style="overflow: hidden">
> <canvas>This canvas is larger than the div</canvas>
> </div>
> 
> It would be great to be able to respond to scroll events, such as
> Element.scrollIntoViewIfNeeded and/or ctx.scrollPathIntoView,
> and simplify the model in the future.
> 
> <div>
> <canvas onscroll="repaint(scrollLeft,scrollTop)">This canvas is the same size
> as the div and responds to onscroll</canvas>
> </div>
> 
> The values for scrollLeft and scrollTop would be in css pixels, as they 
> are for other elements, and when set, they would trigger an onscroll 
> event, as usual.

On Sat, 4 Feb 2012, Anne van Kesteren wrote:
> 
> That would require special casing <canvas> in 
> http://dev.w3.org/csswg/cssom-view/#scroll-an-element which I'm not sure 
> is a good idea. Why don't you just dispatch a synthetic scroll event?

On Sat, 4 Feb 2012, Charles Pritchard wrote:
> 
> Does the scroll event carry x/y information?
> 
> I agree, this is a special case for canvas -- the precedent set by 
> <input type=text> is that form controls may have separate scroll 
> semantics.
> 
> My proposal is not thought-out all the way, but I'm hoping it can float.
> 
> The idea here is to enable scroll with limited height/width canvas layer 
> to work well.
> 
> This is going to be useful for context.scrollPathIntoView as well as 
> simply running Element.scrollIntoView on elements within the Canvas 
> sub-tree.
> 
> Currently, the scroll* attributes in Canvas are read only and set to 
> zero. So, I think there is room to support them in the future.

I don't really understand the use case here. Could you elaborate?

-- 
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
