Copyright © 2013 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.
This specification defines the 2D Context for the HTML canvas element. The 2D Context provides objects, methods, and properties to draw and manipulate graphics on a canvas drawing surface.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
If you wish to make comments regarding this document in a manner that is tracked by the W3C, please submit them using our public bug database. If you cannot do this then you can also e-mail feedback to public-html-comments@w3.org (subscribe, archives), and arrangements will be made to transpose the comments to our public bug database. All feedback is welcome.
Work on extending this specification typically proceeds through extension specifications which should be consulted to see what new features are being reviewed.
The bulk of the text of this specification is also available in the WHATWG HTML Living Standard, under a license that permits reuse of the specification text.
The working group maintains a list of all bug reports that the editors have not yet tried to address and a list of issues for which the chairs have not yet declared a decision. You are very welcome to file a new bug for any problem you may encounter. These bugs and issues apply to multiple HTML-related specifications, not just this one.
Implementors should be aware that this specification is not stable. Implementors who are not taking part in the discussions are likely to find the specification changing out from under them in incompatible ways. Vendors interested in implementing this specification before it eventually reaches the Candidate Recommendation stage should join the aforementioned mailing lists and take part in the discussions.
This is a work in progress! For the latest updates from the HTML WG, possibly including important bug fixes, please look at the editor's draft instead.
Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
The latest stable version of the editor's draft of this specification is always available on the W3C HTML git repository.
The W3C HTML Working Group is the W3C working group responsible for this specification's progress. This specification is the 29 October 2013 Working Draft. This specification is intended to become a W3C Recommendation.
Work on this specification is also done at the WHATWG. The W3C HTML working group actively pursues convergence of the HTML specification with the WHATWG living standard, within the bounds of the W3C HTML working group charter. There are various ways to follow this work at the WHATWG:
svn checkout http://svn.whatwg.org/webapps/
This specification is an extension to the HTML5 language. All normative content in the HTML5 specification, unless specifically overridden by this specification, is intended to be the basis for this specification.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This specification is an HTML specification. All the conformance requirements, conformance classes, definitions, dependencies, terminology, and typographical conventions described in the core HTML5 specification apply to this specification. [HTML5]
Interfaces are defined in terms of Web IDL. [WEBIDL]
typedef (HTMLImageElement or HTMLVideoElement or HTMLCanvasElement or CanvasRenderingContext2D or ImageBitmap) CanvasImageSource;

enum CanvasFillRule { "nonzero", "evenodd" };

[Constructor(optional unsigned long width, unsigned long height)]
interface CanvasRenderingContext2D {

  // back-reference to the canvas
  readonly attribute HTMLCanvasElement canvas;

  // canvas dimensions
  attribute unsigned long width;
  attribute unsigned long height;

  // for contexts that aren't directly fixed to a specific canvas
  void commit(); // push the image to the output bitmap

  // state
  void save(); // push state on state stack
  void restore(); // pop state stack and restore state

  // transformations (default transform is the identity matrix)
  attribute SVGMatrix currentTransform;
  void scale(unrestricted double x, unrestricted double y);
  void rotate(unrestricted double angle);
  void translate(unrestricted double x, unrestricted double y);
  void transform(unrestricted double a, unrestricted double b, unrestricted double c, unrestricted double d, unrestricted double e, unrestricted double f);
  void setTransform(unrestricted double a, unrestricted double b, unrestricted double c, unrestricted double d, unrestricted double e, unrestricted double f);
  void resetTransform();

  // compositing
  attribute unrestricted double globalAlpha; // (default 1.0)
  attribute DOMString globalCompositeOperation; // (default source-over)

  // image smoothing
  attribute boolean imageSmoothingEnabled; // (default true)

  // colors and styles (see also the CanvasDrawingStyles interface)
  attribute (DOMString or CanvasGradient or CanvasPattern) strokeStyle; // (default black)
  attribute (DOMString or CanvasGradient or CanvasPattern) fillStyle; // (default black)
  CanvasGradient createLinearGradient(double x0, double y0, double x1, double y1);
  CanvasGradient createRadialGradient(double x0, double y0, double r0, double x1, double y1, double r1);
  CanvasPattern createPattern(CanvasImageSource image, [TreatNullAs=EmptyString] DOMString repetition);

  // shadows
  attribute unrestricted double shadowOffsetX; // (default 0)
  attribute unrestricted double shadowOffsetY; // (default 0)
  attribute unrestricted double shadowBlur; // (default 0)
  attribute DOMString shadowColor; // (default transparent black)

  // rects
  void clearRect(unrestricted double x, unrestricted double y, unrestricted double w, unrestricted double h);
  void fillRect(unrestricted double x, unrestricted double y, unrestricted double w, unrestricted double h);
  void strokeRect(unrestricted double x, unrestricted double y, unrestricted double w, unrestricted double h);

  // path API (see also CanvasPathMethods)
  void beginPath();
  void fill(optional CanvasFillRule fillRule = "nonzero");
  void fill(Path path, optional CanvasFillRule fillRule = "nonzero");
  void stroke();
  void stroke(Path path);
  void drawSystemFocusRing(Element element);
  void drawSystemFocusRing(Path path, Element element);
  boolean drawCustomFocusRing(Element element);
  boolean drawCustomFocusRing(Path path, Element element);
  void scrollPathIntoView();
  void scrollPathIntoView(Path path);
  void clip(optional CanvasFillRule fillRule = "nonzero");
  void clip(Path path, optional CanvasFillRule fillRule = "nonzero");
  void resetClip();
  boolean isPointInPath(unrestricted double x, unrestricted double y, optional CanvasFillRule fillRule = "nonzero");
  boolean isPointInPath(Path path, unrestricted double x, unrestricted double y, optional CanvasFillRule fillRule = "nonzero");
  boolean isPointInStroke(unrestricted double x, unrestricted double y);
  boolean isPointInStroke(Path path, unrestricted double x, unrestricted double y);

  // text (see also the CanvasDrawingStyles interface)
  void fillText(DOMString text, unrestricted double x, unrestricted double y, optional unrestricted double maxWidth);
  void strokeText(DOMString text, unrestricted double x, unrestricted double y, optional unrestricted double maxWidth);
  TextMetrics measureText(DOMString text);

  // drawing images
  void drawImage(CanvasImageSource image, unrestricted double dx, unrestricted double dy);
  void drawImage(CanvasImageSource image, unrestricted double dx, unrestricted double dy, unrestricted double dw, unrestricted double dh);
  void drawImage(CanvasImageSource image, unrestricted double sx, unrestricted double sy, unrestricted double sw, unrestricted double sh, unrestricted double dx, unrestricted double dy, unrestricted double dw, unrestricted double dh);

  // hit regions
  void addHitRegion(optional HitRegionOptions options);
  void removeHitRegion(DOMString id);

  // pixel manipulation
  ImageData createImageData(double sw, double sh);
  ImageData createImageData(ImageData imagedata);
  ImageData createImageDataHD(double sw, double sh);
  ImageData getImageData(double sx, double sy, double sw, double sh);
  ImageData getImageDataHD(double sx, double sy, double sw, double sh);
  void putImageData(ImageData imagedata, double dx, double dy);
  void putImageData(ImageData imagedata, double dx, double dy, double dirtyX, double dirtyY, double dirtyWidth, double dirtyHeight);
  void putImageDataHD(ImageData imagedata, double dx, double dy);
  void putImageDataHD(ImageData imagedata, double dx, double dy, double dirtyX, double dirtyY, double dirtyWidth, double dirtyHeight);
};
CanvasRenderingContext2D implements CanvasDrawingStyles;
CanvasRenderingContext2D implements CanvasPathMethods;

[NoInterfaceObject]
interface CanvasDrawingStyles {
  // line caps/joins
  attribute unrestricted double lineWidth; // (default 1)
  attribute DOMString lineCap; // "butt", "round", "square" (default "butt")
  attribute DOMString lineJoin; // "round", "bevel", "miter" (default "miter")
  attribute unrestricted double miterLimit; // (default 10)

  // dashed lines
  void setLineDash(sequence<unrestricted double> segments); // default empty
  sequence<unrestricted double> getLineDash();
  attribute unrestricted double lineDashOffset;

  // text
  attribute DOMString font; // (default 10px sans-serif)
  attribute DOMString textAlign; // "start", "end", "left", "right", "center" (default: "start")
  attribute DOMString textBaseline; // "top", "hanging", "middle", "alphabetic", "ideographic", "bottom" (default: "alphabetic")
  attribute DOMString direction; // "ltr", "rtl", "inherit" (default: "inherit")
};

[NoInterfaceObject]
interface CanvasPathMethods {
  // shared path API methods
  void closePath();
  void moveTo(unrestricted double x, unrestricted double y);
  void lineTo(unrestricted double x, unrestricted double y);
  void quadraticCurveTo(unrestricted double cpx, unrestricted double cpy, unrestricted double x, unrestricted double y);
  void bezierCurveTo(unrestricted double cp1x, unrestricted double cp1y, unrestricted double cp2x, unrestricted double cp2y, unrestricted double x, unrestricted double y);
  void arcTo(unrestricted double x1, unrestricted double y1, unrestricted double x2, unrestricted double y2, unrestricted double radius);
  void arcTo(unrestricted double x1, unrestricted double y1, unrestricted double x2, unrestricted double y2, unrestricted double radiusX, unrestricted double radiusY, unrestricted double rotation);
  void rect(unrestricted double x, unrestricted double y, unrestricted double w, unrestricted double h);
  void arc(unrestricted double x, unrestricted double y, unrestricted double radius, unrestricted double startAngle, unrestricted double endAngle, optional boolean anticlockwise = false);
  void ellipse(unrestricted double x, unrestricted double y, unrestricted double radiusX, unrestricted double radiusY, unrestricted double rotation, unrestricted double startAngle, unrestricted double endAngle, optional boolean anticlockwise = false);
};

interface CanvasGradient {
  // opaque object
  void addColorStop(double offset, DOMString color);
};

interface CanvasPattern {
  // opaque object
  void setTransform(SVGMatrix transform);
};

interface TextMetrics {
  // x-direction
  readonly attribute double width; // advance width
  readonly attribute double actualBoundingBoxLeft;
  readonly attribute double actualBoundingBoxRight;

  // y-direction
  readonly attribute double fontBoundingBoxAscent;
  readonly attribute double fontBoundingBoxDescent;
  readonly attribute double actualBoundingBoxAscent;
  readonly attribute double actualBoundingBoxDescent;
  readonly attribute double emHeightAscent;
  readonly attribute double emHeightDescent;
  readonly attribute double hangingBaseline;
  readonly attribute double alphabeticBaseline;
  readonly attribute double ideographicBaseline;
};

dictionary HitRegionOptions {
  Path? path = null;
  CanvasFillRule fillRule = "nonzero";
  DOMString id = "";
  DOMString? parentID = null;
  DOMString cursor = "inherit";
  // for control-backed regions:
  Element? control = null;
  // for unbacked regions:
  DOMString? label = null;
  DOMString? role = null;
};

interface ImageData {
  readonly attribute unsigned long width;
  readonly attribute unsigned long height;
  readonly attribute double resolution;
  readonly attribute Uint8ClampedArray data;
};

[Constructor(optional Element scope)]
interface DrawingStyle { };
DrawingStyle implements CanvasDrawingStyles;

[Constructor, Constructor(Path path), Constructor(DOMString d)]
interface Path {
  void addPath(Path path, SVGMatrix? transformation);
  void addPathByStrokingPath(Path path, CanvasDrawingStyles styles, SVGMatrix? transformation);
  void addText(DOMString text, CanvasDrawingStyles styles, SVGMatrix? transformation, unrestricted double x, unrestricted double y, optional unrestricted double maxWidth);
  void addPathByStrokingText(DOMString text, CanvasDrawingStyles styles, SVGMatrix? transformation, unrestricted double x, unrestricted double y, optional unrestricted double maxWidth);
  void addText(DOMString text, CanvasDrawingStyles styles, SVGMatrix? transformation, Path path, optional unrestricted double maxWidth);
  void addPathByStrokingText(DOMString text, CanvasDrawingStyles styles, SVGMatrix? transformation, Path path, optional unrestricted double maxWidth);
};
Path implements CanvasPathMethods;
getContext('2d')
  Returns a CanvasRenderingContext2D object that is permanently bound to a particular canvas element.
CanvasRenderingContext2D( [ width, height ] )
  Returns an unbound CanvasRenderingContext2D object with an implied bitmap with the given dimensions in CSS pixels (300x150, if the arguments are omitted).
canvas
  Returns the canvas element, if the rendering context was obtained using the getContext() method.
width
height
  Return the dimensions of the bitmap, in CSS pixels.
  Can be set, to update the bitmap's dimensions. If the rendering context is bound to a canvas, this will also update the canvas' intrinsic dimensions.
commit()
  If the rendering context is bound to a canvas element, display the current frame.
A CanvasRenderingContext2D object can be obtained in two ways: the getContext() method on a canvas element (which invokes the 2D context creation algorithm), and the CanvasRenderingContext2D() constructor.
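For example, the following non-normative script obtains a bound rendering context from a canvas element and an unbound one from the constructor. The element ID "c" is assumed for illustration, and the constructor form is defined by this draft and might not be available in deployed user agents:

// Bound context: obtained from a canvas element in the document.
var canvas = document.getElementById('c');   // assumes <canvas id="c"> exists
var boundContext = canvas.getContext('2d');
// boundContext.canvas === canvas, and drawing is displayed by the element.

// Unbound context: created directly, with an implied 100x50 CSS pixel bitmap.
var unboundContext = new CanvasRenderingContext2D(100, 50);
// unboundContext.canvas === null until the context is bound to a canvas element.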
A CanvasRenderingContext2D
object has a scratch bitmap and can be bound
to an output bitmap. These are initialized when the object is created, and can be
subsequently adjusted when the rendering context is bound or unbound. In some cases, these bitmaps are the same
underlying bitmap. In general, the scratch bitmap is what scripts interact with, and
the output bitmap is what is being displayed. These bitmaps always have the same
dimensions.
Each such bitmap has an origin-clean flag, which can be set to true or false. Initially, when one of these bitmaps is created, its origin-clean flag must be set to true.
These bitmaps also have a hit region list, which is described in a later section. Initially, this list is empty. Scratch bitmaps also have a list of pending interface actions, which can contain instructions to draw the user's attention to a location on the bitmap, and instructions to scroll to a location on the bitmap. Initially, this list is also empty.
The CanvasRenderingContext2D
2D rendering context represents a flat linear
Cartesian surface whose origin (0,0) is at the top left corner, with the coordinate space having
x values increasing when going right, and y values
increasing when going down. The x-coordinate of the right-most edge is equal to
the width of the rendering context's scratch bitmap in CSS pixels; similarly, the
y-coordinate of the bottom-most edge is equal to the height of the rendering
context's scratch bitmap in CSS pixels.
The size of the coordinate space does not necessarily represent the size of the actual bitmaps that the user agent will use internally or during rendering. On high-definition displays, for instance, the user agent may internally use bitmaps with two device pixels per unit in the coordinate space, so that the rendering remains at high quality throughout. Anti-aliasing can similarly be implemented using over-sampling with bitmaps of a higher resolution than the final image on the display.
The 2D context creation algorithm, which is passed a target (a canvas element), consists of running the following steps:
Create a new CanvasRenderingContext2D object.
Initialize its canvas attribute to point to target.
Let the new CanvasRenderingContext2D object's output bitmap and scratch bitmap both be the same bitmap as target's bitmap (so that they are shared).
Set bitmap dimensions to the numeric values of target's width and height content attributes.
Return the new CanvasRenderingContext2D object.
The CanvasRenderingContext2D() constructor, when invoked, must run the following steps:
Create a new CanvasRenderingContext2D object.
Initialize its canvas attribute to null.
Let the new CanvasRenderingContext2D object's scratch bitmap be a new bitmap.
If the constructor was called with arguments, let width and height be the first and second arguments, respectively. Otherwise, let width and height be 300 and 150, respectively.
Set bitmap dimensions to width and height.
Let the new CanvasRenderingContext2D object have no output bitmap.
Return the new CanvasRenderingContext2D object.
When the user agent is required to commit the scratch bitmap for a rendering context, it must run the following steps:
Let bitmap copy be a copy of the rendering context's scratch bitmap.
Let origin-clean flag copy be a copy of the rendering context's scratch bitmap's origin-clean flag.
Let hit region list copy be a copy of the rendering context's scratch bitmap's hit region list.
Let list of pending interface actions copy be a copy of the rendering context's scratch bitmap's list of pending interface actions.
Empty the scratch bitmap's list of pending interface actions.
If the rendering context has no output bitmap, abort these steps.
Let output bitmap be the rendering context's output bitmap.
Let canvas be the canvas
element to which the rendering
context was most recently bound.
Queue a task associated with canvas' Document
to perform the following substeps:
Overwrite output bitmap with bitmap copy.
Overwrite output bitmap's origin-clean flag with origin-clean flag copy.
Overwrite output bitmap's hit region list with hit region list copy.
Follow the directions in the list of pending interface actions copy.
The algorithm above must use the canvas updating task source (which is just used by this algorithm).
The commit() method must run the following steps:
If the rendering context's context bitmap mode is fixed, throw an InvalidStateError exception and abort these steps.
Commit the scratch bitmap for the rendering context.
The scratch bitmap is only committed when the commit() method is called. (This doesn't matter for canvas elements in direct-2d mode, since there the scratch bitmap is also the canvas element's bitmap so every drawing operation is immediately drawn.)
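As a non-normative sketch, the following script assumes a rendering context whose context bitmap mode is not fixed: one created with the constructor above and then bound to a canvas element. The binding step is assumed here to be the canvas element's setContext() method from the HTML specification; the exact binding mechanism, and whether commit() is needed at all, depend on the binding mode and are defined outside this section.

// Non-normative sketch: an unbound context that is later bound to a canvas.
var context = new CanvasRenderingContext2D(300, 150);

// Assumption for illustration: the HTML specification's canvas.setContext()
// is used to bind the context, giving it an output bitmap.
var canvas = document.querySelector('canvas');
canvas.setContext(context);

// These calls draw to the scratch bitmap only.
context.fillStyle = 'navy';
context.fillRect(10, 10, 100, 100);

// Nothing is displayed until the scratch bitmap is committed.
context.commit(); // queues a task that copies the frame to the output bitmap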
When the user agent is to set bitmap dimensions to width and height, it must run the following steps:
Clear the scratch bitmap's hit region list and its list of pending interface actions.
Resize the scratch bitmap to the new width and height and clear it to fully transparent black.
If the rendering context has an output bitmap, and the scratch bitmap is a different bitmap than the output bitmap, then resize the output bitmap to the new width and height and clear it to fully transparent black.
If the rendering context's context bitmap mode is fixed, then run these substeps:
Let canvas be the canvas element to which the rendering context's canvas attribute was initialized.
If the rendering context's context bitmap mode is fixed and the numeric value of the canvas' width content attribute differs from width, then set canvas' width content attribute to the shortest possible string representing width as a valid non-negative integer.
If the rendering context's context bitmap mode is fixed and the numeric value of the canvas' height content attribute differs from height, then set canvas' height content attribute to the shortest possible string representing height as a valid non-negative integer.
Only one square appears to be drawn in the following example:
// canvas is a reference to a <canvas> element
var context = canvas.getContext('2d');
context.fillRect(0,0,50,50);
canvas.setAttribute('width', '300'); // clears the canvas
context.fillRect(0,100,50,50);
canvas.width = canvas.width; // clears the canvas
context.fillRect(100,0,50,50); // only this square remains
When the user agent is to run the unbinding steps for a rendering context, it must run the following steps:
Clear the scratch bitmap's hit region list and its list of pending interface actions.
Clear the CanvasRenderingContext2D object's scratch bitmap to a transparent black.
Set the CanvasRenderingContext2D object's scratch bitmap's origin-clean flag to true.
Let the CanvasRenderingContext2D object have no output bitmap.
When the user agent is to run the binding steps to bind the rendering context to the canvas element target, it must run the following steps:
Clear the scratch bitmap's hit region list and its list of pending interface actions.
Resize the CanvasRenderingContext2D object's scratch bitmap to the dimensions of target's bitmap and clear it to fully transparent black.
Set the CanvasRenderingContext2D object's scratch bitmap's origin-clean flag to true.
Let the CanvasRenderingContext2D object's output bitmap be target's bitmap.
The canvas attribute must return the value it was initialized to when the object was created.
The width attribute, on getting, must return the width of the rendering context's scratch bitmap, in CSS pixels. On setting, it must set bitmap dimensions to the new value and the current height of the rendering context's scratch bitmap in CSS pixels, respectively.
The height attribute, on getting, must return the height of the rendering context's scratch bitmap, in CSS pixels. On setting, it must set bitmap dimensions to the current width of the rendering context's scratch bitmap in CSS pixels and the new value, respectively.
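For example, in the following non-normative sketch (context is a 2D rendering context; note that the width and height attributes on the rendering context itself are a feature of this draft), assigning to width runs the set bitmap dimensions steps, so earlier drawing is lost:

context.fillRect(0, 0, 50, 50);   // draw something

context.width = 400;              // the scratch bitmap is resized and cleared to
                                  // fully transparent black; the height is unchanged
// For a context bound to a canvas element, the element's width content
// attribute is updated to "400" as well.

context.fillRect(0, 0, 50, 50);   // starts again on a blank bitmap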
Except where otherwise specified, for the 2D context interface, any method call with a numeric argument whose value is infinite or a NaN value must be ignored.
Whenever the CSS value currentColor is used as a color in the CanvasRenderingContext2D API, the "computed value of the 'color' property" for the purposes of determining the computed value of the currentColor keyword is the value described by the appropriate entry in the following list:
If the canvas element is being rendered
  The "computed value of the 'color' property" for the purposes of determining the computed value of the currentColor keyword is the computed value of the 'color' property on the canvas element at the time that the color is specified (e.g. when the appropriate attribute is set, or when the method is called; not when the color is rendered or otherwise used). [CSSCOLOR]
Otherwise
  The "computed value of the 'color' property" for the purposes of determining the computed value of the currentColor keyword is fully opaque black. [CSSCOLOR]
In the case of addColorStop() on CanvasGradient, the "computed value of the 'color' property" for the purposes of determining the computed value of the currentColor keyword is always fully opaque black (there is no associated element). [CSSCOLOR]
This is because CanvasGradient objects are canvas-neutral — a CanvasGradient object created by one canvas can be used by another, and there is therefore no way to know which is the "element in question" at the time that the color is specified.
Similar concerns exist with font-related properties; the rules for those are described in detail in the relevant section below.
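As a non-normative example, the following sketch (assuming the rendering context was obtained from a canvas element whose computed 'color' is blue) shows that currentColor is resolved when the style is assigned, not when it is later drawn:

// Assumes: <canvas id="c" style="color: blue"></canvas> is being rendered.
var canvas = document.getElementById('c');
var context = canvas.getContext('2d');

context.fillStyle = 'currentColor'; // resolved now, against the canvas element: blue
canvas.style.color = 'red';         // changing 'color' afterwards has no effect on fillStyle
context.fillRect(0, 0, 50, 50);     // fills with blue

// For a CanvasGradient, currentColor is always fully opaque black:
var gradient = context.createLinearGradient(0, 0, 100, 0);
gradient.addColorStop(0, 'currentColor'); // black
gradient.addColorStop(1, 'white');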
The CanvasFillRule enumeration is used to select the fill rule algorithm by which to determine if a point is inside or outside a path.
The "nonzero" value indicates the non-zero winding rule, wherein a point is considered to be outside a shape if the number of times a half-infinite straight line drawn from that point crosses the shape's path going in one direction is equal to the number of times it crosses the path going in the other direction.
The "evenodd" value indicates the even-odd rule, wherein a point is considered to be outside a shape if the number of times a half-infinite straight line drawn from that point crosses the shape's path is even.
If a point is not outside a shape, it is inside the shape.
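For example, this non-normative sketch (context is an existing 2D rendering context) fills the area between two nested squares. With "evenodd" the inner square stays unfilled, while "nonzero" would fill it as well, because both subpaths wind in the same direction here:

context.beginPath();
context.rect(10, 10, 100, 100);  // outer square
context.rect(35, 35, 50, 50);    // inner square, same winding direction
context.fill('evenodd');         // ring only: points inside both subpaths count as "outside"
// context.fill('nonzero');      // would fill the inner square too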
This section is non-normative.
Although the specification is written as though an implementation needs to track up to four bitmaps per canvas or rendering context — one scratch bitmap, one output bitmap for the rendering context, one bitmap for the canvas element, and one bitmap for the actually currently rendered image — user agents can in fact generally optimise this to only one or two.
The scratch bitmap, when it isn't the same bitmap as the output
bitmap, is only directly observable if it is read, and therefore implementations can,
instead of updating this bitmap, merely remember the sequence of drawing operations that have been
applied to it until such time as the bitmap's actual data is needed (for example because of a call
to commit()
, drawImage()
, or the createImageBitmap()
factory method). In many cases, this will be more memory efficient.
The bitmap of a canvas
element is the one bitmap that's pretty much always going
to be needed in practice. The output bitmap of a rendering context, when it has one,
is always just an alias to a canvas
element's bitmap.
Additional bitmaps are sometimes needed, e.g. to enable fast drawing when the canvas is being painted at a different size than its intrinsic size, or to enable double buffering so that the rendering commands from the scratch bitmap can be applied without the rendering being updated midway.
Each CanvasRenderingContext2D rendering context maintains a stack of drawing states. Drawing states consist of: strokeStyle, fillStyle, globalAlpha, lineWidth, lineCap, lineJoin, miterLimit, lineDashOffset, shadowOffsetX, shadowOffsetY, shadowBlur, shadowColor, globalCompositeOperation, font, textAlign, textBaseline, direction, and imageSmoothingEnabled.
The current default path and the rendering context's bitmaps are not part of the drawing state. The current default path is persistent, and can only be reset using the beginPath() method. The bitmaps depend on whether and how the rendering context is bound to a canvas element.
save()
  Pushes the current state onto the stack.
restore()
  Pops the top state on the stack, restoring the context to that state.
The save() method must push a copy of the current drawing state onto the drawing state stack.
The restore() method must pop the top entry in the drawing state stack, and reset the drawing state it describes. If there is no saved state, the method must do nothing.
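For example, in this non-normative sketch (context is an existing 2D rendering context), the fill style and transformation set between save() and restore() do not leak into later drawing:

context.fillStyle = 'black';
context.save();                  // push a copy of the current drawing state

context.fillStyle = 'red';
context.translate(100, 0);
context.fillRect(0, 0, 50, 50);  // red square at (100, 0)

context.restore();               // pop: fillStyle and the transform revert to the saved values
context.fillRect(0, 0, 50, 50);  // black square at (0, 0)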
When the user agent is to reset the rendering context to its default state, it must clear the drawing state stack and everything that drawing state consists of to initial values.
DrawingStyle objects
All the line styles (line width, caps, joins, and dash patterns) and text styles (fonts) described in the next two sections apply to CanvasRenderingContext2D objects and to DrawingStyle objects. This section defines the constructor used to obtain a DrawingStyle object. This object is then used by methods on Path objects to control how text and paths are rasterised and stroked.
DrawingStyle( [ element ] )
  Creates a new DrawingStyle object, optionally using a specific element for resolving relative keywords and sizes in font specifications.
Each DrawingStyle object can have a styles scope object.
The DrawingStyle() constructor, when invoked, must return a newly created DrawingStyle object. If the constructor was passed an argument, then the DrawingStyle object's styles scope object is that element. Otherwise, if the JavaScript global environment is a document environment, the object's styles scope object is the Document object of the active document of the browsing context of the Window object on which the interface object of the invoked constructor is found. Otherwise, the JavaScript global environment is a worker environment, and the styles scope object is the worker.
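As a non-normative sketch of how these objects are intended to fit together in this draft (the DrawingStyle and Path interfaces are not guaranteed to be available in current user agents, and the element "el" below is a hypothetical element used only for illustration), a DrawingStyle carries line and font settings that are then used when building a Path:

// Styles resolved against a specific element (hypothetical element "el").
var styles = new DrawingStyle(el);
styles.lineWidth = 4;
styles.lineJoin = 'round';
styles.font = '12px sans-serif';

// Use the styles when adding the stroked outline of another path, or text.
var source = new Path('M 10 10 L 90 10 L 90 90 Z'); // constructed from SVG path data
var outline = new Path();
outline.addPathByStrokingPath(source, styles, null);
outline.addText('Hello', styles, null, 10, 120);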
lineWidth [ = value ]
  Returns the current line width.
  Can be set, to change the line width. Values that are not finite values greater than zero are ignored.
lineCap [ = value ]
  Returns the current line cap style.
  Can be set, to change the line cap style. The possible line cap styles are butt, round, and square. Other values are ignored.
lineJoin [ = value ]
  Returns the current line join style.
  Can be set, to change the line join style. The possible line join styles are bevel, round, and miter. Other values are ignored.
miterLimit [ = value ]
  Returns the current miter limit ratio.
  Can be set, to change the miter limit ratio. Values that are not finite values greater than zero are ignored.
setLineDash(segments)
  Sets the current line dash pattern (as used when stroking). The argument is a list of distances for which to alternately have the line on and the line off.
getLineDash()
  Returns a copy of the current line dash pattern. The array returned will always have an even number of entries (i.e. the pattern is normalized).
lineDashOffset
  Returns the phase offset (in the same units as the line dash pattern).
  Can be set, to change the phase offset. Values that are not finite values are ignored.
Objects that implement the CanvasDrawingStyles
interface have attributes and methods (defined in this section) that
control how lines are treated by the object.
The lineWidth
attribute gives the width of lines, in coordinate space units. On
getting, it must return the current value. On setting, zero,
negative, infinite, and NaN values must be ignored, leaving the
value unchanged; other values must change the current value to the
new value.
When the object implementing the CanvasDrawingStyles
interface is created, the lineWidth
attribute must
initially have the value 1.0
.
The lineCap
attribute
defines the type of endings that UAs will place on the end of lines.
The three valid values are butt
, round
,
and square
.
On getting, it must return the current value. On setting, if the new value is one of the literal strings butt, round, and square, then the current value must be changed to the new value; other values must be ignored, leaving the value unchanged.
When the object implementing the CanvasDrawingStyles
interface is created, the lineCap
attribute must
initially have the value butt
.
The lineJoin
attribute defines the type of corners that UAs will place where two
lines meet. The three valid values are bevel
,
round
, and miter
.
On getting, it must return the current value. On setting, if the
new value is one of the literal strings bevel
,
round
, and miter
, then the current value
must be changed to the new value; other values must be ignored,
leaving the value unchanged.
When the object implementing the CanvasDrawingStyles
interface is created, the lineJoin
attribute must
initially have the value miter
.
When the lineJoin
attribute has the value miter
, strokes use the miter
limit ratio to decide how to render joins. The miter limit ratio can
be explicitly set using the miterLimit
attribute. On getting, it must return the current value. On setting,
zero, negative, infinite, and NaN values must be ignored, leaving
the value unchanged; other values must change the current value to
the new value.
When the object implementing the CanvasDrawingStyles
interface is created, the miterLimit
attribute must
initially have the value 10.0
.
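For instance, this non-normative sketch (context is an existing 2D rendering context) draws the same open polyline three times with different cap and join settings; an invalid assignment is simply ignored, leaving the previous value in place:

function drawPolyline(context, offsetY) {
  context.beginPath();
  context.moveTo(20, 20 + offsetY);
  context.lineTo(120, 20 + offsetY);
  context.lineTo(70, 60 + offsetY);
  context.stroke();
}

context.lineWidth = 12;

context.lineCap = 'butt';   context.lineJoin = 'miter'; drawPolyline(context, 0);
context.lineCap = 'round';  context.lineJoin = 'round'; drawPolyline(context, 70);
context.lineCap = 'square'; context.lineJoin = 'bevel'; drawPolyline(context, 140);

context.lineCap = 'arrow';  // not a valid cap style: ignored, lineCap stays 'square'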
Each CanvasDrawingStyles
object has a dash
list, which is either empty or consists of an even number of
non-negative numbers. Initially, the dash list must be
empty.
When the setLineDash()
method is invoked, it must run the following steps:
Let a be the argument.
If any value in a is not finite (e.g. an Infinity or a NaN value), or if any value is negative (less than zero), then abort these steps (without throwing an exception; user agents could show a message on a developer console, though, as that would be helpful for debugging).
If the number of elements in a is odd, then let a be the concatenation of two copies of a.
Let the object's dash list be a.
When the getLineDash()
method is
invoked, it must return a sequence whose values are the values of the object's dash
list, in the same order.
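For example (context is an existing 2D rendering context), because an odd-length argument is concatenated with a copy of itself, getLineDash() always reports an even-length pattern:

context.setLineDash([5, 10, 15]);   // odd number of entries
context.getLineDash();              // returns [5, 10, 15, 5, 10, 15]

context.setLineDash([5, -5]);       // contains a negative value: the call is ignored
context.getLineDash();              // still [5, 10, 15, 5, 10, 15]

context.setLineDash([]);            // an empty list turns dashing off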
It is sometimes useful to change the "phase" of the dash pattern,
e.g. to achieve a "marching ants" effect. The phase can be set using
the lineDashOffset
attribute. On getting, it must return the current value. On setting,
infinite and NaN values must be ignored, leaving the value
unchanged; other values must change the current value to the new
value.
When the object implementing the CanvasDrawingStyles
interface is created, the lineDashOffset
attribute must initially have the value 0.0
.
This canvas element has a rectangle that is stroked with dashed lines. The lineDashOffset attribute is updated to create a "marching ants" effect.
<!DOCTYPE html>
<html>
 <head>
  <title>Dash lines</title>
 </head>
 <body>
  <canvas id="newCanvas" width="400" height="350">This browser or document mode doesn't support canvas</canvas>
  <script type="text/javascript">
   var Offset = 0;                // Starting value for ant offset
   var dashList = [12, 3, 3, 3];  // Create a dot/dash sequence

   function draw() {
     var canvas = document.getElementById("newCanvas");
     if (canvas.getContext) {
       var context = canvas.getContext("2d");
       context.clearRect(0, 0, canvas.width, canvas.height);
       // set dashList for the dash sequence
       context.setLineDash(dashList);
       // Set the current offset
       context.lineDashOffset = Offset;
       context.lineJoin = "round";
       context.lineWidth = "3";
       context.strokeStyle = "green";
       context.strokeRect(5, 5, 300, 250);
     }
   }

   function dodashes() {
     Offset++;
     if (Offset > 20) { // Reset offset after total of dashList values
       Offset = 0;
     }
     draw();
     setTimeout(dodashes, 30); // Set a leisurely march of 30ms
   }

   dodashes(); // Start the demo
  </script>
 </body>
</html>
When a user agent is to trace a path,
given an object style that implements the
CanvasDrawingStyles
interface, it must run the following
algorithm. This algorithm returns a new path.
Let path be a copy of the path being traced.
Remove from path any subpaths containing no lines (i.e. subpaths with just one point).
Replace each point in each subpath of path other than the first point and the last point of each subpath by a join that joins the line leading to that point to the line leading out of that point, such that the subpaths all consist of two points (a starting point with a line leading out of it, and an ending point with a line leading into it), one or more lines (connecting the points and the joins), and zero or more joins (each connecting one line to another), connected together such that each subpath is a series of one or more lines with a join between each one and a point on each end.
Add a straight closing line to each closed subpath in path connecting the last point and the first point of that subpath; change the last point to a join (from the previously last line to the newly added closing line), and change the first point to a join (from the newly added closing line to the first line).
If the styles dash list is empty, jump to the step labeled convert.
Let pattern width be the concatenation of all the entries of the styles dash list, in coordinate space units.
For each subpath subpath in path, run the following substeps. These substeps mutate the subpaths in path in vivo.
Let subpath width be the length of all the lines of subpath, in coordinate space units.
Let offset be the value of the styles lineDashOffset
, in
coordinate space units.
While offset is greater than pattern width, decrement it by pattern width.
While offset is less than pattern width, increment it by pattern width.
Define L to be a linear coordinate line defined along all lines in subpath, such that the start of the first line in the subpath is defined as coordinate 0, and the end of the last line in the subpath is defined as coordinate width.
Let position be zero minus offset.
Let index be 0.
Let current state be off (the other states being on and zero-on).
Dash on: Let segment length be the value of the styles dash list's indexth entry.
Increment position by segment length.
If position is greater than subpath width, then end these substeps for this subpath and start them again for the next subpath; if there are no more subpaths, then jump to the step labeled convert instead.
If segment length is non-zero, let current state be on.
Increment index by one.
Dash off: Let segment length be the value of the styles dash list's indexth entry.
Let start be the offset position on L.
Increment position by segment length.
If position is less than zero, then jump to the step labeled post-cut.
If start is less than zero, then let start be zero.
If position is greater than subpath width, then let end be the offset subpath width on L. Otherwise, let end be the offset position on L.
Jump to the first appropriate step:
Do nothing, just continue to the next step.
Cut the line on which end finds itself short at end and place a point there, cutting the subpath that it was in in two; remove all line segments, joins, points, and subpaths that are between start and end; and finally place a single point at start with no lines connecting to it.
The point has a directionality for the purposes of drawing line caps (see below). The directionality is the direction that the original line had at that point (i.e. when L was defined above).
Cut the line on which start finds itself into two at start and place a point there, cutting the subpath that it was in in two, and similarly cut the line on which end finds itself short at end and place a point there, cutting the subpath that it was in in two, and then remove all line segments, joins, points, and subpaths that are between start and end.
If start and end are the same point, then this results in just the line being cut in two and two points being inserted there, with nothing being removed, unless a join also happens to be at that point, in which case the join must be removed.
Post-cut: If position is greater than width, then jump to the step labeled convert.
If segment length is greater than zero, let positioned-at-on-dash be false.
Increment index by one. If it is equal to the number of entries in the styles dash list, then let index be 0.
Return to the step labeled dash on.
Convert: This is the step that converts the path to a new path that represents its stroke.
Create a new path that
describes the edge of the areas that would be covered if a straight line of length equal to the
styles lineWidth
was swept
along each path in path while being kept at an angle such that the line is
orthogonal to the path being swept, replacing each point with the end cap necessary to satisfy
the styles lineCap
attribute as
described previously and elaborated below, and replacing each join
with the join necessary to satisfy the styles
lineJoin
type, as
defined below.
Caps: Each point has a flat edge perpendicular to the direction of the line coming out of it. This is then augmented according to the value of the styles lineCap. The butt value means that no additional line cap is added. The round value means that a semi-circle with the diameter equal to the styles lineWidth width must additionally be placed on to the line coming out of each point. The square value means that a rectangle with the length of the styles lineWidth width and the width of half the styles lineWidth width, placed flat against the edge perpendicular to the direction of the line coming out of the point, must be added at each point.
Points with no lines coming out of them must have two caps placed back-to-back as if it was really two points connected to each other by an infinitesimally short straight line in the direction of the point's directionality (as defined above).
Joins: In addition to the point where a join occurs, two additional points are relevant to each join, one for each line: the two corners found half the line width away from the join point, one perpendicular to each line, each on the side furthest from the other line.
A filled triangle connecting these two opposite corners with a
straight line, with the third point of the triangle being the join
point, must be added at all joins. The lineJoin
attribute controls
whether anything else is rendered. The three aforementioned values
have the following meanings:
The bevel
value means that this is all that is
rendered at joins.
The round
value means that a filled arc connecting
the two aforementioned corners of the join, abutting (and not
overlapping) the aforementioned triangle, with the diameter equal
to the line width and the origin at the point of the join, must be
added at joins.
The miter
value means that a second filled
triangle must (if it can given the miter length) be added at the
join, with one line being the line between the two aforementioned
corners, abutting the first triangle, and the other two being
continuations of the outside edges of the two joining lines, as
long as required to intersect without going over the miter
length.
The miter length is the distance from the point where the join
occurs to the intersection of the line edges on the outside of the
join. The miter limit ratio is the maximum allowed ratio of the
miter length to half the line width. If the miter length would
cause the miter limit ratio (as set by the style miterLimit
attribute) to
be exceeded, this second triangle must not be added.
Subpaths in the newly created path must wind clockwise, regardless of the direction of paths in path.
Return the newly created path.
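The effect of the join rules above can be observed with a script like the following non-normative sketch (context is an existing 2D rendering context): the sharp corner is mitered only while the miter length stays within the miter limit ratio; once the limit is exceeded, the second triangle is not added and the join falls back to its bevelled appearance.

context.lineWidth = 10;
context.lineJoin = 'miter';

function corner(context, x, limit) {
  context.miterLimit = limit;
  context.beginPath();
  context.moveTo(x, 80);
  context.lineTo(x + 40, 20);   // sharp corner at the top
  context.lineTo(x + 80, 80);
  context.stroke();
}

corner(context, 10, 10);   // miter length within the limit: pointed join
corner(context, 110, 1);   // limit exceeded: only the bevel triangle is rendered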
font [ = value ]
  Returns the current font settings.
  Can be set, to change the font. The syntax is the same as for the CSS 'font' property; values that cannot be parsed as CSS font values are ignored.
  Relative keywords and lengths are computed relative to the font of the canvas element.
textAlign [ = value ]
  Returns the current text alignment settings.
  Can be set, to change the alignment. The possible values and their meanings are given below. Other values are ignored. The default is start.
textBaseline [ = value ]
  Returns the current baseline alignment settings.
  Can be set, to change the baseline alignment. The possible values and their meanings are given below. Other values are ignored. The default is alphabetic.
direction [ = value ]
  Returns the current directionality.
  Can be set, to change the directionality. The possible values and their meanings are given below. Other values are ignored. The default is inherit.
Objects that implement the CanvasDrawingStyles
interface have attributes (defined
in this section) that control how text is laid out (rasterized or outlined) by the object. Such
objects can also have a font style source object. For
CanvasRenderingContext2D
objects whose context bitmap mode is fixed, this is their canvas
element; for other
CanvasRenderingContext2D
objects, if the JavaScript global environment
is a document environment, the object's font style source object is the
Document
object of the active document of the browsing
context of the Window
object on which the interface object of the
CanvasRenderingContext2D
object is found; otherwise the JavaScript global
environment is a worker environment and the font style source
object is the worker. For DrawingStyle
objects, it's the styles scope
object.
The font
IDL
attribute, on setting, must be parsed the same way as the 'font'
property of CSS (but without supporting property-independent style
sheet syntax like 'inherit'), and the resulting font must be
assigned to the context, with the 'line-height' component forced to
'normal', with the 'font-size' component converted to CSS pixels,
and with system fonts being computed to explicit values. If the new
value is syntactically incorrect (including using
property-independent style sheet syntax like 'inherit' or
'initial'), then it must be ignored, without assigning a new font
value. [CSS]
Font family names must be interpreted in the context of the font style source
object when the font is to be used; any fonts embedded using @font-face
or loaded using the FontLoader
that are visible to the
font style source object must therefore be available once they are loaded. If a font
is used before it is fully loaded, or if the font style source object does not have
that font in scope at the time the font is to be used, then it must be treated as if it was an
unknown font, falling back to another as described by the relevant CSS specifications. [CSSFONTS] [CSSFONTLOAD]
On getting, the font
attribute must return the serialized form of the current font of the context
(with no 'line-height' component). [CSSOM]
For example, after the following statement:
context.font = 'italic 400 12px/2 Unknown Font, sans-serif';
...the expression context.font
would
evaluate to the string "italic 12px "Unknown Font", sans-serif
". The
"400" font-weight doesn't appear because that is the default
value. The line-height doesn't appear because it is forced to
"normal", the default value.
When the object implementing the CanvasDrawingStyles
interface is created, the
font of the context must be set to 10px sans-serif. When the 'font-size' component is set to
lengths using percentages, 'em' or 'ex' units, or the 'larger' or 'smaller' keywords, these must
be interpreted relative to the computed value of the 'font-size' property of the font style
source object at the time that the attribute is set, if it is an element. When the
'font-weight' component is set to the relative values 'bolder' and 'lighter', these must be
interpreted relative to the computed value of the 'font-weight' property of the font style
source object at the time that the attribute is set, if it is an element. If the computed
values are undefined for a particular case (e.g. because the font style source object
is not an element or is not being rendered), then the relative keywords must be
interpreted relative to the normal-weight 10px sans-serif default.
The textAlign IDL attribute, on getting, must return the current value. On setting, if the value is one of start, end, left, right, or center, then the value must be changed to the new value. Otherwise, the new value must be ignored. When the object implementing the CanvasDrawingStyles interface is created, the textAlign attribute must initially have the value start.
The textBaseline IDL attribute, on getting, must return the current value. On setting, if the value is one of top, hanging, middle, alphabetic, ideographic, or bottom, then the value must be changed to the new value. Otherwise, the new value must be ignored. When the object implementing the CanvasDrawingStyles interface is created, the textBaseline attribute must initially have the value alphabetic.
The direction IDL attribute, on getting, must return the current value. On setting, if the value is one of ltr, rtl, or inherit, then the value must be changed to the new value. Otherwise, the new value must be ignored. When the object implementing the CanvasDrawingStyles interface is created, the direction attribute must initially have the value inherit.
The textAlign attribute's allowed keywords are as follows:
start
  Align to the start edge of the text (left side in left-to-right text, right side in right-to-left text).
end
  Align to the end edge of the text (right side in left-to-right text, left side in right-to-left text).
left
  Align to the left.
right
  Align to the right.
center
  Align to the center.
The textBaseline attribute's allowed keywords correspond to alignment points in the font. The keywords map to these alignment points as follows:
top
  The top of the em square.
hanging
  The hanging baseline.
middle
  The middle of the em square.
alphabetic
  The alphabetic baseline.
ideographic
  The ideographic baseline.
bottom
  The bottom of the em square.
The direction attribute's allowed keywords are as follows:
ltr
  Treat input to the text preparation algorithm as left-to-right text.
rtl
  Treat input to the text preparation algorithm as right-to-left text.
inherit
  Default to the directionality of the canvas element or Document as appropriate.
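For example, this non-normative sketch (context is an existing 2D rendering context) anchors text at the point (100, 50) in different ways using these keywords:

context.font = '20px sans-serif';

context.textAlign = 'center';
context.textBaseline = 'middle';
context.fillText('centered', 100, 50);  // the text box is centered on (100, 50)

context.textAlign = 'end';
context.direction = 'rtl';              // with 'rtl', the end edge is the left side,
context.textBaseline = 'alphabetic';    // so the text extends to the right of x = 100
context.fillText('end + rtl', 100, 50);

context.textAlign = 'oblique';          // not an allowed keyword: ignored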
The text preparation algorithm is as follows. It takes as input a string text, a CanvasDrawingStyles
object target, and an
optional length maxWidth. It returns an array of glyph shapes, each positioned
on a common coordinate space, a physical alignment whose value is one of
left, right, and center, and an inline box. (Most callers of this
algorithm ignore the physical alignment and the inline box.)
If maxWidth was provided but is less than or equal to zero, return an empty array.
Replace all the space characters in text with U+0020 SPACE characters.
Let font be the current font of target, as given by that object's font
attribute.
Apply the appropriate step from the following list to determine the value of direction:
If the target object's direction attribute has the value "ltr"
  Let direction be 'ltr'.
If the target object's direction attribute has the value "rtl"
  Let direction be 'rtl'.
If the target object's font style source object is a Document and that Document has a root element child
  Let direction be the directionality of that root element child.
Form a hypothetical infinitely-wide CSS line box containing a single inline box containing the text text, with all the properties at their initial values except the 'font' property of the inline box set to font, the 'direction' property of the inline box set to direction, and the 'white-space' property set to 'pre'. [CSS]
If maxWidth was provided and the hypothetical width of the inline box in the hypothetical line box is greater than maxWidth CSS pixels, then change font to have a more condensed font (if one is available or if a reasonably readable one can be synthesized by applying a horizontal scale factor to the font) or a smaller font, and return to the previous step.
The anchor point is a point on the inline box, and the physical alignment is one of the values left, right, and center. These variables are determined by the textAlign and textBaseline values as follows:
Horizontal position:
If textAlign is left
If textAlign is start and direction is 'ltr'
If textAlign is end and direction is 'rtl'
  Let the anchor point's horizontal position be the left edge of the inline box, and let physical alignment be left.
If textAlign is right
If textAlign is end and direction is 'ltr'
If textAlign is start and direction is 'rtl'
  Let the anchor point's horizontal position be the right edge of the inline box, and let physical alignment be right.
If textAlign is center
  Let the anchor point's horizontal position be half way between the left and right edges of the inline box, and let physical alignment be center.
Vertical position:
If textBaseline is top
  Let the anchor point's vertical position be the top of the em box of the first available font of the inline box.
If textBaseline is hanging
  Let the anchor point's vertical position be the hanging baseline of the first available font of the inline box.
If textBaseline is middle
  Let the anchor point's vertical position be half way between the bottom and the top of the em box of the first available font of the inline box.
If textBaseline is alphabetic
  Let the anchor point's vertical position be the alphabetic baseline of the first available font of the inline box.
If textBaseline is ideographic
  Let the anchor point's vertical position be the ideographic baseline of the first available font of the inline box.
If textBaseline is bottom
  Let the anchor point's vertical position be the bottom of the em box of the first available font of the inline box.
Let result be an array constructed by iterating over each glyph in the inline box from left to right (if any), adding to the array, for each glyph, the shape of the glyph as it is in the inline box, positioned on a coordinate space using CSS pixels with its origin is at the anchor point.
Return result, physical alignment, and the inline box.
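A script typically reaches this algorithm through fillText(), strokeText(), and measureText(). The following non-normative sketch (context is an existing 2D rendering context) uses the reported advance width to underline a string, and passes maxWidth so that over-long text is condensed or shrunk rather than overflowing:

context.font = '16px sans-serif';
context.textBaseline = 'alphabetic';

var text = 'Measured text';
var metrics = context.measureText(text);  // runs the text preparation algorithm

context.fillText(text, 20, 40);
context.beginPath();
context.moveTo(20, 44);
context.lineTo(20 + metrics.width, 44);   // underline as long as the advance width
context.stroke();

context.fillText('This might be condensed to fit', 20, 80, 120); // maxWidth of 120 CSS pixels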
Each object implementing the CanvasPathMethods
interface has a path. A path has a list of zero or more subpaths.
Each subpath consists of a list of one or more points, connected by
straight or curved lines, and a flag indicating whether the subpath
is closed or not. A closed subpath is one where the last point of
the subpath is connected to the first point of the subpath by a
straight line. Subpaths with only one point are ignored when
painting the path.
When an object implementing the CanvasPathMethods
interface is created, its path
must be initialized to zero subpaths.
moveTo(x, y)
  Creates a new subpath with the given point.
closePath()
  Marks the current subpath as closed, and starts a new subpath with a point the same as the start and end of the newly closed subpath.
lineTo(x, y)
  Adds the given point to the current subpath, connected to the previous one by a straight line.
quadraticCurveTo(cpx, cpy, x, y)
  Adds the given point to the current subpath, connected to the previous one by a quadratic Bézier curve with the given control point.
bezierCurveTo(cp1x, cp1y, cp2x, cp2y, x, y)
  Adds the given point to the current subpath, connected to the previous one by a cubic Bézier curve with the given control points.
arcTo(x1, y1, x2, y2, radiusX [, radiusY, rotation ])
  Adds an arc with the given control points and radius to the current subpath, connected to the previous point by a straight line.
  If two radii are provided, the first controls the width of the arc's ellipse, and the second controls the height. If only one is provided, or if they are the same, the arc is from a circle. In the case of an ellipse, the rotation argument controls the clockwise inclination of the ellipse relative to the x-axis.
  Throws an IndexSizeError exception if the given radius is negative.
arc(x, y, radius, startAngle, endAngle [, anticlockwise ])
  Adds points to the subpath such that the arc described by the circumference of the circle described by the arguments, starting at the given start angle and ending at the given end angle, going in the given direction (defaulting to clockwise), is added to the path, connected to the previous point by a straight line.
  Throws an IndexSizeError exception if the given radius is negative.
ellipse(x, y, radiusX, radiusY, rotation, startAngle, endAngle [, anticlockwise ])
  Adds points to the subpath such that the arc described by the circumference of the ellipse described by the arguments, starting at the given start angle and ending at the given end angle, going in the given direction (defaulting to clockwise), is added to the path, connected to the previous point by a straight line.
  Throws an IndexSizeError exception if the given radius is negative.
rect(x, y, w, h)
  Adds a new closed subpath to the path, representing the given rectangle.
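For example, this non-normative sketch (context is an existing 2D rendering context) builds a single path out of several subpaths using the methods above and then fills and strokes it:

context.beginPath();

// A triangle, closed explicitly so a straight closing line is added.
context.moveTo(20, 20);
context.lineTo(120, 20);
context.lineTo(70, 80);
context.closePath();

// A rectangle is added as its own closed subpath.
context.rect(140, 20, 60, 60);

// A quarter circle; it is connected to the previous point by a straight line.
context.arc(250, 50, 30, 0, Math.PI / 2);

context.fill();
context.stroke();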
The following methods allow authors to manipulate the paths of objects implementing the
CanvasPathMethods
interface.
For CanvasRenderingContext2D
objects, the points passed to the methods, and the
resulting lines added to current default path by these methods, must be transformed
according to the current transformation matrix
before being added to the path.
The moveTo(x, y)
method must
create a new subpath with the specified point as its first (and
only) point.
When the user agent is to ensure there is a subpath
for a coordinate (x, y) on a
path, the user agent must check to
see if the path has any subpaths,
and if it does not, then the user agent must create a new subpath
with the point (x, y) as its
first (and only) point, as if the moveTo()
method had been
called.
The closePath()
method must do nothing if the object's path has no subpaths.
Otherwise, it must mark the last subpath as closed, create a new
subpath whose first point is the same as the previous subpath's
first point, and finally add this new subpath to the path.
If the last subpath had more than one point in its
list of points, then this is equivalent to adding a straight line
connecting the last point back to the first point, thus "closing"
the shape, and then repeating the last (possibly implied) moveTo()
call.
New points and the lines connecting them are added to subpaths using the methods described below. In all cases, the methods only modify the last subpath in the object's path.
The lineTo(x, y)
method must
ensure there is a subpath for (x, y) if the object's path
has no subpaths. Otherwise, it must connect the last point in the
subpath to the given point (x, y) using a straight line, and must then add the given
point (x, y) to the
subpath.
The quadraticCurveTo(cpx, cpy, x,
y)
method must ensure there
is a subpath for (cpx,
cpy), and then must connect the last
point in the subpath to the given point (x, y) using a quadratic Bézier curve with control
point (cpx, cpy), and must
then add the given point (x, y) to the subpath. [BEZIER]
The bezierCurveTo(cp1x, cp1y, cp2x, cp2y, x, y)
method must
ensure there is a subpath for (cp1x, cp1y), and then must
connect the last point in the subpath to the given point (x, y) using a cubic Bézier
curve with control points (cp1x, cp1y) and (cp2x, cp2y). Then, it must add the point (x, y) to the subpath. [BEZIER]
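As a further non-normative illustration (again assuming context is a 2D context), the following sketch adds one quadratic and one cubic Bézier segment; the control points only shape the curves and are not themselves added to the subpath:
context.beginPath();
context.moveTo(10, 80);
context.quadraticCurveTo(80, 10, 150, 80);          // one control point
context.moveTo(170, 80);
context.bezierCurveTo(200, 10, 280, 10, 310, 80);   // two control points
context.stroke();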
The arcTo(x1, y1, x2,
y2, radiusX, radiusY, rotation)
method must first ensure there is a subpath for (x1, y1).
Then, the behavior depends on the arguments and the last point in
the subpath, as described below.
Negative values for radiusX or radiusY must cause the implementation to throw an
IndexSizeError
exception. If radiusY is omitted, user agents must act as if it had
the same value as radiusX.
Let the point (x0, y0) be the last point in the subpath, transformed by the inverse of the current transformation matrix (so that it is in the same coordinate system as the points passed to the method).
If the point (x0, y0) is equal to the point (x1, y1), or if the point (x1, y1) is equal to the point (x2, y2), or if both radiusX and radiusY are zero, then the method must add the point (x1, y1) to the subpath, and connect that point to the previous point (x0, y0) by a straight line.
Otherwise, if the points (x0, y0), (x1, y1), and (x2, y2) all lie on a single straight line, then the method must add the point (x1, y1) to the subpath, and connect that point to the previous point (x0, y0) by a straight line.
Otherwise, let The Arc be the shortest arc given by circumference of the ellipse that has radius radiusX on the major axis and radius radiusY on the minor axis, and whose semi-major axis is rotated rotation radians clockwise from the positive x-axis, and that has one point tangent to the half-infinite line that crosses the point (x0, y0) and ends at the point (x1, y1), and that has a different point tangent to the half-infinite line that ends at the point (x1, y1) and crosses the point (x2, y2). The points at which this ellipse touches these two lines are called the start and end tangent points respectively. The method must connect the point (x0, y0) to the start tangent point by a straight line, adding the start tangent point to the subpath, and then must connect the start tangent point to the end tangent point by The Arc, adding the end tangent point to the subpath.
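The following non-normative sketch uses the circular form of arcTo() (a single radius) to round the corner of a path; the elliptical form with separate radiusX, radiusY, and rotation arguments behaves as defined above. The variable context is assumed to be a 2D context:
context.beginPath();
context.moveTo(20, 20);
context.lineTo(100, 20);              // (x0, y0) for the arcTo() call below is (100,20)
context.arcTo(150, 20, 150, 70, 20);  // corner at (150,20), heading towards (150,70), radius 20
context.lineTo(150, 120);
context.stroke();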
The arc(x, y, radius,
startAngle, endAngle, anticlockwise)
and ellipse(x,
y, radiusX, radiusY, rotation, startAngle, endAngle, anticlockwise)
methods draw arcs.
The arc()
method is
equivalent to the ellipse()
method in the case
where the two radii are equal. When the arc()
method is invoked, it must
act as if the ellipse()
method had been invoked with the radiusX and
radiusY arguments set to the value of the radius argument, the rotation
argument set to zero, and the other arguments set to the same values
as their identically named arguments on the arc()
method.
When the ellipse()
method is invoked, it must proceed as follows. First, if the
object's path has any subpaths, then the method must add a straight
line from the last point in the subpath to the start point of the
arc. Then, it must add the start and end points of the arc to the
subpath, and connect them with an arc. The arc and its start and end
points are defined as follows:
Consider an ellipse that has its origin at (x, y), that has a major-axis radius radiusX and a minor-axis radius radiusY, and that is rotated about its origin such that its semi-major axis is inclined rotation radians clockwise from the x-axis. The points at startAngle and endAngle along this ellipse's circumference, measured in radians clockwise from the ellipse's semi-major axis, are the start and end points respectively.
If the anticlockwise argument is false and endAngle-startAngle is equal to or greater than 2π, or, if the anticlockwise argument is true and startAngle-endAngle is equal to or greater than 2π, then the arc is the whole circumference of this ellipse.
Otherwise, the arc is the path along the circumference of this ellipse from the start point to the end point, going anti-clockwise if the anticlockwise argument is true, and clockwise otherwise. Since the points are on the ellipse, as opposed to being simply angles from zero, the arc can never cover an angle greater than 2π radians.
Negative values for radiusX or radiusY must cause the implementation to throw an
IndexSizeError
exception.
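As a non-normative sketch, the following code adds a full circle and the lower half of an ellipse to the current default path, using the ellipse() method as defined above (which may not be available in every implementation of this draft); context is assumed to be a 2D context:
context.beginPath();
context.arc(75, 75, 50, 0, 2 * Math.PI);          // full circle of radius 50 centered at (75,75)
context.moveTo(250, 75);                          // move to the ellipse arc's start point to avoid a connecting line
context.ellipse(200, 75, 50, 25, 0, 0, Math.PI);  // clockwise half of a 50×25 ellipse
context.stroke();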
The rect(x, y, w, h)
method must create a new subpath
containing just the four points (x, y), (x+w,
y), (x+w, y+h),
(x, y+h), with those four points connected by straight
lines, and must then mark the subpath as closed. It must then create
a new subpath with the point (x, y) as the only point in the subpath.
Path objects
Path
objects can be used to declare paths that are
then later used on CanvasRenderingContext2D
objects. In
addition to many of the APIs described in earlier sections,
Path
objects have methods to combine paths, and to add
text to paths.
Path()
Creates a new empty Path object.
Path(path)
Creates a new Path object that is a copy of the argument.
Path(d)
Creates a new path with the path described by the argument, interpreted as SVG path data. [SVG]
addPath(path, transform)
addPathByStrokingPath(path, styles, transform)
Adds to the path the path given by the argument.
In the case of the stroking variants, the line styles are taken from the styles argument, which can be either a DrawingStyle
object or a CanvasRenderingContext2D
object.
addText(text, styles, transform, x, y [, maxWidth ])
addText(text, styles, transform, path [, maxWidth ])
addPathByStrokingText(text, styles, transform, x, y [, maxWidth ])
addPathByStrokingText(text, styles, transform, path [, maxWidth ])
Adds to the path a series of subpaths corresponding to the given text. If the arguments give a coordinate, the text is drawn horizontally at the given coordinates. If the arguments give a path, the text is drawn along the path. If a maximum width is provided, the text will be scaled to fit that width if necessary.
The font, and in the case of the stroking variants, the line styles, are taken from the styles argument, which can be either a DrawingStyle
object or a CanvasRenderingContext2D
object.
The Path()
constructor,
when invoked, must return a newly created Path
object.
The Path(path)
constructor,
when invoked, must return a newly created Path
object, to which the subpaths of the
argument are added. (In other words, it returns a copy of the argument.)
The Path(d)
constructor must run the following
steps:
Parse and interpret the d argument according to the SVG specification's rules for path data, thus obtaining an SVG path. [SVG]
The resulting path could be empty. SVG defines error handling rules for parsing and applying path data.
Let (x, y) be the last point in the SVG path.
Create a new Path
object and add all the
subpaths in the SVG path, if any, to that Path
object.
Create a new subpath in the Path
object with
(x, y) as the only point in
the subpath.
Return the Path
object as the constructed
object.
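A non-normative sketch of the Path(d) constructor as specified above; note that this Path interface is particular to this draft, so support and naming may differ in shipping implementations, and context is assumed to be a 2D context:
// Build a path from SVG path data: a 100×100 square with its top-left corner at (10,10).
var square = new Path("M 10 10 h 100 v 100 h -100 z");
context.fill(square);   // fill the Path object using the current fill style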
The addPath(b, transform)
method, when invoked on a Path
object a, must run the following steps:
If the Path
object b has no subpaths, abort these
steps.
Create a copy of all the subpaths in b. Let this copy be known as c.
Transform all the coordinates and lines in c by the transform matrix transform, if it is not null.
Let (x, y) be the last point in the last subpath of c.
Add all the subpaths in c to a.
Create a new subpath in a with (x, y) as the only point in the subpath.
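A non-normative sketch of combining two Path objects with addPath(), per the algorithm above; the transform argument is an SVGMatrix or null, and the Path interface itself is specific to this draft:
var a = new Path();
a.rect(0, 0, 50, 50);      // first shape

var b = new Path();
b.rect(100, 0, 50, 50);    // second shape

a.addPath(b, null);        // copy b's subpaths into a, untransformed
context.fill(a);           // fills both rectangles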
The addPathByStrokingPath(b, styles, transform)
method,
when invoked on a Path
object a, must run the following
steps:
If the Path
object b has no subpaths, abort these
steps.
Create a copy of all the subpaths in b. Let this copy be known as c.
Transform all the coordinates and lines in c by transformation matrix transform, if it is not null.
Let a new list of subpaths d be the result of tracing c, using the styles argument for the line styles.
Let (x, y) be the last point in the last subpath of d.
Add all the subpaths in d to a.
Create a new subpath in a with (x, y) as the only point in the subpath.
The addText()
and addPathByStrokingText()
methods each
come in two variants: one rendering text at a given coordinate, and one rendering text along a
given path. In both cases, the methods take a CanvasDrawingStyles
object argument for
the text and (if appropriate) line styles to use, an SVGMatrix
object transform (which can be null), and a maximum width can optionally be provided.
When one of the addText()
and addPathByStrokingText()
variants that take as
argument an (x, y) coordinate is invoked, the method must
run the following algorithm:
Run the text preparation algorithm, passing it text, the
CanvasDrawingStyles
object argument, and, if the maxWidth
argument was provided, that argument. Let glyphs be the result.
Move all the shapes in glyphs to the right by x CSS pixels and down by y CSS pixels.
Let glyph subpaths be a path describing the shapes given in glyphs, with each CSS pixel in the coordinate space of glyphs mapped to one coordinate space unit in glyph subpaths. Subpaths in glyph subpaths must wind clockwise, regardless of how the user agent's font subsystem renders fonts and regardless of how the fonts themselves are defined.
Transform all the coordinates and lines in glyph subpaths by the transformation matrix transform, if it is not null.
If the method is addPathByStrokingText()
, replace glyph subpaths by the result of tracing glyph subpaths, using the CanvasDrawingStyles
object argument for the
line styles.
Let (xfinal, yfinal) be the last point in the last subpath of glyph subpaths.
Add all the subpaths in glyph subpaths to the Path
object.
Create a new subpath in the Path
object with (xfinal, yfinal) as the only point in
the subpath.
When one of the addText()
and addPathByStrokingText()
variants that take as
argument a Path
object is invoked, the method must run the following algorithm:
Let target be the Path
object on which the method was
invoked.
Let path be the Path
object that was provided in the
method's arguments.
Run the text preparation algorithm, passing it text, the
CanvasDrawingStyles
object argument, and, if the maxWidth
argument was provided, that argument. Let glyphs be the resulting array, and
physical alignment be the resulting alignment value.
Let width be the aggregate length of all the subpaths in path, including the distances from the last point of each closed subpath to the first point of that subpath.
Define L to be a linear coordinate line for all the subpaths in path, with additional lines drawn between the last point and the first point of each closed subpath, such that the first point of the first subpath is defined as point 0, and the last point of the last subpath, if the last subpath is not closed, or the second occurrence of the first point of that subpath, if it is closed, is defined as point width.
Let offset be determined according to the appropriate step below:
Move all the shapes in glyphs to the right by offset CSS pixels.
For each glyph glyph in the glyphs array, run these substeps:
Let dx be the x-coordinate of the horizontal center of the bounding box of the shape described by glyph, in CSS pixels.
If dx is negative or greater than width, skip the remainder of these substeps for this glyph.
Recast dx to coordinate space units in path. (This just changes the dimensionality of dx, not its numeric value.)
Find the point p on path (or implied closing lines in path) that corresponds to the position dx on the coordinate line L.
Let θ be the clockwise angle from the positive x-axis to the side of the line that is tangential to path at the point p that is going in the same direction as the line at point p.
Rotate the shape described by glyph clockwise by θ about the point that is at the dx coordinate horizontally and the zero coordinate vertically.
Let (x, y) be the coordinate of the point p.
Move the shape described by glyph to the right by x and down by y.
Let glyph subpaths be a list of subpaths describing the shape given in glyph, with each CSS pixel in the coordinate space of glyph mapped to one coordinate space unit in glyph subpaths. Subpaths in glyph subpaths must wind clockwise, regardless of how the user agent's font subsystem renders fonts and regardless of how the fonts themselves are defined.
Transform all the coordinates and lines in glyph subpaths by the transformation matrix transform, if it is not null.
If the method is addPathByStrokingText()
, replace glyph subpaths by the result of tracing glyph subpaths, using the CanvasDrawingStyles
object argument for
the line styles.
Let (xfinal, yfinal) be the last point in the last subpath of glyph subpaths. (This coordinate is only used if this is the last glyph processed.)
Add all the subpaths in glyph subpaths to target.
Create a new subpath in the Path
object with (xfinal, yfinal) as the only point in
the subpath.
Each CanvasRenderingContext2D
object has a current transformation matrix,
as well as methods (described in this section) to manipulate it. When a
CanvasRenderingContext2D
object is created, its transformation matrix must be
initialized to the identity transform.
The transformation matrix is applied to coordinates when creating the current default
path, and when painting text, shapes, and Path
objects, on
CanvasRenderingContext2D
objects.
Most of the API uses SVGMatrix
objects rather than this API. This API
remains mostly for historical reasons.
The transformations must be performed in reverse order.
For instance, if a scale transformation that doubles the width is applied to the canvas, followed by a rotation transformation that rotates drawing operations by a quarter turn, and a rectangle twice as wide as it is tall is then drawn on the canvas, the actual result will be a square.
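A non-normative sketch of this example (context is assumed to be a 2D context; the translate() call merely keeps the result on the bitmap):
context.translate(100, 100);     // move the origin so the result stays visible
context.scale(2, 1);             // double the width of everything drawn from here on
context.rotate(Math.PI / 2);     // then rotate drawing operations by a quarter turn
context.fillRect(0, 0, 50, 25);  // a rectangle twice as wide as it is tall...
// ...is painted as a 50×50 square: its coordinates are rotated first,
// then scaled, then translated (the transformations apply in reverse order).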
currentTransform [ = value ]
Returns the transformation matrix, as an SVGMatrix object.
Can be set, to change the transformation matrix.
scale(x, y)
Changes the transformation matrix to apply a scaling transformation with the given characteristics.
rotate(angle)
Changes the transformation matrix to apply a rotation transformation with the given characteristics. The angle is in radians.
translate(x, y)
Changes the transformation matrix to apply a translation transformation with the given characteristics.
transform(a, b, c, d, e, f)
Changes the transformation matrix to apply the matrix given by the arguments as described below.
setTransform(a, b, c, d, e, f)
Changes the transformation matrix to the matrix given by the arguments as described below.
resetTransform()
Changes the transformation matrix to the identity transform.
The currentTransform
attribute, on
getting, must return the last object that it was set to. On setting, its value must be changed to
the new value, and the transformation matrix must be updated to match the matrix described by the
new value. When the CanvasRenderingContext2D
object is created, the currentTransform
attribute must be set to a newly
created SVGMatrix
object. When the transformation matrix is mutated by the methods
described in this section, the last SVGMatrix
object to which the attribute has been
set must be mutated in a corresponding fashion.
The scale(x, y)
method must add the scaling transformation described by the
arguments to the transformation matrix. The x argument represents the scale
factor in the horizontal direction and the y argument represents the scale
factor in the vertical direction. The factors are multiples.
The rotate(angle)
method must add the rotation transformation described by the argument to the transformation
matrix. The angle argument represents a clockwise rotation angle expressed in
radians.
The translate(x, y)
method must add the translation transformation described by the
arguments to the transformation matrix. The x argument represents the
translation distance in the horizontal direction and the y argument represents
the translation distance in the vertical direction. The arguments are in coordinate space
units.
The transform(a, b, c, d, e, f)
method must replace the current transformation matrix with the
result of multiplying the current transformation matrix with the matrix described by:
a | c | e |
b | d | f |
0 | 0 | 1 |
The arguments a, b, c, d, e, and f are sometimes called m11, m12, m21, m22, dx, and dy or m11, m21, m12, m22, dx, and dy. Care should be taken in particular with the order of the second and third arguments (b and c) as their order varies from API to API and APIs sometimes use the notation m12/m21 and sometimes m21/m12 for those positions.
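For instance, as a non-normative illustration (context is assumed to be a 2D context, and the numeric values are arbitrary), each of the first two calls below multiplies the current transformation matrix by the same translation matrix:
context.translate(20, 30);
context.transform(1, 0, 0, 1, 20, 30);   // a=1, b=0, c=0, d=1, e=20, f=30

// setTransform() replaces the matrix instead of multiplying onto it:
context.setTransform(1, 0, 0, 1, 0, 0);  // back to the identity transform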
The setTransform(a, b, c, d, e, f)
method must reset the current transform to the identity matrix, and
then invoke the transform(a, b, c, d, e, f)
method with the same arguments.
The resetTransform()
method must
reset the current transform to the identity matrix.
Several methods in the CanvasRenderingContext2D
API take the union type
CanvasImageSource
as an argument.
This union type allows objects implementing any of the following interfaces to be used as image sources:
HTMLImageElement (img elements)
HTMLVideoElement (video elements)
HTMLCanvasElement (canvas elements)
CanvasRenderingContext2D
ImageBitmap
The ImageBitmap
interface can be created from a number of other
image-representing types, including ImageData
.
When a user agent is required to check the usability of the image
argument, where image is a CanvasImageSource
object, the
user agent must run these steps, which return either good, bad, or
aborted:
If the image argument is an HTMLImageElement
object that
is in the broken state, then throw an
InvalidStateError
exception, return aborted, and abort these steps.
If the image argument is an HTMLImageElement
object that
is not fully decodable, or if the image
argument is an HTMLVideoElement
object whose readyState
attribute is either HAVE_NOTHING
or HAVE_METADATA
, then return bad and abort these
steps.
If the image argument is an HTMLImageElement
object with
an intrinsic width or intrinsic height (or both) equal to zero, then return bad and abort
these steps.
If the image argument is an HTMLCanvasElement
object with
either a horizontal dimension or a vertical dimension equal to zero, then return bad and
abort these steps.
Return good.
When a CanvasImageSource
object represents an HTMLImageElement
, the
element's image must be used as the source image.
Specifically, when a CanvasImageSource
object represents an animated image in an
HTMLImageElement
, the user agent must use the default image of the animation (the
one that the format defines is to be used when animation is not supported or is disabled), or, if
there is no such image, the first frame of the animation, when rendering the image for
CanvasRenderingContext2D
APIs.
When a CanvasImageSource
object represents an HTMLVideoElement
, then
the frame at the current playback position when the method with the argument is
invoked must be used as the source image when rendering the image for
CanvasRenderingContext2D
APIs, and the source image's dimensions must be the intrinsic width and intrinsic height of the media resource
(i.e. after any aspect-ratio correction has been applied).
When a CanvasImageSource
object represents an HTMLCanvasElement
, the
element's bitmap must be used as the source image.
When a CanvasImageSource
object represents a CanvasRenderingContext2D
, the
object's scratch bitmap must be used as the source image.
When a CanvasImageSource
object represents an element that is being
rendered and that element has been resized, the original image data of the source image
must be used, not the image as it is rendered (e.g. width
and
height
attributes on the source element have no effect on how
the object is interpreted when rendering the image for CanvasRenderingContext2D
APIs).
When a CanvasImageSource
object represents an ImageBitmap
, the
object's bitmap image data must be used as the source image.
The image argument is not origin-clean if it is an
HTMLImageElement
or HTMLVideoElement
whose origin is not
the same as the entry script's origin,
or if it is an HTMLCanvasElement
whose bitmap's origin-clean flag is false, or if it is a
CanvasRenderingContext2D
object whose scratch bitmap's origin-clean flag is false.
fillStyle [ = value ]
Returns the current style used for filling shapes.
Can be set, to change the fill style.
The style can be either a string containing a CSS color, or a
CanvasGradient
or CanvasPattern
object. Invalid values are ignored.
strokeStyle [ = value ]
Returns the current style used for stroking shapes.
Can be set, to change the stroke style.
The style can be either a string containing a CSS color, or a
CanvasGradient
or CanvasPattern
object. Invalid values are ignored.
The fillStyle
attribute represents the color or style to use inside shapes, and
the strokeStyle
attribute represents the color or style to use for the lines around
the shapes.
Both attributes can be either strings, CanvasGradient
s, or
CanvasPattern
s. On setting, strings must be parsed as CSS <color> values and the color assigned, and
CanvasGradient
and CanvasPattern
objects must be assigned themselves. [CSSCOLOR] If the value is a string but cannot be parsed as a CSS
<color> value, then it must be ignored, and the attribute must retain its previous
value.
If the new value is a CanvasPattern
object that is marked as not origin-clean, then the scratch
bitmap's origin-clean flag must be set to
false.
When set to a CanvasPattern
or
CanvasGradient
object, the assignment is
live, meaning that changes made to the object after the
assignment do affect subsequent stroking or filling of shapes.
On getting, if the value is a color, then the serialization of the color
must be returned. Otherwise, if it is not a color but a
CanvasGradient
or CanvasPattern
, then the
respective object must be returned. (Such objects are opaque and
therefore only useful for assigning to other attributes or for
comparison to other gradients or patterns.)
The serialization of a color for a color value is a string, computed as follows: if
it has alpha equal to 1.0, then the string is a lowercase six-digit hex value, prefixed with a "#"
character (U+0023 NUMBER SIGN), with the first two digits representing the red component, the next
two digits representing the green component, and the last two digits representing the blue
component, the digits being lowercase ASCII hex digits. Otherwise, the color value
has alpha less than 1.0, and the string is the color value in the CSS rgba()
functional-notation format: the literal string rgba
(U+0072 U+0067 U+0062
U+0061) followed by a U+0028 LEFT PARENTHESIS, a base-ten integer in the range 0-255 representing
the red component (using ASCII digits in the shortest form possible), a literal
U+002C COMMA and U+0020 SPACE, an integer for the green component, a comma and a space, an integer
for the blue component, another comma and space, a U+0030 DIGIT ZERO, if the alpha value is
greater than zero then a U+002E FULL STOP (representing the decimal point), if the alpha value is
greater than zero then one or more ASCII digits representing the fractional part of
the alpha, and finally a U+0029
RIGHT PARENTHESIS. User agents must express the fractional part of the alpha value, if any, with
the level of precision necessary for the alpha value, when reparsed, to be interpreted as the same
alpha value.
When the context is created, the fillStyle
and strokeStyle
attributes
must initially have the string value #000000
.
When the value is a color, it must not be affected by the transformation matrix when used to draw on bitmaps.
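A non-normative illustration of the serialization rules above (context is assumed to be a 2D context):
context.fillStyle = 'blue';
context.fillStyle;                    // returns "#0000ff" (alpha is 1.0, so the hex form is used)

context.fillStyle = 'rgba(0, 0, 255, 0.5)';
context.fillStyle;                    // returns "rgba(0, 0, 255, 0.5)" (alpha below 1.0, so the rgba() form is used)

context.fillStyle = 'not a color';    // invalid: ignored, the previous value is retained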
There are two types of gradients, linear gradients and radial
gradients, both represented by objects implementing the opaque
CanvasGradient
interface.
Once a gradient has been created (see below), stops are placed along it to define how the colors are distributed along the gradient. The color of the gradient at each stop is the color specified for that stop. Between each such stop, the colors and the alpha component must be linearly interpolated over the RGBA space without premultiplying the alpha value to find the color to use at that offset. Before the first stop, the color must be the color of the first stop. After the last stop, the color must be the color of the last stop. When there are no stops, the gradient is transparent black.
addColorStop(offset, color)
Adds a color stop with the given color to the gradient at the given offset. 0.0 is the offset at one end of the gradient, 1.0 is the offset at the other end.
Throws an IndexSizeError
exception if the offset is out of range. Throws a
SyntaxError
exception if the color cannot be parsed.
createLinearGradient(x0, y0, x1, y1)
Returns a CanvasGradient
object that represents a
linear gradient that paints along the line given by the
coordinates represented by the arguments.
createRadialGradient(x0, y0, r0, x1, y1, r1)
Returns a CanvasGradient
object that represents a
radial gradient that paints along the cone given by the circles
represented by the arguments.
If either of the radii are negative, throws an
IndexSizeError
exception.
The addColorStop(offset,
color)
method on the CanvasGradient
interface adds a
new stop to a gradient. If the offset is less than 0 or greater than 1 then an
IndexSizeError
exception must be thrown. If the color cannot be
parsed as a CSS <color> value, then a SyntaxError
exception must
be thrown. Otherwise, the gradient must have a new stop placed, at offset offset relative to the whole gradient, and with the color obtained by parsing color as a CSS <color> value. If multiple stops are added at the same offset
on a gradient, they must be placed in the order added, with the first one closest to the start of
the gradient, and each subsequent one infinitesimally further along towards the end point (in
effect causing all but the first and last stop added at each point to be ignored).
The createLinearGradient(x0, y0, x1, y1)
method takes four arguments that represent the start point (x0, y0) and end point (x1, y1) of the gradient. The method must return a linear CanvasGradient
initialized with the specified line.
Linear gradients must be rendered such that all points on a line perpendicular to the line that crosses the start and end points have the color at the point where those two lines cross (with the colors coming from the interpolation and extrapolation described above). The points in the linear gradient must be transformed as described by the current transformation matrix when rendering.
If x0 = x1 and y0 = y1, then the linear gradient must paint nothing.
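A non-normative sketch using createLinearGradient() and addColorStop() (context is assumed to be a 2D context):
var gradient = context.createLinearGradient(0, 0, 200, 0);  // left-to-right gradient line
gradient.addColorStop(0, 'white');   // color at the start point
gradient.addColorStop(1, 'black');   // color at the end point
context.fillStyle = gradient;
context.fillRect(0, 0, 200, 100);    // filled with a horizontal white-to-black fade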
The createRadialGradient(x0, y0, r0,
x1, y1, r1)
method takes six arguments, the
first three representing the start circle with origin (x0, y0) and radius r0, and the last three representing the end circle
with origin (x1, y1) and
radius r1. The values are in coordinate space
units. If either of r0 or r1
are negative, an IndexSizeError
exception must be
thrown. Otherwise, the method must return a radial
CanvasGradient
initialized with the two specified
circles.
Radial gradients must be rendered by following these steps:
If x0 = x1 and y0 = y1 and r0 = r1, then the radial gradient must paint nothing. Abort these steps.
Let x(ω) = (x1-x0)ω + x0
Let y(ω) = (y1-y0)ω + y0
Let r(ω) = (r1-r0)ω + r0
Let the color at ω be the color at that position on the gradient (with the colors coming from the interpolation and extrapolation described above).
For all values of ω where r(ω) > 0, starting with the value of ω nearest to positive infinity and ending with the value of ω nearest to negative infinity, draw the circumference of the circle with radius r(ω) at position (x(ω), y(ω)), with the color at ω, but only painting on the parts of the bitmap that have not yet been painted on by earlier circles in this step for this rendering of the gradient.
This effectively creates a cone, touched by the two circles defined in the creation of the gradient, with the part of the cone before the start circle (0.0) using the color of the first offset, the part of the cone after the end circle (1.0) using the color of the last offset, and areas outside the cone untouched by the gradient (transparent black).
The resulting radial gradient must then be transformed as described by the current transformation matrix when rendering.
Gradients must be painted only where the relevant stroking or filling effect requires that they be drawn.
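A non-normative sketch using createRadialGradient(); the two circles here are concentric, giving a simple circular fade, and context is assumed to be a 2D context:
var spot = context.createRadialGradient(100, 100, 10, 100, 100, 80);
spot.addColorStop(0, 'yellow');                // inside the start circle (radius 10)
spot.addColorStop(1, 'rgba(255, 255, 0, 0)');  // transparent at and beyond the end circle (radius 80)
context.fillStyle = spot;
context.fillRect(0, 0, 200, 200);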
Patterns are represented by objects implementing the opaque
CanvasPattern
interface.
createPattern(image, repetition)
Returns a CanvasPattern
object that uses the given image
and repeats in the direction(s) given by the repetition argument.
The allowed values for repetition are repeat
(both directions), repeat-x
(horizontal only), repeat-y
(vertical only), and no-repeat
(neither). If the repetition argument is empty, the value repeat
is used.
If the image isn't yet fully decoded, then nothing is drawn. If the image is a canvas with no data, throws an InvalidStateError
exception.
setTransform(transform)
Sets the transformation matrix that will be used when rendering the pattern during a fill or stroke painting operation.
To create objects of this type, the createPattern(image, repetition)
method is used. When the method is invoked, the user agent
must run the following steps:
Let image be the first argument and repetition be the second argument.
Check the usability of the image argument. If this returns aborted, then an exception has been thrown and the method doesn't return anything; abort these steps. If it returns bad, then return null and abort these steps. Otherwise it returns good; continue with these steps.
If repetition is the empty string, let it be "repeat
".
If repetition is not a case-sensitive match for one of
"repeat
", "repeat-x
", "repeat-y
", or "no-repeat
", throw a SyntaxError
exception and abort these steps.
Create a new CanvasPattern
object with the image image
and the repetition behavior given by repetition.
If the image argument is not origin-clean, then mark the
CanvasPattern
object as not
origin-clean.
Return the CanvasPattern
object.
Modifying the image used when creating a CanvasPattern
object
after calling the createPattern()
method must
not affect the pattern(s) rendered by the CanvasPattern
object.
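A non-normative sketch using createPattern(); the img element with id "texture" is an assumption of the example and must already be fully decoded, since (per the usability check above) an unusable image yields null:
var img = document.getElementById('texture');       // assumed: a loaded <img id="texture">
var pattern = context.createPattern(img, 'repeat');
if (pattern) {                                      // null if the image was not usable
  context.fillStyle = pattern;
  context.fillRect(0, 0, 300, 150);
}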
Patterns have a transformation matrix, which controls how the pattern is used when it is painted. Initially, a pattern's transformation matrix must be the identity transform.
When the setTransform()
method
is invoked on the pattern, the user agent must replace the pattern's transformation matrix with
the one described by the SVGMatrix
object provided as an argument to the method.
When a pattern is to be rendered within an area, the user agent must run the following steps to determine what is rendered:
Create an infinite transparent black bitmap.
Place a copy of the image on the bitmap, anchored such that its top left corner is at the
origin of the coordinate space, with one coordinate space unit per CSS pixel of the image, then
place repeated copies of this image horizontally to the left and right, if the repetition
behavior is "repeat-x
", or vertically up and down, if the repetition
behavior is "repeat-y
", or in all four directions all over the bitmap, if
the repetition behavior is "repeat
".
If the original image data is a bitmap image, the value painted at a point in the area of the
repetitions is computed by filtering the original image data. When scaling up, if the imageSmoothingEnabled
attribute is set to
false, the image must be rendered using nearest-neighbor interpolation. Otherwise, the user agent
may use any filtering algorithm (for example bilinear interpolation or nearest-neighbor). When
such a filtering
algorithm requires a pixel value from outside the original image data, it must instead use the
value from wrapping the pixel's coordinates to the original image's dimensions. (That is, the
filter uses 'repeat' behavior, regardless of the value of the pattern's repetition behavior.)
Transform the resulting bitmap according to the pattern's transformation matrix.
Transform the resulting bitmap again, this time according to the current transformation matrix.
Replace any part of the image outside the area in which the pattern is to be rendered with transparent black.
The resulting bitmap is what is to be rendered, with the same origin and same scale.
If a radial gradient or repeated pattern is used when the transformation matrix is singular, the resulting style must be transparent black (otherwise the gradient or pattern would be collapsed to a point or line, leaving the other pixels undefined). Linear gradients and solid colors always define all points even with singular transformation matrices.
There are three methods that immediately draw rectangles to the bitmap. They each take four arguments; the first two give the x and y coordinates of the top left of the rectangle, and the second two give the width w and height h of the rectangle, respectively.
The current transformation matrix must be applied to the following four coordinates, which form the path that must then be closed to get the specified rectangle: (x, y), (x+w, y), (x+w, y+h), (x, y+h).
Shapes are painted without affecting the current default
path, and are subject to the clipping region,
and, with the exception of clearRect()
, also shadow effects, global alpha, and global composition
operators.
clearRect(x, y, w, h)
Clears all pixels on the bitmap in the given rectangle to transparent black.
fillRect(x, y, w, h)
Paints the given rectangle onto the bitmap, using the current fill style.
strokeRect(x, y, w, h)
Paints the box that outlines the given rectangle onto the bitmap, using the current stroke style.
The clearRect(x, y, w, h)
method must run the following steps:
Let pixels be the set of pixels in the specified rectangle that also intersect the current clipping region.
Clear the pixels in pixels to a fully transparent black, erasing any previous image.
Clear regions that cover the pixels in pixels on the scratch bitmap.
If either height or width are zero, this method has no effect, since the set of pixels would be empty.
The fillRect(x, y, w, h)
method must paint the specified
rectangular area using the fillStyle
. If either height
or width are zero, this method has no effect.
The strokeRect(x, y, w, h)
method must take the result of tracing the path described below, using
the CanvasRenderingContext2D
object's line styles, and
fill it with the strokeStyle
.
If both w and h are zero, the path has a single subpath with just one point (x, y), and no lines, and this method thus has no effect (the trace a path algorithm returns an empty path in that case).
If just one of either w or h is zero, then the path has a single subpath consisting of two points, with coordinates (x, y) and (x+w, y+h), in that order, connected by a single straight line.
Otherwise, the path has a single subpath consisting of four points, with coordinates (x, y), (x+w, y), (x+w, y+h), and (x, y+h), connected to each other in that order by straight lines.
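A non-normative sketch of the three rectangle methods (context is assumed to be a 2D context):
context.fillStyle = 'green';
context.fillRect(10, 10, 100, 60);     // solid green rectangle
context.strokeStyle = 'black';
context.strokeRect(10, 10, 100, 60);   // outline traced with the current line styles
context.clearRect(30, 30, 20, 20);     // clear a transparent black hole in the middle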
fillText(text, x, y [, maxWidth ])
strokeText(text, x, y [, maxWidth ])
Fills or strokes (respectively) the given text at the given position. If a maximum width is provided, the text will be scaled to fit that width if necessary.
measureText(text)
Returns a TextMetrics
object with the metrics of the given text in the current font.
width
actualBoundingBoxLeft
actualBoundingBoxRight
fontBoundingBoxAscent
fontBoundingBoxDescent
actualBoundingBoxAscent
actualBoundingBoxDescent
emHeightAscent
emHeightDescent
hangingBaseline
alphabeticBaseline
ideographicBaseline
Returns the measurement described below.
The CanvasRenderingContext2D
interface provides the following methods for
rendering text.
The fillText()
and
strokeText()
methods take three or four arguments, text, x, y, and optionally maxWidth, and render the given text at the given (x, y) coordinates ensuring that the text isn't wider
than maxWidth if specified, using the current
font
, textAlign
, and textBaseline
values. Specifically, when the methods are called, the user agent
must run the following steps:
Run the text preparation algorithm, passing it
text, the CanvasRenderingContext2D
object, and, if the maxWidth argument was
provided, that argument. Let glyphs be the
result.
Move all the shapes in glyphs to the right by x CSS pixels and down by y CSS pixels.
Paint the shapes given in glyphs, as transformed by the current transformation matrix, with each CSS pixel in the coordinate space of glyphs mapped to one coordinate space unit.
For fillText()
,
fillStyle
must be
applied to the shapes and strokeStyle
must be
ignored. For strokeText()
, the reverse
holds: strokeStyle
must be applied to the result of tracing the shapes using the
CanvasRenderingContext2D
object for the line styles,
and fillStyle
must
be ignored.
These shapes are painted without affecting the current path, and are subject to shadow effects, global alpha, the clipping region, and global composition operators.
If the text preparation algorithm used a font that has an origin that is not the same as the entry script's origin (even if "using a font" means just checking if that font has a particular glyph in it before falling back to another font), then set the scratch bitmap's origin-clean flag to false.
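A non-normative sketch of text rendering (context is assumed to be a 2D context):
context.font = '24px sans-serif';
context.textAlign = 'center';
context.textBaseline = 'middle';
context.fillText('Hello', 100, 50);            // painted with fillStyle
context.strokeText('Hello', 100, 100);         // outlines traced with the line styles
context.fillText('Constrained', 100, 150, 60); // scaled down to fit 60 CSS pixels if necessary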
The measureText()
method takes one
argument, text. When the method is invoked, the user agent must run the text preparation algorithm, passing it text and the CanvasRenderingContext2D object, and then create
a new TextMetrics object with its attributes set as described in the following list.
If doing these measurements requires using a font that has an
origin that is not the same as that of the Document
object that
owns the canvas
element (even if "using a font" means
just checking if that font has a particular glyph in it before
falling back to another font), then the method must throw a
SecurityError
exception.
Otherwise, it must return the new TextMetrics
object.
[CSS]
width attribute
The width of that inline box, in CSS pixels. (The text's advance width.)
actualBoundingBoxLeft attribute
The distance parallel to the baseline from the alignment point given by the textAlign
attribute to the left side of the bounding rectangle of the given text, in CSS pixels; positive numbers indicating a distance going left from the given alignment point.
The sum of this value and the next (actualBoundingBoxRight
) can be wider than the width of the inline box (width
), in particular with slanted fonts where characters overhang their advance width.
actualBoundingBoxRight attribute
The distance parallel to the baseline from the alignment point given by the textAlign
attribute to the right side of the bounding rectangle of the given text, in CSS pixels; positive numbers indicating a distance going right from the given alignment point.
fontBoundingBoxAscent attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the top of the highest bounding rectangle of all the fonts used to render the text, in CSS pixels; positive numbers indicating a distance going up from the given baseline.
This value and the next are useful when rendering a background that must have a consistent height even if the exact text being rendered changes. The actualBoundingBoxAscent
attribute (and its corresponding attribute for the descent) are useful when drawing a bounding box around specific text.
fontBoundingBoxDescent attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the bottom of the lowest bounding rectangle of all the fonts used to render the text, in CSS pixels; positive numbers indicating a distance going down from the given baseline.
actualBoundingBoxAscent attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the top of the bounding rectangle of the given text, in CSS pixels; positive numbers indicating a distance going up from the given baseline.
This number can vary greatly based on the input text, even if the first font specified covers all the characters in the input. For example, the actualBoundingBoxAscent
of a lowercase "o" from an alphabetic baseline would be less than that of an uppercase "F". The value can easily be negative; for example, the distance from the top of the em box (textBaseline
value "top
") to the top of the bounding rectangle when the given text is just a single comma ",
" would likely (unless the font is quite unusual) be negative.
actualBoundingBoxDescent attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the bottom of the bounding rectangle of the given text, in CSS pixels; positive numbers indicating a distance going down from the given baseline.
emHeightAscent attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the top of the em square in the line box, in CSS pixels; positive numbers indicating that the given baseline is below the top of the em square (so this value will usually be positive). Zero if the given baseline is the top of the em square; half the font size if the given baseline is the middle of the em square.
emHeightDescent attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the bottom of the em square in the line box, in CSS pixels; positive numbers indicating that the given baseline is below the bottom of the em square (so this value will usually be negative). (Zero if the given baseline is the bottom of the em square.)
hangingBaseline attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the hanging baseline of the line box, in CSS pixels; positive numbers indicating that the given baseline is below the hanging baseline. (Zero if the given baseline is the hanging baseline.)
alphabeticBaseline attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the alphabetic baseline of the line box, in CSS pixels; positive numbers indicating that the given baseline is below the alphabetic baseline. (Zero if the given baseline is the alphabetic baseline.)
ideographicBaseline attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the ideographic baseline of the line box, in CSS pixels; positive numbers indicating that the given baseline is below the ideographic baseline. (Zero if the given baseline is the ideographic baseline.)
Glyphs rendered using fillText()
and strokeText()
can spill out
of the box given by the font size (the em square size) and the width
returned by measureText()
(the text
width). Authors are encouraged to use the bounding box values
described above if this is an issue.
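For example, the following non-normative sketch uses the width measurement to paint a background box behind a string (the bounding-box attributes described above could be used instead for a tighter fit, where supported); context is assumed to be a 2D context:
var text = 'Measured label';
context.font = '16px sans-serif';
var metrics = context.measureText(text);
var padding = 4;
context.fillStyle = 'silver';
// approximate the text height with the font size (16 CSS pixels)
context.fillRect(20 - padding, 40 - 16 - padding, metrics.width + 2 * padding, 16 + 2 * padding);
context.fillStyle = 'black';
context.textBaseline = 'alphabetic';
context.fillText(text, 20, 40);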
A future version of the 2D context API may provide a way to render fragments of documents, rendered using CSS, straight to the canvas. This would be provided in preference to a dedicated way of doing multiline layout.
The context always has a current default path. There is only one current default path, it is not part of the drawing state. The current default path is a path, as described above.
beginPath()
Resets the current default path.
fill()
fill(path)
Fills the subpaths of the current default path or the given path with the current fill style.
stroke()
stroke(path)
Strokes the subpaths of the current default path or the given path with the current stroke style.
drawSystemFocusRing(element)
drawSystemFocusRing(path, element)
If the given element is focused, draws a focus ring around the current default path or the given path, following the platform conventions for focus rings.
drawCustomFocusRing(element)
drawCustomFocusRing(path, element)
If the given element is focused, and the user has configured his system to draw focus rings in a particular manner (for example, high contrast focus rings), draws a focus ring around the current default path or the given path and returns false.
Otherwise, returns true if the given element is focused, and false otherwise. This can thus be used to determine when to draw a focus ring (see the example below).
scrollPathIntoView()
scrollPathIntoView(path)
Scrolls the current default path or the given path into view. This is especially useful on devices with small screens, where the whole canvas might not be visible at once.
clip()
clip(path)
Further constrains the clipping region to the current default path or the given path.
resetClip()
Unconstrains the clipping region.
isPointInPath(x, y)
isPointInPath(path, x, y)
Returns true if the given point is in the current default path or the given path.
isPointInStroke(x, y)
isPointInStroke(path, x, y)
Returns true if the given point would be in the region covered by the stroke of the current default path or the given path, given the current stroke style.
The beginPath()
method must empty the list of subpaths in the context's
current default path so that it once again has zero
subpaths.
Where the following method definitions use the term intended
path, it means the Path
argument, if one was
provided, or the current default path otherwise.
When the intended path is a Path
object, the
coordinates and lines of its subpaths must be transformed according
to the CanvasRenderingContext2D
object's current transformation
matrix when used by these methods (without affecting the
Path
object itself). When the intended path is the
current default path, it is not affected by the
transform. (This is because transformations already affect the
current default path when it is constructed, so
applying it when it is painted as well would result in a double
transformation.)
The fill()
method must fill all the subpaths of the intended path, using fillStyle
, and using the
fill rule indicated by the fillRule argument. Open subpaths must be implicitly
closed when being filled (without affecting the actual
subpaths).
The stroke()
method
must trace the intended path,
using the CanvasRenderingContext2D
object for the line
styles, and then fill the resulting path using the strokeStyle
attribute,
using the non-zero winding rule.
As a result of how the algorithm to trace a path is defined, overlapping parts of the paths in one stroke operation are treated as if their union was what was painted.
The stroke style is affected by the transformation during painting, even if the intended path is the current default path.
Paths, when filled or stroked, must be painted without affecting
the current default path or any Path
objects, and must be subject to shadow
effects, global
alpha, the clipping region, and global composition
operators. (The effect of transformations is described above
and varies based on which path is being used.)
Zero-length line segments must be pruned before stroking a path. Subpaths with just one point must be ignored.
The drawSystemFocusRing(element)
method, when invoked, must run
the following steps:
If element is not focused or is not a descendant of the element with whose context the method is associated, then abort these steps.
If the user has requested the use of particular focus rings (e.g. high-contrast focus rings), or if the element would have a focus ring drawn around it, then draw a focus ring of the appropriate style along the intended path, following platform conventions.
Some platforms only draw focus rings around
elements that have been focused from the keyboard, and not those
focused from the mouse. Other platforms simply don't draw focus
rings around some elements at all unless relevant accessibility
features are enabled. This API is intended to follow these
conventions. User agents that implement distinctions based on the
manner in which the element was focused are encouraged to classify
focus driven by the focus()
method
based on the kind of user interaction event from which the call
was triggered (if any).
The focus ring should not be subject to the shadow effects, the global alpha, the
global
composition operators, the fillStyle
attribute, the strokeStyle
attribute, or any of the CanvasDrawingStyles
members,
but should be subject to
the clipping region. (The effect of transformations
is described above and varies based on which path is being
used.)
Optionally, run the appropriate step from the following list:
If the CanvasRenderingContext2D
object's context bitmap mode is fixed
Inform the user that the focus is at the location given by the intended path. User agents may wait until the next time the event loop reaches its "update the rendering" step to optionally inform the user.
Otherwise
Add instructions to the scratch bitmap's list of pending interface actions that inform the user that the focus is at the location of the bitmap given by the intended path.
The drawCustomFocusRing(element)
method, when invoked, must run
the following steps:
If element is not focused or is not a descendant of the element with whose context the method is associated, then return false and abort these steps.
Let result be true.
If the user has requested the use of particular focus rings (e.g. high-contrast focus rings), then draw a focus ring of the appropriate style along the intended path, and set result to false.
The focus ring should not be subject to the shadow effects, the global alpha, the
global
composition operators, the fillStyle
attribute, the strokeStyle
attribute, or any of the CanvasDrawingStyles
members,
but should be subject to
the clipping region. (The effect of transformations
is described above and varies based on which path is being
used.)
Optionally, run the appropriate step from the following list:
If the CanvasRenderingContext2D
object's context bitmap mode is fixed
Inform the user that the focus is at the location given by the intended path. The user agent may wait until the next time the event loop reaches its "update the rendering" step to optionally inform the user.
Otherwise
Add instructions to the scratch bitmap's list of pending interface actions that inform the user that the focus is at the location of the bitmap given by the intended path.
Return result.
User agents should not implicitly close open subpaths in the intended path when drawing the focus ring.
This might be a moot point, however. For example, if the focus ring is drawn as an axis-aligned bounding rectangle around the points in the intended path, then whether the subpaths are closed or not has no effect. This specification intentionally does not specify precisely how focus rings are to be drawn: user agents are expected to honor their platform's native conventions.
The scrollPathIntoView()
method, when invoked, if the CanvasRenderingContext2D
object's context bitmap mode is fixed, must run the following steps; and otherwise, must add
instructions to the scratch bitmap's list of pending interface actions
that run the following steps:
Let the specified rectangle be the rectangle of the bounding box of the intended path.
Let notional child be a hypothetical element that is a rendered child
of the canvas
element whose dimensions are those of the specified
rectangle.
Scroll notional child into view with the align to top flag set.
Optionally, inform the user that the caret or selection (or both)
cover the specified rectangle of the canvas. If the
CanvasRenderingContext2D
object's context bitmap mode was fixed when the method was invoked, the user agent may wait
until the next time the event loop reaches its "update the rendering" step to
optionally inform the user.
"Inform the user", as used in this section, could mean calling a system accessibility API, which would notify assistive technologies such as magnification tools. To properly drive magnification based on a focus change, a system accessibility API driving a screen magnifier needs the bounds for the newly focused object. The methods above are intended to enable this by allowing the user agent to report the bounding box of the path used to render the focus ring as the bounds of the element element passed as an argument, if that element is focused, and the bounding box of the area to which the user agent is scrolling as the bounding box of the current selection.
The clip()
method must create a new
clipping region by calculating the intersection of the current clipping region and the
area described by the intended path, using the fill rule indicated by the fillRule argument. Open subpaths must
be implicitly closed when computing the clipping region, without affecting the actual subpaths.
The new clipping region replaces the current clipping region.
When the context is initialized, the clipping region must be set to the largest infinite surface (i.e. by default, no clipping occurs).
The resetClip()
method must create a new clipping region that is the largest infinite surface. The new clipping region replaces the
current clipping region.
The isPointInPath()
method must return true if the point given by the x and y coordinates passed to the
method, when treated as coordinates in the canvas coordinate space
unaffected by the current transformation, is inside the intended
path as determined by the fill rule indicated by the fillRule argument; and must
return false otherwise. Points on the path itself must be considered
to be inside the path. If either of the arguments is infinite or
NaN, then the method must return false.
The isPointInStroke()
method
must return true if the point given by the x and y
coordinates passed to the method, when treated as coordinates in the canvas coordinate space
unaffected by the current transformation, is inside the path that results from tracing the intended path, using the non-zero winding rule, and using the
CanvasRenderingContext2D
object for the line styles; and must return false otherwise.
Points on the resulting path must be considered to be inside the path. If either of the arguments
is infinite or NaN, then the method must return false.
This canvas
element has a couple of checkboxes. The
path-related commands are highlighted:
<canvas height=400 width=750>
 <label><input type=checkbox id=showA> Show As</label>
 <label><input type=checkbox id=showB> Show Bs</label>
 <!-- ... -->
</canvas>
<script>
 function drawCheckbox(context, element, x, y, paint) {
   context.save();
   context.font = '10px sans-serif';
   context.textAlign = 'left';
   context.textBaseline = 'middle';
   var metrics = context.measureText(element.labels[0].textContent);
   if (paint) {
     context.beginPath();
     context.strokeStyle = 'black';
     context.rect(x-5, y-5, 10, 10);
     context.stroke();
     if (element.checked) {
       context.fillStyle = 'black';
       context.fill();
     }
     context.fillText(element.labels[0].textContent, x+5, y);
   }
   context.beginPath();
   context.rect(x-7, y-7, 12 + metrics.width+2, 14);
   if (paint && context.drawCustomFocusRing(element)) {
     context.strokeStyle = 'silver';
     context.stroke();
   }
   context.restore();
 }
 function drawBase() { /* ... */ }
 function drawAs() { /* ... */ }
 function drawBs() { /* ... */ }
 function redraw() {
   var canvas = document.getElementsByTagName('canvas')[0];
   var context = canvas.getContext('2d');
   context.clearRect(0, 0, canvas.width, canvas.height);
   drawCheckbox(context, document.getElementById('showA'), 20, 40, true);
   drawCheckbox(context, document.getElementById('showB'), 20, 60, true);
   drawBase();
   if (document.getElementById('showA').checked)
     drawAs();
   if (document.getElementById('showB').checked)
     drawBs();
 }
 function processClick(event) {
   var canvas = document.getElementsByTagName('canvas')[0];
   var context = canvas.getContext('2d');
   var x = event.clientX;
   var y = event.clientY;
   var node = event.target;
   while (node) {
     x -= node.offsetLeft - node.scrollLeft;
     y -= node.offsetTop - node.scrollTop;
     node = node.offsetParent;
   }
   drawCheckbox(context, document.getElementById('showA'), 20, 40, false);
   if (context.isPointInPath(x, y))
     document.getElementById('showA').checked = !(document.getElementById('showA').checked);
   drawCheckbox(context, document.getElementById('showB'), 20, 60, false);
   if (context.isPointInPath(x, y))
     document.getElementById('showB').checked = !(document.getElementById('showB').checked);
   redraw();
 }
 document.getElementsByTagName('canvas')[0].addEventListener('focus', redraw, true);
 document.getElementsByTagName('canvas')[0].addEventListener('blur', redraw, true);
 document.getElementsByTagName('canvas')[0].addEventListener('change', redraw, true);
 document.getElementsByTagName('canvas')[0].addEventListener('click', processClick, false);
 redraw();
</script>
To draw images, the drawImage
method
can be used.
This method can be invoked with three different sets of arguments:
drawImage(image, dx, dy)
drawImage(image, dx, dy, dw, dh)
drawImage(image, sx, sy, sw, sh, dx, dy, dw, dh)
drawImage(image, dx, dy)
drawImage(image, dx, dy, dw, dh)
drawImage(image, sx, sy, sw, sh, dx, dy, dw, dh)
Draws the given image onto the canvas. The arguments are interpreted as follows:
If the image isn't yet fully decoded, then nothing is drawn. If the image is a canvas with no data, throws an InvalidStateError
exception.
When the drawImage()
method is invoked, the user
agent must run the following steps:
Check the usability of the image argument. If this returns aborted, then an exception has been thrown and the method doesn't return anything; abort these steps. If it returns bad, then abort these steps without drawing anything. Otherwise it returns good; continue with these steps.
Establish the source and destination rectangles as follows:
If not specified, the dw and dh arguments must default to the values of sw and sh, interpreted such that one CSS pixel in the image is treated as one unit in the scratch bitmap's coordinate space. If the sx, sy, sw, and sh arguments are omitted, they must default to 0, 0, the image's intrinsic width in image pixels, and the image's intrinsic height in image pixels, respectively. If the image has no intrinsic dimensions, the concrete object size must be used instead, as determined using the CSS "Concrete Object Size Resolution" algorithm, with the specified size having neither a definite width nor height, nor any additional constraints, the object's intrinsic properties being those of the image argument, and the default object size being the size of the scratch bitmap. [CSSIMAGES]
The source rectangle is the rectangle whose corners are the four points (sx, sy), (sx+sw, sy), (sx+sw, sy+sh), (sx, sy+sh).
The destination rectangle is the rectangle whose corners are the four points (dx, dy), (dx+dw, dy), (dx+dw, dy+dh), (dx, dy+dh).
When the source rectangle is outside the source image, the source rectangle must be clipped to the source image and the destination rectangle must be clipped in the same proportion.
When the destination rectangle is outside the destination image (the scratch bitmap), the pixels that land outside the scratch bitmap are discarded, as if the destination was an infinite canvas whose rendering was clipped to the dimensions of the scratch bitmap.
If one of the sw or sh arguments is zero, abort these steps. Nothing is painted.
Paint the region of the image argument specified by the source rectangle on the region of the rendering context's scratch bitmap specified by the destination rectangle, after applying the current transformation matrix to the destination rectangle.
The image data must be processed in the original direction, even if the dimensions given are negative.
When scaling up, if the imageSmoothingEnabled
attribute is set to true, the user agent should attempt to apply a smoothing algorithm to
the image data when it is scaled. Otherwise, the image must be rendered using nearest-neighbor
interpolation.
This specification does not define the precise algorithm to use when scaling an
image down, or when scaling an image up when the imageSmoothingEnabled
attribute is set to true.
When a canvas
or CanvasRenderingContext2D
object is
drawn onto itself, the drawing model requires the source to be copied before the
image is drawn, so it is possible to copy parts of a canvas
or scratch
bitmap onto overlapping parts of itself.
If the original image data is a bitmap image, the value painted at a point in the destination rectangle is computed by filtering the original image data. The user agent may use any filtering algorithm (for example bilinear interpolation or nearest-neighbor). When the filtering algorithm requires a pixel value from outside the original image data, it must instead use the value from the nearest edge pixel. (That is, the filter uses 'clamp-to-edge' behavior.) When the filtering algorithm requires a pixel value from outside the source rectangle but inside the original image data, then the value from the original image data must be used.
Thus, scaling an image in parts or in whole will have the same effect. This does
mean that when sprites coming from a single sprite sheet are to be scaled, adjacent images in
the sprite sheet can interfere. This can be avoided by ensuring each sprite in the sheet is
surrounded by a border of transparent black, or by copying sprites to be scaled into temporary
canvas
elements and drawing the scaled sprites from there.
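As a rough sketch of the second technique, a sprite can be copied into a temporary canvas element at its natural size and then scaled from there, so that filtering never samples pixels belonging to adjacent sprites (the sprite sheet, coordinates, and sizes below are hypothetical):
function drawScaledSprite(context, sheet, sx, sy, sw, sh, dx, dy, dw, dh) {
  // Copy the sprite into a temporary canvas at its natural size...
  var temp = document.createElement('canvas');
  temp.width = sw;
  temp.height = sh;
  temp.getContext('2d').drawImage(sheet, sx, sy, sw, sh, 0, 0, sw, sh);
  // ...then scale from the temporary canvas, so the filter cannot
  // reach neighboring sprites in the original sheet.
  context.drawImage(temp, 0, 0, sw, sh, dx, dy, dw, dh);
}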
Images are painted without affecting the current path, and are subject to shadow effects, global alpha, the clipping region, and global composition operators.
If the image argument is not origin-clean, set the scratch bitmap's origin-clean flag to false.
A hit region list is a list of hit regions for a bitmap.
Each hit region consists of the following information:
A set of pixels on the bitmap for which this region is responsible.
A bounding circumference on the bitmap that surrounds the hit region's set of pixels as they stood when it was created.
Optionally, a non-empty string representing an ID for distinguishing the region from others.
Optionally, a reference to another region that acts as the parent for this one.
A count of regions that have this one as their parent, known as the hit region's child count.
A cursor specification, in the form
of either a CSS cursor value, or the string "inherit
" meaning that the
cursor of the hit region's parent, if any, or of the canvas
element, if
not, is to be used instead.
Optionally, either a control, or an unbacked region description.
A control is just a reference to an
Element
node, to which, in certain conditions, the user agent will route events,
and from which the user agent will determine the state of the hit region for the purposes of
accessibility tools. (The control is ignored when it is not a descendant of the
canvas
element.)
An unbacked region description consists of the following:
Optionally, a label.
An ARIA role, which, if the unbacked region description also has a label, could be the empty string.
addHitRegion(options)
Adds a hit region to the bitmap. The argument is an object with the following members:
path (default null)
A Path object that describes the pixels that form part of the region. If this member is not provided or is set to null, the current default path is used instead.
fillRule (default "nonzero")
The fill rule to use when determining which pixels of the path are inside the region.
id (default empty string)
The ID for this region. It is reported in MouseEvent events on the canvas (event.region) and serves as a way to reference this region in later calls to addHitRegion().
parentID (default null)
The ID of the region to use as this region's parent, if any.
cursor (default "inherit")
The cursor to use when the mouse hovers over this region. "inherit" means to use the cursor for the parent region (as specified by the parentID member), if any, or to use the canvas element's cursor if the region has no parent.
control (default null)
An element (that is a descendant of the canvas) to which events are to be routed, and which accessibility tools are to use as a surrogate for describing and interacting with this region.
label (default null)
A text label for accessibility tools to use as a description of this region, if there is no control.
role (default null)
An ARIA role for accessibility tools to use to determine how to represent this region, if there is no control.
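As a minimal sketch, assuming a user agent that implements the hit region API described here, a region with an ID and a cursor can be added for the current default path (the ID "ok-button" is hypothetical):
var canvas = document.getElementsByTagName('canvas')[0];
var context = canvas.getContext('2d');
context.beginPath();
context.rect(10, 10, 100, 40);   // becomes the current default path
context.fill();
// No path member is given, so the current default path defines the region's pixels.
context.addHitRegion({ id: 'ok-button', cursor: 'pointer' });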
Hit regions can be used for a variety of purposes:
They can allow clicks on regions of a canvas to automatically submit a form via a button element.
They can allow users to explore a canvas without seeing it, e.g. by touch on a mobile device.
They can allow different parts of a canvas to have different cursors, with the user agent automatically switching between them.
removeHitRegion(id)
Removes a hit region (and all its descendants) from the canvas bitmap. The argument is the ID of a region added using addHitRegion().
The pixels that were covered by this region and its descendants are effectively cleared by this operation, leaving the regions non-interactive. In particular, regions that occupied the same pixels before the removed regions were added, overlapping them, do not resume their previous role.
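For instance, continuing the hypothetical "ok-button" region from the sketch above, the region could later be removed like this:
// The region and all its descendant regions stop being interactive.
context.removeHitRegion('ok-button');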
A hit region A is an ancestor region of a hit region B if B has a parent and its parent is either A or another hit region for which A is an ancestor region.
The region identified by the ID ID in a bitmap bitmap is the value returned by the following algorithm (which can return a hit region or nothing):
If ID is null, return nothing and abort these steps.
Let list be the hit region list associated with bitmap.
If there is a hit region in list whose ID is a case-sensitive match for ID, then return that hit region and abort these steps.
Otherwise, return nothing.
The region representing the control control for a bitmap bitmap is the value returned by the following algorithm (which can return a hit region or nothing):
Let list be the hit region list associated with bitmap.
If there is a hit region in list whose control is control, then return that hit region and abort these steps.
Otherwise, return nothing.
The control represented by a region region for a
canvas
element ancestor is the value returned by the following
algorithm (which can return an element or nothing):
If region has no control, return nothing and abort these steps.
Let control be region's control.
If control is not a descendant of ancestor, then return nothing and abort these steps.
Otherwise, return control.
The cursor for a hit region region of a canvas
element ancestor is the value returned by the following algorithm:
Loop: If region has a cursor specification other than "inherit
", then
return that hit region's cursor specification and abort these steps.
If region has a parent, then let region be that hit region's parent, and return to the step labeled loop.
Otherwise, return the used value of the 'cursor' property for the canvas
element, if any; if there isn't one, return 'auto'. [CSSUI]
The region for a pixel pixel on a bitmap bitmap is the value returned by the following algorithm (which can return a hit region or nothing):
Let list be the hit region list associated with bitmap.
If there is a hit region in list whose set of pixels contains pixel, then return that hit region and abort these steps.
Otherwise, return nothing.
To clear regions that cover the pixels pixels on a bitmap bitmap, the user agent must run the following steps:
Let list be the hit region list associated with bitmap.
Remove all pixels in pixels from the set of pixels of each hit region in list.
Garbage-collect the regions of bitmap.
To garbage-collect the regions of a bitmap bitmap, the user agent must run the following steps:
Let list be the hit region list associated with bitmap.
Loop: Let victim be the first hit region in list to have an empty set of pixels and a zero child count, if any. If there is no such hit region, abort these steps.
If victim has a parent, then decrement that hit region's child count by one.
Remove victim from list.
Jump back to the step labeled loop.
Adding a new region and calling clearRect()
are the two ways this clearing algorithm can
be invoked. The hit region list itself is also reset when the rendering context is
reset, e.g. when a CanvasRenderingContext2D
object is bound to or unbound from a
canvas
, or when the dimensions of the bitmap are changed.
When the addHitRegion()
method is
invoked, the user agent must run the following steps:
Let arguments be the dictionary object provided as the method's argument.
If the arguments object's path
member is not null, let source
path be the path
member's value. Otherwise,
let it be the CanvasRenderingContext2D
object's current default
path.
Transform all the coordinates and lines in source path by the current
transform matrix, if the arguments object's path
member is not null.
Let specified pixels be the pixels contained in source
path, using the fill rule indicated by the fillRule
member.
If the arguments object's id
member is the empty string, let it be null
instead.
If the arguments object's id
member is not null, then let previous
region for this ID be the region identified by the ID given by the id
member's value in this scratch bitmap, if
any. If the id
member is null or no such region
currently exists, let previous region for this ID be null.
If the arguments object's parentID member is the empty string, let it be null instead.
If the arguments object's parentID member is not null, then let parent region be the region identified by the ID given by the parentID member's value in the scratch bitmap, if any. If the parentID member is null or no such region currently exists, let parent region be null.
If the arguments object's label
member is the empty string, let it be null
instead.
If any of the following conditions are met, throw a NotSupportedError
exception
and abort these steps.
The control and label members are both non-null.
The control and role members are both non-null.
The role member's value is the empty string, and the label member's value is either null or the empty string.
The control member is not null but is neither an a element that represents a hyperlink, a button element, an input element whose type attribute is in one of the Checkbox or Radio Button states, nor an input element that is a button.
If the parentID member is not null but parent region is null, then throw a NotFoundError exception and abort these steps.
If any of the following conditions are met, throw a SyntaxError
exception and
abort these steps.
The cursor member is not null but is neither an ASCII case-insensitive match for the string "inherit", nor a valid CSS 'cursor' property value. [CSSUI]
The role member is not null but its value is not an ordered set of unique space-separated tokens whose tokens are all case-sensitive matches for names of non-abstract WAI-ARIA roles. [ARIA]
Let region be a newly created hit region, with its information configured as follows:
Its set of pixels: the specified pixels.
Its bounding circumference: a user-agent-defined shape that wraps the pixels contained in source path. (In the simplest case, this can just be the bounding rectangle; this specification allows it to be any shape in order to allow other interfaces.)
Its ID: if the arguments object's id member is not null, the value of the id member. Otherwise, region has no ID.
Its parent: if parent region is not null, parent region. Otherwise, region has no parent.
Its child count: initially zero.
Its cursor specification: the value of the cursor member.
Its control: if the arguments object's control member is not null, the value of the control member. Otherwise, region has no control.
Its label: if the arguments object's label member is not null, the value of the label member. Otherwise, region has no label.
Its ARIA role: if the arguments object's role member is not null, the value of the role member (which might be the empty string). Otherwise, if the arguments object's label member is not null, the empty string. Otherwise, region has no ARIA role.
If the arguments object's cursor
member is not null, then act as if a CSS rule
for the canvas
element setting its 'cursor' property had been seen, whose value was
the hit region's cursor specification.
For example, if the user agent prefetches cursor values, this would cause that
to happen in response to an appropriately-formed addHitRegion()
call.
If the arguments object's control
member is not null, then let previous region for the control be the region representing the
control given by the control
member's
value for this scratch bitmap, if any. If the control
member is null or no such region currently
exists, let previous region for the control be null.
If there is a previous region with this control, remove it from the scratch bitmap's hit region list; then, if it had a parent region, decrement that hit region's child count by one.
If there is a previous region with this ID, remove it, and all hit regions for which it is an ancestor region, from the scratch bitmap's hit region list; then, if it had a parent region, decrement that hit region's child count by one.
If there is a parent region, increment its hit region's child count by one.
Clear regions that cover the pixels in region's set of pixels on this scratch bitmap.
Add region to the scratch bitmap's hit region list.
If the hit region is interactive or requires the use of WAI-ARIA states or properties (in addition to role or label), authors must not use an unbacked region description (as there is no method to attach WAI-ARIA states or properties to an unbacked region description, and unbacked regions are unable to receive focus and therefore cannot be in a focused state).
In cases where the region is functioning as an interactive control, it must have a control. It is recommended that the control reference an HTML interactive element unless HTML does not provide the desired features, in which case WAI-ARIA roles, states, and properties may be used as allowed in HTML.
Examples of regions where a WAI-ARIA role requires keyboard focus or the use of WAI-ARIA states and properties other than label are a button and a checkbox. A button has role="button" and is required to be focusable using the keyboard. A checkbox has role="checkbox", requires an aria-checked state, and is required to be focusable using the keyboard. [ARIA]
When the removeHitRegion()
method is invoked, the user agent must run the following steps:
Let region be the region identified by the ID given by the method's argument in the rendering context's scratch bitmap. If no such region currently exists, abort these steps.
If the method's argument is the empty string, then no region will match.
Remove region, and all hit regions for which it is an ancestor region, from the rendering context's scratch bitmap's hit region list; then, if it had a parent region, decrement that hit region's child count by one.
Garbage-collect the regions of the rendering context's scratch bitmap.
The MouseEvent
interface is extended to support hit
regions:
partial interface MouseEvent {
  readonly attribute DOMString? region;
};
partial dictionary MouseEventInit {
  DOMString? region;
};
region
If the mouse was over a hit region, then this returns the hit region's ID, if it has one.
Otherwise, returns null.
The region
attribute on MouseEvent
objects must return the value
it was initialized to. When the object is created, this attribute
must be initialized to null. It represents the hit region's
ID if the mouse was over a hit region when the event was
fired.
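As a sketch (again assuming hit region support, and reusing the hypothetical "ok-button" ID from the earlier example), a click handler can use the region attribute to determine which region, if any, received the click:
var canvas = document.getElementsByTagName('canvas')[0];
canvas.addEventListener('click', function (event) {
  if (event.region === 'ok-button') {
    // The click landed on a pixel belonging to the 'ok-button' hit region.
    console.log('OK clicked');
  }
}, false);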
When a MouseEvent
is to be fired at a canvas
element by the user
agent in response to a pointing device action, if the canvas
element has a hit
region list, the user agent must instead follow these steps. If these steps say to act
as normal, that means that the event must be fired as it would have had these requirements not
been applied.
If the pointing device is not indicating a pixel on the
canvas
, act as normal and abort these steps.
Let pixel be the pixel indicated by the pointing device.
Let region be the hit
region that is the
region for the pixel pixel on this
canvas
element's bitmap, if any.
If there is no region, then act as normal and abort these steps.
Let id be the region's ID, if any.
If there is an id, then initialize the
event object's region
attribute to id.
Let control be the control represented by region for this canvas
element, if any.
If there is a control, then target the
event object at control instead of the
canvas
element.
Continue dispatching the event, but with the updated event object and target as given in the above steps.
When a user's pointing device cursor is positioned over a canvas
element, user
agents should render the pointing device cursor according to the cursor specification described by
the cursor for the hit region that is the region for the pixel that the pointing device designates
on the canvas
element's bitmap.
User agents are encouraged to make use of the information present
in a canvas
element's hit region list to
improve the accessibility of canvas
elements.
Each hit region should be handled in a fashion
equivalent to a node in a virtual DOM tree rooted at the
canvas
element. The hierarchy of this virtual DOM tree
must match the hierarchy of the hit
regions, as described by the parent of each region. Regions without a parent must be treated as
children of the canvas
element for the purpose of this
virtual DOM tree. For each node in such a DOM tree, the hit
region's bounding circumference gives the region of the
screen to use when representing the node (if appropriate).
The semantics of a hit region for the purposes of this virtual DOM tree are those of the control represented by the region, if it has one, or else of a non-interactive element whose ARIA role, if any, is that given by the hit region's ARIA role, and whose textual representation, if any, is given by the hit region's label.
For the purposes of accessibility tools, when an element C is a descendant
of a canvas
element and there is a
region representing the control C for that canvas
element's bitmap, then the element's position relative to the document should be presented as if
it was that region in the canvas
element's virtual DOM tree.
Thus, for instance, a user agent on a touch-screen
device could provide haptic feedback when the user crosses over a
hit region's bounding circumference, and then read the
hit region's label to the user. Similarly, a desktop
user agent with a virtual accessibility focus separate from the
keyboard input focus could allow the user to navigate through the
hit regions, using the virtual DOM tree described above to enable
hierarchical navigation. When an interactive control inside the
canvas
element is focused, if the control has a
corresponding region, then that hit region's bounding
circumference could be used to determine what area of the
display to magnify.
createImageData(sw, sh)
Returns an ImageData object with the given dimensions. All the pixels in the returned object are transparent black.
Throws an IndexSizeError exception if either of the width or height arguments is zero.
createImageData(imagedata)
Returns an ImageData object with the same dimensions as the argument. All the pixels in the returned object are transparent black.
createImageDataHD(sw, sh)
Returns an ImageData object whose dimensions equal the dimensions given in the arguments, multiplied by the number of pixels in the canvas bitmap that correspond to each coordinate space unit. All the pixels in the returned object are transparent black.
Throws an IndexSizeError exception if either of the width or height arguments is zero.
getImageData(sx, sy, sw, sh)
Returns an ImageData object containing the image data for the given rectangle of the bitmap.
Throws an IndexSizeError exception if either of the width or height arguments is zero.
The data will be returned with one pixel of image data for each coordinate space unit on the canvas (ignoring transforms).
getImageDataHD(sx, sy, sw, sh)
Returns an ImageData object containing the image data for the given rectangle of the bitmap.
Throws an IndexSizeError exception if either of the width or height arguments is zero.
The data will be returned at the same resolution as the canvas bitmap.
width
height
Returns the actual dimensions of the data in the
ImageData
object, in pixels. For objects returned by
the non-HD variants of the methods in this API, this will
correspond to the dimensions given to the methods. For the HD
variants, the number of pixels might be different than the number
of corresponding coordinate space units.
resolution
Returns the theoretical number of pixels in the ImageData
object's data per
corresponding coordinate space unit. This value is automatically determined from the source
image when the ImageData
object is created. It is only used to ensure that
ImageBitmap
objects have the right pixel density when generated from
ImageData
objects.
data
Returns the one-dimensional array containing the data in RGBA order, as integers in the range 0 to 255.
putImageData(imagedata, dx, dy [, dirtyX, dirtyY, dirtyWidth, dirtyHeight ])
Paints the data from the given ImageData object onto the bitmap. If a dirty rectangle is provided, only the pixels from that rectangle are painted.
The globalAlpha
and globalCompositeOperation
attributes, as well as the shadow attributes, are ignored for the
purposes of this method call; pixels in the canvas are replaced
wholesale, with no composition, no alpha blending, no shadows,
etc.
Throws a NotSupportedError
exception if any of the
arguments are not finite.
Each pixel in the image data is mapped to one coordinate space unit on the bitmap, regardless
of the value of the resolution
attribute.
putImageDataHD(imagedata, dx, dy [, dirtyX, dirtyY, dirtyWidth, dirtyHeight ])
Paints the data from the given ImageData object onto the bitmap, at the bitmap's native pixel density (regardless of the value of the ImageData object's resolution attribute). If a dirty rectangle is provided, only the pixels from that rectangle are painted.
The globalAlpha
and globalCompositeOperation
attributes, as well as the shadow attributes, are ignored for the
purposes of this method call; pixels in the canvas are replaced
wholesale, with no composition, no alpha blending, no shadows,
etc.
Throws a NotSupportedError
exception if any of the
arguments are not finite.
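For example, the following sketch reads the pixels of a canvas, inverts the red channel, and writes back only a 16×16 dirty rectangle of the result (the coordinates and sizes are hypothetical):
var canvas = document.getElementsByTagName('canvas')[0];
var context = canvas.getContext('2d');
var imagedata = context.getImageData(0, 0, canvas.width, canvas.height);
for (var i = 0; i < imagedata.data.length; i += 4)
  imagedata.data[i] = 255 - imagedata.data[i]; // invert the red component of each pixel
// Only the 16×16 dirty rectangle whose top left is at (32, 32) in the
// ImageData is written back; the rest of the ImageData is ignored.
context.putImageData(imagedata, 0, 0, 32, 32, 16, 16);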
The createImageData()
and
createImageDataHD()
methods are used to instantiate new blank ImageData
objects.
When the createImageData()
method is invoked with two arguments sw and sh, it must
return a new ImageData
object representing a rectangle with a width equal to the
absolute magnitude of sw and a height equal to the absolute magnitude of sh, and with a 1.0 pixel density, if both sw and sh
are non-zero. If one or both of sw and sh are zero, then the
method must throw an IndexSizeError
exception instead. When invoked with a single imagedata argument, it must return a new ImageData
object representing
a rectangle with the same dimensions and pixel density as the ImageData
object passed
as the argument. When the method returns an ImageData
object, it must be filled with
transparent black.
When the createImageDataHD()
method is invoked and both of its arguments (sw and sh) are non-zero, it
must return a new ImageData
object representing a rectangle with a width equal to
the absolute magnitude of sw multiplied by scale, a height
equal to the absolute magnitude of sh multiplied by scale,
and a pixel density equal to scale, where scale is the
number of pixels in the scratch bitmap per coordinate space unit. The
ImageData
object returned must be filled with transparent black. If either or both
of the arguments are zero, the method must instead throw an IndexSizeError
exception.
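As an illustration, on a user agent whose scratch bitmap has two device pixels per coordinate space unit, the two methods would report different dimensions for the same arguments (a sketch; the actual density is implementation-dependent):
var context = document.getElementsByTagName('canvas')[0].getContext('2d');
var normal = context.createImageData(100, 50);
var hd = context.createImageDataHD(100, 50);
// normal.width is 100 and normal.height is 50, with a pixel density of 1.0.
// On a bitmap with two device pixels per coordinate space unit (an assumption),
// hd.width would be 200 and hd.height 100, with a resolution of 2.0.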
The getImageData(sx, sy, sw, sh)
method must, if
either the sw or sh arguments are zero, throw an
IndexSizeError
exception; otherwise,
if the scratch bitmap's origin-clean
flag is set to false, it must throw a SecurityError
exception;
otherwise, it must return an ImageData
object with width sw and
height sh representing the scratch bitmap for the area of that
bitmap denoted by the rectangle whose corners are the four points (sx, sy), (sx+sw, sy), (sx+sw, sy+sh), (sx, sy+sh), in the bitmap's coordinate space
units. If the bitmap does not represent each coordinate space unit square using exactly one pixel,
the value of each pixel in the returned object must be derived from the value(s) of the pixel(s)
in the bitmap that correspond to the same coordinate. Pixels outside the scratch
bitmap must be returned as transparent black. Pixels must be returned as non-premultiplied
alpha values. The pixel density of the object returned must be 1.0.
The getImageDataHD(sx,
sy, sw, sh)
method must,
if either the sw or sh arguments are zero, throw an
IndexSizeError
exception; otherwise,
if the scratch bitmap's origin-clean
flag is set to false, it must throw a SecurityError
exception;
otherwise, it must return an ImageData
object with width sw
multiplied by scale and height sh multiplied by scale representing the scratch bitmap for the area of that bitmap
denoted by the rectangle whose corners are the four points (sx, sy), (sx+sw, sy), (sx+sw, sy+sh), (sx, sy+sh), in the bitmap's coordinate space
units. Pixels outside the scratch bitmap must be returned as transparent black.
Pixels must be returned as non-premultiplied alpha values. The pixel density of the object
returned must be scale. For the purposes of this paragraph, scale is the number of pixels in the scratch bitmap per coordinate space unit.
New ImageData
objects must be initialized so that
their width
attribute is set to the number of pixels per row in the image data,
their height
attribute is set to the number of rows in the image data,
their resolution
is set to the object's pixel density, and their
data
attribute is
initialized to a Uint8ClampedArray
object. The
Uint8ClampedArray
object must use a Canvas Pixel
ArrayBuffer
for its storage, and must have a
zero start offset and a length equal to the length of its storage,
in bytes. The Canvas Pixel ArrayBuffer
must contain the image data. At least one pixel's worth of image
data must be returned. [TYPEDARRAY]
A Canvas Pixel ArrayBuffer
is an
ArrayBuffer
whose data is represented in
left-to-right order, row by row top to bottom, starting with the top
left, with each pixel's red, green, blue, and alpha components being
given in that order for each pixel. Each component of each pixel
represented in this array must be in the range 0..255, representing
the 8 bit value for that component. The components must be assigned
consecutive indices starting with 0 for the top left pixel's red
component. [TYPEDARRAY]
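Given that layout, the index of a component can be computed from a pixel's coordinates; for example, this helper (a sketch) reads the RGBA values of the pixel at (x, y) in an ImageData object:
function getPixel(imagedata, x, y) {
  // Four consecutive bytes per pixel, rows stored left to right, top to bottom.
  var index = (y * imagedata.width + x) * 4;
  return {
    r: imagedata.data[index],
    g: imagedata.data[index + 1],
    b: imagedata.data[index + 2],
    a: imagedata.data[index + 3]
  };
}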
The putImageData()
and putImageDataHD()
methods write data from
ImageData
structures back to the rendering context's scratch bitmap.
Their arguments are: imagedata, dx, dy, dirtyX, dirtyY, dirtyWidth, and dirtyHeight.
When the last four arguments to these methods are omitted, they
must be assumed to have the values 0, 0, the width
member of the imagedata structure, and the height
member of the imagedata structure, respectively.
When invoked, these methods must act as follows:
If dirtyWidth is negative, let dirtyX be dirtyX+dirtyWidth, and let dirtyWidth be equal to the absolute magnitude of dirtyWidth.
If dirtyHeight is negative, let dirtyY be dirtyY+dirtyHeight, and let dirtyHeight be equal to the absolute magnitude of dirtyHeight.
If dirtyX is negative, let dirtyWidth be dirtyWidth+dirtyX, and let dirtyX be zero.
If dirtyY is negative, let dirtyHeight be dirtyHeight+dirtyY, and let dirtyY be zero.
If dirtyX+dirtyWidth is greater than the width
attribute of the imagedata argument, let dirtyWidth be the value of that width
attribute, minus the
value of dirtyX.
If dirtyY+dirtyHeight is greater than the height
attribute of the imagedata argument, let dirtyHeight be the value of that height
attribute, minus the
value of dirtyY.
If, after those changes, either dirtyWidth or dirtyHeight is negative or zero, stop these steps without affecting any bitmaps.
Run the appropriate steps from the following list:
putImageData()
Draw the region of the image data in the horizontal rectangle whose top left corner is at (dirtyX,dirtyY) and whose bottom right corner is at (dirtyX+dirtyWidth,dirtyY+dirtyHeight) onto the rendering context's scratch bitmap, aligned such that the top left of the rectangle is at coordinate (dx,dy).
If the imageSmoothingEnabled
attribute is set to true, then the user agent should attempt to apply a smoothing algorithm to
the image data if the scratch bitmap does not have exactly one device pixel per
coordinate space unit.
putImageDataHD()
Let dxdevice be the x-coordinate of the device pixel in the scratch bitmap corresponding to the dx coordinate in the scratch bitmap's coordinate space.
Let dydevice be the y-coordinate of the device pixel in the scratch bitmap corresponding to the dy coordinate in the scratch bitmap's coordinate space.
For all integer values of x and y where dirtyX ≤ x < dirtyX+dirtyWidth and dirtyY ≤ y < dirtyY+dirtyHeight, copy the four channels of the pixel with coordinate (x, y) in the imagedata data structure to the pixel with coordinate (dxdevice+x, dydevice+y) in the rendering context's scratch bitmap.
The handling of pixel rounding when the specified coordinates do not exactly map to the device coordinate space is not defined by this specification, except that the following must result in no visible changes to the rendering:
context.putImageDataHD(context.getImageDataHD(x, y, w, h), p, q);
...for any value of x, y, w, and h and where p is the smaller of x and the sum of x and w, and q is the smaller of y and the sum of y and h; and except that the following two calls:
context.createImageData(w, h);
context.getImageData(0, 0, w, h);
...must return ImageData
objects with the same
dimensions as each other, and the following two calls:
context.createImageDataHD(w, h);
context.getImageDataHD(0, 0, w, h);
...must also return ImageData
objects with the same
dimensions as each other, for any value of w and
h in both cases. In other words, while user
agents may round the arguments of these methods so that they map to
device pixel boundaries, any rounding performed must be performed
consistently for all of the methods described in this section.
Due to the lossy nature of converting to and from
premultiplied alpha color values, pixels that have just been set
using putImageDataHD()
might
be returned to an equivalent getImageDataHD()
as
different values.
The current path, transformation matrix, shadow attributes, global alpha, the clipping region, and global composition operator must not affect the methods described in this section.
In the following example, the script generates an
ImageData
object so that it can draw onto it.
// canvas is a reference to a <canvas> element
var context = canvas.getContext('2d');

// create a blank slate
var data = context.createImageDataHD(canvas.width, canvas.height);

// create some plasma
FillPlasma(data, 'green'); // green plasma

// add a cloud to the plasma
AddCloud(data, data.width/2, data.height/2); // put a cloud in the middle

// paint the plasma+cloud on the canvas
context.putImageDataHD(data, 0, 0);

// support methods
function FillPlasma(data, color) { ... }
function AddCloud(data, x, y) { ... }
Here is an example of using getImageDataHD()
and
putImageDataHD()
to implement an edge detection filter.
<!DOCTYPE HTML>
<html>
 <head>
  <title>Edge detection demo</title>
  <script>
   var image = new Image();
   function init() {
     image.onload = demo;
     image.src = "image.jpeg";
   }
   function demo() {
     var canvas = document.getElementsByTagName('canvas')[0];
     var context = canvas.getContext('2d');

     // draw the image onto the canvas
     context.drawImage(image, 0, 0);

     // get the image data to manipulate
     var input = context.getImageDataHD(0, 0, canvas.width, canvas.height);

     // get an empty slate to put the data into
     var output = context.createImageDataHD(canvas.width, canvas.height);

     // alias some variables for convenience
     // notice that we are using input.width and input.height here
     // as they might not be the same as canvas.width and canvas.height
     // (in particular, they might be different on high-res displays)
     var w = input.width, h = input.height;
     var inputData = input.data;
     var outputData = output.data;

     // edge detection
     for (var y = 1; y < h-1; y += 1) {
       for (var x = 1; x < w-1; x += 1) {
         for (var c = 0; c < 3; c += 1) {
           var i = (y*w + x)*4 + c;
           outputData[i] = 127 + -inputData[i - w*4 - 4] -   inputData[i - w*4] - inputData[i - w*4 + 4] +
                                 -inputData[i - 4]       + 8*inputData[i]       - inputData[i + 4] +
                                 -inputData[i + w*4 - 4] -   inputData[i + w*4] - inputData[i + w*4 + 4];
         }
         outputData[(y*w + x)*4 + 3] = 255; // alpha
       }
     }

     // put the image data back after manipulation
     context.putImageDataHD(output, 0, 0);
   }
  </script>
 </head>
 <body onload="init()">
  <canvas></canvas>
 </body>
</html>
globalAlpha [ = value ]
Returns the current alpha value applied to rendering operations.
Can be set, to change the alpha value. Values outside of the range 0.0 .. 1.0 are ignored.
globalCompositeOperation [ = value ]
Returns the current composition operation, from the values defined in the Compositing and Blending specification. [COMPOSITE]
Can be set, to change the composition operation. Unknown values are ignored.
All drawing operations are affected by the global compositing
attributes, globalAlpha
and globalCompositeOperation
.
The globalAlpha
attribute gives an
alpha value that is applied to shapes and images before they are composited onto the scratch
bitmap. The value must be in the range from 0.0 (fully transparent) to 1.0 (no additional
transparency). If an attempt is made to set the attribute to a value outside this range, including
Infinity and Not-a-Number (NaN) values, the attribute must retain its previous value. When the
context is created, the globalAlpha
attribute must
initially have the value 1.0.
The globalCompositeOperation
attribute sets the current composition operator, which controls how shapes and images are drawn
onto the scratch bitmap, once they
have had globalAlpha
and the current
transformation matrix applied. The possible values are those defined in the Compositing and
Blending specification. [COMPOSITE]
These values are all case-sensitive — they must be used exactly as defined. User agents must not recognize values that are not a case-sensitive match for one of the values given in the Compositing and Blending specification. [COMPOSITE]
On setting, if the user agent does not recognize the specified
value, it must be ignored, leaving the value of globalCompositeOperation
unaffected. Otherwise, the attribute must be set to the given new value.
When the context is created, the globalCompositeOperation
attribute must initially have the value
source-over
.
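For example, the following sketch draws a half-transparent red square, then uses the "destination-in" operator from the Compositing and Blending specification so that only the part of the existing drawing covered by the second rectangle is kept (coordinates are hypothetical):
var context = document.getElementsByTagName('canvas')[0].getContext('2d');
context.globalAlpha = 0.5;                        // draw with 50% alpha
context.fillStyle = 'red';
context.fillRect(10, 10, 100, 100);
context.globalCompositeOperation = 'destination-in';
context.fillRect(60, 60, 100, 100);               // keeps only the overlapping part of the red square
context.globalCompositeOperation = 'source-over'; // restore the default operator
context.globalAlpha = 1.0;                        // restore full opacity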
imageSmoothingEnabled [ = value ]
Returns whether pattern fills and the drawImage() method will attempt to smooth images if their pixels don't line up exactly with the display, when scaling images up.
Can be set, to change whether images are smoothed (true) or not (false).
The imageSmoothingEnabled
attribute, on getting, must return the last value it was set to. On setting, it must be set to the
new value. When the CanvasRenderingContext2D
object is created, the attribute must be
set to true.
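For instance, when scaling up pixel art, smoothing is typically turned off so that nearest-neighbor interpolation preserves hard edges (a sketch, assuming a user agent that implements the attribute as described here and an already-loaded 16×16 image named sprite, both hypothetical):
var context = document.getElementsByTagName('canvas')[0].getContext('2d');
context.imageSmoothingEnabled = false;                    // scale using nearest-neighbor interpolation
context.drawImage(sprite, 0, 0, 16, 16, 0, 0, 160, 160);  // 16×16 sprite drawn at ten times its size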
All drawing operations are affected by the four global shadow attributes.
shadowColor [ = value ]
Returns the current shadow color.
Can be set, to change the shadow color. Values that cannot be parsed as CSS colors are ignored.
shadowOffsetX [ = value ]
shadowOffsetY [ = value ]
Returns the current shadow offset.
Can be set, to change the shadow offset. Values that are not finite numbers are ignored.
shadowBlur [ = value ]
Returns the current level of blur applied to shadows.
Can be set, to change the blur level. Values that are not finite numbers greater than or equal to zero are ignored.
The shadowColor
attribute sets the color of the shadow.
When the context is created, the shadowColor
attribute
initially must be fully-transparent black.
On getting, the serialization of the color must be returned.
On setting, the new value must be parsed as a CSS <color> value and the color assigned. If the value cannot be parsed as a CSS <color> value then it must be ignored, and the attribute must retain its previous value. [CSSCOLOR]
The shadowOffsetX
and shadowOffsetY
attributes specify the distance that the shadow will be offset in
the positive horizontal and positive vertical directions
respectively. Their values are in coordinate space units. They are
not affected by the current transformation matrix.
When the context is created, the shadow offset attributes must
initially have the value 0
.
On getting, they must return their current value. On setting, the attribute being set must be set to the new value, except if the value is infinite or NaN, in which case the new value must be ignored.
The shadowBlur
attribute specifies the level of the blurring effect. (The units do
not map to coordinate space units, and are not affected by the
current transformation matrix.)
When the context is created, the shadowBlur
attribute must
initially have the value 0
.
On getting, the attribute must return its current value. On setting the attribute must be set to the new value, except if the value is negative, infinite or NaN, in which case the new value must be ignored.
Shadows are only drawn
if the alpha component of the color
of shadowColor
is
non-zero and either the shadowBlur
is non-zero, or
the shadowOffsetX
is non-zero, or the shadowOffsetY
is
non-zero.
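For example, this sketch draws a rectangle with a soft drop shadow offset down and to the right; per the rule above, the shadow is rendered because shadowColor is not fully transparent and the offsets and blur are non-zero (the colors and coordinates are hypothetical):
var context = document.getElementsByTagName('canvas')[0].getContext('2d');
context.shadowColor = 'rgba(0, 0, 0, 0.5)'; // half-opaque black
context.shadowOffsetX = 4;
context.shadowOffsetY = 4;
context.shadowBlur = 8;
context.fillStyle = 'orange';
context.fillRect(20, 20, 120, 60);          // the fill is drawn together with its shadow
context.shadowColor = 'transparent';        // stop casting shadows for subsequent drawing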
It is likely that this will change: browser vendors have indicated an interest in changing the processing model for shadows such that they only draw when the composition operator is "source-over" (the default). Read more...
When shadows are drawn, they must be rendered as follows:
Let A be an infinite transparent black bitmap on which the source image for which a shadow is being created has been rendered.
Let B be an infinite transparent black bitmap, with a coordinate space and an origin identical to A.
Copy the alpha channel of A to B, offset by shadowOffsetX
in the
positive x direction, and shadowOffsetY
in the
positive y direction.
If shadowBlur
is greater than
0:
Let σ be half the value of
shadowBlur
.
Perform a 2D Gaussian Blur on B, using σ as the standard deviation.
User agents may limit values of σ to an implementation-specific maximum value to avoid exceeding hardware limitations during the Gaussian blur operation.
Set the red, green, and blue components of every pixel in
B to the red, green, and blue components
(respectively) of the color of shadowColor
.
Multiply the alpha component of every pixel in B by the alpha component of the color of shadowColor
.
The shadow is in the bitmap B, and is rendered as part of the drawing model described below.
If the current composition operation is copy
, shadows effectively won't render
(since the shape will overwrite the shadow).
When a shape or image is painted, user agents must follow these steps, in the order given (or act as if they do):
Render the shape or image onto an infinite transparent black bitmap, creating image A, as described in the previous sections. For shapes, the current fill, stroke, and line styles must be honored, and the stroke must itself also be subjected to the current transformation matrix.
When shadows are drawn, render the shadow from image A, using the current shadow styles, creating image B.
When shadows are drawn, multiply the alpha
component of every pixel in B by globalAlpha
.
When shadows are drawn, composite B within the clipping region over the current scratch bitmap using the current composition operator.
Multiply the alpha component of every pixel in A by globalAlpha
.
Composite A within the clipping region over the current scratch bitmap using the current composition operator.
When compositing onto the scratch bitmap, pixels that would fall outside of the scratch bitmap must be discarded.
This section is non-normative.
When a canvas is interactive, authors should include focusable elements in the element's fallback content corresponding to each focusable part of the canvas, as in the example above.
To indicate which focusable part of the canvas is currently
focused, authors should use the drawSystemFocusRing()
method, passing it the element for which a ring is being drawn. This
method only draws the focus ring if the element is focused, so that
it can simply be called whenever drawing the element, without
checking whether the element is focused or not first.
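As a sketch of this pattern (assuming a user agent that implements drawSystemFocusRing() as described in this specification, and a fallback button element inside the canvas with the hypothetical ID "ok"), the drawing code can unconditionally ask for the ring:
var canvas = document.getElementsByTagName('canvas')[0];
var context = canvas.getContext('2d');
var button = document.getElementById('ok'); // fallback <button> inside the canvas
function drawButton() {
  context.beginPath();
  context.rect(10, 10, 100, 40);
  context.fillStyle = 'silver';
  context.fill();
  // Draws a focus ring around the current default path,
  // but only if the fallback button is currently focused.
  context.drawSystemFocusRing(button);
}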
Authors should avoid implementing text editing controls using the
canvas
element. Doing so has a large number of
disadvantages:
This is a huge amount of work, and authors are most strongly
encouraged to avoid doing any of it by instead using the
input
element, the textarea
element, or
the contenteditable
attribute.
This section is non-normative.
Here is an example of a script that uses canvas to draw pretty glowing lines.
<canvas width="800" height="450"></canvas> <script> var context = document.getElementsByTagName('canvas')[0].getContext('2d'); var lastX = context.canvas.width * Math.random(); var lastY = context.canvas.height * Math.random(); var hue = 0; function line() { context.save(); context.translate(context.canvas.width/2, context.canvas.height/2); context.scale(0.9, 0.9); context.translate(-context.canvas.width/2, -context.canvas.height/2); context.beginPath(); context.lineWidth = 5 + Math.random() * 10; context.moveTo(lastX, lastY); lastX = context.canvas.width * Math.random(); lastY = context.canvas.height * Math.random(); context.bezierCurveTo(context.canvas.width * Math.random(), context.canvas.height * Math.random(), context.canvas.width * Math.random(), context.canvas.height * Math.random(), lastX, lastY); hue = hue + 10 * Math.random(); context.strokeStyle = 'hsl(' + hue + ', 50%, 50%)'; context.shadowColor = 'white'; context.shadowBlur = 10; context.stroke(); context.restore(); } setInterval(line, 50); function blank() { context.fillStyle = 'rgba(0,0,0,0.1)'; context.fillRect(0, 0, context.canvas.width, context.canvas.height); } setInterval(blank, 40); </script>
All references are normative unless marked "Non-normative".
XMLHttpRequest, A. van Kesteren. WHATWG.