WebGPU

W3C Working Draft, 5 January 2022

This version:
https://www.w3.org/TR/2022/WD-webgpu-20220105/
Latest published version:
https://www.w3.org/TR/webgpu/
Editor's Draft:
https://gpuweb.github.io/gpuweb/
History:
https://www.w3.org/standards/history/webgpu
Feedback:
public-gpu@w3.org with subject line “[webgpu] … message topic …” (archives)
Editors:
(Mozilla)
(Google)
Former Editor:
(Apple)
Participate:
File an issue (open issues)

Abstract

WebGPU exposes an API for performing operations, such as rendering and computation, on a Graphics Processing Unit.

Status of this document

This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

Feedback and comments on this specification are welcome. GitHub Issues are preferred for discussion on this specification. Alternatively, you can send comments to the GPU for the Web Working Group’s mailing-list, public-gpu@w3.org (archives). This draft highlights some of the pending issues that are still to be discussed in the working group. No decision has been taken on the outcome of these issues including whether they are valid.

This document was published by the GPU for the Web Working Group as a Working Draft using the Recommendation track. This document is intended to become a W3C Recommendation.

Publication as a Working Draft does not imply endorsement by W3C and its Members.

This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 2 November 2021 W3C Process Document.

1. Introduction

This section is non-normative.

Graphics Processing Units, or GPUs for short, have been essential in enabling rich rendering and computational applications in personal computing. WebGPU is an API that exposes the capabilities of GPU hardware for the Web. The API is designed from the ground up to efficiently map to (post-2014) native GPU APIs. WebGPU is not related to WebGL and does not explicitly target OpenGL ES.

WebGPU sees physical GPU hardware as GPUAdapters. It provides a connection to an adapter via GPUDevice, which manages resources, and the device’s GPUQueues, which execute commands. GPUDevice may have its own memory with high-speed access to the processing units. GPUBuffer and GPUTexture are the physical resources backed by GPU memory. GPUCommandBuffer and GPURenderBundle are containers for user-recorded commands. GPUShaderModule contains shader code. The other resources, such as GPUSampler or GPUBindGroup, configure the way physical resources are used by the GPU.

GPUs execute commands encoded in GPUCommandBuffers by feeding data through a pipeline, which is a mix of fixed-function and programmable stages. Programmable stages execute shaders, which are special programs designed to run on GPU hardware. Most of the state of a pipeline is defined by a GPURenderPipeline or a GPUComputePipeline object. The state not included in these pipeline objects is set during encoding with commands, such as beginRenderPass() or setBlendConstant().
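
For example, a minimal sketch (non-normative, and assuming the adapter and device requests succeed) of the objects described above working together: the adapter vends a device, the device creates a command encoder, and the recorded commands are executed by the device’s queue.

const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

const commandEncoder = device.createCommandEncoder();
// ... encode render or compute passes here, using pipeline, buffer,
// and bind group objects created from `device` ...
device.queue.submit([commandEncoder.finish()]);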

2. Malicious use considerations

This section is non-normative. It describes the risks associated with exposing this API on the Web.

2.1. Security

The security requirements for WebGPU are the same as ever for the web, and are likewise non-negotiable. The general approach is to strictly validate all commands before they reach the GPU, ensuring that a page can only work with its own data.

2.1.1. CPU-based undefined behavior

A WebGPU implementation translates the workloads issued by the user into API commands specific to the target platform. Native APIs specify the valid usage for the commands (for example, see vkCreateDescriptorSetLayout) and generally don’t guarantee any outcome if the valid usage rules are not followed. This is called "undefined behavior", and it can be exploited by an attacker to access memory they don’t own, or force the driver to execute arbitrary code.

In order to disallow insecure usage, the range of allowed WebGPU behaviors is defined for any input. An implementation has to validate all the input from the user and only reach the driver with the valid workloads. This document specifies all the error conditions and handling semantics. For example, specifying the same buffer with intersecting ranges in both "source" and "destination" of copyBufferToBuffer() results in GPUCommandEncoder generating an error, and no other operation occurring.

See § 21 Errors & Debugging for more information about error handling.
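
For example, the following sketch (non-normative; device is assumed to be a valid GPUDevice) records a copy whose source and destination ranges overlap within the same buffer. The encoder becomes invalid, finish() generates an error, and no copy reaches the driver.

const buffer = device.createBuffer({
    size: 256,
    usage: GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
});
const encoder = device.createCommandEncoder();
// Source range [0, 192) overlaps destination range [64, 256) in the same buffer.
encoder.copyBufferToBuffer(buffer, 0, buffer, 64, 192);
encoder.finish(); // Generates an error; no other operation occurs.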

2.1.2. GPU-based undefined behavior

WebGPU shaders are executed by the compute units inside GPU hardware. In native APIs, some of the shader instructions may result in undefined behavior on the GPU. In order to address that, the shader instruction set and its defined behaviors are strictly defined by WebGPU. When a shader is provided to createShaderModule(), the WebGPU implementation has to validate it before doing any translation (to platform-specific shaders) or transformation passes.

2.1.3. Uninitialized data

Generally, allocating new memory may expose the leftover data of other applications running on the system. In order to address that, WebGPU conceptually initializes all the resources to zero, although in practice an implementation may skip this step if it sees the developer initializing the contents manually. This includes variables and shared workgroup memory inside shaders.

The precise mechanism of clearing the workgroup memory can differ between platforms. If the native API does not provide facilities to clear it, the WebGPU implementation transforms the compute shader to first do a clear across all invocations, synchronize them, and continue executing developer’s code.
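
For example, in the following sketch (non-normative; device is assumed to be a valid GPUDevice), the freshly created buffer is observed to contain only zeros, regardless of what previously occupied that memory.

const buffer = device.createBuffer({
    size: 16,
    usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
});
await buffer.mapAsync(GPUMapMode.READ);
const contents = new Uint32Array(buffer.getMappedRange());
console.log(contents); // Uint32Array [0, 0, 0, 0]
buffer.unmap();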

2.1.4. Out-of-bounds access in shaders

Shaders can access physical resources either directly (for example, as a "uniform" GPUBufferBinding), or via texture units, which are fixed-function hardware blocks that handle texture coordinate conversions. Validation on the API side can only guarantee that all the inputs to the shader are provided and they have the correct usage and types. The host API side can not guarantee that the data is accessed within bounds if the texture units are not involved.

define the host API distinct from the shader API

In order to prevent the shaders from accessing GPU memory an application doesn’t own, the WebGPU implementation may enable a special mode (called "robust buffer access") in the driver that guarantees that the access is limited to buffer bounds.

Alternatively, an implementation may transform the shader code by inserting manual bounds checks. When this path is taken, the out-of-bound checks only apply to array indexing. They aren’t needed for plain field access of shader structures due to the minBindingSize validation on the host side.

If the shader attempts to load data outside of physical resource bounds, the implementation is allowed to:

  1. return a value at a different location within the resource bounds

  2. return a value vector of "(0, 0, 0, X)" with any "X"

  3. partially discard the draw or dispatch call

If the shader attempts to write data outside of physical resource bounds, the implementation is allowed to:

  1. write the value to a different location within the resource bounds

  2. discard the write operation

  3. partially discard the draw or dispatch call

2.1.5. Invalid data

When uploading floating-point data from CPU to GPU, or generating it on the GPU, we may end up with a binary representation that doesn’t correspond to a valid number, such as infinity or NaN (not-a-number). The GPU behavior in this case is subject to the accuracy of the GPU hardware implementation of the IEEE-754 standard. WebGPU guarantees that introducing invalid floating-point numbers would only affect the results of arithmetic computations and will not have other side effects.

2.1.6. Driver bugs

GPU drivers are subject to bugs like any other software. If a bug occurs, an attacker could possibly exploit the incorrect behavior of the driver to get access to unprivileged data. In order to reduce the risk, the WebGPU working group will coordinate with GPU vendors to integrate the WebGPU Conformance Test Suite (CTS) as part of their driver testing process, like it was done for WebGL. WebGPU implementations are expected to have workarounds for some of the discovered bugs, and disable WebGPU on drivers with known bugs that can’t be worked around.

2.1.7. Timing attacks

WebGPU is designed for multi-threaded use via Web Workers. As such, it is designed not to expose users to modern high-precision timing attacks. Some of the objects, like GPUBuffer or GPUQueue, have shared state which can be accessed simultaneously. This allows race conditions to occur, similar to those of accessing a SharedArrayBuffer from multiple Web Workers, which makes thread scheduling observable.

WebGPU addresses this by limiting the ability to deserialize (or share) objects only to the agents inside the agent cluster, and only if the cross-origin isolated policies are in place. This restriction matches the mitigations against the malicious SharedArrayBuffer use. Similarly, the user agent may also serialize the agents sharing any handles to prevent any concurrency entirely.

In the end, the attack surface for races on shared state in WebGPU will be a small subset of the SharedArrayBuffer attacks.

WebGPU also specifies the "timestamp-query" feature, which provides high precision timing of GPU operations. The feature is optional, and a WebGPU implementation may limit its exposure only to those scenarios that are trusted. Alternatively, the timing query results could be processed by a compute shader and aligned to a lower precision.

2.1.8. Row hammer attacks

Row hammer is a class of attacks that exploit the leaking of states in DRAM cells. It could be used on GPU. WebGPU does not have any specific mitigations in place, and relies on platform-level solutions, such as reduced memory refresh intervals.

2.1.9. Denial of service

WebGPU applications have access to GPU memory and compute units. A WebGPU implementation may limit the GPU memory available to an application in order to keep other applications responsive. For GPU processing time, a WebGPU implementation may set up a "watchdog" timer that makes sure an application doesn’t cause GPU unresponsiveness for more than a few seconds. These measures are similar to those used in WebGL.

2.1.10. Workload identification

WebGPU provides access to constrained global resources shared between different programs (and web pages) running on the same machine. An application can try to indirectly probe how constrained these global resources are, in order to reason about workloads performed by other open web pages, based on the patterns of usage of these shared resources. These issues are generally analogous to issues with JavaScript, such as system memory and CPU execution throughput. WebGPU does not provide any additional mitigations for this.

2.1.10.1. Memory resources

WebGPU exposes fallible allocations from machine-global memory heaps, such as VRAM. This allows for probing the size of the system’s remaining available memory (for a given heap type) by attempting to allocate and watching for allocation failures.

GPUs internally have one or more (typically only two) heaps of memory shared by all running applications. When a heap is depleted, WebGPU would fail to create a resource. This is observable, which may allow a malicious application to guess what heaps are used by other applications, and how much they allocate from them.

2.1.10.2. Computation resources

If one site uses WebGPU at the same time as another, it may observe the increase in time it takes to process some work. For example, if a site constantly submits compute workloads and tracks completion of work on the queue, it may observe that something else also started using the GPU.

A GPU has many parts that can be tested independently, such as the arithmetic units, texture sampling units, atomic units, etc. A malicious application may sense when some of these units are stressed, and attempt to guess the workload of another application by analyzing the stress patterns. This is analogous to the realities of CPU execution of JavaScript.

2.2. Privacy

2.2.1. Machine-specific limits

WebGPU can expose a lot of detail on the underlying GPU architecture and the device geometry. This includes available physical adapters, many limits on the GPU and CPU resources that could be used (such as the maximum texture size), and any optional hardware-specific capabilities that are available.

User agents are not obligated to expose the real hardware limits; they are in full control of how much of the machine’s specifics is exposed. One strategy to reduce fingerprinting is to bin all the target platforms into a small number of buckets. In general, the privacy impact of exposing the hardware limits matches that of WebGL.

The default limits are also deliberately high enough to allow most applications to work without requesting higher limits. All the usage of the API is validated according to the requested limits, so the actual hardware capabilities are not exposed to the users by accident.

2.2.2. Machine-specific artifacts

There are some machine-specific rasterization/precision artifacts and performance differences that can be observed roughly in the same way as in WebGL. This applies to rasterization coverage and patterns, interpolation precision of the varyings between shader stages, compute unit scheduling, and more aspects of execution.

Generally, rasterization and precision fingerprints are identical across most or all of the devices of each vendor. Performance differences are relatively intractable, but also relatively low-signal (as with JS execution performance).

Privacy-critical applications and user agents should utilize software implementations to eliminate such artifacts.

2.2.3. Machine-specific performance

Another factor for differentiating users is measuring the performance of specific operations on the GPU. Even with low-precision timing, repeated execution of an operation can show whether the user’s machine is fast at specific workloads. This is a fairly common vector (present in both WebGL and JavaScript), but it’s also low-signal and relatively intractable to truly normalize.

WebGPU compute pipelines expose access to the GPU unobstructed by fixed-function hardware. This poses an additional risk for unique device fingerprinting. User agents can take steps to dissociate logical GPU invocations from actual compute units to reduce this risk.

2.2.4. User Agent State

This specification doesn’t define any additional user-agent state for an origin. However, it is expected that user agents will have compilation caches for the results of expensive compilations such as GPUShaderModule, GPURenderPipeline, and GPUComputePipeline. These caches are important to improve the loading time of WebGPU applications after the first visit.

For the specification, these caches are indistinguishable from incredibly fast compilation, but for applications it would be easy to measure how long createComputePipelineAsync() takes to resolve. This can leak information across origins (like "did the user access a site with this specific shader"), so user agents should follow the best practices in storage partitioning.

The system’s GPU driver may also have its own cache of compiled shaders and pipelines. User agents may want to disable these when at all possible, or add per-partition data to shaders in ways that will make the GPU driver consider them different.

3. Fundamentals

3.1. Conventions

3.1.1. Dot Syntax

In this specification, the . ("dot") syntax, common in programming languages, is used. The phrasing "Foo.Bar" means "the Bar member of the value (or interface) Foo."

The ?. ("optional chaining") syntax, adopted from JavaScript, is also used. The phrasing "Foo?.Bar" means "if Foo is null or undefined, undefined; otherwise, Foo.Bar".

For example, where buffer is a GPUBuffer, buffer?.[[device]].[[adapter]] means "if buffer is null or undefined, then undefined; otherwise, the [[adapter]] internal slot of the [[device]] internal slot of buffer."

3.1.2. Internal Objects

An internal object is a conceptual, non-exposed WebGPU object. Internal objects track the state of an API object and hold any underlying implementation. If the state of a particular internal object can change in parallel from multiple agents, those changes are always atomic with respect to all agents.

Note: An "agent" refers to a JavaScript "thread" (i.e. main thread, or Web Worker).

3.1.3. WebGPU Interfaces

A WebGPU interface is an exposed interface which encapsulates an internal object. It provides the interface through which the internal object's state is changed.

Any interface which includes GPUObjectBase is a WebGPU interface.

interface mixin GPUObjectBase {
    attribute USVString? label;
};

GPUObjectBase has the following attributes:

label, of type USVString, nullable

A label which can be used by development tools (such as error/warning messages, browser developer tools, or platform debugging utilities) to identify the underlying internal object to the developer. It has no specified format, and therefore cannot be reliably machine-parsed.

In any given situation, the user agent may or may not choose to use this label.

GPUObjectBase has the following internal slots:

[[device]], of type device, readonly

An internal slot holding the device which owns the internal object.

3.1.4. Object Descriptors

An object descriptor holds the information needed to create an object, which is typically done via one of the create* methods of GPUDevice.

dictionary GPUObjectDescriptorBase {
    USVString label;
};

GPUObjectDescriptorBase has the following members:

label, of type USVString

The initial value of GPUObjectBase.label.
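
For example (non-normative), a label provided in a descriptor becomes the initial label of the created object, and may be changed later:

const buffer = device.createBuffer({
    label: 'particle positions',
    size: 1024,
    usage: GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
});
console.log(buffer.label); // 'particle positions'
buffer.label = 'particle positions (frame 2)';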

3.2. Invalid Internal Objects & Contagious Invalidity

Object creation operations in WebGPU are internally asynchronous, so they don’t fail with exceptions. Instead, returned objects may refer to internal objects which are either valid or invalid. An invalid object may never become valid at a later time. Some objects may become invalid during their lifetime, while most may only be invalid from creation.

Objects are invalid from creation if it wasn’t possible to create them. This can happen, for example, if the object descriptor doesn’t describe a valid object, or if there is not enough memory to allocate a resource.

Internal objects of most types cannot become invalid after they are created, but still may become unusable, e.g. if the owning device is lost or destroyed, or the object has a special internal state, like buffer state destroyed.

Internal objects of some types can become invalid after they are created; specifically, devices, adapters, and command/pass/bundle encoders.

A given GPUObjectBase object is valid to use with a targetObject if and only if the following requirements are met:

3.3. Coordinate Systems

Note: WebGPU’s coordinate systems match DirectX’s coordinate systems in a graphics pipeline.

3.4. Programming Model

3.4.1. Timelines

This section is non-normative.

A computer system with a user agent at the front-end and GPU at the back-end has components working on different timelines in parallel:

Content timeline

Associated with the execution of the Web script. It includes calling all methods described by this specification.

Steps executed on the content timeline look like this.
Device timeline

Associated with the GPU device operations that are issued by the user agent. It includes creation of adapters, devices, and GPU resources and state objects, which are typically synchronous operations from the point of view of the user agent part that controls the GPU, but can live in a separate OS process.

Steps executed on the device timeline look like this.
Queue timeline

Associated with the execution of operations on the compute units of the GPU. It includes actual draw, copy, and compute jobs that run on the GPU.

Steps executed on the queue timeline look like this.

In this specification, asynchronous operations are used when the result value depends on work that happens on any timeline other than the Content timeline. They are represented by callbacks and promises in JavaScript.

GPUComputePassEncoder.dispatch():
  1. User encodes a dispatch command by calling a method of the GPUComputePassEncoder which happens on the Content timeline.

  2. User issues GPUQueue.submit() that hands over the GPUCommandBuffer to the user agent, which processes it on the Device timeline by calling the OS driver to do a low-level submission.

  3. The submit gets dispatched by the GPU invocation scheduler onto the actual compute units for execution, which happens on the Queue timeline.

GPUDevice.createBuffer():
  1. User fills out a GPUBufferDescriptor and creates a GPUBuffer with it, which happens on the Content timeline.

  2. User agent creates a low-level buffer on the Device timeline.

GPUBuffer.mapAsync():
  1. User requests to map a GPUBuffer on the Content timeline and gets a promise in return.

  2. User agent checks if the buffer is currently used by the GPU and makes a reminder to itself to check back when this usage is over.

  3. After the GPU operating on Queue timeline is done using the buffer, the user agent maps it to memory and resolves the promise.
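
In script, the GPUBuffer.mapAsync() example above might look like the following sketch (non-normative; someBuffer and its contents are assumed to exist):

// Content timeline: record a copy into a mappable buffer and submit it.
const readback = device.createBuffer({
    size: 256,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
});
const encoder = device.createCommandEncoder();
encoder.copyBufferToBuffer(someBuffer, 0, readback, 0, 256);
device.queue.submit([encoder.finish()]); // Processed on the Device timeline.

// The promise resolves after the Queue timeline is done using the buffer.
await readback.mapAsync(GPUMapMode.READ);
const data = new Uint8Array(readback.getMappedRange());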

3.4.2. Memory Model

This section is non-normative.

Once a GPUDevice has been obtained during an application initialization routine, we can describe the WebGPU platform as consisting of the following layers:

  1. User agent implementing the specification.

  2. Operating system with low-level native API drivers for this device.

  3. Actual CPU and GPU hardware.

Each layer of the WebGPU platform may have different memory types that the user agent needs to consider when implementing the specification:

Most physical resources are allocated in the memory of type that is efficient for computation or rendering by the GPU. When the user needs to provide new data to the GPU, the data may first need to cross the process boundary in order to reach the user agent part that communicates with the GPU driver. Then it may need to be made visible to the driver, which sometimes requires a copy into driver-allocated staging memory. Finally, it may need to be transferred to the dedicated GPU memory, potentially changing the internal layout into one that is most efficient for GPUs to operate on.

All of these transitions are done by the WebGPU implementation of the user agent.

Note: This example describes the worst case, while in practice the implementation may not need to cross the process boundary, or may be able to expose the driver-managed memory directly to the user behind an ArrayBuffer, thus avoiding any data copies.
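
For example (non-normative), a simple data upload can be expressed with writeBuffer(), leaving the user agent free to choose how the bytes cross these memory boundaries (staging copies, shared memory, and so on):

const uniformBuffer = device.createBuffer({
    size: 64,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
const transform = new Float32Array(16); // e.g. a 4x4 matrix, filled in elsewhere
device.queue.writeBuffer(uniformBuffer, 0, transform);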

3.4.3. Multi-Threading

3.4.4. Resource Usages

A physical resource can be used on the GPU with an internal usage:

input

Buffer with input data for draw or dispatch calls. Preserves the contents. Allowed by buffer INDEX, buffer VERTEX, or buffer INDIRECT.

constant

Resource bindings that are constant from the shader point of view. Preserves the contents. Allowed by buffer UNIFORM or texture TEXTURE_BINDING.

storage

Writable storage resource binding. Allowed by buffer STORAGE or texture STORAGE_BINDING.

storage-read

Read-only storage resource bindings. Preserves the contents. Allowed by buffer STORAGE.

attachment

Texture used as an output attachment in a render pass. Allowed by texture RENDER_ATTACHMENT.

attachment-read

Texture used as a read-only attachment in a render pass. Preserves the contents. Allowed by texture RENDER_ATTACHMENT.

Textures may consist of separate mipmap levels and array layers, which can be used differently at any given time. Each such texture subresource is uniquely identified by a texture, mipmap level, array layer (for 2d textures only), and aspect.

We define subresource to be either a whole buffer, or a texture subresource.

Some internal usages are compatible with others. A subresource can be in a state that combines multiple usages together. We consider a list U to be a compatible usage list if (and only if) it satisfies any of the following rules:

Enforcing that the usages are only combined into a compatible usage list allows the API to limit when data races can occur in working with memory. That property makes applications written against WebGPU more likely to run without modification on different platforms.

Generally, when an implementation processes an operation that uses a subresource in a different way than its current usage allows, it schedules a transition of the resource into the new state. In some cases, like within an open GPURenderPassEncoder, such a transition is impossible due to the hardware limitations. We define these places as usage scopes.

The main usage rule is, for any one subresource, its list of internal usages within one usage scope must be a compatible usage list.

For example, binding the same buffer for storage as well as for input within the same GPURenderPassEncoder would put the encoder as well as the owning GPUCommandEncoder into the error state. This combination of usages does not make a compatible usage list.
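
A sketch of that situation (non-normative; renderPassDescriptor, and a storageBindGroup that binds buffer as a storage buffer, are assumed to exist):

const buffer = device.createBuffer({
    size: 1024,
    usage: GPUBufferUsage.VERTEX | GPUBufferUsage.STORAGE,
});
const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass(renderPassDescriptor);
pass.setBindGroup(0, storageBindGroup); // Uses buffer with usage storage.
pass.setVertexBuffer(0, buffer);        // Uses buffer with usage input.
// ... draw calls and ending the pass ...
encoder.finish(); // Generates a validation error: not a compatible usage list.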

Note: race conditions among multiple writable storage buffer/texture usages in a single usage scope are allowed.

The subresources of textures included in the views provided to GPURenderPassColorAttachment.view and GPURenderPassColorAttachment.resolveTarget are considered to be used as attachment for the usage scope of this render pass.

The physical size of a texture subresource is the dimension of the texture subresource in texels that includes the possible extra paddings to form complete texel blocks in the subresource.

For example, consider a GPUTexture in a BC format whose [[descriptor]].size is {60, 60, 1}. When sampling the GPUTexture at mipmap level 2, the sampling hardware uses {15, 15, 1} as the size of the texture subresource, while its physical size is {16, 16, 1}, since the block-compression algorithm can only operate on 4x4 texel blocks.

3.4.5. Synchronization

For each subresource of a physical resource, its set of internal usage flags is tracked on the Queue timeline.

This section will need to be revised to support multiple queues.

On the Queue timeline, there is an ordered sequence of usage scopes. For the duration of each scope, the set of internal usage flags of any given subresource is constant. A subresource may transition to new usages at the boundaries between usage scopes.

This specification defines the following usage scopes:

The above should probably talk about GPU commands. But we don’t have a way to reference specific GPU commands (like dispatch) yet.

The above rules mean the following example resource usages are included in usage scope validation:

During command encoding, every usage of a subresource is recorded in one of the usage scopes in the command buffer. For each usage scope, the implementation performs usage scope validation by composing the list of all internal usage flags of each subresource used in the usage scope. If any of those lists is not a compatible usage list, GPUCommandEncoder.finish() generates a GPUValidationError in the current error scope.

3.5. Core Internal Objects

3.5.1. Adapters

An adapter identifies an implementation of WebGPU on the system: both an instance of compute/rendering functionality on the platform underlying a browser, and an instance of a browser’s implementation of WebGPU on top of that functionality.

Adapters do not uniquely represent underlying implementations: calling requestAdapter() multiple times returns a different adapter object each time.

An adapter object may become invalid at any time. This happens inside "lose the device" and "mark adapters stale". An invalid adapter is unable to vend new devices.

Note: This mechanism ensures that various adapter-creation scenarios look similar to applications, so they can easily be robust to more scenarios with less testing: first initialization, reinitialization due to an unplugged adapter, reinitialization due to a test GPUDevice.destroy() call, etc. It also ensures applications use the latest system state to make decisions about which adapter to use.
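
A sketch of such reinitialization logic (non-normative):

async function initWebGPU() {
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) return null;
    const device = await adapter.requestDevice();
    device.lost.then((info) => {
        console.warn(`WebGPU device was lost: ${info.message}`);
        // Start over from requestAdapter(); the previous adapter may have
        // become invalid.
        initWebGPU();
    });
    return device;
}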

An adapter may be considered a fallback adapter if it has significant performance caveats in exchange for some combination of wider compatibility, more predictable behavior, or improved privacy. It is not required that a fallback adapter is available on every system.

An adapter has the following internal slots:

[[features]], of type ordered set<GPUFeatureName>, readonly

The features which can be used to create devices on this adapter.

[[limits]], of type supported limits, readonly

The best limits which can be used to create devices on this adapter.

Each adapter limit must be the same or better than its default value in supported limits.

[[fallback]], of type boolean

If set to true, indicates that the adapter is a fallback adapter.

Adapters are exposed via GPUAdapter.

3.5.2. Devices

A device is the logical instantiation of an adapter, through which internal objects are created. It can be shared across multiple agents (e.g. dedicated workers).

A device is the exclusive owner of all internal objects created from it: when the device is lost or destroyed, it and all objects created on it (directly, e.g. createTexture(), or indirectly, e.g. createView()) become implicitly unusable.

Define "ownership".

A device has the following internal slots:

[[adapter]], of type adapter, readonly

The adapter from which this device was created.

[[features]], of type ordered set<GPUFeatureName>, readonly

The features which can be used on this device. No additional features can be used, even if the underlying adapter can support them.

[[limits]], of type supported limits, readonly

The limits which can be used on this device. No better limits can be used, even if the underlying adapter can support them.

When a new device device is created from adapter adapter with GPUDeviceDescriptor descriptor:

Any time the user agent needs to revoke access to a device, it calls lose the device(device, undefined).

To lose the device(device, reason):
  1. Make device.[[adapter]] invalid.

  2. Make device invalid.

  3. explain how to get from device to its "primary" GPUDevice.

  4. Resolve device.lost with a new GPUDeviceLostInfo with reason set to reason and message set to an implementation-defined value.

    Note: message should not disclose unnecessary user/system information and should never be parsed by applications.

Devices are exposed via GPUDevice.

3.6. Optional Capabilities

WebGPU adapters and devices have capabilities, which describe WebGPU functionality that differs between different implementations, typically due to hardware or system software constraints. A capability is either a feature or a limit.

3.6.1. Features

A feature is a set of optional WebGPU functionality that is not supported on all implementations, typically due to hardware or system software constraints.

Each GPUAdapter exposes a set of available features. Only those features may be requested in requestDevice().

Functionality that is part of a feature may only be used if the feature was requested at device creation. Dictionary members added to existing dictionaries by optional features are always optional at the WebIDL level; if the feature is not enabled, such members must not be set to non-default values.

Note: Though enabling a feature won’t add new IDL-required fields, it may not necessarily be backward-compatible with existing code. An optional feature can enable new validation which invalidates previously-valid code.

See the Feature Index for a description of the functionality each feature enables.

3.6.2. Limits

Each limit is a numeric limit on the usage of WebGPU on a device.

A supported limits object has a value for every defined limit. Each adapter has a set of supported limits, and devices are created with specific supported limits in place. The device limits are enforced regardless of the adapter’s limits.

Each limit has a default value. Every adapter is guaranteed to support the default value or better. The default is used if a value is not explicitly specified in requiredLimits.

One limit value may be better than another. A better limit value always relaxes validation, enabling strictly more programs to be valid. For each limit class, "better" is defined.

Different limits have different limit classes:

maximum

The limit enforces a maximum on some value passed into the API.

Higher values are better.

May only be set to values ≥ the default. Lower values are clamped to the default.

alignment

The limit enforces a minimum alignment on some value passed into the API; that is, the value must be a multiple of the limit.

Lower values are better.

May only be set to powers of 2 which are ≤ the default. Values which are not powers of 2 are invalid. Higher powers of 2 are clamped to the default.

Note: Setting "better" limits may not necessarily be desirable, as they may have a performance impact. Because of this, and to improve portability across devices and implementations, applications should generally request the "worst" limits that work for their content (ideally, the default values).
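
For example (non-normative), an application that benefits from larger 2D textures might raise only that one limit, and only when the adapter supports it, relying on the defaults otherwise:

const requiredLimits = {};
if (adapter.limits.maxTextureDimension2D >= 16384) {
    requiredLimits.maxTextureDimension2D = 16384;
}
const device = await adapter.requestDevice({ requiredLimits });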

Limit name Type Limit class Default
maxTextureDimension1D GPUSize32 maximum 8192
The maximum allowed value for the size.width of a texture created with dimension "1d".
maxTextureDimension2D GPUSize32 maximum 8192
The maximum allowed value for the size.width and size.height of a texture created with dimension "2d".
maxTextureDimension3D GPUSize32 maximum 2048
The maximum allowed value for the size.width, size.height and size.depthOrArrayLayers of a texture created with dimension "3d".
maxTextureArrayLayers GPUSize32 maximum 256
The maximum allowed value for the size.depthOrArrayLayers of a texture created with dimension "1d" or "2d".
maxBindGroups GPUSize32 maximum 4
The maximum number of GPUBindGroupLayouts allowed in bindGroupLayouts when creating a GPUPipelineLayout.
maxDynamicUniformBuffersPerPipelineLayout GPUSize32 maximum 8
The maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are uniform buffers with dynamic offsets. See Exceeds the binding slot limits.
maxDynamicStorageBuffersPerPipelineLayout GPUSize32 maximum 4
The maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage buffers with dynamic offsets. See Exceeds the binding slot limits.
maxSampledTexturesPerShaderStage GPUSize32 maximum 16
For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are sampled textures. See Exceeds the binding slot limits.
maxSamplersPerShaderStage GPUSize32 maximum 16
For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are samplers. See Exceeds the binding slot limits.
maxStorageBuffersPerShaderStage GPUSize32 maximum 8
For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage buffers. See Exceeds the binding slot limits.
maxStorageTexturesPerShaderStage GPUSize32 maximum 4
For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage textures. See Exceeds the binding slot limits.
maxUniformBuffersPerShaderStage GPUSize32 maximum 12
For each possible GPUShaderStage stage, the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are uniform buffers. See Exceeds the binding slot limits.
maxUniformBufferBindingSize GPUSize64 maximum 65536
The maximum GPUBufferBinding.size for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "uniform".
maxStorageBufferBindingSize GPUSize64 maximum 134217728 (128 MiB)
The maximum GPUBufferBinding.size for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "storage" or "read-only-storage".
minUniformBufferOffsetAlignment GPUSize32 alignment 256
The required alignment for GPUBufferBinding.offset and setBindGroup dynamicOffsets arguments for binding with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "uniform".
minStorageBufferOffsetAlignment GPUSize32 alignment 256
The required alignment for GPUBufferBinding.offset and setBindGroup dynamicOffsets arguments for binding with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "storage" or "read-only-storage".
maxVertexBuffers GPUSize32 maximum 8
The maximum number of buffers when creating a GPURenderPipeline.
maxVertexAttributes GPUSize32 maximum 16
The maximum number of attributes in total across buffers when creating a GPURenderPipeline.
maxVertexBufferArrayStride GPUSize32 maximum 2048
The maximum allowed arrayStride when creating a GPURenderPipeline.
maxInterStageShaderComponents GPUSize32 maximum 60
The maximum allowed number of components of input or output variables for inter-stage communication (like vertex outputs or fragment inputs).
maxComputeWorkgroupStorageSize GPUSize32 maximum 16352
The maximum number of bytes of workgroup storage used for a compute stage GPUShaderModule entry-point.
maxComputeInvocationsPerWorkgroup GPUSize32 maximum 256
The maximum value of the product of the workgroup_size dimensions for a compute stage GPUShaderModule entry-point.
maxComputeWorkgroupSizeX GPUSize32 maximum 256
The maximum value of the workgroup_size X dimension for a compute stage GPUShaderModule entry-point.
maxComputeWorkgroupSizeY GPUSize32 maximum 256
The maximum value of the workgroup_size Y dimension for a compute stage GPUShaderModule entry-point.
maxComputeWorkgroupSizeZ GPUSize32 maximum 64
The maximum value of the workgroup_size Z dimension for a compute stage GPUShaderModule entry-point.
maxComputeWorkgroupsPerDimension GPUSize32 maximum 65535
The maximum value for the arguments of dispatch(x, y, z).

Do we need to have a max per-pixel render target size?

3.6.2.1. GPUSupportedLimits

GPUSupportedLimits exposes the limits supported by an adapter or device. See GPUAdapter.limits and GPUDevice.limits.

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUSupportedLimits {
    readonly attribute unsigned long maxTextureDimension1D;
    readonly attribute unsigned long maxTextureDimension2D;
    readonly attribute unsigned long maxTextureDimension3D;
    readonly attribute unsigned long maxTextureArrayLayers;
    readonly attribute unsigned long maxBindGroups;
    readonly attribute unsigned long maxDynamicUniformBuffersPerPipelineLayout;
    readonly attribute unsigned long maxDynamicStorageBuffersPerPipelineLayout;
    readonly attribute unsigned long maxSampledTexturesPerShaderStage;
    readonly attribute unsigned long maxSamplersPerShaderStage;
    readonly attribute unsigned long maxStorageBuffersPerShaderStage;
    readonly attribute unsigned long maxStorageTexturesPerShaderStage;
    readonly attribute unsigned long maxUniformBuffersPerShaderStage;
    readonly attribute unsigned long long maxUniformBufferBindingSize;
    readonly attribute unsigned long long maxStorageBufferBindingSize;
    readonly attribute unsigned long minUniformBufferOffsetAlignment;
    readonly attribute unsigned long minStorageBufferOffsetAlignment;
    readonly attribute unsigned long maxVertexBuffers;
    readonly attribute unsigned long maxVertexAttributes;
    readonly attribute unsigned long maxVertexBufferArrayStride;
    readonly attribute unsigned long maxInterStageShaderComponents;
    readonly attribute unsigned long maxComputeWorkgroupStorageSize;
    readonly attribute unsigned long maxComputeInvocationsPerWorkgroup;
    readonly attribute unsigned long maxComputeWorkgroupSizeX;
    readonly attribute unsigned long maxComputeWorkgroupSizeY;
    readonly attribute unsigned long maxComputeWorkgroupSizeZ;
    readonly attribute unsigned long maxComputeWorkgroupsPerDimension;
};
3.6.2.2. GPUSupportedFeatures

GPUSupportedFeatures is a setlike interface. Its set entries are the GPUFeatureName values of the features supported by an adapter or device. It must only contain strings from the GPUFeatureName enum.

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUSupportedFeatures {
    readonly setlike<DOMString>;
};
Note: The type of the GPUSupportedFeatures set entries is DOMString to allow user agents to gracefully handle valid GPUFeatureNames which are added in later revisions of the spec but which the user agent has not been updated to recognize yet. If the set entries' type was GPUFeatureName, the following code would throw a TypeError rather than reporting false:
Check for support of an unrecognized feature:
if (adapter.features.has('unknown-feature')) {
    // Use unknown-feature
} else {
    console.warn('unknown-feature is not supported by this adapter.');
}

3.7. Origin Restrictions

WebGPU allows accessing image data stored in images, videos, and canvases. Restrictions are imposed on the use of cross-domain media, because shaders can be used to indirectly deduce the contents of textures which have been uploaded to the GPU.

WebGPU disallows uploading an image source if it is not origin-clean.

This also implies that the origin-clean flag for a canvas rendered using WebGPU will never be set to false.

For more information on issuing CORS requests for image and video elements, consult:

3.8. Color Spaces and Encoding

WebGPU does not provide color management. All values within WebGPU (such as texture elements) are raw numeric values, not color-managed color values.

WebGPU does interface with color-managed outputs (via GPUCanvasConfiguration) and inputs (via copyExternalImageToTexture() and importExternalTexture()). Thus, color conversion must be performed between the WebGPU numeric values and the external color values. Each such interface point locally defines an encoding (color space, transfer function, and alpha premultiplication) in which the WebGPU numeric values are to be interpreted.

Each color space is defined over an extended range, if defined by the referenced CSS definitions, to represent color values outside of its space (in both chrominance and luminance).

enum GPUPredefinedColorSpace {
    "srgb",
};

Possibly replace this with PredefinedColorSpace, but note that doing so would mean new WebGPU functionality gets added automatically when items are added to that enum in the upstream spec.

Consider a path for uploading srgb-encoded images into linearly-encoded textures. [Issue #gpuweb/gpuweb#1715]

"srgb"

The CSS predefined color space srgb.

3.8.1. Color Space Conversions

A color is converted between spaces by translating its representation in one space to a representation in another according to the definitions above.

If the source value has fewer than 4 channels, the remaining green/blue/alpha channels are set to 0, 0, 1 respectively, as needed, before converting for color space/encoding and alpha premultiplication. After conversion, if the destination needs fewer than 4 channels, the additional channels are ignored.

Colors are not lossily clamped during conversion: converting from one color space to another will result in values outside the range [0, 1] if the source color values were outside the range of the destination color space’s gamut (e.g. if a Display P3 image is converted to sRGB).

4. Initialization

4.1. Examples

Need a robust example like the one in ErrorHandling.md, which handles all situations. Possibly also include a simple example with no handling.

4.2. navigator.gpu

A GPU object is available in the Window and DedicatedWorkerGlobalScope contexts through the Navigator and WorkerNavigator interfaces respectively and is exposed via navigator.gpu:

interface mixin NavigatorGPU {
    [SameObject, SecureContext] readonly attribute GPU gpu;
};
Navigator includes NavigatorGPU;
WorkerNavigator includes NavigatorGPU;

4.3. GPU

GPU is the entry point to WebGPU.

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPU {
    Promise<GPUAdapter?> requestAdapter(optional GPURequestAdapterOptions options = {});
};

GPU has the following methods:

requestAdapter(options)

Requests an adapter from the user agent. The user agent chooses whether to return an adapter, and, if so, chooses according to the provided options.

Called on: GPU this.

Arguments:

Arguments for the GPU.requestAdapter(options) method.
Parameter Type Nullable Optional Description
options GPURequestAdapterOptions Criteria used to select the adapter.

Returns: Promise<GPUAdapter?>

  1. Let promise be a new promise.

  2. Issue the following steps on the Device timeline of this:

    1. If the user agent chooses to return an adapter, it should:

      1. Create a valid adapter adapter, chosen according to the rules in § 4.3.1 Adapter Selection and the criteria in options.

      2. If adapter meets the criteria of a fallback adapter set adapter.[[fallback]] to true.

      3. Resolve promise with a new GPUAdapter encapsulating adapter.

    2. Otherwise, promise resolves with null.

  3. Return promise.

GPU has the following internal slots:

[[previously_returned_adapters]], of type ordered set<adapter>

The set of adapters that have been returned via requestAdapter(). It is used, then cleared, in mark adapters stale.

Upon any change in the system’s state that could affect the result of any requestAdapter() call, the user agent should mark adapters stale. For example:

Additionally, mark adapters stale may be scheduled at any time. User agents may choose to do this often even when there has been no system state change (e.g. several seconds after the last call to requestDevice()). This has no effect on well-formed applications, obfuscates real system state changes, and makes developers more aware that calling requestAdapter() again is always necessary before calling requestDevice().

To mark adapters stale:
  1. For each adapter in navigator.gpu.[[previously_returned_adapters]]:

    1. Make adapter invalid.

  2. Empty navigator.gpu.[[previously_returned_adapters]].

Update here if an adaptersadded/adapterschanged event is introduced.

Request a GPUAdapter:
const adapter = await navigator.gpu.requestAdapter(/* ... */);
const features = adapter.features;
// ...

4.3.1. Adapter Selection

GPURequestAdapterOptions provides hints to the user agent indicating what configuration is suitable for the application.

dictionary GPURequestAdapterOptions {
    GPUPowerPreference powerPreference;
    boolean forceFallbackAdapter = false;
};
enum GPUPowerPreference {
    "low-power",
    "high-performance",
};

GPURequestAdapterOptions has the following members:

powerPreference, of type GPUPowerPreference

Optionally provides a hint indicating what class of adapter should be selected from the system’s available adapters.

The value of this hint may influence which adapter is chosen, but it must not influence whether an adapter is returned or not.

Note: The primary utility of this hint is to influence which GPU is used in a multi-GPU system. For instance, some laptops have a low-power integrated GPU and a high-performance discrete GPU.

Note: Depending on the exact hardware configuration, such as battery status and attached displays or removable GPUs, the user agent may select different adapters given the same power preference. Typically, given the same hardware configuration and state and powerPreference, the user agent is likely to select the same adapter.

It must be one of the following values:

undefined (or not present)

Provides no hint to the user agent.

"low-power"

Indicates a request to prioritize power savings over performance.

Note: Generally, content should use this if it is unlikely to be constrained by drawing performance; for example, if it renders only one frame per second, draws only relatively simple geometry with simple shaders, or uses a small HTML canvas element. Developers are encouraged to use this value if their content allows, since it may significantly improve battery life on portable devices.

"high-performance"

Indicates a request to prioritize performance over power consumption.

Note: By choosing this value, developers should be aware that, for devices created on the resulting adapter, user agents are more likely to force device loss, in order to save power by switching to a lower-power adapter. Developers are encouraged to only specify this value if they believe it is absolutely necessary, since it may significantly decrease battery life on portable devices.

forceFallbackAdapter, of type boolean, defaulting to false

When set to true, indicates that only a fallback adapter may be returned. If the user agent does not support a fallback adapter, this will cause requestAdapter() to resolve to null.

Note: requestAdapter() may still return a fallback adapter if forceFallbackAdapter is set to false and either no other appropriate adapter is available or the user agent chooses to return a fallback adapter. Developers that wish to prevent their applications from running on fallback adapters should check the GPUAdapter.isFallbackAdapter attribute prior to requesting a GPUDevice.
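
For example (non-normative), an application that does not want to run on a fallback adapter might check isFallbackAdapter before requesting a device:

const adapter = await navigator.gpu.requestAdapter();
if (!adapter || adapter.isFallbackAdapter) {
    console.warn('No suitable hardware adapter available.');
} else {
    const device = await adapter.requestDevice();
    // ...
}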

4.4. GPUAdapter

A GPUAdapter encapsulates an adapter, and describes its capabilities (features and limits).

To get a GPUAdapter, use requestAdapter().

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUAdapter {
    readonly attribute DOMString name;
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    readonly attribute boolean isFallbackAdapter;

    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
};

GPUAdapter has the following attributes:

name, of type DOMString, readonly

A human-readable name identifying the adapter. The contents are implementation-defined.

features, of type GPUSupportedFeatures, readonly

The set of values in this.[[adapter]].[[features]].

limits, of type GPUSupportedLimits, readonly

The limits in this.[[adapter]].[[limits]].

isFallbackAdapter, of type boolean, readonly

Returns the value of [[adapter]].[[fallback]].

GPUAdapter has the following internal slots:

[[adapter]], of type adapter, readonly

The adapter to which this GPUAdapter refers.

GPUAdapter has the following methods:

requestDevice(descriptor)

Requests a device from the adapter.

Called on: GPUAdapter this.

Arguments:

Arguments for the GPUAdapter.requestDevice(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUDeviceDescriptor Description of the GPUDevice to request.

Returns: Promise<GPUDevice>

  1. Let promise be a new promise.

  2. Let adapter be this.[[adapter]].

  3. Issue the following steps to the Device timeline:

    1. If any of the following requirements are unmet, reject promise with a TypeError and stop.

      Note: This is the same error that is produced if a feature name isn’t known by the browser at all (in its GPUFeatureName definition). This converges the behavior when the browser doesn’t support a feature with the behavior when a particular adapter doesn’t support a feature.

    2. If any of the following requirements are unmet, reject promise with an OperationError and stop.

    3. If adapter is invalid, or the user agent otherwise cannot fulfill the request:

      1. Let device be a new device.

      2. Lose the device(device, undefined).

        Note: This makes adapter invalid, if it wasn’t already.

        Note: User agents should consider issuing developer-visible warnings in most or all cases when this occurs. Applications should perform reinitialization logic starting with requestAdapter().

      3. Resolve promise with a new GPUDevice encapsulating device, and stop.

    4. Resolve promise with a new GPUDevice object encapsulating a new device with the capabilities described by descriptor.

  4. Return promise.

4.4.1. GPUDeviceDescriptor

GPUDeviceDescriptor describes a device request.

dictionary GPUDeviceDescriptor : GPUObjectDescriptorBase {
    sequence<GPUFeatureName> requiredFeatures = [];
    record<DOMString, GPUSize64> requiredLimits = {};
};

GPUDeviceDescriptor has the following members:

requiredFeatures, of type sequence<GPUFeatureName>, defaulting to []

Specifies the features that are required by the device request. The request will fail if the adapter cannot provide these features.

Exactly the specified set of features, and no more or less, will be allowed in validation of API calls on the resulting device.

requiredLimits, of type record<DOMString, GPUSize64>, defaulting to {}

Specifies the limits that are required by the device request. The request will fail if the adapter cannot provide these limits.

Each key must be the name of a member of supported limits. Exactly the specified limits, and no better or worse, will be allowed in validation of API calls on the resulting device.
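
For example (non-normative), an application might request an optional feature only when the adapter advertises it, since requesting a feature the adapter cannot provide causes requestDevice() to reject:

const requiredFeatures = [];
if (adapter.features.has('texture-compression-bc')) {
    requiredFeatures.push('texture-compression-bc');
}
const device = await adapter.requestDevice({ requiredFeatures });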

4.4.1.1. GPUFeatureName

Each GPUFeatureName identifies a set of functionality which, if available, allows additional usages of WebGPU that would have otherwise been invalid.

enum GPUFeatureName {
    "depth-clip-control",
    "depth24unorm-stencil8",
    "depth32float-stencil8",
    "texture-compression-bc",
    "texture-compression-etc2",
    "texture-compression-astc",
    "timestamp-query",
    "indirect-first-instance",
};

4.5. GPUDevice

A GPUDevice encapsulates a device and exposes the functionality of that device.

GPUDevice is the top-level interface through which WebGPU interfaces are created.

To get a GPUDevice, use requestDevice().

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;

    [SameObject] readonly attribute GPUQueue queue;

    undefined destroy();

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});
    GPUExternalTexture importExternalTexture(GPUExternalTextureDescriptor descriptor);

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);
    Promise<GPUComputePipeline> createComputePipelineAsync(GPUComputePipelineDescriptor descriptor);
    Promise<GPURenderPipeline> createRenderPipelineAsync(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);

    GPUQuerySet createQuerySet(GPUQuerySetDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;

GPUDevice has the following attributes:

features, of type GPUSupportedFeatures, readonly

A set containing the GPUFeatureName values of the features supported by the device (i.e. the ones with which it was created).

limits, of type GPUSupportedLimits, readonly

Exposes the limits supported by the device (which are exactly the ones with which it was created).

queue, of type GPUQueue, readonly

The primary GPUQueue for this device.

The [[device]] for a GPUDevice is the device that the GPUDevice refers to.

GPUDevice has the methods listed in its WebIDL definition above. Those not defined here are defined elsewhere in this document.

destroy()

Destroys the device, preventing further operations on it. Outstanding asynchronous operations will fail.

Called on: GPUDevice this.
  1. Lose the device(this.[[device]], "destroyed").

Note: Since no further operations can occur on this device, implementations can free resource allocations and abort outstanding asynchronous operations immediately.
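
For example (non-normative):

device.destroy();
// The lost promise resolves with reason "destroyed"; any outstanding
// asynchronous operations (such as pending mapAsync() calls) fail.
const info = await device.lost;
console.log(info.reason); // "destroyed"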

GPUDevice objects are serializable objects.

Finish defining multithreading API and add [Serializable] back to the interface. [Issue #gpuweb/gpuweb#354]

The steps to serialize a GPUDevice object, given value, serialized, and forStorage, are:
  1. Set serialized.agentCluster to be the surrounding agent's agent cluster.

  2. If serialized.agentCluster’s cross-origin isolated capability is false, throw a "DataCloneError".

  3. If forStorage is true, throw a "DataCloneError".

  4. Set serialized.device to the value of value.[[device]].

The steps to deserialize a GPUDevice object, given serialized and value, are:
  1. If serialized.agentCluster is not the surrounding agent's agent cluster, throw a "DataCloneError".

  2. Set value.[[device]] to serialized.device.

GPUDevice doesn’t really need the cross-origin policy restriction. It should be usable from multiple agents regardless. Once we describe the serialization of buffers, textures, and queues - the COOP+COEP logic should be moved in there.

5. Buffers

5.1. GPUBuffer

define buffer (internal object)

A GPUBuffer represents a block of memory that can be used in GPU operations. Data is stored in linear layout, meaning that each byte of the allocation can be addressed by its offset from the start of the GPUBuffer, subject to alignment restrictions depending on the operation. Some GPUBuffers can be mapped which makes the block of memory accessible via an ArrayBuffer called its mapping.

GPUBuffers are created via GPUDevice.createBuffer(descriptor) that returns a new buffer in the mapped or unmapped state.

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUBuffer {
    Promise<undefined> mapAsync(GPUMapModeFlags mode, optional GPUSize64 offset = 0, optional GPUSize64 size);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined unmap();

    undefined destroy();
};
GPUBuffer includes GPUObjectBase;

GPUBuffer has the following internal slots:

[[size]] of type GPUSize64.

The length of the GPUBuffer allocation in bytes.

[[usage]] of type GPUBufferUsageFlags.

The allowed usages for this GPUBuffer.

[[state]] of type buffer state.

The current state of the GPUBuffer.

[[mapping]] of type ArrayBuffer or Promise or null.

The mapping for this GPUBuffer. The ArrayBuffer isn’t directly accessible and is instead accessed through views into it, called the mapped ranges, that are stored in [[mapped_ranges]]

Specify [[mapping]] in term of DataBlock similarly to AllocateArrayBuffer? [Issue #gpuweb/gpuweb#605]

[[mapping_range]] of type list<unsigned long long> or null.

The range of this GPUBuffer that is mapped.

[[mapped_ranges]] of type list<ArrayBuffer> or null.

The ArrayBuffers returned via getMappedRange to the application. They are tracked so they can be detached when unmap is called.

[[map_mode]] of type GPUMapModeFlags.

The GPUMapModeFlags of the last call to mapAsync() (if any).

[[usage]] is differently named from [[descriptor]].usage. We should make it consistent.

Each GPUBuffer has a current buffer state on the Content timeline, which is one of the following: unmapped, mapped at creation, mapping pending, mapped, or destroyed.

Note: [[size]] and [[usage]] are immutable once the GPUBuffer has been created.

Note: GPUBuffer has a state machine with the states listed above. ([[mapping]], [[mapping_range]], and [[mapped_ranges]] are null when not specified.)

GPUBuffer is a reference to an internal buffer object.

Finish defining multithreading API and add [Serializable] back to the interface. [Issue #gpuweb/gpuweb#354]

5.2. Buffer Creation

5.2.1. GPUBufferDescriptor

This specifies the options to use in creating a GPUBuffer.

dictionary GPUBufferDescriptor : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
    boolean mappedAtCreation = false;
};
validating GPUBufferDescriptor(device, descriptor)
  1. If device is lost return false.

  2. If any of the bits of descriptor’s usage aren’t present in this device’s [[allowed buffer usages]] return false.

  3. If both the MAP_READ and MAP_WRITE bits of descriptor’s usage attribute are set, return false.

  4. Return true.

5.2.2. Buffer Usage

typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
[Exposed=(Window, DedicatedWorker)]
namespace GPUBufferUsage {
    const GPUFlagsConstant MAP_READ      = 0x0001;
    const GPUFlagsConstant MAP_WRITE     = 0x0002;
    const GPUFlagsConstant COPY_SRC      = 0x0004;
    const GPUFlagsConstant COPY_DST      = 0x0008;
    const GPUFlagsConstant INDEX         = 0x0010;
    const GPUFlagsConstant VERTEX        = 0x0020;
    const GPUFlagsConstant UNIFORM       = 0x0040;
    const GPUFlagsConstant STORAGE       = 0x0080;
    const GPUFlagsConstant INDIRECT      = 0x0100;
    const GPUFlagsConstant QUERY_RESOLVE = 0x0200;
};
createBuffer(descriptor)

Creates a GPUBuffer.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createBuffer(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUBufferDescriptor Description of the GPUBuffer to create.

Returns: GPUBuffer

  1. If any of the following conditions are unsatisfied, return an error buffer and stop.

    Explain what are a GPUDevice's [[allowed buffer usages]]. [Issue #gpuweb/gpuweb#605]

Note: If buffer creation fails, and descriptor.mappedAtCreation is false, any calls to mapAsync() will reject, so any resources allocated to enable mapping can and may be discarded or recycled.

  1. Let b be a new GPUBuffer object.

  2. Set b.[[size]] to descriptor.size.

  3. Set b.[[usage]] to descriptor.usage.

  4. If descriptor.mappedAtCreation is true:

    1. Set b.[[mapping]] to a new ArrayBuffer of size b.[[size]].

    2. Set b.[[mapping_range]] to [0, descriptor.size].

    3. Set b.[[mapped_ranges]] to [].

    4. Set b.[[state]] to mapped at creation.

    Else:

    1. Set b.[[mapping]] to null.

    2. Set b.[[mapping_range]] to null.

    3. Set b.[[mapped_ranges]] to null.

    4. Set b.[[state]] to unmapped.

  5. Set each byte of b’s allocation to zero.

  6. Return b.

Note: it is valid to set mappedAtCreation to true without MAP_READ or MAP_WRITE in usage. This can be used to set the buffer’s initial data.
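
For example, initial data can be supplied through mappedAtCreation with a non-normative sketch like the following (the size, usage flags, and values are illustrative):

// Non-normative sketch: upload initial vertex data via mappedAtCreation.
const vertexBuffer = device.createBuffer({
    size: 16,
    usage: GPUBufferUsage.VERTEX,
    mappedAtCreation: true,
});
new Float32Array(vertexBuffer.getMappedRange()).set([0, 0.5, -0.5, -0.5]);
vertexBuffer.unmap();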

5.3. Buffer Destruction

An application that no longer requires a GPUBuffer can choose to lose access to it before garbage collection by calling destroy(). Destroying a buffer also unmaps it, freeing any memory allocated for the mapping.

Note: This allows the user agent to reclaim the GPU memory associated with the GPUBuffer once all previously submitted operations using it are complete.

destroy()

Destroys the GPUBuffer.

Called on: GPUBuffer this.

Returns: undefined

  1. If this.[[state]] is neither unmapped nor destroyed:

    1. Run the steps to unmap this.

  2. Set this.[[state]] to destroyed.

5.4. Buffer Mapping

An application can request to map a GPUBuffer so that it can access the buffer’s content via ArrayBuffers that represent parts of the GPUBuffer's allocation. Mapping a GPUBuffer is requested asynchronously with mapAsync() so that the user agent can ensure the GPU has finished using the GPUBuffer before the application accesses its content. Once the GPUBuffer is mapped, the application can synchronously ask for access to ranges of its content with getMappedRange(). A mapped GPUBuffer cannot be used by the GPU and must be unmapped using unmap() before work using it can be submitted to the Queue timeline.

Add client-side validation that a mapped buffer can only be unmapped and destroyed on the worker on which it was mapped. Likewise getMappedRange can only be called on that worker. [Issue #gpuweb/gpuweb#605]

typedef [EnforceRange] unsigned long GPUMapModeFlags;
[Exposed=(Window, DedicatedWorker)]
namespace GPUMapMode {
    const GPUFlagsConstant READ  = 0x0001;
    const GPUFlagsConstant WRITE = 0x0002;
};
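
For illustration, a typical read-back sequence looks like the following non-normative sketch; it assumes readbackBuffer was created with MAP_READ | COPY_DST usage and that a copy into it has already been submitted:

// Non-normative sketch of reading back buffer contents.
await readbackBuffer.mapAsync(GPUMapMode.READ);
const bytes = readbackBuffer.getMappedRange().slice(0); // copy out before unmapping
readbackBuffer.unmap();                                 // mapped ranges are detached here
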
mapAsync(mode, offset, size)

Maps the given range of the GPUBuffer and resolves the returned Promise when the GPUBuffer's content is ready to be accessed with getMappedRange().

Called on: GPUBuffer this.

Arguments:

Arguments for the GPUBuffer.mapAsync(mode, offset, size) method.
Parameter Type Nullable Optional Description
mode GPUMapModeFlags Whether the buffer should be mapped for reading or writing.
offset GPUSize64 Offset in bytes into the buffer to the start of the range to map.
size GPUSize64 Size in bytes of the range to map.

Returns: Promise<undefined>

Handle error buffers once we have a description of the error monad. [Issue #gpuweb/gpuweb#605]

  1. If size is missing:

    1. Let rangeSize be max(0, this.[[size]] - offset).

    Otherwise, let rangeSize be size.

  2. If any of the following conditions are unsatisfied:

    Do we validate that mode contains only valid flags?

    Then:

    1. Record a validation error on the current scope.

    2. Return a promise rejected with an OperationError on the Device timeline.

  3. Let p be a new Promise.

  4. Set this.[[mapping]] to p.

  5. Set this.[[state]] to mapping pending.

  6. Set this.[[map_mode]] to mode.

  7. Enqueue an operation on the default queue’s Queue timeline that will execute the following:

    1. If this.[[state]] is mapping pending:

      1. Let m be a new ArrayBuffer of size rangeSize.

      2. Set the content of m to the content of this’s allocation starting at offset offset and for rangeSize bytes.

      3. Set this.[[mapping]] to m.

      4. Set this.[[state]] to mapped.

      5. Set this.[[mapping_range]] to [offset, offset + rangeSize].

      6. Set this.[[mapped_ranges]] to [].

    2. Resolve p.

  8. Return p.

getMappedRange(offset, size)

Returns an ArrayBuffer with the contents of the GPUBuffer in the given mapped range.

Called on: GPUBuffer this.

Arguments:

Arguments for the GPUBuffer.getMappedRange(offset, size) method.
Parameter Type Nullable Optional Description
offset GPUSize64 Offset in bytes into the buffer to return buffer contents from.
size GPUSize64 Size in bytes of the ArrayBuffer to return.

Returns: ArrayBuffer

  1. If size is missing:

    1. Let rangeSize be max(0, this.[[size]] - offset).

    Otherwise, let rangeSize be size.

  2. If any of the following conditions are unsatisfied, throw an OperationError and stop.

    Note: It is always valid to get mapped ranges of a GPUBuffer that is mapped at creation, even if it is invalid, because the Content timeline might not know it is invalid.

    Consider aligning mapAsync offset to 8 to match this.

  3. Let m be a new ArrayBuffer of size rangeSize pointing at the content of this.[[mapping]] at offset offset - this.[[mapping_range]][0].

  4. Append m to this.[[mapped_ranges]].

  5. Return m.

unmap()

Unmaps the mapped range of the GPUBuffer and makes its contents available for use by the GPU again.

Called on: GPUBuffer this.

Returns: undefined

  1. If any of the following requirements are unmet, generate a validation error and stop.

    Note: It is valid to unmap an invalid GPUBuffer that is mapped at creation because the Content timeline might not know it is an error GPUBuffer. This allows the temporary [[mapping]] memory to be freed.

  2. If this.[[state]] is mapping pending:

    1. Reject [[mapping]] with an AbortError.

    2. Set this.[[mapping]] to null.

  3. If this.[[state]] is mapped or mapped at creation:

    1. If one of the two following conditions holds:

      Then:

      1. Enqueue an operation on the default queue’s Queue timeline that updates the this.[[mapping_range]] of this’s allocation to the content of this.[[mapping]].

    2. Detach each ArrayBuffer in this.[[mapped_ranges]] from its content.

    3. Set this.[[mapping]] to null.

    4. Set this.[[mapping_range]] to null.

    5. Set this.[[mapped_ranges]] to null.

  4. Set this.[[state]] to unmapped.

Note: When a MAP_READ buffer (not currently mapped at creation) is unmapped, any local modifications done by the application to the mapped ranges ArrayBuffer are discarded and will not affect the content of follow-up mappings.

6. Textures and Texture Views

define texture (internal object)

define mipmap level, array layer, aspect, slice (concepts)

6.1. GPUTexture

GPUTextures are created via GPUDevice.createTexture(descriptor) that returns a new texture.

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});

    undefined destroy();
};
GPUTexture includes GPUObjectBase;

GPUTexture has the following internal slots:

[[descriptor]], of type GPUTextureDescriptor

The GPUTextureDescriptor describing this texture.

All optional fields of GPUTextureDescriptor are defined.

[[destroyed]], of type boolean, initially false

If the texture is destroyed, it can no longer be used in any operation, and its underlying memory can be freed.

compute render extent(baseSize, mipLevel)

Arguments:

Returns: GPUExtent3DDict

  1. Let extent be a new GPUExtent3DDict object.

  2. Set extent.width to max(1, baseSize.width >> mipLevel).

  3. Set extent.height to max(1, baseSize.height >> mipLevel).

  4. Set extent.depthOrArrayLayers to 1.

  5. Return extent.

share this definition with the part of the specification that describes sampling.
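
Non-normatively, the computation above corresponds to the following sketch (the function name and parameter types are informal):

// Non-normative sketch of "compute render extent".
function computeRenderExtent(
    baseSize: { width: number; height: number },
    mipLevel: number,
) {
    return {
        width: Math.max(1, baseSize.width >> mipLevel),
        height: Math.max(1, baseSize.height >> mipLevel),
        depthOrArrayLayers: 1,
    };
}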

6.1.1. Texture Creation

dictionary GPUTextureDescriptor : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
};
enum GPUTextureDimension {
    "1d",
    "2d",
    "3d",
};
typedef [EnforceRange] unsigned long GPUTextureUsageFlags;
[Exposed=(Window, DedicatedWorker)]
namespace GPUTextureUsage {
    const GPUFlagsConstant COPY_SRC          = 0x01;
    const GPUFlagsConstant COPY_DST          = 0x02;
    const GPUFlagsConstant TEXTURE_BINDING   = 0x04;
    const GPUFlagsConstant STORAGE_BINDING   = 0x08;
    const GPUFlagsConstant RENDER_ATTACHMENT = 0x10;
};
maximum mipLevel count(dimension, size)

Arguments:
  1. Calculate the max dimension value m:

  2. Return floor(log2(m)) + 1.

createTexture(descriptor)

Creates a GPUTexture.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createTexture(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUTextureDescriptor Description of the GPUTexture to create.

Returns: GPUTexture

  1. Issue the following steps on the Device timeline of this:

    1. If descriptor.format is a GPUTextureFormat that requires a feature (see § 25.1 Texture Format Capabilities), but this.[[device]].[[features]] does not contain the feature, throw a TypeError.

    2. If any of the following requirements are unmet:

      Then:

      1. Generate a GPUValidationError in the current scope with appropriate error message.

      2. Return a new invalid GPUTexture.

    3. Let t be a new GPUTexture object.

    4. Set t.[[descriptor]] to descriptor.

    5. Return t.
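
A non-normative sketch of a typical call, where the size, format, and usage flags are illustrative choices:

const texture = device.createTexture({
    size: { width: 256, height: 256 },
    mipLevelCount: 1,
    format: "rgba8unorm",
    usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST,
});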

6.1.2. Texture Destruction

An application that no longer requires a GPUTexture can choose to lose access to it before garbage collection by calling destroy().

Note: This allows the user agent to reclaim the GPU memory associated with the GPUTexture once all previously submitted operations using it are complete.

destroy()

Destroys the GPUTexture.

Called on: GPUTexture this.

Returns: undefined

  1. Set this.[[destroyed]] to true.

6.2. GPUTextureView

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;

GPUTextureView has the following internal slots:

[[texture]]

The GPUTexture into which this is a view.

[[descriptor]]

The GPUTextureViewDescriptor describing this texture view.

All optional fields of GPUTextureViewDescriptor are defined.

[[renderExtent]]

For renderable views, this is the effective GPUExtent3DDict for rendering.

Note: this extent depends on the baseMipLevel.

6.2.1. Texture View Creation

dictionary GPUTextureViewDescriptor : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount;
};
enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d",
};
"1d"

The texture is viewed as a 1-dimensional image.

Corresponding WGSL types:

  • texture_1d

  • texture_storage_1d

"2d"

The texture is viewed as a single 2-dimensional image.

Corresponding WGSL types:

  • texture_2d

  • texture_storage_2d

  • texture_multisampled_2d

  • texture_depth_2d

  • texture_depth_multisampled_2d

"2d-array"

The texture is viewed as an array of 2-dimensional images.

Corresponding WGSL types:

  • texture_2d_array

  • texture_storage_2d_array

  • texture_depth_2d_array

"cube"

The texture is viewed as a cubemap. The view has 6 array layers, corresponding to the [+X, -X, +Y, -Y, +Z, -Z] faces of the cube.

Corresponding WGSL types:

  • texture_cube

  • texture_depth_cube

"cube-array"

The texture is viewed as a packed array of n cubemaps, each with 6 array layers corresponding to the [+X, -X, +Y, -Y, +Z, -Z] faces of the cube.

Corresponding WGSL types:

  • texture_cube_array

  • texture_depth_cube_array

"3d"

The texture is viewed as a 3-dimensional image.

Corresponding WGSL types:

  • texture_3d

  • texture_storage_3d

enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only",
};
createView(descriptor)

Creates a GPUTextureView.

Called on: GPUTexture this.

Arguments:

Arguments for the GPUTexture.createView(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUTextureViewDescriptor Description of the GPUTextureView to create.

Returns: view, of type GPUTextureView.

  1. Set descriptor to the result of resolving GPUTextureViewDescriptor defaults with descriptor.

  2. Issue the following steps on the Device timeline of this:

    1. If any of the following requirements are unmet:

      Then:

      1. Generate a GPUValidationError in the current scope with appropriate error message.

      2. Return a new invalid GPUTextureView.

    2. Let view be a new GPUTextureView object.

    3. Set view.[[texture]] to this.

    4. Set view.[[descriptor]] to descriptor.

    5. If this.[[descriptor]].usage contains RENDER_ATTACHMENT:

      1. Let renderExtent be compute render extent(this.[[descriptor]].size, descriptor.baseMipLevel).

      2. Set view.[[renderExtent]] to renderExtent.

    6. Return view.
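
For illustration, views might be created as in the following non-normative sketch (the texture variable and the chosen fields are assumptions):

const defaultView = texture.createView(); // all defaults resolved from the texture
const firstMipOnly = texture.createView({
    baseMipLevel: 0,
    mipLevelCount: 1,
});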

When resolving GPUTextureViewDescriptor defaults for a GPUTextureViewDescriptor descriptor of a GPUTexture texture, run the following steps:
  1. Let resolved be a copy of descriptor.

  2. If resolved.format is undefined, set resolved.format to texture.[[descriptor]].format.

  3. If resolved.mipLevelCount is undefined, set resolved.mipLevelCount to texture.[[descriptor]].mipLevelCount - baseMipLevel.

  4. If resolved.dimension is undefined and texture.[[descriptor]].dimension is:

    "1d"

    Set resolved.dimension to "1d".

    "2d"

    Set resolved.dimension to "2d".

    "3d"

    Set resolved.dimension to "3d".

  5. If resolved.arrayLayerCount is undefined and resolved.dimension is:

    "1d", "2d", or "3d"

    Set resolved.arrayLayerCount to 1.

    "cube"

    Set resolved.arrayLayerCount to 6.

    "2d-array" or "cube-array"

    Set resolved.arrayLayerCount to texture.[[descriptor]].size.depthOrArrayLayers - baseArrayLayer.

  6. Return resolved.

To determine the array layer count of GPUTexture texture, run the following steps:
  1. If texture.[[descriptor]].dimension is:

    "1d" or "3d"

    Return 1.

    "2d"

    Return texture.[[descriptor]].size.depthOrArrayLayers.

6.3. Texture Formats

The name of the format specifies the order of components, bits per component, and data type for the component.

If the format has the -srgb suffix, then sRGB conversions from gamma to linear and vice versa are applied during the reading and writing of color values in the shader. Compressed texture formats are provided by features. Their naming should follow the convention here, with the texture name as a prefix. e.g. etc2-rgba8unorm.

The texel block is a single addressable element of the textures in pixel-based GPUTextureFormats, and a single compressed block of the textures in block-based compressed GPUTextureFormats.

The texel block width and texel block height specifies the dimension of one texel block.

The texel block size of a GPUTextureFormat is the number of bytes to store one texel block. The texel block size of each GPUTextureFormat is constant except for "stencil8", "depth24plus", and "depth24plus-stencil8".

enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",
    // Packed 32-bit formats
    "rgb9e5ufloat",
    "rgb10a2unorm",
    "rg11b10ufloat",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth/stencil formats
    "stencil8",
    "depth16unorm",
    "depth24plus",
    "depth24plus-stencil8",
    "depth32float",

    // "depth24unorm-stencil8" feature
    "depth24unorm-stencil8",

    // "depth32float-stencil8" feature
    "depth32float-stencil8",

    // BC compressed formats usable if "texture-compression-bc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "bc1-rgba-unorm",
    "bc1-rgba-unorm-srgb",
    "bc2-rgba-unorm",
    "bc2-rgba-unorm-srgb",
    "bc3-rgba-unorm",
    "bc3-rgba-unorm-srgb",
    "bc4-r-unorm",
    "bc4-r-snorm",
    "bc5-rg-unorm",
    "bc5-rg-snorm",
    "bc6h-rgb-ufloat",
    "bc6h-rgb-float",
    "bc7-rgba-unorm",
    "bc7-rgba-unorm-srgb",

    // ETC2 compressed formats usable if "texture-compression-etc2" is both
    // supported by the device/user agent and enabled in requestDevice.
    "etc2-rgb8unorm",
    "etc2-rgb8unorm-srgb",
    "etc2-rgb8a1unorm",
    "etc2-rgb8a1unorm-srgb",
    "etc2-rgba8unorm",
    "etc2-rgba8unorm-srgb",
    "eac-r11unorm",
    "eac-r11snorm",
    "eac-rg11unorm",
    "eac-rg11snorm",

    // ASTC compressed formats usable if "texture-compression-astc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "astc-4x4-unorm",
    "astc-4x4-unorm-srgb",
    "astc-5x4-unorm",
    "astc-5x4-unorm-srgb",
    "astc-5x5-unorm",
    "astc-5x5-unorm-srgb",
    "astc-6x5-unorm",
    "astc-6x5-unorm-srgb",
    "astc-6x6-unorm",
    "astc-6x6-unorm-srgb",
    "astc-8x5-unorm",
    "astc-8x5-unorm-srgb",
    "astc-8x6-unorm",
    "astc-8x6-unorm-srgb",
    "astc-8x8-unorm",
    "astc-8x8-unorm-srgb",
    "astc-10x5-unorm",
    "astc-10x5-unorm-srgb",
    "astc-10x6-unorm",
    "astc-10x6-unorm-srgb",
    "astc-10x8-unorm",
    "astc-10x8-unorm-srgb",
    "astc-10x10-unorm",
    "astc-10x10-unorm-srgb",
    "astc-12x10-unorm",
    "astc-12x10-unorm-srgb",
    "astc-12x12-unorm",
    "astc-12x12-unorm-srgb",
};

The depth component of the "depth24plus" and "depth24plus-stencil8" formats may be implemented as either a 24-bit unsigned normalized value (like "depth24unorm" in "depth24unorm-stencil8") or a 32-bit IEEE 754 floating point value (like "depth32float").

add something on GPUAdapter(?) that gives an estimate of the bytes per texel of "stencil8", "depth24plus-stencil8", and "depth32float-stencil8".

The stencil8 format may be implemented as either a real "stencil8", or "depth24stencil8", where the depth aspect is hidden and inaccessible.

Note: While the precision of depth32float channels is strictly higher than the precision of depth24unorm channels for all values in the representable range (0.0 to 1.0), note that the set of representable values is not an exact superset: for depth24unorm, 1 ULP has a constant value of 1 / (2²⁴ − 1); for depth32float, 1 ULP has a variable value no greater than 1 / (2²⁴).

A renderable format is either a color renderable format, or a depth-or-stencil format. If a format is listed in § 25.1.1 Plain color formats with RENDER_ATTACHMENT capability, it is a color renderable format. Any other format is not a color renderable format. All depth-or-stencil formats are renderable.

6.4. GPUExternalTexture

A GPUExternalTexture is a sampleable texture wrapping an external video object. The contents of a GPUExternalTexture object may not change, either from inside WebGPU (it is only sampleable) or from outside WebGPU (e.g. due to video frame advancement).

Update this description with canvas.

They are bound into bind group layouts using the externalTexture bind group layout entry member. External textures use several binding slots: see Exceeds the binding slot limits.

External textures can be implemented without creating a copy of the imported source, but this depends on implementation-defined factors. Ownership of the underlying representation may either be exclusive or shared with other owners (such as a video decoder), but this is not visible to the application.

The underlying representation of an external texture is unobservable (except for sampling behavior) but typically may include

The configuration used may not be stable across time, systems, user agents, media sources, or frames within a single video source. In order to account for many possible representations, the binding conservatively uses the following, for each external texture:

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUExternalTexture {
};
GPUExternalTexture includes GPUObjectBase;

GPUExternalTexture has the following internal slots:

[[destroyed]], of type boolean

Indicates whether the object has been destroyed (can no longer be used). Initially set to false.

6.4.1. Importing External Textures

An external texture is created from an external video object using importExternalTexture().

Update this description with canvas.

External textures are destroyed automatically, as a microtask, instead of manually or upon garbage collection like other resources.

dictionary GPUExternalTextureDescriptor : GPUObjectDescriptorBase {
    required HTMLVideoElement source;
    GPUPredefinedColorSpace colorSpace = "srgb";
};
importExternalTexture(descriptor)

Creates a GPUExternalTexture wrapping the provided image source.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.importExternalTexture(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUExternalTextureDescriptor Provides the external image source object (and any creation options).

Returns: GPUExternalTexture

  1. Let source be descriptor.source.

  2. Let usability be the result of checking the usability of source (which may throw an exception).

  3. If usability is bad, throw an InvalidStateError and stop.

  4. If source is not origin-clean, throw a SecurityError and stop.

  5. Let data be the result of converting the current image contents of source into the color space descriptor.colorSpace with unpremultiplied alpha.

    This may result in values outside of the range [0, 1]. If clamping is desired, it may be performed after sampling.

    Note: This is described like a copy, but may be implemented as a reference to read-only underlying data plus appropriate metadata to perform conversion later.

  6. Let result be a new GPUExternalTexture object wrapping data.

  7. Queue a microtask to set result.[[destroyed]] to true, releasing the underlying resource.

    Is this too restrictive?

  8. Return result.
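
For illustration, wrapping the current frame of a video might look like the following non-normative sketch, where video is assumed to be an HTMLVideoElement that satisfies the checks above:

const externalTexture = device.importExternalTexture({ source: video });
// The result is destroyed automatically by the queued microtask (see step 7).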

6.4.2. Sampling External Textures

External textures are represented in WGSL with texture_external and may be read using textureLoad and textureSampleLevel.

The sampler provided to textureSampleLevel is used to sample the underlying textures. The result is in the color space set by colorSpace. It is implementation-dependent whether, for any given external texture, the sampler (and filtering) is applied before or after conversion from underlying values into the specified color space.

Note: If the internal representation is an RGBA plane, sampling behaves as on a regular 2D texture. If there are several underlying planes (e.g. Y+UV), the sampler is used to sample each underlying texture separately, prior to conversion from YUV to the specified color space.

7. Samplers

7.1. GPUSampler

A GPUSampler encodes transformations and filtering information that can be used in a shader to interpret texture resource data.

GPUSamplers are created via GPUDevice.createSampler(optional descriptor) that returns a new sampler object.

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUSampler {
};
GPUSampler includes GPUObjectBase;

GPUSampler has the following internal slots:

[[descriptor]], of type GPUSamplerDescriptor, readonly

The GPUSamplerDescriptor with which the GPUSampler was created.

[[isComparison]] of type boolean.

Whether the GPUSampler is used as a comparison sampler.

[[isFiltering]] of type boolean.

Whether the GPUSampler weights multiple samples of a texture.

7.2. Sampler Creation

7.2.1. GPUSamplerDescriptor

A GPUSamplerDescriptor specifies the options to use to create a GPUSampler.

dictionary GPUSamplerDescriptor : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 32;
    GPUCompareFunction compare;
    [Clamp] unsigned short maxAnisotropy = 1;
};
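
For example, a trilinear, repeating sampler could be created with a non-normative sketch like the following (all values shown are illustrative):

const sampler = device.createSampler({
    addressModeU: "repeat",
    addressModeV: "repeat",
    magFilter: "linear",
    minFilter: "linear",
    mipmapFilter: "linear",
});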

explain how LOD is calculated and if there are differences here between platforms.

explain what anisotropic sampling is

GPUAddressMode describes the behavior of the sampler if the sample footprint extends beyond the bounds of the sampled texture.

Describe a "sample footprint" in greater detail.

enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat",
};
"clamp-to-edge"

Texture coordinates are clamped between 0.0 and 1.0, inclusive.

"repeat"

Texture coordinates wrap to the other side of the texture.

"mirror-repeat"

Texture coordinates wrap to the other side of the texture, but the texture is flipped when the integer part of the coordinate is odd.

GPUFilterMode describes the behavior of the sampler if the sample footprint does not exactly match one texel.

enum GPUFilterMode {
    "nearest",
    "linear",
};
"nearest"

Return the value of the texel nearest to the texture coordinates.

"linear"

Select two texels in each dimension and return a linear interpolation between their values.

GPUCompareFunction specifies the behavior of a comparison sampler. If a comparison sampler is used in a shader, an input value is compared to the sampled texture value, and the result of this comparison test (0.0f for pass, or 1.0f for fail) is used in the filtering operation.

describe how filtering interacts with comparison sampling.

enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always",
};
"never"

Comparison tests never pass.

"less"

A provided value passes the comparison test if it is less than the sampled value.

"equal"

A provided value passes the comparison test if it is equal to the sampled value.

"less-equal"

A provided value passes the comparison test if it is less than or equal to the sampled value.

"greater"

A provided value passes the comparison test if it is greater than the sampled value.

"not-equal"

A provided value passes the comparison test if it is not equal to the sampled value.

"greater-equal"

A provided value passes the comparison test if it is greater than or equal to the sampled value.

"always"

Comparison tests always pass.

validating GPUSamplerDescriptor(device, descriptor)

Arguments:

Returns: boolean

Return true if and only if all of the following conditions are satisfied:

createSampler(descriptor)

Creates a GPUSampler.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createSampler(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUSamplerDescriptor Description of the GPUSampler to create.

Returns: GPUSampler

  1. Let s be a new GPUSampler object.

  2. Set s.[[descriptor]] to descriptor.

  3. Set s.[[isComparison]] to false if the compare attribute of s.[[descriptor]] is null or undefined. Otherwise, set it to true.

  4. Set s.[[isFiltering]] to false if none of minFilter, magFilter, or mipmapFilter has the value of "linear". Otherwise, set it to true.

  5. Return s.

Valid Usage

8. Resource Binding

8.1. GPUBindGroupLayout

A GPUBindGroupLayout defines the interface between a set of resources bound in a GPUBindGroup and their accessibility in shader stages.

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;

GPUBindGroupLayout has the following internal slots:

[[descriptor]]

8.1.1. Creation

A GPUBindGroupLayout is created via GPUDevice.createBindGroupLayout().

dictionary GPUBindGroupLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutEntry> entries;
};

A GPUBindGroupLayoutEntry describes a single shader resource binding to be included in a GPUBindGroupLayout.

typedef [EnforceRange] unsigned long GPUShaderStageFlags;
[Exposed=(Window, DedicatedWorker)]
namespace GPUShaderStage {
    const GPUFlagsConstant VERTEX   = 0x1;
    const GPUFlagsConstant FRAGMENT = 0x2;
    const GPUFlagsConstant COMPUTE  = 0x4;
};

dictionary GPUBindGroupLayoutEntry {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;

    GPUBufferBindingLayout buffer;
    GPUSamplerBindingLayout sampler;
    GPUTextureBindingLayout texture;
    GPUStorageTextureBindingLayout storageTexture;
    GPUExternalTextureBindingLayout externalTexture;
};

GPUBindGroupLayoutEntry dictionaries have the following members:

binding, of type GPUIndex32

A unique identifier for a resource binding within the GPUBindGroupLayout, corresponding to a GPUBindGroupEntry and a binding declared in the GPUShaderModule.

visibility, of type GPUShaderStageFlags

A bitset of the members of GPUShaderStage. Each set bit indicates that a GPUBindGroupLayoutEntry's resource will be accessible from the associated shader stage.

buffer, of type GPUBufferBindingLayout

When not undefined, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUBufferBinding.

sampler, of type GPUSamplerBindingLayout

When not undefined, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUSampler.

texture, of type GPUTextureBindingLayout

When not undefined, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUTextureView.

storageTexture, of type GPUStorageTextureBindingLayout

When not undefined, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUTextureView.

externalTexture, of type GPUExternalTextureBindingLayout

When not undefined, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUExternalTexture.

The binding member of a GPUBindGroupLayoutEntry is determined by which member of the GPUBindGroupLayoutEntry is defined: buffer, sampler, texture, storageTexture, or externalTexture. Only one may be defined for any given GPUBindGroupLayoutEntry. Each member has an associated GPUBindingResource type and each binding type has an associated internal usage, given by this table:

Binding member    Resource type        Binding type            Binding usage
buffer            GPUBufferBinding     "uniform"               constant
                                       "storage"               storage
                                       "read-only-storage"     storage-read
sampler           GPUSampler           "filtering"             constant
                                       "non-filtering"         constant
                                       "comparison"            constant
texture           GPUTextureView       "float"                 constant
                                       "unfilterable-float"    constant
                                       "depth"                 constant
                                       "sint"                  constant
                                       "uint"                  constant
storageTexture    GPUTextureView       "write-only"            storage
externalTexture   GPUExternalTexture                           constant

The list of GPUBindGroupLayoutEntry values entries exceeds the binding slot limits of supported limits limits if the number of slots used toward a limit exceeds the supported value in limits. Each entry may use multiple slots toward multiple limits.
  1. For each entry in entries, if:

    entry.buffer?.type is "uniform" and entry.buffer?.hasDynamicOffset is true

    Consider 1 maxDynamicUniformBuffersPerPipelineLayout slot to be used.

    entry.buffer?.type is "storage" and entry.buffer?.hasDynamicOffset is true

    Consider 1 maxDynamicStorageBuffersPerPipelineLayout slot to be used.

  2. For each shader stage stage in « VERTEX, FRAGMENT, COMPUTE »:

    1. For each entry in entries for which entry.visibility contains stage, if:

      entry.buffer?.type is "uniform"

      Consider 1 maxUniformBuffersPerShaderStage slot to be used.

      entry.buffer?.type is "storage" or "read-only-storage"

      Consider 1 maxStorageBuffersPerShaderStage slot to be used.

      entry.sampler is not undefined

      Consider 1 maxSamplersPerShaderStage slot to be used.

      entry.texture is not undefined

      Consider 1 maxSampledTexturesPerShaderStage slot to be used.

      entry.storageTexture is not undefined

      Consider 1 maxStorageTexturesPerShaderStage slot to be used.

      entry.externalTexture is not undefined

      Consider 4 maxSampledTexturesPerShaderStage slots, 1 maxSamplersPerShaderStage slot, and 1 maxUniformBuffersPerShaderStage slot to be used.

enum GPUBufferBindingType {
    "uniform",
    "storage",
    "read-only-storage",
};

dictionary GPUBufferBindingLayout {
    GPUBufferBindingType type = "uniform";
    boolean hasDynamicOffset = false;
    GPUSize64 minBindingSize = 0;
};

GPUBufferBindingLayout dictionaries have the following members:

type, of type GPUBufferBindingType, defaulting to "uniform"

Indicates the type required for buffers bound to this binding.

hasDynamicOffset, of type boolean, defaulting to false

Indicates whether this binding requires a dynamic offset.

minBindingSize, of type GPUSize64, defaulting to 0

Indicates the minimum buffer binding size.

Bindings are always validated against this size in createBindGroup().

If this is not 0, pipeline creation additionally validates that this value is large enough for the bindings declared in the shader.

If this is 0, draw/dispatch commands additionally validate that each binding in the GPUBindGroup is large enough for the bindings declared in the shader.

Note: Similar execution-time validation is theoretically possible for other binding-related fields specified for early validation, like sampleType and format, which currently can only be validated in pipeline creation. However, such execution-time validation could be costly or unnecessarily complex, so it is available only for minBindingSize which is expected to have the most ergonomic impact.

enum GPUSamplerBindingType {
    "filtering",
    "non-filtering",
    "comparison",
};

dictionary GPUSamplerBindingLayout {
    GPUSamplerBindingType type = "filtering";
};

GPUSamplerBindingLayout dictionaries have the following members:

type, of type GPUSamplerBindingType, defaulting to "filtering"

Indicates the required type of a sampler bound to this binding.

enum GPUTextureSampleType {
    "float",
    "unfilterable-float",
    "depth",
    "sint",
    "uint",
};

dictionary GPUTextureBindingLayout {
    GPUTextureSampleType sampleType = "float";
    GPUTextureViewDimension viewDimension = "2d";
    boolean multisampled = false;
};

consider making sampleType truly optional.

GPUTextureBindingLayout dictionaries have the following members:

sampleType, of type GPUTextureSampleType, defaulting to "float"

Indicates the type required for texture views bound to this binding.

viewDimension, of type GPUTextureViewDimension, defaulting to "2d"

Indicates the required dimension for texture views bound to this binding.

multisampled, of type boolean, defaulting to false

Indicates whether or not texture views bound to this binding must be multisampled.

enum GPUStorageTextureAccess {
    "write-only",
};

dictionary GPUStorageTextureBindingLayout {
    GPUStorageTextureAccess access = "write-only";
    required GPUTextureFormat format;
    GPUTextureViewDimension viewDimension = "2d";
};

consider making format truly optional.

GPUStorageTextureBindingLayout dictionaries have the following members:

access, of type GPUStorageTextureAccess, defaulting to "write-only"

Indicates whether texture views bound to this binding will be bound for read-only or write-only access.

format, of type GPUTextureFormat

The required format of texture views bound to this binding.

viewDimension, of type GPUTextureViewDimension, defaulting to "2d"

Indicates the required dimension for texture views bound to this binding.

dictionary GPUExternalTextureBindingLayout {
};

A GPUBindGroupLayout object has the following internal slots:

[[entryMap]] of type ordered map<GPUSize32, GPUBindGroupLayoutEntry>.

The map of binding indices pointing to the GPUBindGroupLayoutEntrys, which this GPUBindGroupLayout describes.

[[dynamicOffsetCount]] of type GPUSize32.

The number of buffer bindings with dynamic offsets in this GPUBindGroupLayout.

[[exclusivePipeline]] of type GPUPipelineBase?, initially null.

The pipeline that created this GPUBindGroupLayout, if it was created as part of a default pipeline layout. If not null, GPUBindGroups created with this GPUBindGroupLayout can only be used with the specified GPUPipelineBase.

createBindGroupLayout(descriptor)

Creates a GPUBindGroupLayout.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createBindGroupLayout(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUBindGroupLayoutDescriptor Description of the GPUBindGroupLayout to create.

Returns: GPUBindGroupLayout

  1. Let layout be a new valid GPUBindGroupLayout object.

  2. Set layout.[[descriptor]] to descriptor.

  3. Issue the following steps on the Device timeline of this:

    1. If any of the following conditions are unsatisfied:

      Then:

      1. Generate a GPUValidationError in the current scope with appropriate error message.

      2. Make layout invalid and return layout.

    2. Set layout.[[dynamicOffsetCount]] to the number of entries in descriptor where buffer is not undefined and buffer.hasDynamicOffset is true.

    3. For each GPUBindGroupLayoutEntry entry in descriptor.entries:

      1. Insert entry into layout.[[entryMap]] with the key of entry.binding.

  4. Return layout.
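
For illustration, a layout describing a sampler, a sampled texture, and a uniform buffer might be created as in the following non-normative sketch (the binding numbers and visibility flags are illustrative):

const bindGroupLayout = device.createBindGroupLayout({
    entries: [
        { binding: 0, visibility: GPUShaderStage.FRAGMENT, sampler: {} },
        { binding: 1, visibility: GPUShaderStage.FRAGMENT, texture: {} },
        { binding: 2,
          visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
          buffer: { type: "uniform" } },
    ],
});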

8.1.2. Compatibility

Two GPUBindGroupLayout objects a and b are considered group-equivalent if and only if all of the following conditions are satisfied:

If bind group layouts are group-equivalent, they can be used interchangeably in all contexts.

8.2. GPUBindGroup

A GPUBindGroup defines a set of resources to be bound together in a group and how the resources are used in shader stages.

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;

8.2.1. Bind Group Creation

A GPUBindGroup is created via GPUDevice.createBindGroup().

dictionary GPUBindGroupDescriptor : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupEntry> entries;
};

A GPUBindGroupEntry describes a single resource to be bound in a GPUBindGroup.

typedef (GPUSampler or GPUTextureView or GPUBufferBinding or GPUExternalTexture) GPUBindingResource;

dictionary GPUBindGroupEntry {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};
dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};

A GPUBindGroup object has the following internal slots:

[[layout]] of type GPUBindGroupLayout.

The GPUBindGroupLayout associated with this GPUBindGroup.

[[entries]] of type sequence<GPUBindGroupEntry>.

The set of GPUBindGroupEntrys this GPUBindGroup describes.

[[usedResources]] of type ordered map<subresource, list<internal usage>>.

The set of buffer and texture subresources used by this bind group, associated with lists of the internal usage flags.

createBindGroup(descriptor)

Creates a GPUBindGroup.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createBindGroup(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUBindGroupDescriptor Description of the GPUBindGroup to create.

Returns: GPUBindGroup

  1. Let bindGroup be a new valid GPUBindGroup object.

  2. Let limits be this.[[device]].[[limits]].

  3. Issue the following steps on the Device timeline of this:

    1. If any of the following conditions are unsatisfied:

      For each GPUBindGroupEntry bindingDescriptor in descriptor.entries:

      Then:

      1. Generate a GPUValidationError in the current scope with appropriate error message.

      2. Make bindGroup invalid and return bindGroup.

    2. Let bindGroup.[[layout]] = descriptor.layout.

    3. Let bindGroup.[[entries]] = descriptor.entries.

    4. Let bindGroup.[[usedResources]] = {}.

    5. For each GPUBindGroupEntry bindingDescriptor in descriptor.entries:

      1. Let internalUsage be the binding usage for layoutBinding.

      2. Each subresource seen by resource is added to [[usedResources]] as internalUsage.

  4. Return bindGroup.
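
For illustration, a bind group matching the layout sketched in § 8.1.1 Creation might be created as in the following non-normative sketch; bindGroupLayout, sampler, texture, and uniformBuffer are assumed to exist from earlier sketches:

const bindGroup = device.createBindGroup({
    layout: bindGroupLayout,
    entries: [
        { binding: 0, resource: sampler },
        { binding: 1, resource: texture.createView() },
        { binding: 2, resource: { buffer: uniformBuffer, offset: 0, size: 64 } },
    ],
});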

effective buffer binding size(binding)
  1. If binding.size is undefined:

    1. Return max(0, binding.buffer.[[size]] - binding.offset);

  2. Return binding.size.

8.3. GPUPipelineLayout

A GPUPipelineLayout defines the mapping between resources of all GPUBindGroup objects set up during command encoding in setBindGroup, and the shaders of the pipeline set by GPURenderEncoderBase.setPipeline or GPUComputePassEncoder.setPipeline.

The full binding address of a resource can be defined as a trio of:

  1. shader stage mask, to which the resource is visible

  2. bind group index

  3. binding number

The components of this address can also be seen as the binding space of a pipeline. A GPUBindGroup (with the corresponding GPUBindGroupLayout) covers that space for a fixed bind group index. The contained bindings need to be a superset of the resources used by the shader at this bind group index.

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;

GPUPipelineLayout has the following internal slots:

[[bindGroupLayouts]] of type list<GPUBindGroupLayout>.

The GPUBindGroupLayout objects provided at creation in GPUPipelineLayoutDescriptor.bindGroupLayouts.

Note: using the same GPUPipelineLayout for many GPURenderPipeline or GPUComputePipeline pipelines guarantees that the user agent doesn’t need to rebind any resources internally when there is a switch between these pipelines.

GPUComputePipeline object X was created with GPUPipelineLayout.bindGroupLayouts A, B, C. GPUComputePipeline object Y was created with GPUPipelineLayout.bindGroupLayouts A, D, C. Supposing the command encoding sequence has two dispatches:
  1. setBindGroup(0, ...)

  2. setBindGroup(1, ...)

  3. setBindGroup(2, ...)

  4. setPipeline(X)

  5. dispatch()

  6. setBindGroup(1, ...)

  7. setPipeline(Y)

  8. dispatch()

In this scenario, the user agent would have to re-bind the group slot 2 for the second dispatch, even though neither the GPUBindGroupLayout at index 2 of GPUPipelineLayout.bindGroupLayouts nor the GPUBindGroup at slot 2 changes.

should this example and the note be moved to some "best practices" document?

Note: the expected usage of the GPUPipelineLayout is placing the most common and the least frequently changing bind groups at the "bottom" of the layout, meaning lower bind group slot numbers, like 0 or 1. The more frequently a bind group needs to change between draw calls, the higher its index should be. This general guideline allows the user agent to minimize state changes between draw calls, and consequently lower the CPU overhead.

8.3.1. Creation

A GPUPipelineLayout is created via GPUDevice.createPipelineLayout().

dictionary GPUPipelineLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout> bindGroupLayouts;
};
createPipelineLayout(descriptor)

Creates a GPUPipelineLayout.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createPipelineLayout(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUPipelineLayoutDescriptor Description of the GPUPipelineLayout to create.

Returns: GPUPipelineLayout

  1. If any of the following requirements are unmet:

    Let limits be this.[[device]].[[limits]].

    Let allEntries be the result of concatenating bgl.[[descriptor]].entries for all bgl in descriptor.bindGroupLayouts.

    Then:

    1. Generate a GPUValidationError in the current scope with appropriate error message.

    2. Create a new invalid GPUPipelineLayout and return the result.

  2. Let pl be a new GPUPipelineLayout object.

  3. Set the pl.[[bindGroupLayouts]] to descriptor.bindGroupLayouts.

  4. Return pl.
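
For illustration (bindGroupLayout is assumed from the earlier non-normative sketch):

const pipelineLayout = device.createPipelineLayout({
    bindGroupLayouts: [bindGroupLayout],
});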

Note: two GPUPipelineLayout objects are considered equivalent for any usage if their internal [[bindGroupLayouts]] sequences contain GPUBindGroupLayout objects that are group-equivalent.

9. Shader Modules

9.1. GPUShaderModule

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUShaderModule {
    Promise<GPUCompilationInfo> compilationInfo();
};
GPUShaderModule includes GPUObjectBase;

GPUShaderModule is a reference to an internal shader module object.

Finish defining multithreading API and add [Serializable] back to the interface. [Issue #gpuweb/gpuweb#354]

9.1.1. Shader Module Creation

dictionary GPUShaderModuleCompilationHint {
    required GPUPipelineLayout layout;
};

dictionary GPUShaderModuleDescriptor : GPUObjectDescriptorBase {
    required USVString code;
    object sourceMap;
    record<USVString, GPUShaderModuleCompilationHint> hints;
};

sourceMap, if defined, MAY be interpreted as a source-map-v3 format. Source maps are optional, but serve as a standardized way to support dev-tool integration such as source-language debugging. [SourceMap]

hints, if defined, maps an entry point name from the shader to a GPUShaderModuleCompilationHint. No validation is performed with any of these GPUShaderModuleCompilationHint. Implementations should use any information present in the GPUShaderModuleCompilationHint to perform as much compilation as is possible within createShaderModule().

Note: Supplying information in hints does not have any observable effect, other than performance. Because a single shader module can hold multiple entry points, and multiple pipelines can be created from a single shader module, it can be more performant for an implementation to do as much compilation as possible once in createShaderModule() rather than multiple times in the multiple calls to createComputePipeline() / createRenderPipeline().

Note: If possible, authors should be supplying the same information to createShaderModule() and createComputePipeline() / createRenderPipeline().

Note: If an author is unable to provide this hints information at the time of calling createShaderModule(), they should usually not delay calling createShaderModule(); but should instead just omit the unknown information from hints or GPUShaderModuleCompilationHint. Omitting this information may cause compilation to be deferred to createComputePipeline() / createRenderPipeline().

Note: If an author is not confident that the information passed to createShaderModule() will match the information later passed to createComputePipeline() / createRenderPipeline() with that same module, they should avoid passing that information to createShaderModule(), as passing mismatched information to createShaderModule() may cause unnecessary compilations to occur.

createShaderModule(descriptor)

Creates a GPUShaderModule.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createShaderModule(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUShaderModuleDescriptor Description of the GPUShaderModule to create.

Returns: GPUShaderModule

Describe createShaderModule() algorithm steps.
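
For illustration, a module might be created as in the following non-normative sketch; wgslCode and pipelineLayout are assumed to exist, and "main" is a hypothetical entry point name in that code:

const shaderModule = device.createShaderModule({
    code: wgslCode,
    hints: { main: { layout: pipelineLayout } },
});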

9.1.2. Shader Module Compilation Information

enum GPUCompilationMessageType {
    "error",
    "warning",
    "info",
};

[Exposed=(Window, DedicatedWorker), Serializable, SecureContext]
interface GPUCompilationMessage {
    readonly attribute DOMString message;
    readonly attribute GPUCompilationMessageType type;
    readonly attribute unsigned long long lineNum;
    readonly attribute unsigned long long linePos;
    readonly attribute unsigned long long offset;
    readonly attribute unsigned long long length;
};

[Exposed=(Window, DedicatedWorker), Serializable, SecureContext]
interface GPUCompilationInfo {
    readonly attribute FrozenArray<GPUCompilationMessage> messages;
};

A GPUCompilationMessage is an informational, warning, or error message generated by the GPUShaderModule compiler. The messages are intended to be human readable to help developers diagnose issues with their shader code. Each message may correspond to either a single point in the shader code, a substring of the shader code, or may not correspond to any specific point in the code at all.

GPUCompilationMessage has the following attributes:

message, of type DOMString, readonly

A human-readable string containing the message generated during the shader compilation.

type, of type GPUCompilationMessageType, readonly

The severity level of the message.

If the type is "error", it corresponds to a shader-creation error.

lineNum, of type unsigned long long, readonly

The line number in the shader code the message corresponds to. Value is one-based, such that a lineNum of 1 indicates the first line of the shader code.

If the message corresponds to a substring this points to the line on which the substring begins. Must be 0 if the message does not correspond to any specific point in the shader code.

Reference WGSL spec when it defines what a line is. [Issue #gpuweb/gpuweb#2435]

linePos, of type unsigned long long, readonly

The offset, in UTF-16 code units, from the beginning of line lineNum of the shader code to the point or beginning of the substring that the message corresponds to. Value is one-based, such that a linePos of 1 indicates the first character of the line.

If message corresponds to a substring this points to the first UTF-16 code unit of the substring. Must be 0 if the message does not correspond to any specific point in the shader code.

offset, of type unsigned long long, readonly

The offset from the beginning of the shader code in UTF-16 code units to the point or beginning of the substring that message corresponds to. Must reference the same position as lineNum and linePos. Must be 0 if the message does not correspond to any specific point in the shader code.

length, of type unsigned long long, readonly

The number of UTF-16 code units in the substring that message corresponds to. If the message does not correspond with a substring then length must be 0.

Note: GPUCompilationMessage.lineNum and GPUCompilationMessage.linePos are one-based since the most common use for them is expected to be printing human readable messages that can be correlated with the line and column numbers shown in many text editors.

Note: GPUCompilationMessage.offset and GPUCompilationMessage.length are appropriate to pass to substr() in order to retrieve the substring of the shader code the message corresponds to.

compilationInfo()

Returns any messages generated during the GPUShaderModule's compilation.

The locations, order, and contents of messages are implementation-defined. In particular, messages may not be ordered by lineNum.

Called on: GPUShaderModule this.

Returns: Promise<GPUCompilationInfo>

Describe compilationInfo() algorithm steps.
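
For illustration, compilation messages might be surfaced to developers with a non-normative sketch like the following (shaderModule is assumed from the earlier sketch):

const info = await shaderModule.compilationInfo();
for (const msg of info.messages) {
    console.log(`${msg.type} at ${msg.lineNum}:${msg.linePos}: ${msg.message}`);
}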

10. Pipelines

A pipeline, be it GPUComputePipeline or GPURenderPipeline, represents the complete function performed by a combination of the GPU hardware, the driver, and the user agent, that processes the input data in the shape of bindings and vertex buffers, and produces some output, like the colors in the output render targets.

Structurally, the pipeline consists of a sequence of programmable stages (shaders) and fixed-function states, such as the blending modes.

Note: Internally, depending on the target platform, the driver may convert some of the fixed-function states into shader code, and link it together with the shaders provided by the user. This linking is one of the reasons the object is created as a whole.

This combined state is created as a single object (by GPUDevice.createComputePipeline() or GPUDevice.createRenderPipeline()) and switched as one (by GPUComputePassEncoder.setPipeline or GPURenderEncoderBase.setPipeline, respectively).

10.1. Base pipelines

dictionary GPUPipelineDescriptorBase : GPUObjectDescriptorBase {
    GPUPipelineLayout layout;
};

interface mixin GPUPipelineBase {
    GPUBindGroupLayout getBindGroupLayout(unsigned long index);
};

GPUPipelineBase has the following internal slots:

[[layout]] of type GPUPipelineLayout.

The definition of the layout of resources which can be used with this.

GPUPipelineBase has the following methods:

getBindGroupLayout(index)

Gets a GPUBindGroupLayout that is compatible with the GPUPipelineBase's GPUBindGroupLayout at index.

Called on: GPUPipelineBase this.

Arguments:

Arguments for the GPUPipelineBase.getBindGroupLayout(index) method.
Parameter Type Nullable Optional Description
index unsigned long Index into the pipeline layout’s [[bindGroupLayouts]] sequence.

Returns: GPUBindGroupLayout

  1. If index ≥ this.[[device]].[[limits]].maxBindGroups:

    1. Throw a RangeError.

  2. If this is not valid:

    1. Return a new error GPUBindGroupLayout.

  3. Return a new GPUBindGroupLayout object that references the same internal object as this.[[layout]].[[bindGroupLayouts]][index].

Specify this more properly once we have internal objects for GPUBindGroupLayout. Alternatively, only spec it as a new internal object that's group-equivalent.

Note: Only returning new GPUBindGroupLayout objects ensures no synchronization is necessary between the Content timeline and the Device timeline.
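A non-normative sketch of the common usage pattern, assuming an existing GPUDevice device, a compute shader source computeShaderCode that binds a uniform buffer at group 0, binding 0, and an existing GPUBuffer uniformBuffer (createShaderModule() and createBindGroup() are defined elsewhere in this specification):

// Sketch: build a bind group against the layout a pipeline expects.
const pipeline = device.createComputePipeline({
    // layout omitted: a default pipeline layout is created (see § 10.1.1).
    compute: {
        module: device.createShaderModule({ code: computeShaderCode }),
        entryPoint: 'main',
    },
});

// Each call returns a new GPUBindGroupLayout wrapping the same internal object.
const bindGroupLayout = pipeline.getBindGroupLayout(0);

const bindGroup = device.createBindGroup({
    layout: bindGroupLayout,
    entries: [{ binding: 0, resource: { buffer: uniformBuffer } }],
});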

10.1.1. Default pipeline layout

A GPUPipelineBase object that was created without a layout has a default layout created and used instead.

To create a default pipeline layout for GPUPipelineBase pipeline, run the following steps:

  1. Let groupDescs be a sequence of device.[[limits]].maxBindGroups new GPUBindGroupLayoutDescriptor objects.

  2. For each groupDesc in groupDescs:

    1. Set groupDesc.entries to an empty sequence.

  3. For each GPUProgrammableStage stageDesc in the descriptor used to create pipeline:

    1. Let stageInfo be the "reflection information" for stageDesc.

      Define the reflection information concept so that this spec can interface with the WGSL spec and get information what the interface is for a GPUShaderModule for a specific entrypoint.

    2. Let shaderStage be the GPUShaderStageFlags for stageDesc.entryPoint in stageDesc.module.

    3. For each resource resource in stageInfo’s resource interface:

      1. Let group be resource’s "group" decoration.

      2. Let binding be resource’s "binding" decoration.

      3. Let entry be a new GPUBindGroupLayoutEntry.

      4. Set entry.binding to binding.

      5. Set entry.visibility to shaderStage.

      6. If resource is for a sampler binding:

        1. Let samplerLayout be a new GPUSamplerBindingLayout.

        2. Set entry.sampler to samplerLayout.

      7. If resource is for a comparison sampler binding:

        1. Let samplerLayout be a new GPUSamplerBindingLayout.

        2. Set samplerLayout.type to "comparison".

        3. Set entry.sampler to samplerLayout.

      8. If resource is for a buffer binding:

        1. Let bufferLayout be a new GPUBufferBindingLayout.

        2. Set bufferLayout.minBindingSize to resource’s minimum buffer binding size.

          link to a definition for "minimum buffer binding size" in the "reflection information".

        3. If resource is for a read-only storage buffer:

          1. Set bufferLayout.type to "read-only-storage".

        4. If resource is for a storage buffer:

          1. Set bufferLayout.type to "storage".

        5. Set entry.buffer to bufferLayout.

      9. If resource is for a sampled texture binding:

        1. Let textureLayout be a new GPUTextureBindingLayout.

        2. If resource is a depth texture binding, set textureLayout.sampleType to "depth".

          Otherwise, set textureLayout.sampleType according to the sampled type of resource: "float" for f32 when resource is statically used with a sampler in the shader, "unfilterable-float" for f32 otherwise, "sint" for i32, and "uint" for u32.

        3. Set textureLayout.viewDimension to resource’s dimension.

        4. If resource is for a multisampled texture:

          1. Set textureLayout.multisampled to true.

        5. Set entry.texture to textureLayout.

      10. If resource is for a storage texture binding:

        1. Let storageTextureLayout be a new GPUStorageTextureBindingLayout.

        2. Set storageTextureLayout.format to resource’s format.

        3. Set storageTextureLayout.viewDimension to resource’s dimension.

        4. If resource is for a write-only storage texture:

          1. Set storageTextureLayout.access to "write-only".

        5. Set entry.storageTexture to storageTextureLayout.

      11. If groupDescs[group] has an entry previousEntry with binding equal to binding:

        1. If entry has different visibility than previousEntry:

          1. Add the bits set in entry.visibility into previousEntry.visibility

        2. If resource is for a buffer binding and entry has greater buffer.minBindingSize than previousEntry:

          1. Set previousEntry.buffer.minBindingSize to entry.buffer.minBindingSize.

        3. If resource is a sampled texture binding and entry has different texture.sampleType than previousEntry and both entry and previousEntry have texture.sampleType of either "float" or "unfilterable-float":

          1. Set previousEntry.texture.sampleType to "float".

        4. If any other property is unequal between entry and previousEntry:

          1. Return null (which will cause the creation of the pipeline to fail).

      12. Else

        1. Append entry to groupDescs[group].

  4. Let groupLayouts be a new sequence.

  5. For each groupDesc in groupDescs:

    1. Let bindGroupLayout be the result of calling device.createBindGroupLayout(groupDesc).

    2. Set bindGroupLayout.[[exclusivePipeline]] to pipeline.

    3. Append bindGroupLayout to groupLayouts.

  6. Let desc be a new GPUPipelineLayoutDescriptor.

  7. Set desc.bindGroupLayouts to groupLayouts.

  8. Return device.createPipelineLayout(desc).

This fills the pipeline layout with empty bindgroups. Revisit once the behavior of empty bindgroups is specified.
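As a non-normative illustration of the algorithm above, consider a stage that is only visible to the fragment shader and whose WGSL declares the following resources (attribute syntax matching the pipeline-overridable constants example in § 10.1.2 below):

[[group(0), binding(0)]] var<uniform> params : Params;  // uniform buffer
[[group(0), binding(1)]] var samp : sampler;            // filtering sampler
[[group(0), binding(2)]] var tex : texture_2d<f32>;     // sampled with samp

The default layout for bind group 0 would then contain roughly the following GPUBindGroupLayoutEntry values (buffer.minBindingSize, taken from the reflection of Params, is omitted here):

{ binding: 0, visibility: GPUShaderStage.FRAGMENT, buffer: {} }
{ binding: 1, visibility: GPUShaderStage.FRAGMENT, sampler: {} }
{ binding: 2, visibility: GPUShaderStage.FRAGMENT, texture: { sampleType: "float", viewDimension: "2d" } }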

10.1.2. GPUProgrammableStage

A GPUProgrammableStage describes the entry point in the user-provided GPUShaderModule that controls one of the programmable stages of a pipeline.

dictionary GPUProgrammableStage {
    required GPUShaderModule module;
    required USVString entryPoint;
    record<USVString, GPUPipelineConstantValue> constants;
};

typedef double GPUPipelineConstantValue; // May represent WGSL’s bool, f32, i32, u32.
constants, of type record<USVString, GPUPipelineConstantValue>

Specifies the values of pipeline-overridable constants in the shader module module.

Each such pipeline-overridable constant is uniquely identified by a single pipeline-overridable constant identifier string (representing the numeric ID of the constant, if one is specified, and otherwise the constant’s identifier name).

The key of each key-value pair must equal the identifier string of one such constant. When the pipeline is executed, that constant will have the specified value.

Values are specified as GPUPipelineConstantValue, a double which is converted to the WGSL data type of the corresponding pipeline-overridable constant (bool, i32, u32, or f32) via an IDL value (boolean, long, unsigned long, or float).

Pipeline-overridable constants defined in WGSL:
[[override(0)]]    let has_point_light: bool = true; // Algorithmic control.
[[override(1200)]] let specular_param: f32 = 2.3;    // Numeric control.
[[override(1300)]] let gain: f32;                    // Must be overridden.
[[override]]       let width: f32 = 0.0;             // Specified at the API level
                                                     //   using the name "width".
[[override]]       let depth: f32;                   // Specified at the API level
                                                     //   using the name "depth".
                                                     //   Must be overridden.

Corresponding JavaScript code, providing only the overrides which are required (have no defaults):

{
    // ...
    constants: {
        1300: 2.0,  // "gain"
        depth: -1,  // "depth"
    }
}

Corresponding JavaScript code, overriding all constants:

{
    // ...
    constants: {
        0: false,   // "has_point_light"
        1200: 3.0,  // "specular_param"
        1300: 2.0,  // "gain"
        width: 20,  // "width"
        depth: -1,  // "depth"
    }
}
validating GPUProgrammableStage(stage, descriptor, layout)

Arguments:

Return true if all of the following conditions are met:

A return value of false corresponds to a pipeline-creation error.

validating shader binding(binding, layout)

Arguments:

Let bindGroup be the bind group index, and bindIndex be the binding index, of the shader binding declaration variable.

Return true if all of the following conditions are satisfied:

A resource binding is considered to be statically used by a shader entry point if and only if it’s reachable by the control flow graph of the shader module, starting at the entry point.

10.2. GPUComputePipeline

A GPUComputePipeline is a kind of pipeline that controls the compute shader stage, and can be used in GPUComputePassEncoder.

Compute inputs and outputs are all contained in the bindings, according to the given GPUPipelineLayout. The outputs correspond to buffer bindings with a type of "storage" and storageTexture bindings with a type of "write-only".

Stages of a compute pipeline:

  1. Compute shader

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;
GPUComputePipeline includes GPUPipelineBase;

10.2.1. Creation

dictionary GPUComputePipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStage compute;
};
createComputePipeline(descriptor)

Creates a GPUComputePipeline.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createComputePipeline(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUComputePipelineDescriptor Description of the GPUComputePipeline to create.

Returns: GPUComputePipeline

  1. Let pipeline be a new valid GPUComputePipeline object.

  2. Issue the following steps on the Device timeline of this:

    1. If any of the following conditions are unsatisfied:

      Then:

      1. Generate a GPUValidationError in the current scope with appropriate error message.

      2. Make pipeline invalid.

    2. If descriptor.layout is undefined:

      1. Set pipeline.[[layout]] to a new default pipeline layout for pipeline.

      Otherwise set pipeline.[[layout]] to descriptor.layout.

  3. Return pipeline.

createComputePipelineAsync(descriptor)

Creates a GPUComputePipeline. The returned Promise resolves when the created pipeline is ready to be used without additional delay.

If pipeline creation fails, the returned Promise rejects with an OperationError.

Note: Use of this method is preferred whenever possible, as it prevents blocking the queue timeline work on pipeline compilation.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createComputePipelineAsync(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUComputePipelineDescriptor Description of the GPUComputePipeline to create.

Returns: Promise<GPUComputePipeline>

  1. Let promise be a new promise.

  2. Issue the following steps on the Device timeline of this:

    1. Let pipeline be a new GPUComputePipeline created as if this.createComputePipeline() was called with descriptor;

    2. When pipeline is ready to be used, resolve promise with pipeline.

  3. Return promise.
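A non-normative usage sketch, assuming an existing GPUDevice device and a compute shader source computeShaderCode, run inside an async function:

// Sketch: create a compute pipeline without stalling later queue submissions.
try {
    const pipeline = await device.createComputePipelineAsync({
        compute: {
            module: device.createShaderModule({ code: computeShaderCode }),
            entryPoint: 'main',
        },
    });
    // pipeline is ready; using it will not incur an additional compilation delay.
} catch (e) {
    // The promise rejects with an OperationError if pipeline creation failed.
    console.error('Compute pipeline creation failed:', e.message);
}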

10.3. GPURenderPipeline

A GPURenderPipeline is a kind of pipeline that controls the vertex and fragment shader stages, and can be used in GPURenderPassEncoder as well as GPURenderBundleEncoder.

Render pipeline inputs are:

Render pipeline outputs are:

A render pipeline is comprised of the following render stages:

  1. Vertex fetch, controlled by GPUVertexState.buffers

  2. Vertex shader, controlled by GPUVertexState

  3. Primitive assembly, controlled by GPUPrimitiveState

  4. Rasterization, controlled by GPUPrimitiveState, GPUDepthStencilState, and GPUMultisampleState

  5. Fragment shader, controlled by GPUFragmentState

  6. Stencil test and operation, controlled by GPUDepthStencilState

  7. Depth test and write, controlled by GPUDepthStencilState

  8. Output merging, controlled by GPUFragmentState.targets

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;
GPURenderPipeline includes GPUPipelineBase;

GPURenderPipeline has the following internal slots:

[[descriptor]], of type GPURenderPipelineDescriptor

The GPURenderPipelineDescriptor describing this pipeline.

All optional fields of GPURenderPipelineDescriptor are defined.

[[writesDepth]], of type boolean

True if the pipeline writes to the depth component of the depth/stencil attachment

[[writesStencil]], of type boolean

True if the pipeline writes to the stencil component of the depth/stencil attachment

10.3.1. Creation

dictionary GPURenderPipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUVertexState vertex;
    GPUPrimitiveState primitive = {};
    GPUDepthStencilState depthStencil;
    GPUMultisampleState multisample = {};
    GPUFragmentState fragment;
};

A GPURenderPipelineDescriptor describes the state of a render pipeline by configuring each of the render stages. See § 22.3 Rendering for the details.

createRenderPipeline(descriptor)

Creates a GPURenderPipeline.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createRenderPipeline(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPURenderPipelineDescriptor Description of the GPURenderPipeline to create.

Returns: GPURenderPipeline

  1. Let pipeline be a new valid GPURenderPipeline object.

  2. Issue the following steps on the Device timeline of this:

    1. If any of the following conditions are unsatisfied:

      Then:

      1. Generate a GPUValidationError in the current scope with appropriate error message.

      2. Make pipeline invalid.

    2. Set pipeline.[[descriptor]] to descriptor.

    3. Set pipeline.[[writesDepth]] to false.

    4. Set pipeline.[[writesStencil]] to false.

    5. Let depthStencil be descriptor.depthStencil.

    6. If depthStencil is not null:

      1. Set pipeline.[[writesDepth]] to depthStencil.depthWriteEnabled.

      2. If depthStencil.stencilWriteMask is not 0:

        1. Let stencilFront be depthStencil.stencilFront.

        2. Let stencilBack be depthStencil.stencilBack.

        3. Let cullMode be descriptor.primitive.cullMode.

        4. If cullMode is not "front", and any of stencilFront.passOp, stencilFront.depthFailOp, or stencilFront.failOp is not "keep":

          1. Set pipeline.[[writesStencil]] to true.

        5. If cullMode is not "back", and any of stencilBack.passOp, stencilBack.depthFailOp, or stencilBack.failOp is not "keep":

          1. Set pipeline.[[writesStencil]] to true.

    7. If descriptor.layout is undefined:

      1. Set pipeline.[[layout]] to a new default pipeline layout for pipeline.

      Otherwise set pipeline.[[layout]] to descriptor.layout.

  3. Return pipeline.

need description of the render states.
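A non-normative sketch of a render pipeline descriptor, assuming an existing GPUDevice device and a GPUShaderModule shaderModule with entry points 'vs_main' and 'fs_main'; the color target format is an assumption for illustration:

const renderPipeline = device.createRenderPipeline({
    // layout omitted: a default pipeline layout is created (see § 10.1.1).
    vertex: {
        module: shaderModule,
        entryPoint: 'vs_main',
    },
    primitive: {
        topology: 'triangle-list',
        frontFace: 'ccw',
        cullMode: 'back',
    },
    multisample: {
        count: 1,
    },
    fragment: {
        module: shaderModule,
        entryPoint: 'fs_main',
        targets: [{ format: 'bgra8unorm' }],
    },
});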

createRenderPipelineAsync(descriptor)

Creates a GPURenderPipeline. The returned Promise resolves when the created pipeline is ready to be used without additional delay.

If pipeline creation fails, the returned Promise rejects with an OperationError.

Note: Use of this method is preferred whenever possible, as it prevents blocking the queue timeline work on pipeline compilation.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createRenderPipelineAsync(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPURenderPipelineDescriptor Description of the GPURenderPipeline to create.

Returns: Promise<GPURenderPipeline>

  1. Let promise be a new promise.

  2. Issue the following steps on the Device timeline of this:

    1. Let pipeline be a new GPURenderPipeline created as if this.createRenderPipeline() was called with descriptor;

    2. When pipeline is ready to be used, resolve promise with pipeline.

  3. Return promise.

validating GPURenderPipelineDescriptor(descriptor, device)

Arguments:

Return true if all of the following conditions are satisfied:

should we validate that cullMode is none for points and lines?

define what "compatible" means for render target formats.

need a proper limit for the maximum number of color targets.

10.3.2. Primitive State

enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip",
};
dictionary GPUPrimitiveState {
    GPUPrimitiveTopology topology = "triangle-list";
    GPUIndexFormat stripIndexFormat;
    GPUFrontFace frontFace = "ccw";
    GPUCullMode cullMode = "none";

    // Requires "depth-clip-control" feature.
    boolean unclippedDepth = false;
};
validating GPUPrimitiveState(descriptor, features)

Arguments:

Return true if all of the following conditions are satisfied:

enum GPUFrontFace {
    "ccw",
    "cw",
};
enum GPUCullMode {
    "none",
    "front",
    "back",
};

10.3.3. Multisample State

dictionary GPUMultisampleState {
    GPUSize32 count = 1;
    GPUSampleMask mask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
};
validating GPUMultisampleState(descriptor)

Arguments:

Return true if all of the following conditions are satisfied:

10.3.4. Fragment State

dictionary GPUFragmentState : GPUProgrammableStage {
    required sequence<GPUColorTargetState> targets;
};
validating GPUFragmentState(descriptor)

Return true if all of the following requirements are met:
component is a valid GPUBlendComponent if it meets the following requirements:

define the area of reach for "statically used" things of GPUProgrammableStage

10.3.5. Color Target State

dictionary GPUColorTargetState {
    required GPUTextureFormat format;

    GPUBlendState blend;
    GPUColorWriteFlags writeMask = 0xF;  // GPUColorWrite.ALL
};
dictionary GPUBlendState {
    required GPUBlendComponent color;
    required GPUBlendComponent alpha;
};
typedef [EnforceRange] unsigned long GPUColorWriteFlags;
[Exposed=(Window, DedicatedWorker)]
namespace GPUColorWrite {
    const GPUFlagsConstant RED   = 0x1;
    const GPUFlagsConstant GREEN = 0x2;
    const GPUFlagsConstant BLUE  = 0x4;
    const GPUFlagsConstant ALPHA = 0x8;
    const GPUFlagsConstant ALL   = 0xF;
};
10.3.5.1. Blend State
dictionary GPUBlendComponent {
    GPUBlendOperation operation = "add";
    GPUBlendFactor srcFactor = "one";
    GPUBlendFactor dstFactor = "zero";
};
enum GPUBlendFactor {
    "zero",
    "one",
    "src",
    "one-minus-src",
    "src-alpha",
    "one-minus-src-alpha",
    "dst",
    "one-minus-dst",
    "dst-alpha",
    "one-minus-dst-alpha",
    "src-alpha-saturated",
    "constant",
    "one-minus-constant",
};
enum GPUBlendOperation {
    "add",
    "subtract",
    "reverse-subtract",
    "min",
    "max",
};
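For example, classic source-over ("alpha") blending can be expressed with the dictionaries above; the target format shown is an assumption for illustration:

// Sketch: a color target using standard source-over alpha blending.
const colorTarget = {
    format: 'bgra8unorm',
    blend: {
        color: {
            operation: 'add',
            srcFactor: 'src-alpha',
            dstFactor: 'one-minus-src-alpha',
        },
        alpha: {
            operation: 'add',
            srcFactor: 'one',
            dstFactor: 'one-minus-src-alpha',
        },
    },
    writeMask: GPUColorWrite.ALL,
};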

10.3.6. Depth/Stencil State

dictionary GPUDepthStencilState {
    required GPUTextureFormat format;

    boolean depthWriteEnabled = false;
    GPUCompareFunction depthCompare = "always";

    GPUStencilFaceState stencilFront = {};
    GPUStencilFaceState stencilBack = {};

    GPUStencilValue stencilReadMask = 0xFFFFFFFF;
    GPUStencilValue stencilWriteMask = 0xFFFFFFFF;

    GPUDepthBias depthBias = 0;
    float depthBiasSlopeScale = 0;
    float depthBiasClamp = 0;
};
dictionary GPUStencilFaceState {
    GPUCompareFunction compare = "always";
    GPUStencilOperation failOp = "keep";
    GPUStencilOperation depthFailOp = "keep";
    GPUStencilOperation passOp = "keep";
};
enum GPUStencilOperation {
    "keep",
    "zero",
    "replace",
    "invert",
    "increment-clamp",
    "decrement-clamp",
    "increment-wrap",
    "decrement-wrap",
};
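A non-normative sketch of a common configuration: depth testing with writes enabled, plus a stencil face state that marks every covered pixel (the stencil reference value itself is set on the render pass encoder, defined elsewhere in this specification):

const depthStencilState = {
    format: 'depth24plus-stencil8',
    depthWriteEnabled: true,
    depthCompare: 'less',
    stencilFront: {
        compare: 'always',
        passOp: 'replace',  // write the stencil reference value where fragments pass
    },
    stencilBack: {},        // defaults: compare "always", all operations "keep"
    stencilReadMask: 0xFF,
    stencilWriteMask: 0xFF,
};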
validating GPUDepthStencilState(descriptor)

Arguments:

Return true, if and only if, all of the following conditions are satisfied:

how can this algorithm support depth/stencil formats that are added in extensions?

10.3.7. Vertex State

enum GPUIndexFormat {
    "uint16",
    "uint32",
};

The index format determines both the data type of index values in a buffer and, when used with strip primitive topologies ("line-strip" or "triangle-strip"), the primitive restart value. The primitive restart value is the index value indicating that a new primitive should be started rather than continuing to construct the strip with the prior indexed vertices.

GPUPrimitiveStates that specify a strip primitive topology must specify a stripIndexFormat if they are used for indexed draws so that the primitive restart value that will be used is known at pipeline creation time. GPUPrimitiveStates that specify a list primitive topology will use the index format passed to setIndexBuffer() when doing indexed rendering.

Index format Byte size Primitive restart value
"uint16" 2 0xFFFF
"uint32" 4 0xFFFFFFFF
10.3.7.1. Vertex Formats

The name of the format specifies the order of components, bits per component, and vertex data type for the component.

enum GPUVertexFormat {
    "uint8x2",
    "uint8x4",
    "sint8x2",
    "sint8x4",
    "unorm8x2",
    "unorm8x4",
    "snorm8x2",
    "snorm8x4",
    "uint16x2",
    "uint16x4",
    "sint16x2",
    "sint16x4",
    "unorm16x2",
    "unorm16x4",
    "snorm16x2",
    "snorm16x4",
    "float16x2",
    "float16x4",
    "float32",
    "float32x2",
    "float32x3",
    "float32x4",
    "uint32",
    "uint32x2",
    "uint32x3",
    "uint32x4",
    "sint32",
    "sint32x2",
    "sint32x3",
    "sint32x4",
};

The multi-component formats specify the number of components after "x". As such, "sint32x3" denotes a 3-component vector of i32 values in the shader.

enum GPUVertexStepMode {
    "vertex",
    "instance",
};

The step mode configures how an address for vertex buffer data is computed, based on the current vertex or instance index:

"vertex"

The address is advanced by arrayStride for each vertex, and reset between instances.

"instance"

The address is advanced by arrayStride for each instance.

dictionary GPUVertexState : GPUProgrammableStage {
    sequence<GPUVertexBufferLayout?> buffers = [];
};

A vertex buffer is, conceptually, a view into buffer memory as an array of structures. arrayStride is the stride, in bytes, between elements of that array. Each element of a vertex buffer is like a structure with a memory layout defined by its attributes, which describe the members of the structure.

Each GPUVertexAttribute describes its format and its offset, in bytes, within the structure.

Each attribute appears as a separate input in a vertex shader, each bound by a numeric location, which is specified by shaderLocation. Every location must be unique within the GPUVertexState.

dictionary GPUVertexBufferLayout {
    required GPUSize64 arrayStride;
    GPUVertexStepMode stepMode = "vertex";
    required sequence<GPUVertexAttribute> attributes;
};
dictionary GPUVertexAttribute {
    required GPUVertexFormat format;
    required GPUSize64 offset;

    required GPUIndex32 shaderLocation;
};
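For example, a single interleaved vertex buffer carrying a position (three 32-bit floats) followed by a texture coordinate (two 32-bit floats) per vertex could be described as follows; the shader locations are assumptions matching the shader's declared inputs:

// Each vertex buffer element is 3 × 4 + 2 × 4 = 20 bytes.
const vertexBuffers = [{
    arrayStride: 20,
    stepMode: 'vertex',
    attributes: [
        { format: 'float32x3', offset: 0,  shaderLocation: 0 },  // position
        { format: 'float32x2', offset: 12, shaderLocation: 1 },  // texture coordinate
    ],
}];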
validating GPUVertexBufferLayout(device, descriptor, vertexStage)

Arguments:

Return true, if and only if, all of the following conditions are satisfied:

validating GPUVertexState(device, descriptor)

Arguments:

Return true, if and only if, all of the following conditions are satisfied:

11. Command Buffers

Command buffers are pre-recorded lists of GPU commands that can be submitted to a GPUQueue for execution. Each GPU command represents a task to be performed on the GPU, such as setting state, drawing, copying resources, etc.

11.1. GPUCommandBuffer

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUCommandBuffer {
};
GPUCommandBuffer includes GPUObjectBase;

GPUCommandBuffer has the following internal slots:

[[command_list]] of type list<GPU command>.

A list of GPU commands to be executed on the Queue timeline when this command buffer is submitted.

11.1.1. Creation

dictionary GPUCommandBufferDescriptor : GPUObjectDescriptorBase {
};

12. Command Encoding

12.1. GPUCommandsMixin

GPUCommandsMixin defines state common to all interfaces which encode commands. It has no methods.

interface mixin GPUCommandsMixin {
};

GPUCommandsMixin adds the following internal slots to interfaces which include it:

[[state]], of type encoder state

The current state of the encoder, initially set to "open".

[[commands]], of type list<GPU command>

A list of GPU commands to be executed on the Queue timeline when a GPUCommandBuffer containing these commands is submitted.

The encoder state may be one of the following:

"open"

The encoder is available to encode new commands.

"locked"

The encoder cannot be used, because it is locked by a child encoder: it is a GPUCommandEncoder, and a GPURenderPassEncoder or GPUComputePassEncoder is active. The encoder becomes "open" again when the pass is ended.

Any command issued in this state makes the encoder invalid.

"ended"

The encoder has been ended and new commands can no longer be encoded.

Any command issued in this state generates a GPUValidationError.

To Prepare the encoder state of GPUCommandsMixin encoder:

If encoder.[[state]] is:

"open"

Return true.

"locked"

Make encoder invalid, and return false.

"ended"

Generate a GPUValidationError in the current scope, and return false.

12.2. GPUCommandEncoder

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});

    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        GPUSize64 size);

    undefined copyBufferToTexture(
        GPUImageCopyBuffer source,
        GPUImageCopyTexture destination,
        GPUExtent3D copySize);

    undefined copyTextureToBuffer(
        GPUImageCopyTexture source,
        GPUImageCopyBuffer destination,
        GPUExtent3D copySize);

    undefined copyTextureToTexture(
        GPUImageCopyTexture source,
        GPUImageCopyTexture destination,
        GPUExtent3D copySize);

    undefined clearBuffer(
        GPUBuffer buffer,
        optional GPUSize64 offset = 0,
        optional GPUSize64 size);

    undefined writeTimestamp(GPUQuerySet querySet, GPUSize32 queryIndex);

    undefined resolveQuerySet(
        GPUQuerySet querySet,
        GPUSize32 firstQuery,
        GPUSize32 queryCount,
        GPUBuffer destination,
        GPUSize64 destinationOffset);

    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;
GPUCommandEncoder includes GPUCommandsMixin;
GPUCommandEncoder includes GPUDebugCommandsMixin;

12.2.1. Creation

dictionary GPUCommandEncoderDescriptor : GPUObjectDescriptorBase {
};
createCommandEncoder(descriptor)

Creates a GPUCommandEncoder.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createCommandEncoder(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUCommandEncoderDescriptor Description of the GPUCommandEncoder to create.

Returns: GPUCommandEncoder

Describe createCommandEncoder() algorithm steps.

12.3. Pass Encoding

beginRenderPass(descriptor)

Begins encoding a render pass described by descriptor.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.beginRenderPass(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPURenderPassDescriptor Description of the GPURenderPassEncoder to create.

Returns: GPURenderPassEncoder

Issue the following steps on the Device timeline of this:

  1. Let pass be a new GPURenderPassEncoder object.

  2. If any of the following conditions are unsatisfied, generate a validation error and stop.

  3. Set this.[[state]] to "locked".

  4. For each colorAttachment in descriptor.colorAttachments:

    1. The texture subresource seen by colorAttachment.view is considered to be used as attachment for the duration of the render pass.

  5. Let depthStencilAttachment be descriptor.depthStencilAttachment.

  6. If depthStencilAttachment is not null:

    1. Let depthStencilView be depthStencilAttachment.view.

    2. If depthStencilAttachment.depthReadOnly and depthStencilAttachment.stencilReadOnly are both true:

      1. The texture subresources seen by depthStencilView are considered to be used as attachment-read for the duration of the render pass.

    3. Else, the texture subresource seen by depthStencilView is considered to be used as attachment for the duration of the render pass.

    4. Set pass.[[depthReadOnly]] to depthStencilAttachment.depthReadOnly.

    5. Set pass.[[stencilReadOnly]] to depthStencilAttachment.stencilReadOnly.

  7. Set pass.[[layout]] to derive render targets layout from pass(descriptor).

  8. For each timestampWrite in descriptor.timestampWrites,

    1. If timestampWrite.location is "beginning", Append a GPU command to pass.[[command_encoder]].[[commands]] that writes the GPU’s timestamp value into the timestampWrite.queryIndexth index in timestampWrite.querySet.

    2. Otherwise, if timestampWrite.location is "end", Append timestampWrite to pass.[[endTimestampWrites]].

  9. Enqueue attachment loads/clears.

  10. Return pass.

specify the behavior of read-only depth/stencil
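A non-normative sketch of recording a render pass, assuming an existing GPUDevice device, a GPUTextureView textureView, a GPURenderPipeline renderPipeline, and a GPUBuffer vertexBuffer; the color attachment members (loadValue, storeOp) and the drawing and queue methods used here are defined in other sections of this specification and are shown only as assumptions for illustration:

const commandEncoder = device.createCommandEncoder();
const pass = commandEncoder.beginRenderPass({
    colorAttachments: [{
        view: textureView,
        loadValue: { r: 0, g: 0, b: 0, a: 1 },  // clear to opaque black
        storeOp: 'store',
    }],
});
pass.setPipeline(renderPipeline);
pass.setVertexBuffer(0, vertexBuffer);
pass.draw(3);
pass.endPass();
device.queue.submit([commandEncoder.finish()]);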

beginComputePass(descriptor)

Begins encoding a compute pass described by descriptor.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.beginComputePass(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUComputePassDescriptor

Returns: GPUComputePassEncoder

Issue the following steps on the Device timeline of this:

  1. If any of the following conditions are unsatisfied, generate a validation error and stop.

  2. Set this.[[state]] to "locked".

  3. Let pass be a new GPUComputePassEncoder object.

  4. For each timestampWrite in descriptor.timestampWrites,

    1. If timestampWrite.location is "beginning", Append a GPU command to pass.[[command_encoder]].[[commands]] that writes the GPU’s timestamp value into the timestampWrite.queryIndexth index in timestampWrite.querySet.

    2. Otherwise, if timestampWrite.location is "end", Append timestampWrite to pass.[[endTimestampWrites]].

  5. Return pass.

12.4. Copy Commands

these dictionary definitions should be inside the image copies section.

12.4.1. GPUImageDataLayout

dictionary GPUImageDataLayout {
    GPUSize64 offset = 0;
    GPUSize32 bytesPerRow;
    GPUSize32 rowsPerImage;
};

A GPUImageDataLayout is a layout of images within some linear memory. It’s used when copying data between a texture and a buffer, or when scheduling a write into a texture from the GPUQueue.

Define images more precisely. In particular, define them as being comprised of texel blocks.

Operations that copy between byte arrays and textures always work with rows of texel blocks, which we’ll call block rows. It’s not possible to update only a part of a texel block.

Texel blocks are tightly packed within each block row in the linear memory layout of an image copy, with each subsequent texel block immediately following the previous texel block, with no padding. This includes copies to/from specific aspects of depth-or-stencil format textures: stencil values are tightly packed in an array of bytes; depth values are tightly packed in an array of the appropriate type ("depth16unorm" or "depth32float").

Define the exact copy semantics, by reference to common algorithms shared by the copy methods.

bytesPerRow, of type GPUSize32

The stride, in bytes, between the beginning of each block row and the subsequent block row.

Required if there are multiple block rows (i.e. the copy height or depth is more than one block).

rowsPerImage, of type GPUSize32

Number of block rows per single image of the texture. rowsPerImage × bytesPerRow is the stride, in bytes, between the beginning of each image of data and the subsequent image.

Required if there are multiple images (i.e. the copy depth is more than one).
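For example, copying a 64 × 64 × 4 region of an "rgba8unorm" texture (4 bytes per texel block) to or from a GPUBuffer could use the following layout; note that for GPUCommandEncoder copies, bytesPerRow must additionally be a multiple of 256:

const copySize = { width: 64, height: 64, depthOrArrayLayers: 4 };
const dataLayout = {
    offset: 0,
    bytesPerRow: 64 * 4,   // 256 bytes per block row (already a 256-byte multiple)
    rowsPerImage: 64,      // 64 block rows per image
};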

12.4.2. GPUImageCopyBuffer

In an image copy operation, GPUImageCopyBuffer defines a GPUBuffer and, together with the copySize, how image data is laid out in the buffer’s memory (see GPUImageDataLayout).

dictionary GPUImageCopyBuffer : GPUImageDataLayout {
    required GPUBuffer buffer;
};
validating GPUImageCopyBuffer

Arguments:

Returns: boolean

Return true if and only if all of the following conditions are satisfied:

12.4.3. GPUImageCopyTexture

In an image copy operation, a GPUImageCopyTexture defines a GPUTexture and, together with the copySize, the sub-region of the texture (spanning one or more contiguous texture subresources at the same mip-map level).

dictionary GPUImageCopyTexture {
    required GPUTexture texture;
    GPUIntegerCoordinate mipLevel = 0;
    GPUOrigin3D origin = {};
    GPUTextureAspect aspect = "all";
};
texture, of type GPUTexture

Texture to copy to/from.

mipLevel, of type GPUIntegerCoordinate, defaulting to 0

Mip-map level of the texture to copy to/from.

origin, of type GPUOrigin3D, defaulting to {}

Defines the origin of the copy - the minimum corner of the texture sub-region to copy to/from. Together with copySize, defines the full copy sub-region.

aspect, of type GPUTextureAspect, defaulting to "all"

Defines which aspects of the texture to copy to/from.

validating GPUImageCopyTexture

Arguments:

Returns: boolean

Let:

Return true if and only if all of the following conditions apply:

Define the copies with 1d and 3d textures. [Issue #gpuweb/gpuweb#69]

12.4.4. GPUImageCopyTextureTagged

WebGPU textures hold raw numeric data, and are not tagged with semantic metadata describing colors. However, copyExternalImageToTexture() copies from sources that describe colors.

A GPUImageCopyTextureTagged is a GPUImageCopyTexture which is additionally tagged with color space/encoding and alpha-premultiplication metadata, so that semantic color data may be preserved during copies. This metadata affects only the semantics of the copyExternalImageToTexture() operation, not the semantics of the destination texture.

dictionary GPUImageCopyTextureTagged : GPUImageCopyTexture {
    GPUPredefinedColorSpace colorSpace = "srgb";
    boolean premultipliedAlpha = false;
};
colorSpace, of type GPUPredefinedColorSpace, defaulting to "srgb"

Describes the color space and encoding used to encode data into the destination texture.

This may result in values outside of the range [0, 1] being written to the target texture, if its format can represent them. Otherwise, the results are clamped to the target texture format’s range.

Note: If colorSpace matches the source image, no conversion occurs. ImageBitmap color space tagging and conversion can be controlled via ImageBitmapOptions.

premultipliedAlpha, of type boolean, defaulting to false

Describes whether the data written into the texture should have its RGB channels premultiplied by the alpha channel, or not.

If this option is set to true and the source is also premultiplied, the source RGB values must be preserved even if they exceed their corresponding alpha values.

Note: If premultipliedAlpha matches the source image, no conversion occurs. 2d canvases are always premultiplied, while WebGL canvases can be controlled via WebGLContextAttributes. ImageBitmap premultiplication can be controlled via ImageBitmapOptions.

Define (and test) the encoding of color values into the various encodings allowed by copyExternalImageToTexture().

12.4.5. GPUImageCopyExternalImage

dictionary GPUImageCopyExternalImage {
    required (ImageBitmap or HTMLCanvasElement or OffscreenCanvas) source;
    GPUOrigin2D origin = {};
    boolean flipY = false;
};

GPUImageCopyExternalImage has the following members:

source, of type (ImageBitmap or HTMLCanvasElement or OffscreenCanvas)

The source of the image copy. The copy source data is captured at the moment that copyExternalImageToTexture() is issued.

origin, of type GPUOrigin2D, defaulting to {}

Defines the origin of the copy - the minimum (top-left) corner of the source sub-region to copy from. Together with copySize, defines the full copy sub-region.

flipY, of type boolean, defaulting to false

Describes whether the source image is vertically flipped, or not.

If this option is set to true, the copy is flipped vertically: the bottom row of the source region is copied into the first row of the destination region, and so on. The origin option is still relative to the top-left corner of the source image, increasing downward.
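A non-normative sketch of copying an ImageBitmap into a texture, assuming an existing GPUDevice device and ImageBitmap imageBitmap; createTexture(), the GPUTextureUsage flags, and copyExternalImageToTexture() on GPUQueue are defined elsewhere in this specification, and the destination usage flags shown reflect that method's requirements as an assumption:

const texture = device.createTexture({
    size: { width: imageBitmap.width, height: imageBitmap.height },
    format: 'rgba8unorm',
    usage: GPUTextureUsage.TEXTURE_BINDING |
           GPUTextureUsage.COPY_DST |
           GPUTextureUsage.RENDER_ATTACHMENT,
});

device.queue.copyExternalImageToTexture(
    { source: imageBitmap, origin: { x: 0, y: 0 }, flipY: false },
    { texture: texture, colorSpace: 'srgb', premultipliedAlpha: true },
    { width: imageBitmap.width, height: imageBitmap.height });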

12.4.6. Buffer Copies

copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size)

Encode a command into the GPUCommandEncoder that copies data from a sub-region of a GPUBuffer to a sub-region of another GPUBuffer.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size) method.
Parameter Type Nullable Optional Description
source GPUBuffer The GPUBuffer to copy from.
sourceOffset GPUSize64 Offset in bytes into source to begin copying from.
destination GPUBuffer The GPUBuffer to copy to.
destinationOffset GPUSize64 Offset in bytes into destination to place the copied data.
size GPUSize64 Bytes to copy.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. Prepare the encoder state of this. If it returns false, stop.

  2. If any of the following conditions are unsatisfied, generate a validation error and stop.

    figure out how to handle overflows in the spec. [Issue #gpuweb/gpuweb#69]

  3. Describe and enqueue the GPU command.
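A non-normative usage sketch, assuming an existing GPUDevice device, a GPUBuffer srcBuffer whose usage includes COPY_SRC, and a GPUBuffer dstBuffer whose usage includes COPY_DST; the offsets and size are kept at multiples of 4, as the validation rules require:

const encoder = device.createCommandEncoder();
encoder.copyBufferToBuffer(srcBuffer, 0, dstBuffer, 0, 256);  // copy 256 bytes
device.queue.submit([encoder.finish()]);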

12.4.7. Buffer Fills

clearBuffer(buffer, offset, size)

Encode a command into the GPUCommandEncoder that fills a sub-region of a GPUBuffer with zeros.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.clearBuffer(buffer, offset, size) method.
Parameter Type Nullable Optional Description
buffer GPUBuffer The GPUBuffer to clear.
offset GPUSize64 Offset in bytes into buffer where the sub-region to clear begins.
size GPUSize64 Size in bytes of the sub-region to clear. Defaults to the size of the buffer minus offset.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. Prepare the encoder state of this. If it returns false, stop.

  2. If size is missing, set size to max(0, buffer.[[size]] − offset).

  3. If any of the following conditions are unsatisfied, generate a validation error and stop.

12.4.8. Image Copies

WebGPU provides copyBufferToTexture() for buffer-to-texture copies and copyTextureToBuffer() for texture-to-buffer copies, as well as writeTexture() for ArrayBuffer-to-texture writes.

The following definitions and validation rules are used by these methods, as well as copyTextureToTexture().

Does the term "image copy" include copyTextureToTexture?

imageCopyTexture subresource size and Valid Texture Copy Range also apply to copyTextureToTexture().

imageCopyTexture subresource size

Arguments:

Returns: GPUExtent3D

The imageCopyTexture subresource size of imageCopyTexture is calculated as follows:

Its width, height and depthOrArrayLayers are the width, height, and depth, respectively, of the physical size of imageCopyTexture.texture subresource at mipmap level imageCopyTexture.mipLevel.

define this as an algorithm with (texture, mipmapLevel) parameters and use the call syntax instead of referring to the definition by label.

validating linear texture data(layout, byteSize, format, copyExtent)

Arguments:

GPUImageDataLayout layout

Layout of the linear texture data.

GPUSize64 byteSize

Total size of the linear data, in bytes.

GPUTextureFormat format

Format of the texture.

GPUExtent3D copyExtent

Extent of the texture to copy.

  1. Let blockWidth, blockHeight, and blockSize be the texel block width, height, and size of format.

  2. It is assumed that copyExtent.width is a multiple of blockWidth and copyExtent.height is a multiple of blockHeight. Let:

    • widthInBlocks be copyExtent.width ÷ blockWidth.

    • heightInBlocks be copyExtent.height ÷ blockHeight.

    • bytesInLastRow be blockSize × widthInBlocks.

  3. Fail if the following conditions are not satisfied:

  4. Let requiredBytesInCopy be 0.

  5. If copyExtent.depthOrArrayLayers > 1:

    1. Let bytesPerImage be layout.bytesPerRow × layout.rowsPerImage.

    2. Let bytesBeforeLastImage be bytesPerImage × (copyExtent.depthOrArrayLayers − 1).

    3. Add bytesBeforeLastImage to requiredBytesInCopy.

  6. If copyExtent.depthOrArrayLayers > 0:

    1. If heightInBlocks > 1, add layout.bytesPerRow × (heightInBlocks − 1) to requiredBytesInCopy.

    2. If heightInBlocks > 0, add bytesInLastRow to requiredBytesInCopy.

  7. Fail if the following conditions are not satisfied:

    • layout.offset + requiredBytesInCopy ≤ byteSize.

Valid Texture Copy Range

Given a GPUImageCopyTexture imageCopyTexture and a GPUExtent3D copySize, let

The following validation rules apply:

Define the copies with 1d and 3d textures. [Issue #gpuweb/gpuweb#69]

Additional restrictions on rowsPerImage if needed. [Issue #gpuweb/gpuweb#537]

Define the copies with "depth24plus", "depth24plus-stencil8", and "stencil8". [Issue #gpuweb/gpuweb#652]

convert "Valid Texture Copy Range" into an algorithm with parameters, similar to "validating linear texture data"

copyBufferToTexture(source, destination, copySize)

Encode a command into the GPUCommandEncoder that copies data from a sub-region of a GPUBuffer to a sub-region of one or multiple contiguous texture subresources.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.copyBufferToTexture(source, destination, copySize) method.
Parameter Type Nullable Optional Description
source GPUImageCopyBuffer Combined with copySize, defines the region of the source buffer.
destination GPUImageCopyTexture Combined with copySize, defines the region of the destination texture subresource.
copySize GPUExtent3D

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. Prepare the encoder state of this. If it returns false, stop.

  2. If any of the following conditions are unsatisfied, generate a validation error and stop.

copyTextureToBuffer(source, destination, copySize)

Encode a command into the GPUCommandEncoder that copies data from a sub-region of one or multiple contiguous texture subresources to a sub-region of a GPUBuffer.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.copyTextureToBuffer(source, destination, copySize) method.
Parameter Type Nullable Optional Description
source GPUImageCopyTexture Combined with copySize, defines the region of the source texture subresources.
destination GPUImageCopyBuffer Combined with copySize, defines the region of the destination buffer.
copySize GPUExtent3D

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. Prepare the encoder state of this. If it returns false, stop.

  2. If any of the following conditions are unsatisfied, generate a validation error and stop.

copyTextureToTexture(source, destination, copySize)

Encode a command into the GPUCommandEncoder that copies data from a sub-region of one or multiple contiguous texture subresources to another sub-region of one or multiple contiguous texture subresources.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.copyTextureToTexture(source, destination, copySize) method.
Parameter Type Nullable Optional Description
source GPUImageCopyTexture Combined with copySize, defines the region of the source texture subresources.
destination GPUImageCopyTexture Combined with copySize, defines the region of the destination texture subresources.
copySize GPUExtent3D

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. Prepare the encoder state of this. If it returns false, stop.

  2. If any of the following conditions are unsatisfied, generate a validation error and stop.

Two GPUTextureFormats format1 and format2 are copy-compatible if:
The set of subresources for texture copy(imageCopyTexture, copySize) is the set containing:

12.5. Queries

writeTimestamp(querySet, queryIndex)

Writes a timestamp value into a querySet when all previous commands have completed executing.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.writeTimestamp(querySet, queryIndex) method.
Parameter Type Nullable Optional Description
querySet GPUQuerySet The query set that will store the timestamp values.
queryIndex GPUSize32 The index of the query in the query set.

Returns: undefined

  1. If this.[[device]].[[features]] does not contain "timestamp-query", throw a TypeError.

  2. Issue the following steps on the Device timeline of this.[[device]]:

    1. Prepare the encoder state of this. If it returns false, stop.

    2. If any of the following conditions are unsatisfied, generate a validation error and stop.

    Describe writeTimestamp() algorithm steps.

resolveQuerySet(querySet, firstQuery, queryCount, destination, destinationOffset)

Resolves query results from a GPUQuerySet out into a range of a GPUBuffer.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.resolveQuerySet(querySet, firstQuery, queryCount, destination, destinationOffset) method.
Parameter Type Nullable Optional Description
querySet GPUQuerySet
firstQuery GPUSize32
queryCount GPUSize32
destination GPUBuffer
destinationOffset GPUSize64

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. Prepare the encoder state of this. If it returns false, stop.

  2. If any of the following conditions are unsatisfied, generate a GPUValidationError and stop.

    • querySet is valid to use with this.

    • destination is valid to use with this.

    • destination.[[usage]] contains QUERY_RESOLVE.

    • firstQuery is less than the number of queries in querySet.

    • (firstQuery + queryCount) is less than or equal to the number of queries in querySet.

    • destinationOffset is a multiple of 256.

    • destinationOffset + 8 × queryCount ≤ destination.[[size]].

Describe resolveQuerySet() algorithm steps.
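A non-normative sketch combining writeTimestamp() and resolveQuerySet(), assuming the device was created with the "timestamp-query" feature and that createQuerySet() and createBuffer() are available as defined elsewhere in this specification:

// Measure GPU time around a block of commands.
const querySet = device.createQuerySet({ type: 'timestamp', count: 2 });
const resolveBuffer = device.createBuffer({
    size: 2 * 8,  // 8 bytes per timestamp value
    usage: GPUBufferUsage.QUERY_RESOLVE | GPUBufferUsage.COPY_SRC,
});

const encoder = device.createCommandEncoder();
encoder.writeTimestamp(querySet, 0);
// ... encode the work being measured ...
encoder.writeTimestamp(querySet, 1);
// destinationOffset must be a multiple of 256; 0 satisfies that here.
encoder.resolveQuerySet(querySet, 0, 2, resolveBuffer, 0);
device.queue.submit([encoder.finish()]);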

12.6. Finalization

A GPUCommandBuffer containing the commands recorded by the GPUCommandEncoder can be created by calling finish(). Once finish() has been called the command encoder can no longer be used.

finish(descriptor)

Completes recording of the commands sequence and returns a corresponding GPUCommandBuffer.

Called on: GPUCommandEncoder this.

Arguments:

Arguments for the GPUCommandEncoder.finish(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUCommandBufferDescriptor

Returns: GPUCommandBuffer

  1. Let commandBuffer be a new GPUCommandBuffer.

  2. Issue the following steps on the Device timeline of this:

    1. Let validationFailed be true if any of the following requirements are unmet, and false otherwise.

    2. Set this.[[state]] to "ended".

    3. If validationFailed, then:

      1. Generate a GPUValidationError in the current scope with appropriate error message.

      2. Return a new invalid GPUCommandBuffer.

    4. Set commandBuffer.[[command_list]] to this.[[commands]].

  3. Return commandBuffer.

13. Programmable Passes

interface mixin GPUProgrammablePassEncoder {
    undefined setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                      optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);

    undefined setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                      Uint32Array dynamicOffsetsData,
                      GPUSize64 dynamicOffsetsDataStart,
                      GPUSize32 dynamicOffsetsDataLength);
};

GPUProgrammablePassEncoder has the following internal slots:

[[command_encoder]] of type GPUCommandEncoder.

The GPUCommandEncoder that created this programmable pass.

[[bind_groups]], of type ordered map<GPUIndex32, GPUBindGroup>

The current GPUBindGroup for each index, initially empty.

13.1. Bind Groups

setBindGroup(index, bindGroup, dynamicOffsets)

Sets the current GPUBindGroup for the given index.

Called on: GPUProgrammablePassEncoder this.

Arguments:

Arguments for the GPUProgrammablePassEncoder.setBindGroup(index, bindGroup, dynamicOffsets) method.
Parameter Type Nullable Optional Description
index GPUIndex32 The index to set the bind group at.
bindGroup GPUBindGroup Bind group to use for subsequent render or compute commands.

Resolve bikeshed conflict when using argumentdef with overloaded functions that prevents us from defining dynamicOffsets.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. If any of the following conditions are unsatisfied, make this invalid and stop.

  2. Set this.[[bind_groups]][index] to be bindGroup.

setBindGroup(index, bindGroup, dynamicOffsetsData, dynamicOffsetsDataStart, dynamicOffsetsDataLength)

Sets the current GPUBindGroup for the given index, specifying dynamic offsets as a subset of a Uint32Array.

Called on: GPUProgrammablePassEncoder this.

Arguments:

Arguments for the GPUProgrammablePassEncoder.setBindGroup(index, bindGroup, dynamicOffsetsData, dynamicOffsetsDataStart, dynamicOffsetsDataLength) method.
Parameter Type Nullable Optional Description
index GPUIndex32 The index to set the bind group at.
bindGroup GPUBindGroup Bind group to use for subsequent render or compute commands.
dynamicOffsetsData Uint32Array Array containing buffer offsets in bytes for each entry in bindGroup marked as buffer.hasDynamicOffset.
dynamicOffsetsDataStart GPUSize64 Offset in elements into dynamicOffsetsData where the buffer offset data begins.
dynamicOffsetsDataLength GPUSize32 Number of buffer offsets to read from dynamicOffsetsData.

Returns: undefined

  1. If any of the following requirements are unmet, throw a RangeError and stop.

    • dynamicOffsetsDataStart must be ≥ 0.

    • dynamicOffsetsDataStart + dynamicOffsetsDataLength must be ≤ dynamicOffsetsData.length.

  2. Let dynamicOffsets be a list containing the range, starting at index dynamicOffsetsDataStart, of dynamicOffsetsDataLength elements of a copy of dynamicOffsetsData.

  3. Call this.setBindGroup(index, bindGroup, dynamicOffsets).
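For example, one large uniform buffer can back several draws or dispatches by changing only the dynamic offset. This sketch assumes an existing pass encoder passEncoder and a GPUBindGroup bindGroup whose layout declared the binding with buffer: { hasDynamicOffset: true }, and assumes a 256-byte alignment for the offsets:

passEncoder.setBindGroup(0, bindGroup, [0]);
// ... draw or dispatch using the data at offset 0 ...
passEncoder.setBindGroup(0, bindGroup, [256]);
// ... draw or dispatch using the data at offset 256 ...

// Equivalent call using the Uint32Array overload: reads one offset
// starting at element 1 of the array, i.e. the value 256.
const offsets = new Uint32Array([0, 256]);
passEncoder.setBindGroup(0, bindGroup, offsets, 1, 1);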

To Iterate over each dynamic binding offset in a given GPUBindGroup bindGroup with a given list of steps to be executed for each dynamic offset:
  1. Let dynamicOffsetIndex be 0.

  2. Let layout be bindGroup.[[layout]].

  3. For each GPUBindGroupEntry entry in bindGroup.[[entries]]:

    1. Let bindingDescriptor be the GPUBindGroupLayoutEntry at layout.[[entryMap]][entry.binding]:

    2. If bindingDescriptor.buffer is not undefined and bindingDescriptor.buffer.hasDynamicOffset is true:

      1. Let bufferBinding be entry.resource.

      2. Let bufferLayout be bindingDescriptor.buffer.

      3. Call steps with bufferBinding, bufferLayout, and dynamicOffsetIndex.

      4. Let dynamicOffsetIndex be dynamicOffsetIndex + 1

Validate encoder bind groups(encoder, pipeline)

Arguments:

GPUProgrammablePassEncoder encoder

Encoder who’s bind groups are being validated.

GPUPipelineBase pipeline

Pipeline to validate the encoder's bind groups against.

If any of the following conditions are unsatisfied, return false:

Add validation that, for buffer bindings that weren’t prevalidated with minBindingSize, the binding ranges are large enough for the shader’s minimum binding size requirements.

Otherwise return true.

14. Debug Markers

GPUDebugCommandsMixin provides methods to apply debug labels to groups of commands or insert a single label into the command sequence.

Debug groups can be nested to create a hierarchy of labeled commands, and must be well-balanced.

Like object labels, these labels have no required behavior, but may be shown in error messages and browser developer tools, and may be passed to native API backends.

interface mixin GPUDebugCommandsMixin {
    undefined pushDebugGroup(USVString groupLabel);
    undefined popDebugGroup();
    undefined insertDebugMarker(USVString markerLabel);
};

GPUDebugCommandsMixin is only included by interfaces which include GPUObjectBase and GPUCommandsMixin.

GPUDebugCommandsMixin adds the following internal slots to interfaces which include it:

[[debug_group_stack]] of type stack<USVString>.

A stack of active debug group labels.

GPUDebugCommandsMixin adds the following methods to interfaces which include it:

pushDebugGroup(groupLabel)

Begins a labeled debug group containing subsequent commands.

Called on: GPUDebugCommandsMixin this.

Arguments:

Arguments for the GPUDebugCommandsMixin.pushDebugGroup(groupLabel) method.
Parameter Type Nullable Optional Description
groupLabel USVString The label for the command group.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. Prepare the encoder state of this. If it returns false, stop.

  2. Push groupLabel onto this.[[debug_group_stack]].

popDebugGroup()

Ends the labeled debug group most recently started by pushDebugGroup().

Called on: GPUDebugCommandsMixin this.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. Prepare the encoder state of this. If it returns false, stop.

  2. If any of the following requirements are unmet, make this invalid, and stop.

  3. Pop an entry off of this.[[debug_group_stack]].

insertDebugMarker(markerLabel)

Marks a point in a stream of commands with a label.

Called on: GPUDebugCommandsMixin this.

Arguments:

Arguments for the GPUDebugCommandsMixin.insertDebugMarker(markerLabel) method.
Parameter Type Nullable Optional Description
markerLabel USVString The label to insert.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. Prepare the encoder state of this. If it returns false, stop.
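A non-normative usage sketch, assuming an existing GPUCommandEncoder commandEncoder:

commandEncoder.pushDebugGroup('shadow pass');
// ... encode shadow-map commands ...
commandEncoder.insertDebugMarker('shadow cascade 0 recorded');
// ... encode more commands ...
commandEncoder.popDebugGroup();  // debug groups must be balanced before finish()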

15. Compute Passes

15.1. GPUComputePassEncoder

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUComputePassEncoder {
    undefined setPipeline(GPUComputePipeline pipeline);
    undefined dispatch(GPUSize32 x, optional GPUSize32 y = 1, optional GPUSize32 z = 1);
    undefined dispatchIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);

    undefined endPass();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUCommandsMixin;
GPUComputePassEncoder includes GPUDebugCommandsMixin;
GPUComputePassEncoder includes GPUProgrammablePassEncoder;

GPUComputePassEncoder has the following internal slots:

[[pipeline]], of type GPUComputePipeline

The current GPUComputePipeline, initially null.

[[endTimestampWrites]], of type GPUComputePassTimestampWrites

The timestamp attachments which need to be executed when the pass ends.

15.1.1. Creation

enum GPUComputePassTimestampLocation {
    "beginning",
    "end",
};

dictionary GPUComputePassTimestampWrite {
    required GPUQuerySet querySet;
    required GPUSize32 queryIndex;
    required GPUComputePassTimestampLocation location;
};

typedef sequence<GPUComputePassTimestampWrite> GPUComputePassTimestampWrites;

dictionary GPUComputePassDescriptor : GPUObjectDescriptorBase {
    GPUComputePassTimestampWrites timestampWrites = [];
};
timestampWrites, of type GPUComputePassTimestampWrites, defaulting to []

A sequence of GPUComputePassTimestampWrite values defining where and when timestamp values will be written for this pass.

Valid Usage

Given a GPUComputePassDescriptor this the following validation rules apply:

  1. For each timestampWrite in this.timestampWrites:

    1. timestampWrite.querySet.[[descriptor]].type is "timestamp".

    2. timestampWrite.queryIndex < timestampWrite.querySet.[[descriptor]].count.

15.1.2. Dispatch

setPipeline(pipeline)

Sets the current GPUComputePipeline.

Called on: GPUComputePassEncoder this.

Arguments:

Arguments for the GPUComputePassEncoder.setPipeline(pipeline) method.
Parameter Type Nullable Optional Description
pipeline GPUComputePipeline The compute pipeline to use for subsequent dispatch commands.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. If any of the following conditions are unsatisfied, make this invalid and stop.

  2. Set this.[[pipeline]] to be pipeline.

dispatch(x, y, z)

Dispatch work to be performed with the current GPUComputePipeline. See § 22.2 Computing for the detailed specification.

Called on: GPUComputePassEncoder this.

Arguments:

Arguments for the GPUComputePassEncoder.dispatch(x, y, z) method.
Parameter Type Nullable Optional Description
x GPUSize32 X dimension of the grid of workgroups to dispatch.
y GPUSize32 Y dimension of the grid of workgroups to dispatch.
z GPUSize32 Z dimension of the grid of workgroups to dispatch.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. If any of the following conditions are unsatisfied, make this invalid and stop.

  2. Let passState be a snapshot of this’s current state.

  3. Append a GPU command to this.[[commands]] executing the following queue timeline steps:

    1. Dispatch a grid of workgroups with dimensions [x, y, z] with passState.[[pipeline]] using passState.[[bind_groups]].
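
A non-normative sketch of encoding a compute pass that dispatches a grid of workgroups; myComputePipeline and myBindGroup are placeholders for previously created objects:

const commandEncoder = device.createCommandEncoder();
const passEncoder = commandEncoder.beginComputePass();
passEncoder.setPipeline(myComputePipeline);
passEncoder.setBindGroup(0, myBindGroup);
// Dispatch an 8x8 grid of workgroups (z defaults to 1).
passEncoder.dispatch(8, 8);
passEncoder.endPass();
device.queue.submit([commandEncoder.finish()]);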

dispatchIndirect(indirectBuffer, indirectOffset)

Dispatch work to be performed with the current GPUComputePipeline using parameters read from a GPUBuffer. See § 22.2 Computing for the detailed specification.

The indirect dispatch parameters encoded in the buffer must be a tightly packed block of three 32-bit unsigned integer values (12 bytes total), given in the same order as the arguments for dispatch(). For example:

let dispatchIndirectParameters = new Uint32Array(3);
dispatchIndirectParameters[0] = x;
dispatchIndirectParameters[1] = y;
dispatchIndirectParameters[2] = z;
Called on: GPUComputePassEncoder this.

Arguments:

Arguments for the GPUComputePassEncoder.dispatchIndirect(indirectBuffer, indirectOffset) method.
Parameter Type Nullable Optional Description
indirectBuffer GPUBuffer Buffer containing the indirect dispatch parameters.
indirectOffset GPUSize64 Offset in bytes into indirectBuffer where the dispatch data begins.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. If any of the following conditions are unsatisfied, make this invalid and stop.

  2. Add indirectBuffer to the usage scope as INDIRECT.

If any of the dispatch parameters (x, y, or z) is greater than this.device.limits.maxComputeWorkgroupsPerDimension, no workgroups will be dispatched.
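
A non-normative sketch of providing these parameters through a GPUBuffer; the buffer and pass encoder names are placeholders, and the dispatchIndirectParameters array is the one from the example above:

// INDIRECT usage is required for dispatchIndirect(); COPY_DST allows writeBuffer().
const indirectBuffer = device.createBuffer({
    size: dispatchIndirectParameters.byteLength,
    usage: GPUBufferUsage.INDIRECT | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(indirectBuffer, 0, dispatchIndirectParameters);

// Later, while encoding a compute pass:
passEncoder.dispatchIndirect(indirectBuffer, 0);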

15.1.3. Finalization

The compute pass encoder can be ended by calling endPass() once the user has finished recording commands for the pass. Once endPass() has been called, the compute pass encoder can no longer be used.

endPass()

Completes recording of the compute pass commands sequence.

Called on: GPUComputePassEncoder this.

Returns: undefined

Issue the following steps on the Device timeline of this:

  1. If any of the following requirements are unmet, generate a GPUValidationError and stop.

  2. Assert: this.[[command_encoder]].[[state]] is "locked".

  3. Set this.[[state]] to "ended".

  4. Set this.[[command_encoder]].[[state]] to "open".

  5. If any of the following requirements are unmet, make this.[[command_encoder]] invalid and stop.

  6. Extend this.[[command_encoder]].[[commands]] with this.[[commands]].

  7. For each timestampWrite in this.[[endTimestampWrites]]:

    1. Assert: timestampWrite.location is "end".

    2. Append a GPU command to this.[[command_encoder]].[[commands]] that writes the GPU’s timestamp value into the timestampWrite.queryIndexth index in timestampWrite.querySet.

16. Render Passes

16.1. GPURenderPassEncoder

interface mixin GPURenderEncoderBase {
    undefined setPipeline(GPURenderPipeline pipeline);

    undefined setIndexBuffer(GPUBuffer buffer, GPUIndexFormat indexFormat, optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined setVertexBuffer(GPUIndex32 slot, GPUBuffer buffer, optional GPUSize64 offset = 0, optional GPUSize64 size);

    undefined draw(GPUSize32 vertexCount, optional GPUSize32 instanceCount = 1,
              optional GPUSize32 firstVertex = 0, optional GPUSize32 firstInstance = 0);
    undefined drawIndexed(GPUSize32 indexCount, optional GPUSize32 instanceCount = 1,
                     optional GPUSize32 firstIndex = 0,
                     optional GPUSignedOffset32 baseVertex = 0,
                     optional GPUSize32 firstInstance = 0);

    undefined drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    undefined drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPURenderPassEncoder {
    undefined setViewport(float x, float y,
                     float width, float height,
                     float minDepth, float maxDepth);

    undefined setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
                        GPUIntegerCoordinate width, GPUIntegerCoordinate height);

    undefined setBlendConstant(GPUColor color);
    undefined setStencilReference(GPUStencilValue reference);

    undefined beginOcclusionQuery(GPUSize32 queryIndex);
    undefined endOcclusionQuery();

    undefined executeBundles(sequence<GPURenderBundle> bundles);
    undefined endPass();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUCommandsMixin;
GPURenderPassEncoder includes GPUDebugCommandsMixin;
GPURenderPassEncoder includes GPUProgrammablePassEncoder;
GPURenderPassEncoder includes GPURenderEncoderBase;

GPURenderEncoderBase has the following internal slots:

[[layout]], of type GPURenderPassLayout

The layout of the render pass.

[[depthReadOnly]], of type boolean?

If present, indicates that the depth component is not modified.

[[stencilReadOnly]], of type boolean?

If present, indicates that the stencil component is not modified.

[[pipeline]], of type GPURenderPipeline

The current GPURenderPipeline, initially null.

[[index_buffer]], of type GPUBuffer

The current buffer to read index data from, initially null.

[[index_format]], of type GPUIndexFormat

The format of the index data in [[index_buffer]].

[[index_buffer_size]], of type GPUSize64

The size in bytes of the section of [[index_buffer]] currently set, initially 0.

[[vertex_buffers]], of type ordered map<slot, GPUBuffer>

The current GPUBuffers to read vertex data from for each slot, initially empty.

[[vertex_buffer_sizes]], of type ordered map<slot, GPUSize64>

The size in bytes of the section of GPUBuffer currently set for each slot, initially empty.

GPURenderPassEncoder has the following internal slots:

[[attachment_size]]

Set to the following extents:

  • width, height = the dimensions of the pass’s render attachments

[[occlusion_query_set]], of type GPUQuerySet.

The GPUQuerySet to store occlusion query results for the pass, which is initialized with GPURenderPassDescriptor.occlusionQuerySet at pass creation time.

[[occlusion_query_active]], of type boolean.

Whether the pass’s [[occlusion_query_set]] is being written.

[[viewport]]

Current viewport rectangle and depth range.

[[endTimestampWrites]], of type GPURenderPassTimestampWrites

The timestamp attachments which need to be executed when the pass ends.

When a GPURenderPassEncoder is created, it has the following default state:

16.1.1. Creation

enum GPURenderPassTimestampLocation {
    "beginning",
    "end",
};

dictionary GPURenderPassTimestampWrite {
    required GPUQuerySet querySet;
    required GPUSize32 queryIndex;
    required GPURenderPassTimestampLocation location;
};

typedef sequence<GPURenderPassTimestampWrite> GPURenderPassTimestampWrites;

dictionary GPURenderPassDescriptor : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachment> colorAttachments;
    GPURenderPassDepthStencilAttachment depthStencilAttachment;
    GPUQuerySet occlusionQuerySet;
    GPURenderPassTimestampWrites timestampWrites = [];
};
colorAttachments, of type sequence<GPURenderPassColorAttachment>

The set of GPURenderPassColorAttachment values in this sequence defines which color attachments will be output to when executing this render pass.

Due to usage compatibility, no color attachment may alias another attachment or any resource used inside the render pass.

depthStencilAttachment, of type GPURenderPassDepthStencilAttachment

The GPURenderPassDepthStencilAttachment value that defines the depth/stencil attachment that will be output to and tested against when executing this render pass.

Due to usage compatibility, no writable depth/stencil attachment may alias another attachment or any resource used inside the render pass.

occlusionQuerySet, of type GPUQuerySet

The GPUQuerySet value defines where the occlusion query results will be stored for this pass.

timestampWrites, of type GPURenderPassTimestampWrites, defaulting to []

A sequence of GPURenderPassTimestampWrite values defines where and when timestamp values will be written for this pass.

Valid Usage

Given a GPURenderPassDescriptor this the following validation rules apply:

  1. this.colorAttachments.length must be less than or equal to 8.

  2. this.colorAttachments.length must be greater than 0 or this.depthStencilAttachment must not be null.

  3. For each colorAttachment in this.colorAttachments:

    1. colorAttachment must meet the GPURenderPassColorAttachment Valid Usage rules.

  4. If this.depthStencilAttachment is not null:

    1. this.depthStencilAttachment must meet the GPURenderPassDepthStencilAttachment Valid Usage rules.

  5. All views in this.colorAttachments, and this.depthStencilAttachment.view if present, must have equal [[descriptor]].sampleCounts.

  6. For each view in this.colorAttachments and this.depthStencilAttachment.view, if present, the [[renderExtent]] must match.

  7. If this.occlusionQuerySet is not null:

    1. this.occlusionQuerySet.[[descriptor]].type must be "occlusion".

  8. For each timestampWrite in this.timestampWrites:

    1. timestampWrite.querySet.[[descriptor]].type is "timestamp".

    2. timestampWrite.queryIndex < timestampWrite.querySet.[[descriptor]].count.

support for no attachments [Issue #gpuweb/gpuweb#503]

For a given GPURenderPassDescriptor value descriptor, the syntax:

Make this a definition once it is referenced from other places.

Note: the Valid Usage guarantees that all of the render extents of the attachments are the same, so we can take any of them, assuming the descriptor is valid.

16.1.1.1. Color Attachments
dictionary GPURenderPassColorAttachment {
    required GPUTextureView view;
    GPUTextureView resolveTarget;

    required (GPULoadOp or GPUColor) loadValue;
    required GPUStoreOp storeOp;
};
view, of type GPUTextureView

A GPUTextureView describing the texture subresource that will be output to for this color attachment.

resolveTarget, of type GPUTextureView

A GPUTextureView describing the texture subresource that will receive the resolved output for this color attachment if view is multisampled.

loadValue, of type (GPULoadOp or GPUColor)

If a GPULoadOp, indicates the load operation to perform on view prior to executing the render pass. If a GPUColor, indicates the value to clear view to prior to executing the render pass.

Note: It is recommended to prefer a clear-value; see "load".

storeOp, of type GPUStoreOp

The store operation to perform on view after executing the render pass.

GPURenderPassColorAttachment Valid Usage

Given a GPURenderPassColorAttachment this:

  1. Let renderViewDescriptor be this.view.[[descriptor]].

  2. Let resolveViewDescriptor be this.resolveTarget.[[descriptor]].

  3. Let renderTextureDescriptor be this.view.[[texture]].[[descriptor]].

  4. Let resolveTextureDescriptor be this.resolveTarget.[[texture]].[[descriptor]].

The following validation rules apply:

A GPUTextureView view is a renderable texture view if the following requirements are met:

where descriptor is view.[[descriptor]].
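
A non-normative sketch of a render pass with a single color attachment that is cleared to opaque black and stored; commandEncoder and myColorTexture are placeholders for a command encoder and a texture created with RENDER_ATTACHMENT usage:

const passEncoder = commandEncoder.beginRenderPass({
    colorAttachments: [{
        view: myColorTexture.createView(),
        // A GPUColor clear-value; using "load" instead would preserve the existing contents.
        loadValue: { r: 0, g: 0, b: 0, a: 1 },
        storeOp: 'store',
    }],
});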

16.1.1.2. Depth/Stencil Attachments
dictionary GPURenderPassDepthStencilAttachment {
    required GPUTextureView view;

    required (GPULoadOp or float) depthLoadValue;
    required GPUStoreOp depthStoreOp;
    boolean depthReadOnly = false;

    required (GPULoadOp or GPUStencilValue) stencilLoadValue;
    required GPUStoreOp stencilStoreOp;
    boolean stencilReadOnly = false;
};
view, of type GPUTextureView

A GPUTextureView describing the texture subresource that will be output to and read from for this depth/stencil attachment.

depthLoadValue, of type (GPULoadOp or float)

If this is a GPULoadOp, it indicates the load operation to perform on view's depth component prior to executing the render pass.

Otherwise, this is a float indicating the value to clear view's depth component to prior to executing the render pass. Must be between 0.0 and 1.0, inclusive.

Note: It is recommended to prefer a clear-value; see "load".

depthStoreOp, of type GPUStoreOp

The store operation to perform on view's depth component after executing the render pass.

depthReadOnly, of type boolean, defaulting to false

Indicates that the depth component of view is read only.

stencilLoadValue, of type (GPULoadOp or GPUStencilValue)

If a GPULoadOp, indicates the load operation to perform on view's stencil component prior to executing the render pass. If a GPUStencilValue, indicates the value to clear view's stencil component to prior to executing the render pass.

stencilStoreOp, of type GPUStoreOp

The store operation to perform on view's stencil component after executing the render pass.

stencilReadOnly, of type boolean, defaulting to false

Indicates that the stencil component of view is read only.
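
A non-normative sketch of a depth/stencil attachment that clears depth to 1.0 and stencil to 0 and stores both; commandEncoder and myDepthTexture are placeholders, the latter assumed to be a "depth24plus-stencil8" texture created with RENDER_ATTACHMENT usage:

const passEncoder = commandEncoder.beginRenderPass({
    colorAttachments: [/* ... */],
    depthStencilAttachment: {
        view: myDepthTexture.createView(),
        depthLoadValue: 1.0,   // clear-value; "load" would preserve the existing depth
        depthStoreOp: 'store',
        stencilLoadValue: 0,   // clear-value for the stencil component
        stencilStoreOp: 'store',
    },
});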

GPURenderPassDepthStencilAttachment Valid Usage

Given a GPURenderPassDepthStencilAttachment this, the following validation rules apply:

16.1.1.3. Load & Store Operations
enum GPULoadOp {
    "load",
};
"load"

Loads the existing value for this attachment into the render pass.

Note: On some GPU hardware (primarily mobile), providing a clear-value is significantly cheaper because it avoids loading data from main memory into tile-local memory. On other GPU hardware, there isn’t a significant difference. As a result, it is recommended to use a clear-value, rather than "load", in cases where the initial value doesn’t matter (e.g. the render target will be cleared using a skybox).

enum GPUStoreOp {
    "store",
    "discard",
};
16.1.1.4. Render Pass Layout

GPURenderPassLayout contains the layout of the render targets for the current pass, which determines the compatibility of the pass with render pipelines.

dictionary GPURenderPassLayout: GPUObjectDescriptorBase {
    required sequence<GPUTextureFormat> colorFormats;
    GPUTextureFormat depthStencilFormat;
    GPUSize32 sampleCount = 1;
};
derive render targets layout from pass

Arguments:

Returns: GPURenderPassLayout

  1. Let layout be a new GPURenderPassLayout object.

  2. For each colorAttachment in descriptor.colorAttachments:

    1. Set layout.sampleCount to colorAttachment.view.[[texture]].[[descriptor]].sampleCount.

    2. Append colorAttachment.view.[[descriptor]].format to layout.colorFormats.

  3. Let depthStencilAttachment be descriptor.depthStencilAttachment.

  4. If depthStencilAttachment is not null:

    1. Let view be depthStencilAttachment.view

    2. Set layout.sampleCount to view.[[texture]].[[descriptor]].sampleCount.

    3. Set layout.depthStencilFormat to view.[[descriptor]].format.

  5. Return layout.

derive render targets layout from pipeline

Arguments:

Returns: GPURenderPassLayout

  1. Let layout be a new GPURenderPassLayout object.

  2. Set layout.sampleCount to descriptor.multisample.count.

  3. If descriptor.depthStencil is not null:

    1. Set layout.depthStencilFormat to descriptor.depthStencil.format.

  4. If descriptor.fragment is not null:

    1. For each colorTarget in descriptor.fragment.targets:

      1. Append colorTarget.format to layout.colorFormats

  5. Return layout.

16.1.2. Drawing

setPipeline(pipeline)

Sets the current GPURenderPipeline.

Called on: GPURenderEncoderBase this.

Arguments:

Arguments for the GPURenderEncoderBase.setPipeline(pipeline) method.
Parameter Type Nullable Optional Description
pipeline GPURenderPipeline The render pipeline to use for subsequent drawing commands.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. Let pipelineTargetsLayout be derive render targets layout from pipeline(pipeline.[[descriptor]]).

  2. If any of the following conditions are unsatisfied, make this invalid and stop.

  3. Set this.[[pipeline]] to be pipeline.

define what "equals" means for GPURenderPassLayout here.

setIndexBuffer(buffer, indexFormat, offset, size)

Sets the current index buffer.

Called on: GPURenderEncoderBase this.

Arguments:

Arguments for the GPURenderEncoderBase.setIndexBuffer(buffer, indexFormat, offset, size) method.
Parameter Type Nullable Optional Description
buffer GPUBuffer Buffer containing index data to use for subsequent drawing commands.
indexFormat GPUIndexFormat Format of the index data contained in buffer.
offset GPUSize64 Offset in bytes into buffer where the index data begins. Defaults to 0.
size GPUSize64 Size in bytes of the index data in buffer. Defaults to the size of the buffer minus the offset.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. If size is missing, set size to max(0, buffer.[[size]] - offset).

  2. If any of the following conditions are unsatisfied, make this invalid and stop.

  3. Add buffer to the usage scope as input.

  4. Set this.[[index_buffer]] to be buffer.

  5. Set this.[[index_format]] to be indexFormat.

  6. Set this.[[index_buffer_size]] to be size.

setVertexBuffer(slot, buffer, offset, size)

Sets the current vertex buffer for the given slot.

Called on: GPURenderEncoderBase this.

Arguments:

Arguments for the GPURenderEncoderBase.setVertexBuffer(slot, buffer, offset, size) method.
Parameter Type Nullable Optional Description
slot GPUIndex32 The vertex buffer slot to set the vertex buffer for.
buffer GPUBuffer Buffer containing vertex data to use for subsequent drawing commands.
offset GPUSize64 Offset in bytes into buffer where the vertex data begins. Defaults to 0.
size GPUSize64 Size in bytes of the vertex data in buffer. Defaults to the size of the buffer minus the offset.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. If size is missing, set size to max(0, buffer.[[size]] - offset).

  2. If any of the following conditions are unsatisfied, make this invalid and stop.

  3. Add buffer to the usage scope as input.

  4. Set this.[[vertex_buffers]][slot] to be buffer.

  5. Set this.[[vertex_buffer_sizes]][slot] to be size.
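
A non-normative sketch of binding vertex and index buffers and issuing an indexed draw; the pipeline and buffers are placeholders created elsewhere with VERTEX and INDEX usage respectively:

passEncoder.setPipeline(myRenderPipeline);
passEncoder.setVertexBuffer(0, myVertexBuffer);
// 16-bit indices; offset defaults to 0 and size defaults to the rest of the buffer.
passEncoder.setIndexBuffer(myIndexBuffer, 'uint16');
passEncoder.drawIndexed(myIndexCount);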

draw(vertexCount, instanceCount, firstVertex, firstInstance)

Draws primitives. See § 22.3 Rendering for the detailed specification.

Called on: GPURenderEncoderBase this.

Arguments:

Arguments for the GPURenderEncoderBase.draw(vertexCount, instanceCount, firstVertex, firstInstance) method.
Parameter Type Nullable Optional Description
vertexCount GPUSize32 The number of vertices to draw.
instanceCount GPUSize32 The number of instances to draw.
firstVertex GPUSize32 Offset into the vertex buffers, in vertices, to begin drawing from.
firstInstance GPUSize32 First instance to draw.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. If any of the following conditions are unsatisfied, make this invalid and stop.

drawIndexed(indexCount, instanceCount, firstIndex, baseVertex, firstInstance)

Draws indexed primitives. See § 22.3 Rendering for the detailed specification.

Called on: GPURenderEncoderBase this.

Arguments:

Arguments for the GPURenderEncoderBase.drawIndexed(indexCount, instanceCount, firstIndex, baseVertex, firstInstance) method.
Parameter Type Nullable Optional Description
indexCount GPUSize32 The number of indices to draw.
instanceCount GPUSize32 The number of instances to draw.
firstIndex GPUSize32 Offset into the index buffer, in indices, to begin drawing from.
baseVertex GPUSignedOffset32 Added to each index value before indexing into the vertex buffers.
firstInstance GPUSize32 First instance to draw.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. If any of the following conditions are unsatisfied, make this invalid and stop.

drawIndirect(indirectBuffer, indirectOffset)

Draws primitives using parameters read from a GPUBuffer. See § 22.3 Rendering for the detailed specification.

The indirect draw parameters encoded in the buffer must be a tightly packed block of four 32-bit unsigned integer values (16 bytes total), given in the same order as the arguments for draw(). For example:

let drawIndirectParameters = new Uint32Array(4);
drawIndirectParameters[0] = vertexCount;
drawIndirectParameters[1] = instanceCount;
drawIndirectParameters[2] = firstVertex;
drawIndirectParameters[3] = firstInstance;

The value corresponding to firstInstance must be 0, unless the "indirect-first-instance" feature is enabled. If the "indirect-first-instance" feature is not enabled and firstInstance is not zero, the drawIndirect() call will be treated as a no-op.

Called on: GPURenderEncoderBase this.

Arguments:

Arguments for the GPURenderEncoderBase.drawIndirect(indirectBuffer, indirectOffset) method.
Parameter Type Nullable Optional Description
indirectBuffer GPUBuffer Buffer containing the indirect draw parameters.
indirectOffset GPUSize64 Offset in bytes into indirectBuffer where the drawing data begins.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. If any of the following conditions are unsatisfied, make this invalid and stop.

  2. Add indirectBuffer to the usage scope as input.
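
A non-normative sketch of supplying these parameters from a GPUBuffer with INDIRECT usage; the buffer and pass encoder names are placeholders, and the drawIndirectParameters array is the one from the example above:

const indirectBuffer = device.createBuffer({
    size: drawIndirectParameters.byteLength,
    usage: GPUBufferUsage.INDIRECT | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(indirectBuffer, 0, drawIndirectParameters);

// Later, while encoding a render pass:
passEncoder.drawIndirect(indirectBuffer, 0);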

drawIndexedIndirect(indirectBuffer, indirectOffset)

Draws indexed primitives using parameters read from a GPUBuffer. See § 22.3 Rendering for the detailed specification.

The indirect drawIndexed parameters encoded in the buffer must be a tightly packed block of five 32-bit unsigned integer values (20 bytes total), given in the same order as the arguments for drawIndexed(). For example:

let drawIndexedIndirectParameters = new Uint32Array(5);
drawIndexedIndirectParameters[0] = indexCount;
drawIndexedIndirectParameters[1] = instanceCount;
drawIndexedIndirectParameters[2] = firstIndex;
drawIndexedIndirectParameters[3] = baseVertex;
drawIndexedIndirectParameters[4] = firstInstance;

The value corresponding to firstInstance must be 0, unless the "indirect-first-instance" feature is enabled. If the "indirect-first-instance" feature is not enabled and firstInstance is not zero, the drawIndexedIndirect() call will be treated as a no-op.

Called on: GPURenderEncoderBase this.

Arguments:

Arguments for the GPURenderEncoderBase.drawIndexedIndirect(indirectBuffer, indirectOffset) method.
Parameter Type Nullable Optional Description
indirectBuffer GPUBuffer Buffer containing the indirect drawIndexed parameters.
indirectOffset GPUSize64 Offset in bytes into indirectBuffer where the drawing data begins.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. If any of the following conditions are unsatisfied, make this invalid and stop.

  2. Add indirectBuffer to the usage scope as input.

To determine if it’s valid to draw with GPURenderEncoderBase encoder run the following steps:

If any of the following conditions are unsatisfied, return false:

Otherwise return true.

To determine if it’s valid to draw indexed with GPURenderEncoderBase encoder run the following steps:

If any of the following conditions are unsatisfied, return false:

Otherwise return true.

16.1.3. Rasterization state

The GPURenderPassEncoder has several methods which affect how draw commands are rasterized to attachments used by this encoder.

setViewport(x, y, width, height, minDepth, maxDepth)

Sets the viewport used during the rasterization stage to linearly map from normalized device coordinates to viewport coordinates.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.setViewport(x, y, width, height, minDepth, maxDepth) method.
Parameter Type Nullable Optional Description
x float Minimum X value of the viewport in pixels.
y float Minimum Y value of the viewport in pixels.
width float Width of the viewport in pixels.
height float Height of the viewport in pixels.
minDepth float Minimum depth value of the viewport.
maxDepth float Maximum depth value of the viewport.

Returns: undefined

Issue the following steps on the Device timeline of this:

  1. If any of the following conditions are unsatisfied, generate a validation error and stop.

    • x is greater than or equal to 0.

    • y is greater than or equal to 0.

    • width is greater than or equal to 0.

    • height is greater than or equal to 0.

    • x + width is less than or equal to this.[[attachment_size]].width.

    • y + height is less than or equal to this.[[attachment_size]].height.

    • minDepth is greater than or equal to 0.0 and less than or equal to 1.0.

    • maxDepth is greater than or equal to 0.0 and less than or equal to 1.0.

    • maxDepth is greater than minDepth.

  2. Set this.[[viewport]] to the extents x, y, width, height, minDepth, and maxDepth.

Allowed for GPUs to use fixed point or rounded viewport coordinates

setScissorRect(x, y, width, height)

Sets the scissor rectangle used during the rasterization stage. After transformation into viewport coordinates, any fragments which fall outside the scissor rectangle will be discarded.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.setScissorRect(x, y, width, height) method.
Parameter Type Nullable Optional Description
x GPUIntegerCoordinate Minimum X value of the scissor rectangle in pixels.
y GPUIntegerCoordinate Minimum Y value of the scissor rectangle in pixels.
width GPUIntegerCoordinate Width of the scissor rectangle in pixels.
height GPUIntegerCoordinate Height of the scissor rectangle in pixels.

Returns: undefined

Issue the following steps on the Device timeline of this:

  1. If any of the following conditions are unsatisfied, generate a validation error and stop.

  2. Set the scissor rectangle to the extents x, y, width, and height.

setBlendConstant(color)

Sets the constant blend color and alpha values used with "constant" and "one-minus-constant" GPUBlendFactors.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.setBlendConstant(color) method.
Parameter Type Nullable Optional Description
color GPUColor The color to use when blending.
setStencilReference(reference)

Sets the stencil reference value used during stencil tests with the "replace" GPUStencilOperation.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.setStencilReference(reference) method.
Parameter Type Nullable Optional Description
reference GPUStencilValue The stencil reference value.
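
A non-normative sketch of setting rasterization state on a render pass encoder; passEncoder, attachmentWidth, and attachmentHeight are placeholders:

// Map the full attachment area with the default depth range.
passEncoder.setViewport(0, 0, attachmentWidth, attachmentHeight, 0.0, 1.0);
// Fragments outside this rectangle are discarded.
passEncoder.setScissorRect(0, 0, attachmentWidth, attachmentHeight);
// Used by the "constant" and "one-minus-constant" blend factors.
passEncoder.setBlendConstant({ r: 0.5, g: 0.5, b: 0.5, a: 1.0 });
// Used by stencil tests with the "replace" stencil operation.
passEncoder.setStencilReference(1);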

16.1.4. Queries

beginOcclusionQuery(queryIndex)
Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.beginOcclusionQuery(queryIndex) method.
Parameter Type Nullable Optional Description
queryIndex GPUSize32 The index of the query in the query set.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. If any of the following conditions are unsatisfied, generate a validation error and stop.

  2. Set this.[[occlusion_query_active]] to true.

endOcclusionQuery()
Called on: GPURenderPassEncoder this.

Returns: undefined

Issue the following steps on the Device timeline of this.[[device]]:

  1. If any of the following conditions are unsatisfied, generate a validation error and stop.

  2. Set this.[[occlusion_query_active]] to false.

16.1.5. Bundles

executeBundles(bundles)

Executes the commands previously recorded into the given GPURenderBundles as part of this render pass.

When a GPURenderBundle is executed, it does not inherit the render pass’s pipeline, bind groups, or vertex and index buffers. After a GPURenderBundle has executed, the render pass’s pipeline, bind groups, and vertex and index buffers are cleared.

Note: state is cleared even if zero GPURenderBundles are executed.

Called on: GPURenderPassEncoder this.

Arguments:

Arguments for the GPURenderPassEncoder.executeBundles(bundles) method.
Parameter Type Nullable Optional Description
bundles sequence<GPURenderBundle> List of render bundles to execute.

Returns: undefined

Issue the following steps on the Device timeline of this:

  1. If any of the following conditions are unsatisfied, generate a validation error and stop.

16.1.6. Finalization

The render pass encoder can be ended by calling endPass() once the user has finished recording commands for the pass. Once endPass() has been called, the render pass encoder can no longer be used.

endPass()

Completes recording of the render pass commands sequence.

Called on: GPURenderPassEncoder this.

Returns: undefined

Issue the following steps on the Device timeline of this:

  1. If any of the following requirements are unmet, generate a GPUValidationError and stop.

  2. Assert: this.[[command_encoder]].[[state]] is "locked".

  3. Set this.[[state]] to "ended".

  4. Set this.[[command_encoder]].[[state]] to "open".

  5. If any of the following requirements are unmet, make this.[[command_encoder]] invalid and stop.

  6. Extend this.[[command_encoder]].[[commands]] with this.[[commands]].

  7. For each timestampWrite in this.[[endTimestampWrites]]:

    1. Assert: timestampWrite.location is "end".

    2. Append a GPU command to this.[[command_encoder]].[[commands]] that writes the GPU’s timestamp value into the timestampWrite.queryIndexth index in timestampWrite.querySet.

  8. Enqueue the attachment stores/discards.

17. Bundles

17.1. GPURenderBundle

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPURenderBundle {
};
GPURenderBundle includes GPUObjectBase;
[[command_list]] of type list<GPU command>.

A list of GPU commands to be submitted to the GPURenderPassEncoder when the GPURenderBundle is executed.

[[layout]], of type GPURenderPassLayout

The layout of the render bundle.

[[depthReadOnly]], of type boolean?

If present, indicates that the depth component is not modified by executing this render bundle.

[[stencilReadOnly]], of type boolean?

If present, indicates that the stencil component is not modified by executing this render bundle.

17.1.1. Creation

dictionary GPURenderBundleDescriptor : GPUObjectDescriptorBase {
};
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPURenderBundleEncoder {
    GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUCommandsMixin;
GPURenderBundleEncoder includes GPUDebugCommandsMixin;
GPURenderBundleEncoder includes GPUProgrammablePassEncoder;
GPURenderBundleEncoder includes GPURenderEncoderBase;
createRenderBundleEncoder(descriptor)

Creates a GPURenderBundleEncoder.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createRenderBundleEncoder(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPURenderBundleEncoderDescriptor Description of the GPURenderBundleEncoder to create.

Returns: GPURenderBundleEncoder

Issue the following steps on the Device timeline of this:

  1. If any of the following conditions are unsatisfied, generate a validation error and stop.

  2. Let e be a new GPURenderBundleEncoder object.

  3. Set e.[[layout]] to descriptor.GPURenderPassLayout.

  4. Set e.[[depthReadOnly]] to descriptor.depthReadOnly.

  5. Set e.[[stencilReadOnly]] to descriptor.stencilReadOnly.

  6. Set e.[[state]] to "open".

  7. Return e.

Describe the rest of the steps for createRenderBundleEncoder().

17.1.2. Encoding

dictionary GPURenderBundleEncoderDescriptor : GPURenderPassLayout {
    boolean depthReadOnly = false;
    boolean stencilReadOnly = false;
};

17.1.3. Finalization

finish(descriptor)

Completes recording of the render bundle commands sequence.

Called on: GPURenderBundleEncoder this.

Arguments:

Arguments for the GPURenderBundleEncoder.finish(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPURenderBundleDescriptor

Returns: GPURenderBundle

  1. Let renderBundle be a new GPURenderBundle.

  2. Issue the following steps on the Device timeline of this:

    1. Let validationFailed be true if all of the following requirements are met, and false otherwise.

    2. Set this.[[state]] to "ended".

    3. If validationFailed, then:

      1. Generate a GPUValidationError in the current scope with appropriate error message.

      2. Return a new invalid GPURenderBundle.

    4. Set renderBundle.[[command_list]] to this.[[commands]].

  3. Return renderBundle.
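
A non-normative sketch of recording a render bundle and executing it in a render pass; the format, pipeline, and buffer names are placeholders, and the bundle's layout must be compatible with the pass it is executed in:

const bundleEncoder = device.createRenderBundleEncoder({
    colorFormats: ['bgra8unorm'],
});
bundleEncoder.setPipeline(myRenderPipeline);
bundleEncoder.setVertexBuffer(0, myVertexBuffer);
bundleEncoder.draw(3);
const renderBundle = bundleEncoder.finish();

// Later, inside a compatible render pass:
passEncoder.executeBundles([renderBundle]);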

18. Queues

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUQueue {
    undefined submit(sequence<GPUCommandBuffer> commandBuffers);

    Promise<undefined> onSubmittedWorkDone();

    undefined writeBuffer(
        GPUBuffer buffer,
        GPUSize64 bufferOffset,
        [AllowShared] BufferSource data,
        optional GPUSize64 dataOffset = 0,
        optional GPUSize64 size);

    undefined writeTexture(
        GPUImageCopyTexture destination,
        [AllowShared] BufferSource data,
        GPUImageDataLayout dataLayout,
        GPUExtent3D size);

    undefined copyExternalImageToTexture(
        GPUImageCopyExternalImage source,
        GPUImageCopyTextureTagged destination,
        GPUExtent3D copySize);
};
GPUQueue includes GPUObjectBase;

GPUQueue has the following methods:

writeBuffer(buffer, bufferOffset, data, dataOffset, size)

Issues a write operation of the provided data into a GPUBuffer.

Called on: GPUQueue this.

Arguments:

Arguments for the GPUQueue.writeBuffer(buffer, bufferOffset, data, dataOffset, size) method.
Parameter Type Nullable Optional Description
buffer GPUBuffer The buffer to write to.
bufferOffset GPUSize64 Offset in bytes into buffer to begin writing at.
data BufferSource Data to write into buffer.
dataOffset GPUSize64 Offset into data to begin writing from. Given in elements if data is a TypedArray and bytes otherwise.
size GPUSize64 Size of content to write from data to buffer. Given in elements if data is a TypedArray and bytes otherwise.

Returns: undefined

  1. If data is an ArrayBuffer or DataView, let the element type be "byte". Otherwise, data is a TypedArray; let the element type be the type of the TypedArray.

  2. Let dataSize be the size of data, in elements.

  3. If size is missing, let contentsSize be dataSize − dataOffset. Otherwise, let contentsSize be size.

  4. If any of the following conditions are unsatisfied, throw OperationError and stop.

    • contentsSize ≥ 0.

    • dataOffset + contentsSize ≤ dataSize.

    • contentsSize, converted to bytes, is a multiple of 4 bytes.

  5. Let dataContents be a copy of the bytes held by the buffer source.

  6. Let contents be the contentsSize elements of dataContents starting at an offset of dataOffset elements.

  7. Issue the following steps on the Queue timeline of this:

    1. If any of the following conditions are unsatisfied, generate a validation error and stop.

    2. Write contents into buffer starting at bufferOffset.
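
A non-normative sketch of writing a TypedArray into a buffer created with COPY_DST usage; the names are placeholders:

const uniformData = new Float32Array([1.0, 0.0, 0.0, 1.0]);
const uniformBuffer = device.createBuffer({
    size: uniformData.byteLength,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
// dataOffset and size are given in elements for TypedArrays; omitted here to write everything.
device.queue.writeBuffer(uniformBuffer, 0, uniformData);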

writeTexture(destination, data, dataLayout, size)

Issues a write operation of the provided data into a GPUTexture.

Called on: GPUQueue this.

Arguments:

Arguments for the GPUQueue.writeTexture(destination, data, dataLayout, size) method.
Parameter Type Nullable Optional Description
destination GPUImageCopyTexture The texture subresource and origin to write to.
data BufferSource Data to write into destination.
dataLayout GPUImageDataLayout Layout of the content in data.
size GPUExtent3D Extents of the content to write from data to destination.

Returns: undefined

  1. Let dataBytes be a copy of the bytes held by the buffer source data.

  2. Let dataByteSize be the number of bytes in dataBytes.

  3. Let textureDesc be destination.texture.[[descriptor]].

  4. If any of the following conditions are unsatisfied, throw OperationError and stop.

  5. Let contents be the contents of the images seen by viewing dataBytes with dataLayout and size.

    Specify more formally.

  6. Issue the following steps on the Queue timeline of this:

    1. If any of the following conditions are unsatisfied, generate a validation error and stop.

      Note: unlike GPUCommandEncoder.copyBufferToTexture(), there is no alignment requirement on either dataLayout.bytesPerRow or dataLayout.offset.

    2. Write contents into destination.

      Specify more formally.
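
A non-normative sketch of writing CPU-side pixel data into a 2D "rgba8unorm" texture created with COPY_DST usage; myTexture, width, and height are placeholders:

const bytesPerPixel = 4; // "rgba8unorm"
const pixelData = new Uint8Array(width * height * bytesPerPixel);
// ... fill pixelData ...
device.queue.writeTexture(
    { texture: myTexture },
    pixelData,
    { bytesPerRow: width * bytesPerPixel, rowsPerImage: height },
    [width, height, 1]);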

copyExternalImageToTexture(source, destination, copySize)

Issues a copy operation of the contents of a platform image/canvas into the destination texture.

This operation performs color encoding into the destination encoding according to the parameters of GPUImageCopyTextureTagged.

Copying into an "-srgb" texture results in the same texture bytes, not the same decoded values, as copying into the corresponding non-"-srgb" format. Thus, after a copy operation, sampling the destination texture has different results depending on whether its format is "-srgb", all else unchanged.

If an srgb-linear color space is added, explain here how it interacts.

Called on: GPUQueue this.

Arguments:

Arguments for the GPUQueue.copyExternalImageToTexture(source, destination, copySize) method.
Parameter Type Nullable Optional Description
source GPUImageCopyExternalImage source image and origin to copy to destination.
destination GPUImageCopyTextureTagged The texture subresource and origin to write to, and its encoding metadata.
copySize GPUExtent3D Extents of the content to write from source to destination.

Returns: undefined

  1. Let sourceImage be source.source

  2. Run Check the usability of the image argument(sourceImage). If it throws an exception, stop. If it does not return good, throw an InvalidStateError and stop.

  3. If sourceImage is not origin-clean, throw a SecurityError and stop.

  4. If any of the following requirements are unmet, throw an OperationError and stop.

  5. Issue the following steps on the Device timeline of this:

    1. Let textureDesc be destination.texture.[[descriptor]].

    2. If any of the following requirements are unmet, generate a validation error and stop.

    3. Do the actual copy.
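
A non-normative sketch of copying an ImageBitmap into a texture; the image URL and myTexture are placeholders, and myTexture is assumed to have been created with usage flags permitted by the validation rules above (at least COPY_DST):

const response = await fetch('image.png');
const imageBitmap = await createImageBitmap(await response.blob());
device.queue.copyExternalImageToTexture(
    { source: imageBitmap },
    { texture: myTexture },
    [imageBitmap.width, imageBitmap.height, 1]);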

submit(commandBuffers)

Schedules the execution of the command buffers by the GPU on this queue.

Called on: GPUQueue this.

Arguments:

Arguments for the GPUQueue.submit(commandBuffers) method.
Parameter Type Nullable Optional Description
commandBuffers sequence<GPUCommandBuffer>

Returns: undefined

Issue the following steps on the Device timeline of this:

  1. If any of the following conditions are unsatisfied, generate a validation error and stop.

  2. Issue the following steps on the Queue timeline of this:

    1. For each commandBuffer in commandBuffers:

      1. Execute each command in commandBuffer.[[command_list]].

onSubmittedWorkDone()

Returns a Promise that resolves once this queue finishes processing all the work submitted up to this moment.

Called on: GPUQueue this.

Arguments:

Arguments for the GPUQueue.onSubmittedWorkDone() method.
Parameter Type Nullable Optional Description

Returns: Promise<undefined>

Describe onSubmittedWorkDone() algorithm steps.
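
A non-normative sketch of submitting recorded work and waiting for it to complete on the queue; commandEncoder is a placeholder for an encoder whose commands have already been recorded:

device.queue.submit([commandEncoder.finish()]);
// Resolves once all work submitted up to this point has completed on the GPU.
await device.queue.onSubmittedWorkDone();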

19. Queries

19.1. GPUQuerySet

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUQuerySet {
    undefined destroy();
};
GPUQuerySet includes GPUObjectBase;

GPUQuerySet has the following internal slots:

[[descriptor]], of type GPUQuerySetDescriptor

The GPUQuerySetDescriptor describing this query set.

[[state]] of type query set state.

The current state of the GPUQuerySet.

Each GPUQuerySet has a current query set state on the Device timeline which is one of the following:

19.1.1. QuerySet Creation

A GPUQuerySetDescriptor specifies the options to use in creating a GPUQuerySet.

dictionary GPUQuerySetDescriptor : GPUObjectDescriptorBase {
    required GPUQueryType type;
    required GPUSize32 count;
};
type, of type GPUQueryType

The type of queries managed by GPUQuerySet.

count, of type GPUSize32

The number of queries managed by GPUQuerySet.

createQuerySet(descriptor)

Creates a GPUQuerySet.

Called on: GPUDevice this.

Arguments:

Arguments for the GPUDevice.createQuerySet(descriptor) method.
Parameter Type Nullable Optional Description
descriptor GPUQuerySetDescriptor Description of the GPUQuerySet to create.

Returns: GPUQuerySet

  1. If descriptor.type is "timestamp", but this.[[device]].[[features]] does not contain "timestamp-query", throw a TypeError.

  2. If any of the following requirements are unmet, return an error query set and stop.

  3. Let q be a new GPUQuerySet object.

  4. Set q.[[descriptor]] to descriptor.

  5. Set q.[[state]] to available.

  6. Return q.

19.1.2. QuerySet Destruction

An application that no longer requires a GPUQuerySet can choose to lose access to it before garbage collection by calling destroy().

destroy()

Destroys the GPUQuerySet.

Called on: GPUQuerySet this.

Returns: undefined

  1. Set this.[[state]] to destroyed.

19.2. QueryType

enum GPUQueryType {
    "occlusion",
    "timestamp",
};

19.3. Occlusion Query

Occlusion query is only available on render passes. It queries the number of fragment samples that pass all the per-fragment tests for a set of drawing commands, including scissor, sample mask, alpha to coverage, stencil, and depth tests. Any non-zero result value for the query indicates that at least one sample passed the tests and reached the output merging stage of the render pipeline; 0 indicates that no samples passed the tests.

When beginning a render pass, GPURenderPassDescriptor.occlusionQuerySet must be set to be able to use occlusion queries during the pass. An occlusion query is begun and ended by calling beginOcclusionQuery() and endOcclusionQuery() in pairs that cannot be nested.
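
A non-normative sketch of using an occlusion query around a draw call and resolving the result into a buffer; the command encoder, pass descriptor contents, and draw calls are placeholders:

const occlusionQuerySet = device.createQuerySet({ type: 'occlusion', count: 1 });
const resolveBuffer = device.createBuffer({
    size: 8, // one 64-bit result
    usage: GPUBufferUsage.QUERY_RESOLVE | GPUBufferUsage.COPY_SRC,
});

const passEncoder = commandEncoder.beginRenderPass({
    colorAttachments: [/* ... */],
    occlusionQuerySet,
});
passEncoder.beginOcclusionQuery(0);
// ... draw calls whose passing samples are counted ...
passEncoder.endOcclusionQuery();
passEncoder.endPass();

commandEncoder.resolveQuerySet(occlusionQuerySet, 0, 1, resolveBuffer, 0);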

19.4. Timestamp Query

Timestamp query allows applications to write timestamp values to a GPUQuerySet by calling writeTimestamp() on a GPUCommandEncoder, and then resolve the timestamp values, in nanoseconds (as GPUSize64 values), into a GPUBuffer using resolveQuerySet().

Timestamp query requires that the "timestamp-query" feature is available on the device.

Note: Timestamp values may be zero if the physical device resets its timestamp counter; such a value, and the values that follow it, should be ignored.

Write normative text about timestamp value resets.

Because timestamp query provides high-resolution GPU timestamp, we need to decide what constraints, if any, are on its availability.
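
A non-normative sketch of writing and resolving timestamps, assuming the device was created with the "timestamp-query" feature; the buffer names and the work being measured are placeholders:

const timestampQuerySet = device.createQuerySet({ type: 'timestamp', count: 2 });
const resolveBuffer = device.createBuffer({
    size: 2 * 8, // two 64-bit timestamps, in nanoseconds
    usage: GPUBufferUsage.QUERY_RESOLVE | GPUBufferUsage.COPY_SRC,
});

const commandEncoder = device.createCommandEncoder();
commandEncoder.writeTimestamp(timestampQuerySet, 0);
// ... encode the work to be measured ...
commandEncoder.writeTimestamp(timestampQuerySet, 1);
commandEncoder.resolveQuerySet(timestampQuerySet, 0, 2, resolveBuffer, 0);
device.queue.submit([commandEncoder.finish()]);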

20. Canvas Rendering

20.1. HTMLCanvasElement.getContext()

A GPUCanvasContext object can be obtained via the getContext() method of an HTMLCanvasElement instance by passing the string literal 'webgpu' as its contextType argument.

Get a GPUCanvasContext from an offscreen HTMLCanvasElement:
const canvas = document.createElement('canvas');
const context = canvas.getContext('webgpu');
context.configure(/* ... */);
// ...

20.2. GPUCanvasContext

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUCanvasContext {
    readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;

    undefined configure(GPUCanvasConfiguration configuration);
    undefined unconfigure();

    GPUTextureFormat getPreferredFormat(GPUAdapter adapter);
    GPUTexture getCurrentTexture();
};

GPUCanvasContext has the following attributes:

canvas, of type (HTMLCanvasElement or OffscreenCanvas), readonly

The canvas this context was created from.

GPUCanvasContext has the following internal slots:

[[validConfiguration]] of type boolean, initially false.

Indicates if the context currently has a valid configuration.

[[configuration]] of type GPUCanvasConfiguration, initially null.

The options this context is configured with. null if the context has not been configured or the configuration has been removed.

[[size]] of type GPUExtent3D

The size of GPUTextures returned from this context. depthOrArrayLayers is always 1.

[[currentTexture]] of type GPUTexture?, initially null

The current texture that will be returned by the context when calling getCurrentTexture(), and the next one to be composited to the document. Initially set to the result of allocating a new context texture for this context.

GPUCanvasContext has the following methods:

configure(configuration)

Configures the context for this canvas. Destroys any textures produced with a previous configuration.

Called on: GPUCanvasContext this.

Arguments:

Arguments for the GPUCanvasContext.configure(configuration) method.
Parameter Type Nullable Optional Description
configuration GPUCanvasConfiguration Desired configuration for the context.

Returns: undefined

  1. Set this.[[validConfiguration]] to false.

  2. Set this.[[configuration]] to configuration.

  3. If this.[[currentTexture]] is not null call destroy() on this.[[currentTexture]].

  4. Set this.[[currentTexture]] to null.

  5. Let device be configuration.device.

  6. Let canvas be this.canvas.

  7. If configuration.size is undefined set this.[[size]] to [canvas.width, canvas.height, 1], otherwise set this.[[size]] to configuration.size.

  8. Issue the following steps on the Device timeline of device:

    1. If any of the following conditions are unsatisfied:

      Then:

      1. Generate a GPUValidationError in the current scope with appropriate error message.

      2. Return.

  9. Set this.[[validConfiguration]] to true.
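
A non-normative sketch of a basic configuration call; device and adapter are placeholders obtained during initialization:

context.configure({
    device: device,
    format: context.getPreferredFormat(adapter),
    // usage defaults to GPUTextureUsage.RENDER_ATTACHMENT;
    // size defaults to the canvas width and height.
});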

unconfigure()

Removes the context configuration. Destroys any textures produced while configured.

Called on: GPUCanvasContext this.

Returns: undefined

  1. Set this.[[validConfiguration]] to false.

  2. Set this.[[configuration]] to null.

  3. If this.[[currentTexture]] is not null call destroy() on this.[[currentTexture]].

  4. Set this.[[currentTexture]] to null.

getPreferredFormat(adapter)

Returns an optimal GPUTextureFormat to use with this context and devices created from the given adapter.

Called on: GPUCanvasContext this.

Arguments:

Arguments for the GPUCanvasContext.getPreferredFormat(adapter) method.
Parameter Type Nullable Optional Description
adapter GPUAdapter Adapter the format should be queried for.

Returns: GPUTextureFormat

  1. Return an optimal GPUTextureFormat to use when calling configure() with the given adapter. Must be one of the supported context formats.

getCurrentTexture()

Get the GPUTexture that will be composited to the document by the GPUCanvasContext next.

Called on: GPUCanvasContext this.

Returns: GPUTexture

  1. If this.[[configuration]] is null:

    1. Throw an OperationError and stop.

  2. If this.[[currentTexture]] is null or this.[[currentTexture]].[[destroyed]] is true:

    1. Set this.[[currentTexture]] to the result of allocating a new context texture for this.

  3. Return this.[[currentTexture]].

Note: Developers can expect that the same GPUTexture object will be returned by every call to getCurrentTexture() made within the same frame (i.e. between invocations of Update the rendering) unless configure() is called.
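
A non-normative sketch of rendering to the canvas each frame; the clear-only pass shown here is a placeholder for real rendering work, and device and context are assumed to be set up already:

function frame() {
    const commandEncoder = device.createCommandEncoder();
    const passEncoder = commandEncoder.beginRenderPass({
        colorAttachments: [{
            view: context.getCurrentTexture().createView(),
            loadValue: { r: 0, g: 0, b: 0, a: 1 },
            storeOp: 'store',
        }],
    });
    // ... draw calls ...
    passEncoder.endPass();
    device.queue.submit([commandEncoder.finish()]);
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);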

During the "update the rendering [of the] Document" step of the "Update the rendering" HTML processing model, each GPUCanvasContext context must present the context content to the canvas by running the following steps:
  1. If context.[[currentTexture]] is not null and context.[[currentTexture]].[[destroyed]] is false:

    1. Let imageContents be a copy of the image contents of context.

    2. Update context.canvas with imageContents.

    3. Call context.[[currentTexture]].destroy().

  2. Set context.[[currentTexture]] to null.

When transferToImageBitmap() is called on a canvas with GPUCanvasContext context:
  1. Let imageContents be a copy of the image contents of context.

  2. If context.[[currentTexture]] is not null:

    1. Call context.[[currentTexture]].destroy().

  3. Set context.[[currentTexture]] to null.

  4. Return a new ImageBitmap containing the imageContents.

When WebGPU canvas contents are read using other Web APIs, like drawImage(), texImage2D(), texSubImage2D(), toDataURL(), toBlob(), and so on, they get a copy of the image contents of a context:

Arguments: context: the GPUCanvasContext

Returns: image contents

  1. Let texture be context.[[currentTexture]].

  2. If any of the following requirements is unmet, return a transparent black image of size context.[[size]] and stop.

  3. Ensure that all submitted work items (e.g. queue submissions) have completed writing to texture.

  4. Return the contents of texture, tagged as having alpha premultiplied, and with the color space context.[[configuration]].colorSpace.

    Does compositingAlphaMode=opaque make this return opaque contents? [Issue #gpuweb/gpuweb#1847]

To allocate a new context texture for GPUCanvasContext context run the following steps:
  1. Let device be context.[[configuration]].device.

  2. If context.[[validConfiguration]] is false:

    1. Generate a GPUValidationError in the current scope of device with an appropriate error message.

    2. Return a new invalid GPUTexture.

  3. Let textureDescriptor be a new GPUTextureDescriptor.

  4. Set textureDescriptor.size to context.[[size]].

  5. Set textureDescriptor.format to context.[[configuration]].format.

  6. Set textureDescriptor.usage to context.[[configuration]].usage.

  7. Let texture be a new GPUTexture created as if device.createTexture() were called with textureDescriptor.

    If a previously presented texture from context matches the required criteria, its GPU memory may be re-used.
  8. Ensure texture is cleared to (0, 0, 0, 0).

  9. Return texture.

20.3. GPUCanvasConfiguration

The supported context formats are a set of GPUTextureFormats that must be supported when specified as a GPUCanvasConfiguration.format regardless of the given GPUCanvasConfiguration.device, initially set to: «"bgra8unorm", "bgra8unorm-srgb", "rgba8unorm", "rgba8unorm-srgb"».

enum GPUCanvasCompositingAlphaMode {
    "opaque",
    "premultiplied",
};

dictionary GPUCanvasConfiguration {
    required GPUDevice device;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10;  // GPUTextureUsage.RENDER_ATTACHMENT
    GPUPredefinedColorSpace colorSpace = "srgb";
    GPUCanvasCompositingAlphaMode compositingAlphaMode = "opaque";
    GPUExtent3D size;
};

20.3.1. Canvas Color Space

During presentation, the chrominance of color values outside of the [0, 1] range is not to be clamped to that range; extended values may be used to display colors outside of the gamut defined by the canvas' color space’s primaries, when permitted by the configured format and the user’s display capabilities. This is in contrast with luminance, which is to be clamped to the maximum standard dynamic range luminance.

Unless high dynamic range is explicitly enabled for the canvas element.

20.3.2. Canvas Context sizing

A GPUCanvasContext's [[size]] is set by the GPUCanvasConfiguration passed to configure(), and remains the same until configure() is called again with a new size. If a size is not specified, the width and height attributes of the GPUCanvasContext.canvas at the time configure() is called will be used. If GPUCanvasContext.[[size]] does not match the dimensions of the canvas, the textures produced by the GPUCanvasContext will be scaled to fit the canvas element.

Note: Unlike 'webgl' or '2d' contexts, width and height attributes of canvases with a 'webgpu' context only affect:

If it is desired to match the dimensions of the canvas after it is resized, the GPUCanvasContext must be reconfigured by calling configure() again with the new dimensions.

Reconfigure a GPUCanvasContext in response to canvas resize, monitored using ResizeObserver to get the exact pixel dimensions of the canvas:
const canvas = document.createElement('canvas');
const context =  canvas.getContext('webgpu');

const resizeObserver = new ResizeObserver(entries => {
    for (const entry of entries) {
        if (entry.target !== canvas) { continue; }
        context.configure({
            device: someDevice,
            // someAdapter is the adapter that someDevice was requested from.
            format: context.getPreferredFormat(someAdapter),
            size: {
                // devicePixelContentBoxSize reports the size of the canvas
                // element in device pixels.
                width: entry.devicePixelContentBoxSize[0].inlineSize,
                height: entry.devicePixelContentBoxSize[0].blockSize,
            }
        });
    }
});
resizeObserver.observe(canvas);

20.4. GPUCanvasCompositingAlphaMode

This enum selects how the contents of the canvas' context will paint onto the page.

"opaque"

Paint RGB as opaque and ignore alpha values. If the content is not already opaque, implementations may need to clear alpha to opaque during presentation.

Compositing behavior: dst.rgb = src.rgb, dst.a = 1

"premultiplied"

Composite assuming color values are premultiplied by their alpha value. 100% red at 50% opacity is [0.5, 0, 0, 0.5]. Color values must be less than or equal to their alpha value; [1.0, 0, 0, 0.5] is "super-luminant" and cannot reliably be displayed.

Compositing behavior: dst.rgb = src.rgb + dst.rgb × (1 − src.a), dst.a = src.a + dst.a × (1 − src.a)

21. Errors & Debugging

21.1. Fatal Errors

enum GPUDeviceLostReason {
    "destroyed",
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUDeviceLostInfo {
    readonly attribute (GPUDeviceLostReason or undefined) reason;
    readonly attribute DOMString message;
};

partial interface GPUDevice {
    readonly attribute Promise<GPUDeviceLostInfo> lost;
};

GPUDevice has the following additional attributes:

lost, of type Promise<GPUDeviceLostInfo>, readonly

A promise which is created with the device, remains pending for the lifetime of the device, then resolves when the device is lost.

This attribute is backed by an immutable internal slot of the same name, initially set to a new promise, and always returns its value.

21.2. Error Scopes

enum GPUErrorFilter {
    "out-of-memory",
    "validation",
};
[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUOutOfMemoryError {
    constructor();
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUValidationError {
    constructor(DOMString message);
    readonly attribute DOMString message;
};

typedef (GPUOutOfMemoryError or GPUValidationError) GPUError;
partial interface GPUDevice {
    undefined pushErrorScope(GPUErrorFilter filter);
    Promise<GPUError?> popErrorScope();
};
pushErrorScope(filter)

Define pushErrorScope.

popErrorScope()

Define popErrorScope.

Rejects with OperationError if:

  • The device is lost.

  • There are no error scopes on the stack.
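
A non-normative sketch of capturing validation errors around a block of API calls:

device.pushErrorScope('validation');

// ... calls whose validation errors should be captured ...

device.popErrorScope().then((error) => {
    if (error) {
        console.warn(`Validation error captured: ${error.message}`);
    }
});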

21.3. Telemetry

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUUncapturedErrorEvent : Event {
    constructor(
        DOMString type,
        GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict
    );
    readonly attribute GPUError error;
};

dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};
error, of type GPUError, readonly

Object representing the error that was uncaptured. This has the same type as errors returned by popErrorScope().

This attribute is backed by an immutable internal slot of the same name, and always returns its value.

This attribute should be [SameObject]. (If GPUError becomes an interface then we can do this without resolving the WebIDL issue.) [Issue #whatwg/webidl#1077]

partial interface GPUDevice {
    [Exposed=(Window, DedicatedWorker)]
    attribute EventHandler onuncapturederror;
};

22. Detailed Operations

This section describes the details of various GPU operations.

22.1. Transfer

describe the transfers at the high level

22.2. Computing

Computing operations provide direct access to the GPU’s programmable hardware. Compute shaders do not have pipeline inputs or outputs; their results are side effects from writing data into storage bindings bound as GPUBufferBindingType."storage" and GPUStorageTextureBindingLayout. These operations are encoded within GPUComputePassEncoder as:

describe the computing algorithm

22.3. Rendering

Rendering is done by a set of GPU operations that are executed within GPURenderPassEncoder, and result in modifications of the texture data, viewed by the render pass attachments. These operations are encoded with:

Note: rendering is the traditional use of GPUs, and is supported by multiple fixed-function blocks in hardware.

A RenderState is an internal object representing the state of the current GPURenderPassEncoder during command encoding. RenderState is a spec namespace for the following definitions:

For a given GPURenderPassEncoder pass, the syntax:

The main rendering algorithm:

render(descriptor, drawCall, state)

Arguments:

  1. Resolve indices. See § 22.3.1 Index Resolution.

    Let vertexList be the result of resolve indices(drawCall, state).

  2. Process vertices. See § 22.3.2 Vertex Processing.

    Execute process vertices(vertexList, drawCall, descriptor.vertex, state).

  3. Assemble primitives. See § 22.3.3 Primitive Assembly.

    Execute assemble primitives(vertexList, drawCall, descriptor.primitive).

  4. Clip primitives. See § 22.3.4 Primitive Clipping.

    Let primitiveList be the result of this stage.

  5. Rasterize. See § 22.3.5 Rasterization.

    Let rasterizationList be the result of rasterize(primitiveList, state).

  6. Process fragments. See § 22.3.6 Fragment Processing.

    Gather a list of fragments, resulting from executing process fragment(rasterPoint, descriptor.fragment, state) for each rasterPoint in rasterizationList.

  7. Process depth/stencil.

    fill out the section, using fragments

  8. Write pixels.

    fill out the section
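
Non-normatively, a draw call that feeds this algorithm is recorded roughly as follows. The names device, renderPipeline, vertexBuffer, and renderPassDescriptor (with its color and depth-stencil attachments) are assumed to have been created earlier by the application:

// Non-normative sketch: recording the commands that trigger render() above.
const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass(renderPassDescriptor);
pass.setPipeline(renderPipeline);
pass.setVertexBuffer(0, vertexBuffer);
pass.draw(3);                          // non-indexed draw of 3 vertices, 1 instance
pass.endPass();
device.queue.submit([encoder.finish()]);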

22.3.1. Index Resolution

At the first stage of rendering, the pipeline builds a list of vertices to process for each instance.

resolve indices(drawCall, state)

Arguments:

Returns: list of integer indices.

  1. Let vertexIndexList be an empty list of indices.

  2. If drawCall is an indexed draw call:

    1. Initialize the vertexIndexList with drawCall.indexCount integers.

    2. For i in range 0 .. drawCall.indexCount (non-inclusive):

      1. Let relativeVertexIndex be fetch index(i + drawCall.firstIndex, state.indexBuffer).

      2. If relativeVertexIndex has the special value "out of bounds", stop and return the empty list.

        Note: Implementations may choose to display a warning when this occurs, especially when it is easy to detect (like in non-indirect indexed draw calls).

      3. Append drawCall.baseVertex + relativeVertexIndex to the vertexIndexList.

  3. Otherwise:

    1. Initialize the vertexIndexList with drawCall.vertexCount integers.

    2. Set each vertexIndexList item i to the value drawCall.firstVertex + i.

  4. Return vertexIndexList.

Note: in case of indirect draw calls, the indexCount, vertexCount, and other properties of drawCall are read from the indirect buffer instead of the draw command itself.

specify indirect commands better.

fetch index(i, indexBufferState)

Arguments:

Returns: unsigned integer or "out of bounds"

  1. Let indexSize, in bytes, be defined by indexBufferState.format: 2 for "uint16", or 4 for "uint32".

  2. If indexBufferState.offset + (i + 1) × indexSize > indexBufferState.size, return the special value "out of bounds".

  3. Interpret the data in indexBufferState.buffer, starting at offset indexBufferState.offset + i × indexSize, of size indexSize bytes, as an unsigned integer and return it.
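
The two algorithms above can be summarized by the following non-normative sketch, in which a typed array stands in for the bound index buffer and byte offsets and sizes are simplified to element counts:

// Non-normative sketch of § 22.3.1 Index Resolution.
type IndexBufferState = {
  data: Uint16Array | Uint32Array;   // element width mirrors the index format
  count: number;                     // number of indices visible through the binding
};

type DrawCall =
  | { kind: "indexed"; indexCount: number; firstIndex: number; baseVertex: number }
  | { kind: "non-indexed"; vertexCount: number; firstVertex: number };

function fetchIndex(i: number, ib: IndexBufferState): number | "out of bounds" {
  return i < ib.count ? ib.data[i] : "out of bounds";
}

function resolveIndices(drawCall: DrawCall, indexBuffer: IndexBufferState): number[] {
  const vertexIndexList: number[] = [];
  if (drawCall.kind === "indexed") {
    for (let i = 0; i < drawCall.indexCount; i++) {
      const rel = fetchIndex(i + drawCall.firstIndex, indexBuffer);
      if (rel === "out of bounds") return [];   // cancels the draw call
      vertexIndexList.push(drawCall.baseVertex + rel);
    }
  } else {
    for (let i = 0; i < drawCall.vertexCount; i++) {
      vertexIndexList.push(drawCall.firstVertex + i);
    }
  }
  return vertexIndexList;
}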

22.3.2. Vertex Processing

Vertex processing stage is a programmable stage of the render pipeline that processes the vertex attribute data, and produces clip space positions for § 22.3.4 Primitive Clipping, as well as other data for the § 22.3.6 Fragment Processing.

process vertices(vertexIndexList, drawCall, desc, state)

Arguments:

Each vertex vertexIndex in the vertexIndexList, in each instance of index rawInstanceIndex, is processed independently. The rawInstanceIndex is in range from 0 to drawCall.instanceCount - 1, inclusive. This processing happens in parallel, and any side effects, such as writes into GPUBufferBindingType."storage" bindings, may happen in any order.

  1. Let instanceIndex be rawInstanceIndex + drawCall.firstInstance.

  2. For each non-null vertexBufferLayout in the list of desc.buffers:

    1. Let i be the index of the buffer layout in this list.

    2. Let vertexBuffer, vertexBufferOffset, and vertexBufferBindingSize be the buffer, offset, and size at slot i of state.vertexBuffers.

    3. Let vertexElementIndex be dependent on vertexBufferLayout.stepMode:

      "vertex"

      vertexIndex

      "instance"

      instanceIndex

    4. For each attributeDesc in vertexBufferLayout.attributes:

      1. Let attributeOffset be vertexBufferOffset + vertexElementIndex * vertexBufferLayout.arrayStride + attributeDesc.offset.

      2. Load the attribute data of format attributeDesc.format from vertexBuffer starting at offset attributeOffset. The components are loaded in the order x, y, z, w from buffer memory.

        If this results in an out-of-bounds access, the resulting value is determined according to WGSL’s invalid memory reference behavior.

      3. Optionally (implementation-defined): If attributeOffset + sizeof(attributeDesc.format) > vertexBufferOffset + vertexBufferBindingSize, empty vertexIndexList and stop, cancelling the draw call.

        Note: This allows implementations to detect out-of-bounds values in the index buffer before issuing a draw call, instead of using invalid memory reference behavior.

      4. Convert the data into a shader-visible format, according to channel formats rules.

        An attribute of type "snorm8x2" and byte values of [0x70, 0xD0] will be converted to vec2<f32>(0.88, -0.38) in WGSL.
      5. Adjust the data size to the shader type:

        • if both are scalar, or both are vectors of the same dimensionality, no adjustment is needed.

        • if data is vector but the shader type is scalar, then only the first component is extracted.

        • if both are vectors, and data has a higher dimension, the extra components are dropped.

          An attribute of type "float32x3" and value vec3<f32>(1.0, 2.0, 3.0) will exposed to the shader as vec2<f32>(1.0, 2.0) if a 2-component vector is expected.
        • if the shader type is a vector of higher dimensionality, or the data is a scalar, then the missing components are filled from vec4<*>(0, 0, 0, 1) value.

          An attribute of type "sint32" and value 5 will be exposed to the shader as vec4<i32>(5, 0, 0, 1) if a 4-component vector is expected.
      6. Bind the data to vertex shader input location attributeDesc.shaderLocation.

  3. For each GPUBindGroup group at index in state.bindGroups:

    1. For each resource GPUBindingResource in the bind group:

      1. Let entry be the corresponding GPUBindGroupLayoutEntry for this resource.

      2. If entry.visibility includes VERTEX:

  4. Set the shader builtins:

    • Set the vertex_index builtin, if any, to vertexIndex.

    • Set the instance_index builtin, if any, to instanceIndex.

  5. Invoke the vertex shader entry point described by desc.

    Note: The target platform caches the results of vertex shader invocations. There is no guarantee that any vertexIndex that repeats more than once will result in multiple invocations. Similarly, there is no guarantee that a single vertexIndex will only be processed once.
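
As a non-normative illustration of the state consumed by step 2 above, the following GPUVertexState.buffers value describes one per-vertex buffer and one per-instance buffer (the shaderLocation values are assumed to match the WGSL entry point's inputs):

// Non-normative sketch: vertex buffer layouts driving attribute fetching.
const vertexBuffers: GPUVertexBufferLayout[] = [
  {
    arrayStride: 12,                   // 3 × f32 per vertex
    stepMode: "vertex",                // advances with vertexIndex
    attributes: [{ format: "float32x3", offset: 0, shaderLocation: 0 }],
  },
  {
    arrayStride: 4,
    stepMode: "instance",              // advances with instanceIndex
    attributes: [{ format: "unorm8x4", offset: 0, shaderLocation: 1 }],
  },
];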

22.3.3. Primitive Assembly

Primitives are assembled by a fixed-function stage of GPUs.

assemble primitives(vertexIndexList, drawCall, desc)

Arguments:

For each instance, the primitives get assembled from the vertices that have been processed by the shaders, based on the vertexIndexList.

  1. First, if the primitive topology is a strip (which means that desc.stripIndexFormat is not undefined) and the drawCall is indexed, the vertexIndexList is split into sub-lists, using the maximum value of desc.stripIndexFormat as a separator.

    Example: a vertexIndexList with values [1, 2, 65535, 4, 5, 6] of type "uint16" will be split in sub-lists [1, 2] and [4, 5, 6].

  2. For each of the sub-lists vl, primitive generation is done according to the desc.topology:

    "line-list"

    Line primitives are composed from (vl.0, vl.1), then (vl.2, vl.3), then (vl.4, vl.5), etc. Each subsequent primitive takes 2 vertices.

    "line-strip"

    Line primitives are composed from (vl.0, vl.1), then (vl.1, vl.2), then (vl.2, vl.3), etc. Each subsequent primitive takes 1 vertex.

    "triangle-list"

    Triangle primitives are composed from (vl.0, vl.1, vl.2), then (vl.3, vl.4, vl.5), then (vl.6, vl.7, vl.8), etc. Each subsequent primitive takes 3 vertices.

    "triangle-strip"

    Triangle primitives are composed from (vl.0, vl.1, vl.2), then (vl.2, vl.1, vl.3), then (vl.2, vl.3, vl.4), then (vl.4, vl.3, vl.5), etc. Each subsequent primitive takes 1 vertex.

    should this be defined more formally?

    Any incomplete primitives are dropped.

22.3.4. Primitive Clipping

Vertex shaders have to produce a built-in "position" (of type vec4<f32>), which denotes the clip position of a vertex.

link to WGSL built-ins

Primitives are clipped to the clip volume, which, for any clip position p inside a primitive, is defined by the following inequalities:

−p.w ≤ p.x ≤ p.w

−p.w ≤ p.y ≤ p.w

0 ≤ p.z ≤ p.w

If descriptor.primitive.unclippedDepth is true, depth clipping is not applied: the clip volume is not bounded in the z dimension.

A primitive passes through this stage unchanged if every one of its edges lies entirely inside the clip volume. If the edges of a primitive intersect the boundary of the clip volume, the intersecting edges are reconnected by new edges that lie along the boundary of the clip volume. For triangular primitives (descriptor.primitive.topology is "triangle-list" or "triangle-strip"), this reconnection may introduce new vertices into the polygon, internally.

If a primitive intersects an edge of the clip volume’s boundary, the clipped polygon must include a point on this boundary edge.

If the vertex shader outputs other floating-point values (scalars and vectors), qualified with "perspective" interpolation, they also get clipped. The output values associated with a vertex that lies within the clip volume are unaffected by clipping. If a primitive is clipped, however, the output values assigned to vertices produced by clipping are clipped.

Considering an edge between vertices a and b that got clipped, resulting in the vertex c, let’s define t to be the ratio between the edge vertices: c.p = t × a.p + (1 − t) × b.p, where x.p is the output clip position of a vertex x.

For each vertex output value "v" with a corresponding fragment input, a.v and b.v would be the outputs for a and b vertices respectively. The clipped shader output c.v is produced based on the interpolation qualifier:

"flat"

Flat interpolation is unaffected, and is based on provoking vertex, which is the first vertex in the primitive. The output value is the same for the whole primitive, and matches the vertex output of the provoking vertex: c.v = provoking vertex.v

"linear"

The interpolation ratio gets adjusted against the perspective coordinates of the clip positions, so that the result of interpolation is linear in screen space.

provide more specifics here, if possible

"perspective"

The value is linearly interpolated in clip space, producing perspective-correct values:

c.v = t × a.v + (1 − t) × b.v

link to interpolation qualifiers in WGSL

The result of primitive clipping is a new set of primitives, which are contained within the clip volume.

22.3.5. Rasterization

Rasterization is the hardware processing stage that maps the generated primitives to the 2-dimensional rendering area of the framebuffer - the set of render attachments in the current GPURenderPassEncoder. This rendering area is split into an even grid of pixels.

Rasterization determines the set of pixels affected by a primitive. In case of multi-sampling, each pixel is further split into descriptor.multisample.count samples. The locations of samples are the same for each pixel, but not defined in this spec.

do we want to force-enable the "Standard sample locations" in Vulkan?

The framebuffer coordinates start from the top-left corner of the render targets. Each unit corresponds exactly to a pixel. See § 3.3 Coordinate Systems for more information.

Let’s define a FragmentDestination to contain:

position

the 2D pixel position in framebuffer space

sampleIndex

an integer in case § 22.3.10 Sample frequency shading is active, or null otherwise

We’ll also use a notion of NDC - normalized device coordinates. In this coordinate system, the viewport bounds range in X and Y from -1 to 1, and in Z from 0 to 1.

Rasterization produces a list of RasterizationPoints, each containing the following data:

destination

refers to FragmentDestination

coverageMask

refers to multisample coverage mask (see § 22.3.11 Sample Masking)

frontFacing

is true if it’s a point on the front face of a primitive

perspectiveDivisor

refers to interpolated 1.0 ÷ W across the primitive

depth

refers to the depth in NDC

primitiveVertices

refers to the list of vertex outputs forming the primitive

barycentricCoordinates

refers to § 22.3.5.3 Barycentric coordinates

define the depth computation algorithm

rasterize(primitiveList, state)

Arguments:

Returns: list of RasterizationPoint.

Each primitive in primitiveList is processed independently. However, the order of primitives affects later stages, such as depth/stencil operations and pixel writes.

  1. First, the clipped vertices are transformed into NDC - normalized device coordinates. Given the output position p, the NDC coordinates are computed as:

    divisor(p) = 1.0 ÷ p.w

    ndc(p) = vector(p.x ÷ p.w, p.y ÷ p.w, p.z ÷ p.w)

  2. Let vp be state.viewport. Then the NDC coordinates n are converted into framebuffer coordinates, based on the size of the render targets:

framebufferCoords(n) = vector(vp.x + 0.5 × (n.x + 1) × vp.width, vp.y + 0.5 × (n.y + 1) × vp.height)

specify the depth translation into viewport as well

  3. Let rasterizationPoints be an empty list.

specify that each rasterization point gets assigned an interpolated divisor(p) and framebufferCoords(n), as well as the other attributes.

  4. Proceed with a specific rasterization algorithm, depending on primitive.topology:

    "point-list"

    The point, if not filtered by § 22.3.4 Primitive Clipping, goes into § 22.3.5.1 Point Rasterization.

    "line-list" or "line-strip"

    The line cut by § 22.3.4 Primitive Clipping goes into § 22.3.5.2 Line Rasterization.

    "triangle-list" or "triangle-strip"

    The polygon produced in § 22.3.4 Primitive Clipping goes into § 22.3.5.4 Polygon Rasterization.

    reword the "goes into" part

  5. Remove all the points rp from rasterizationPoints that have rp.destination.position outside of state.scissorRect.

  6. Return rasterizationPoints.

22.3.5.1. Point Rasterization

A single FragmentDestination is selected within the pixel containing the framebuffer coordinates of the point.

The coverage mask depends on multi-sampling mode:

sample-frequency

coverageMask = 1 ≪ sampleIndex

pixel-frequency multi-sampling

coverageMask = (1 ≪ descriptor.multisample.count) − 1

no multi-sampling

coverageMask = 1

22.3.5.2. Line Rasterization

fill out this section

22.3.5.3. Barycentric coordinates

Barycentric coordinates are a list of n numbers bi, defined for a point p inside a convex polygon with n vertices vi in framebuffer space. Each bi is in range 0 to 1, inclusive, and represents the proximity to vertex vi. Their sum is always constant:

∑ (bi) = 1

These coordinates uniquely specify any point p within the polygon (or on its boundary) as:

p = ∑ (bi × vi)

For a polygon with 3 vertices (a triangle), the barycentric coordinates of any point p can be computed as follows:

Apolygon = A(v1, v2, v3)

b1 = A(p, v2, v3) ÷ Apolygon

b2 = A(v1, p, v3) ÷ Apolygon

b3 = A(v1, v2, p) ÷ Apolygon

Where A(list of points) is the area of the polygon with the given set of vertices.

For polygons with more than 3 vertices, the exact algorithm is implementation-dependent. One of the possible implementations is to triangulate the polygon and compute the barycentrics of a point based on the triangle it falls into.
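
A non-normative sketch of this computation for a triangle, using signed areas in framebuffer space:

// Non-normative sketch: barycentric coordinates of a point p in triangle (v1, v2, v3).
type Point = { x: number; y: number };

function signedArea(a: Point, b: Point, c: Point): number {
  return 0.5 * ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

function barycentric(p: Point, v1: Point, v2: Point, v3: Point): [number, number, number] {
  const area = signedArea(v1, v2, v3);          // Apolygon
  const b1 = signedArea(p, v2, v3) / area;
  const b2 = signedArea(v1, p, v3) / area;
  const b3 = signedArea(v1, v2, p) / area;
  return [b1, b2, b3];                          // b1 + b2 + b3 = 1
}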

22.3.5.4. Polygon Rasterization

A polygon is front-facing if it’s oriented towards the projection. Otherwise, the polygon is back-facing.

rasterize polygon()

Arguments:

Returns: list of RasterizationPoint.

  1. Let rasterizationPoints be an empty list.

  2. Let v(i) be the framebuffer coordinates for the clipped vertex number i (starting with 1) in a rasterized polygon of n vertices.

    Note: this section uses the term "polygon" instead of a "triangle", since § 22.3.4 Primitive Clipping stage may have introduced additional vertices. This is non-observable by the application.

  3. Determine if the polygon is front-facing, which depends on the sign of the area occupied by the polygon in framebuffer coordinates:

    area = 0.5 × ((v1.x × vn.y − vn.x × v1.y) + ∑ (vi+1.x × vi.y − vi.x × vi+1.y))

    The sign of area is interpreted based on the primitive.frontFace:

    "ccw"

    area > 0 is considered front-facing, otherwise back-facing

    "cw"

    area < 0 is considered front-facing, otherwise back-facing

  4. Cull based on primitive.cullMode:

    "none"

    All polygons pass this test.

    "front"

    The front-facing polygons are discarded and are not processed by later stages of the render pipeline.

    "back"

    The back-facing polygons are discarded.

  5. Determine a set of fragments inside the polygon in framebuffer space - these are locations scheduled for the per-fragment operations. This operation is known as "point sampling". The logic is based on descriptor.multisample:

    disabled

    Fragments are associated with pixel centers. That is, all the points with coordinates C, where fract(C) = vector2(0.5, 0.5) in framebuffer space, that are enclosed by the polygon are included. If a pixel center is on the edge of the polygon, whether or not it is included is not defined.

    Note: this becomes a subject of precision for the rasterizer.

    enabled

    Each pixel is associated with descriptor.multisample.count locations, which are implementation-defined. The locations are ordered, and the list is the same for each pixel of the framebuffer. Each location corresponds to one fragment in the multisampled framebuffer.

    The rasterizer builds a mask of the locations hit inside each pixel and provides it as the "sample_mask" built-in to the fragment shader.

  6. For each produced fragment of type FragmentDestination:

    1. Let rp be a new RasterizationPoint object.

    2. Compute the list b as § 22.3.5.3 Barycentric coordinates of that fragment. Set rp.barycentricCoordinates to b.

    3. Let di be the depth value of vi.

      define how this value is constructed.

    4. Set rp.depth to ∑ (bi × di)

    5. Append rp to rasterizationPoints.

  7. Return rasterizationPoints.
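
A non-normative sketch of steps 3 and 4 above (facing determination and culling), using the signed-area formula:

// Non-normative sketch: facing and culling from the polygon's framebuffer-space vertices.
function isFrontFacing(vs: { x: number; y: number }[], frontFace: "ccw" | "cw"): boolean {
  const n = vs.length;
  // area = 0.5 × ((v1.x × vn.y − vn.x × v1.y) + ∑ (vi+1.x × vi.y − vi.x × vi+1.y))
  let twiceArea = vs[0].x * vs[n - 1].y - vs[n - 1].x * vs[0].y;
  for (let i = 0; i + 1 < n; i++) {
    twiceArea += vs[i + 1].x * vs[i].y - vs[i].x * vs[i + 1].y;
  }
  const area = 0.5 * twiceArea;
  return frontFace === "ccw" ? area > 0 : area < 0;
}

function isCulled(frontFacing: boolean, cullMode: "none" | "front" | "back"): boolean {
  return (cullMode === "front" && frontFacing) || (cullMode === "back" && !frontFacing);
}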

22.3.6. Fragment Processing

The fragment processing stage is a programmable stage of the render pipeline that computes the fragment data (often a color) to be written into render targets.

This stage produces a Fragment for each RasterizationPoint:

process fragment(rp, desc, state)

Arguments:

Returns: Fragment or null.

  1. Let fragment be a new Fragment object.

  2. Set fragment.destination to rp.destination.

  3. Set fragment.coverageMask to rp.coverageMask.

  4. Set fragment.depth to rp.depth.

  5. If desc is not null:

    1. Set the shader input builtins. For each non-composite argument of the entry point, annotated as a builtin, set its value based on the annotation:

      position

      vec4<f32>(rp.destination.position, rp.depth, rp.perspectiveDivisor)

      front_facing

      rp.frontFacing

      sample_index

      rp.destination.sampleIndex

      sample_mask

      rp.coverageMask

    2. For each user-specified pipeline input of the fragment stage:

      1. Let value be the interpolated fragment input, based on rp.barycentricCoordinates, rp.primitiveVertices, and the interpolation qualifier on the input.

        describe the exact equations.

      2. Set the corresponding fragment shader location input to value.

    3. Invoke the fragment shader entry point described by desc.

    4. If the fragment issued discard, return null.

    5. Set fragment.colors to the user-specified pipeline output values from the shader.

    6. Take the shader output builtins:

      1. If frag_depth builtin is produced by the shader as value:

        1. Let vp be state.viewport.

        2. Set fragment.depth to clamp(value, vp.minDepth, vp.maxDepth).

    7. If sample_mask builtin is produced by the shader as value:

      1. Set fragment.coverageMask to fragment.coverageMask & value (bitwise AND).

    Otherwise we are in § 22.3.8 No Color Output mode, and fragment.colors is empty.

  6. Return fragment.

Processing of fragments happens in parallel, while any side effects, such as writes into GPUBufferBindingType."storage" bindings, may happen in any order.

22.3.7. Output Merging

fill out this section

The depth input to this stage, if any, is clamped to the current [[viewport]] depth range (regardless of whether the fragment shader stage writes the frag_depth builtin).

22.3.8. No Color Output

In no-color-output mode, the pipeline does not produce any color attachment outputs.

The pipeline still performs rasterization and produces depth values based on the vertex position output. The depth testing and stencil operations can still be used.

22.3.9. Alpha to Coverage

In alpha-to-coverage mode, an additional alpha-to-coverage mask of MSAA samples is generated, based on the alpha component of the fragment shader output value for fragment.targets[0].

The algorithm for producing the extra mask is platform-dependent and can vary between pixels. It guarantees that:

22.3.10. Sample frequency shading

fill out the section

22.3.11. Sample Masking

The final sample mask for a pixel is computed as: rasterization mask & mask & shader-output mask.

Only the lower count bits of the mask are considered.

If the bit at position N (counting from the least-significant bit) of the final sample mask is 0, the sample color outputs (corresponding to sample N) to all attachments of the fragment shader are discarded. Also, no depth test or stencil operations are executed on the relevant samples of the depth-stencil attachment.

Note: the color output for sample N is produced by the fragment shader execution with SV_SampleIndex == N for the current pixel. If the fragment shader doesn’t use this semantics, it’s only executed once per pixel.

The rasterization mask is produced by the rasterization stage, based on the shape of the rasterized polygon. The samples included in the shape get the relevant bits 1 in the mask.

The shader-output mask takes the output value of "sample_mask" builtin in the fragment shader. If the builtin is not output from the fragment shader, and alphaToCoverageEnabled is enabled, the shader-output mask becomes the alpha-to-coverage mask. Otherwise, it defaults to 0xFFFFFFFF.
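
A non-normative sketch of how these masks combine for a single pixel:

// Non-normative sketch: computing the final sample mask described above.
function finalSampleMask(
  rasterizationMask: number,   // from the rasterization stage (§ 22.3.5)
  stateMask: number,           // GPUMultisampleState.mask
  shaderOutputMask: number,    // sample_mask output, alpha-to-coverage mask, or 0xFFFFFFFF
  sampleCount: number,         // GPUMultisampleState.count
): number {
  const lowBits = (1 << sampleCount) - 1;        // only the lower `count` bits are considered
  return rasterizationMask & stateMask & shaderOutputMask & lowBits;
}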

23. Type Definitions

typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;

typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;

typedef unsigned long GPUFlagsConstant;

23.1. Colors & Vectors

dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;

Note: double is large enough to precisely hold 32-bit signed/unsigned integers and single-precision floats.
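
For example (non-normative), both of the following describe the same GPUColor value:

// Non-normative: the two equivalent ways to express a GPUColor.
const clearColorDict: GPUColor = { r: 0.0, g: 0.5, b: 1.0, a: 1.0 };   // GPUColorDict form
const clearColorSeq: GPUColor = [0.0, 0.5, 1.0, 1.0];                  // sequence<double> form: r, g, b, a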

dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;

An Origin2D is a GPUOrigin2D. Origin2D is a spec namespace for the following definitions:

For a given GPUOrigin2D value origin, depending on its type, the syntax:
dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;

An Origin3D is a GPUOrigin3D. Origin3D is a spec namespace for the following definitions:

For a given GPUOrigin3D value origin, depending on its type, the syntax:
dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    GPUIntegerCoordinate height = 1;
    GPUIntegerCoordinate depthOrArrayLayers = 1;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;

An Extent3D is a GPUExtent3D. Extent3D is a spec namespace for the following definitions:

For a given GPUExtent3D value extent, depending on its type, the syntax:

24. Feature Index

24.1. "depth-clip-control"

Define functionality when the "depth-clip-control" feature is enabled.

Feature Dictionary Values

The following dictionary values are supported if and only if the "depth-clip-control" feature is enabled; otherwise, they must be set to their default values:

GPUPrimitiveState

24.2. "depth24unorm-stencil8"

Allows for explicit creation of textures of format "depth24unorm-stencil8".

Feature Enums

The following enums are supported if and only if the "depth24unorm-stencil8" feature is enabled:

GPUTextureFormat

24.3. "depth32float-stencil8"

Allows for explicit creation of textures of format "depth32float-stencil8".

Feature Enums

The following enums are supported if and only if the "depth32float-stencil8" feature is enabled:

GPUTextureFormat

24.4. "texture-compression-bc"

Allows for explicit creation of textures of BC compressed formats.

Feature Enums

The following enums are supported if and only if the "texture-compression-bc" feature is enabled:

GPUTextureFormat

24.5. "texture-compression-etc2"

Allows for explicit creation of textures of ETC2 compressed formats.

Feature Enums

The following enums are supported if and only if the "texture-compression-etc2" feature is enabled:

GPUTextureFormat

24.6. "texture-compression-astc"

Allows for explicit creation of textures of ASTC compressed formats.

Feature Enums

The following enums are supported if and only if the "texture-compression-astc" feature is enabled:

GPUTextureFormat

24.7. "timestamp-query"

Define functionality when the "timestamp-query" feature is enabled.

Feature Enums

The following enums are supported if and only if the "timestamp-query" feature is enabled:

GPUQueryType

24.8. "indirect-first-instance"

Removes the zero value restriction on firstInstance in indirect draw parameters and indirect drawIndexed parameters. firstInstance is allowed to be non-zero if and only if the "indirect-first-instance" feature is enabled.
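
As a non-normative example, an application opts into an optional feature at device-creation time as sketched below; requesting a feature the adapter does not support causes requestDevice() to reject:

// Non-normative sketch: requesting an optional feature.
async function initWithBCTextures() {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter || !adapter.features.has("texture-compression-bc")) return null;
  // BC-compressed GPUTextureFormats may be used with the resulting device.
  return adapter.requestDevice({ requiredFeatures: ["texture-compression-bc"] });
}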

25. Appendices

25.1. Texture Format Capabilities

25.1.1. Plain color formats

All plain color formats support COPY_SRC, COPY_DST, and TEXTURE_BINDING usage.

Only formats with GPUTextureSampleType "float" can be blended.

The RENDER_ATTACHMENT and STORAGE_BINDING columns specify support for GPUTextureUsage.RENDER_ATTACHMENT and GPUTextureUsage.STORAGE_BINDING usage respectively.

Format GPUTextureSampleType RENDER_ATTACHMENT multisampling resolve STORAGE_BINDING
8-bit per component
r8unorm "float",
"unfilterable-float"
r8snorm "float",
"unfilterable-float"
r8uint "uint"
r8sint "sint"
rg8unorm "float",
"unfilterable-float"
rg8snorm "float",
"unfilterable-float"
rg8uint "uint"
rg8sint "sint"
rgba8unorm "float",
"unfilterable-float"
rgba8unorm-srgb "float",
"unfilterable-float"
rgba8snorm "float",
"unfilterable-float"
rgba8uint "uint"
rgba8sint "sint"
bgra8unorm "float",
"unfilterable-float"
bgra8unorm-srgb "float",
"unfilterable-float"
16-bit per component
r16uint "uint"
r16sint "sint"
r16float "float",
"unfilterable-float"
rg16uint "uint"
rg16sint "sint"
rg16float "float",
"unfilterable-float"
rgba16uint "uint"
rgba16sint "sint"
rgba16float "float",
"unfilterable-float"
32-bit per component
r32uint "uint"
r32sint "sint"
r32float "unfilterable-float"
rg32uint "uint"
rg32sint "sint"
rg32float "unfilterable-float"
rgba32uint "uint"
rgba32sint "sint"
rgba32float "unfilterable-float"
mixed component width
rgb10a2unorm "float",
"unfilterable-float"
rg11b10ufloat "float",
"unfilterable-float"

25.1.2. Depth-stencil formats

A depth-or-stencil format is any format with depth and/or stencil aspects. A combined depth-stencil format is a depth-or-stencil format that has both depth and stencil aspects.

All depth-or-stencil formats support the COPY_SRC, COPY_DST, TEXTURE_BINDING, and RENDER_ATTACHMENT usages. All of these formats support multisampling. However, certain copy operations also restrict the source and destination formats.

None of the depth formats can be filtered.

Format Bytes per texel Aspect GPUTextureSampleType Valid image copy source Valid image copy destination
stencil8 1 − 4 stencil "uint"
depth16unorm 2 depth "depth"
depth24plus 4 depth "depth"
depth24plus-stencil8 4 − 8 depth "depth"
stencil "uint"
depth32float 4 depth "depth"
depth24unorm-stencil8 4 depth "depth"
stencil "uint"
depth32float-stencil8 5 − 8 depth "depth"
stencil "uint"
25.1.2.1. Reading and Sampling Depth/Stencil Textures

When reading or sampling a depth component via a texture_depth_*-typed binding, the value is returned as an f32 value.

Depending on the resolution of this issue, allow reading/sampling via texture_2d etc. in the table above and specify the behavior. (vec4<f32>(D, X, X, X)?) Update the note below which would become slightly outdated. [Issue #gpuweb/gpuweb#2094]

Reading or sampling a stencil component must be done via a normal texture binding (texture_2d, texture_2d_array, texture_cube, or texture_cube_array). When doing so, the value is returned as vec4<u32>(S, X, X, X), where S is the stencil value and each X is an implementation-defined unspecified value. Authors must not rely on these .y, .z, and .w components, as their behavior is non-portable.

Note: Short of adding a new more constrained stencil sampler type (like depth), it’s infeasible for implementations to efficiently paper over the driver differences for stencil reads. As this was not a portability pain point for WebGL, it’s not expected to be problematic in WebGPU. In practice, expect either vec4<u32>(S, S, S, S) or vec4<u32>(S, 0, 0, 1), depending on hardware.

25.1.2.2. Copying Depth/Stencil Textures

The texel values of depth32float formats ("depth32float" and "depth32float-stencil8") have a limited range. As a result, copies into such textures are only valid from other textures of the same format.

The depth aspects of depth24plus formats ("depth24plus" and "depth24plus-stencil8") have opaque representations (implemented as either "depth24unorm" or "depth32float"). The depth aspect of "depth24unorm-stencil8" doesn’t have an aligned, tightly-packed representation (because its size is 3 bytes). As a result, depth-aspect image copies are not allowed with these formats.

It is possible to imitate these disallowed copies:

25.1.3. Packed formats

All packed texture formats support COPY_SRC, COPY_DST, and TEXTURE_BINDING usages. All of these formats have "float" type and can be filtered on sampling. None of these formats support multisampling.

A compressed format is any format with a block size greater than 1 × 1.

Format Bytes per block GPUTextureSampleType Block Size Feature
rgb9e5ufloat 4 "float",
"unfilterable-float"
1 × 1
bc1-rgba-unorm 8 "float",
"unfilterable-float"
4 × 4 texture-compression-bc
bc1-rgba-unorm-srgb
bc2-rgba-unorm 16
bc2-rgba-unorm-srgb
bc3-rgba-unorm 16
bc3-rgba-unorm-srgb
bc4-r-unorm 8
bc4-r-snorm
bc5-rg-unorm 16
bc5-rg-snorm
bc6h-rgb-ufloat 16
bc6h-rgb-float
bc7-rgba-unorm 16
bc7-rgba-unorm-srgb
etc2-rgb8unorm 8 "float",
"unfilterable-float"
4 × 4 texture-compression-etc2
etc2-rgb8unorm-srgb
etc2-rgb8a1unorm 8
etc2-rgb8a1unorm-srgb
etc2-rgba8unorm 16
etc2-rgba8unorm-srgb
eac-r11unorm 8
eac-r11snorm
eac-rg11unorm 16
eac-rg11snorm
astc-4x4-unorm 16 "float",
"unfilterable-float"
4 × 4 texture-compression-astc
astc-4x4-unorm-srgb
astc-5x4-unorm 16 5 × 4
astc-5x4-unorm-srgb
astc-5x5-unorm 16 5 × 5
astc-5x5-unorm-srgb
astc-6x5-unorm 16 6 × 5
astc-6x5-unorm-srgb
astc-6x6-unorm 16 6 × 6
astc-6x6-unorm-srgb
astc-8x5-unorm 16 8 × 5
astc-8x5-unorm-srgb
astc-8x6-unorm 16 8 × 6
astc-8x6-unorm-srgb
astc-8x8-unorm 16 8 × 8
astc-8x8-unorm-srgb
astc-10x5-unorm 16 10 × 5
astc-10x5-unorm-srgb
astc-10x6-unorm 16 10 × 6
astc-10x6-unorm-srgb
astc-10x8-unorm 16 10 × 8
astc-10x8-unorm-srgb
astc-10x10-unorm 16 10 × 10
astc-10x10-unorm-srgb
astc-12x10-unorm 16 12 × 10
astc-12x10-unorm-srgb
astc-12x12-unorm 16 12 × 12
astc-12x12-unorm-srgb

25.2. Temporary usages of non-exported dfns

x y renderExtent

Eventually all of these should disappear, but they are useful to avoid warnings while building the specification.

vertex buffer

Conformance

Document conventions

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Conformant Algorithms

Requirements phrased in the imperative as part of algorithms (such as "strip any leading space characters" or "return false and abort these steps") are to be interpreted with the meaning of the key word ("must", "should", "may", etc) used in introducing the algorithm.

Conformance requirements phrased as algorithms or specific steps can be implemented in any manner, so long as the end result is equivalent. In particular, the algorithms defined in this specification are intended to be easy to understand and are not intended to be performant. Implementers are encouraged to optimize.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[CSS-COLOR-4]
Tab Atkins Jr.; Chris Lilley; Lea Verou. CSS Color Module Level 4. 15 December 2021. WD. URL: https://www.w3.org/TR/css-color-4/
[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[ECMASCRIPT]
ECMAScript Language Specification. URL: https://tc39.es/ecma262/multipage/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[INFRA]
Anne van Kesteren; Domenic Denicola. Infra Standard. Living Standard. URL: https://infra.spec.whatwg.org/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://datatracker.ietf.org/doc/html/rfc2119
[WEBIDL]
Edgar Chen; Timothy Gu. Web IDL Standard. Living Standard. URL: https://webidl.spec.whatwg.org/
[WGSL]
David Neto; Myles C. Maxfield. WebGPU Shading Language. Editor's Draft. URL: https://gpuweb.github.io/gpuweb/wgsl/

Informative References

[SourceMap]
John Lenz; Nick Fitzgerald. Source Map Revision 3 Proposal. URL: https://sourcemaps.info/spec.html

IDL Index

interface mixin GPUObjectBase {
    attribute USVString? label;
};

dictionary GPUObjectDescriptorBase {
    USVString label;
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUSupportedLimits {
    readonly attribute unsigned long maxTextureDimension1D;
    readonly attribute unsigned long maxTextureDimension2D;
    readonly attribute unsigned long maxTextureDimension3D;
    readonly attribute unsigned long maxTextureArrayLayers;
    readonly attribute unsigned long maxBindGroups;
    readonly attribute unsigned long maxDynamicUniformBuffersPerPipelineLayout;
    readonly attribute unsigned long maxDynamicStorageBuffersPerPipelineLayout;
    readonly attribute unsigned long maxSampledTexturesPerShaderStage;
    readonly attribute unsigned long maxSamplersPerShaderStage;
    readonly attribute unsigned long maxStorageBuffersPerShaderStage;
    readonly attribute unsigned long maxStorageTexturesPerShaderStage;
    readonly attribute unsigned long maxUniformBuffersPerShaderStage;
    readonly attribute unsigned long long maxUniformBufferBindingSize;
    readonly attribute unsigned long long maxStorageBufferBindingSize;
    readonly attribute unsigned long minUniformBufferOffsetAlignment;
    readonly attribute unsigned long minStorageBufferOffsetAlignment;
    readonly attribute unsigned long maxVertexBuffers;
    readonly attribute unsigned long maxVertexAttributes;
    readonly attribute unsigned long maxVertexBufferArrayStride;
    readonly attribute unsigned long maxInterStageShaderComponents;
    readonly attribute unsigned long maxComputeWorkgroupStorageSize;
    readonly attribute unsigned long maxComputeInvocationsPerWorkgroup;
    readonly attribute unsigned long maxComputeWorkgroupSizeX;
    readonly attribute unsigned long maxComputeWorkgroupSizeY;
    readonly attribute unsigned long maxComputeWorkgroupSizeZ;
    readonly attribute unsigned long maxComputeWorkgroupsPerDimension;
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUSupportedFeatures {
    readonly setlike<DOMString>;
};

enum GPUPredefinedColorSpace {
    "srgb",
};

interface mixin NavigatorGPU {
    [SameObject, SecureContext] readonly attribute GPU gpu;
};
Navigator includes NavigatorGPU;
WorkerNavigator includes NavigatorGPU;

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPU {
    Promise<GPUAdapter?> requestAdapter(optional GPURequestAdapterOptions options = {});
};

dictionary GPURequestAdapterOptions {
    GPUPowerPreference powerPreference;
    boolean forceFallbackAdapter = false;
};

enum GPUPowerPreference {
    "low-power",
    "high-performance",
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUAdapter {
    readonly attribute DOMString name;
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    readonly attribute boolean isFallbackAdapter;

    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
};

dictionary GPUDeviceDescriptor : GPUObjectDescriptorBase {
    sequence<GPUFeatureName> requiredFeatures = [];
    record<DOMString, GPUSize64> requiredLimits = {};
};

enum GPUFeatureName {
    "depth-clip-control",
    "depth24unorm-stencil8",
    "depth32float-stencil8",
    "texture-compression-bc",
    "texture-compression-etc2",
    "texture-compression-astc",
    "timestamp-query",
    "indirect-first-instance",
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;

    [SameObject] readonly attribute GPUQueue queue;

    undefined destroy();

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});
    GPUExternalTexture importExternalTexture(GPUExternalTextureDescriptor descriptor);

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);
    Promise<GPUComputePipeline> createComputePipelineAsync(GPUComputePipelineDescriptor descriptor);
    Promise<GPURenderPipeline> createRenderPipelineAsync(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);

    GPUQuerySet createQuerySet(GPUQuerySetDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUBuffer {
    Promise<undefined> mapAsync(GPUMapModeFlags mode, optional GPUSize64 offset = 0, optional GPUSize64 size);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined unmap();

    undefined destroy();
};
GPUBuffer includes GPUObjectBase;

dictionary GPUBufferDescriptor : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
    boolean mappedAtCreation = false;
};

typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
[Exposed=(Window, DedicatedWorker)]
namespace GPUBufferUsage {
    const GPUFlagsConstant MAP_READ      = 0x0001;
    const GPUFlagsConstant MAP_WRITE     = 0x0002;
    const GPUFlagsConstant COPY_SRC      = 0x0004;
    const GPUFlagsConstant COPY_DST      = 0x0008;
    const GPUFlagsConstant INDEX         = 0x0010;
    const GPUFlagsConstant VERTEX        = 0x0020;
    const GPUFlagsConstant UNIFORM       = 0x0040;
    const GPUFlagsConstant STORAGE       = 0x0080;
    const GPUFlagsConstant INDIRECT      = 0x0100;
    const GPUFlagsConstant QUERY_RESOLVE = 0x0200;
};

typedef [EnforceRange] unsigned long GPUMapModeFlags;
[Exposed=(Window, DedicatedWorker)]
namespace GPUMapMode {
    const GPUFlagsConstant READ  = 0x0001;
    const GPUFlagsConstant WRITE = 0x0002;
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});

    undefined destroy();
};
GPUTexture includes GPUObjectBase;

dictionary GPUTextureDescriptor : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
};

enum GPUTextureDimension {
    "1d",
    "2d",
    "3d",
};

typedef [EnforceRange] unsigned long GPUTextureUsageFlags;
[Exposed=(Window, DedicatedWorker)]
namespace GPUTextureUsage {
    const GPUFlagsConstant COPY_SRC          = 0x01;
    const GPUFlagsConstant COPY_DST          = 0x02;
    const GPUFlagsConstant TEXTURE_BINDING   = 0x04;
    const GPUFlagsConstant STORAGE_BINDING   = 0x08;
    const GPUFlagsConstant RENDER_ATTACHMENT = 0x10;
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;

dictionary GPUTextureViewDescriptor : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount;
};

enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d",
};

enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only",
};

enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",
    // Packed 32-bit formats
    "rgb9e5ufloat",
    "rgb10a2unorm",
    "rg11b10ufloat",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth/stencil formats
    "stencil8",
    "depth16unorm",
    "depth24plus",
    "depth24plus-stencil8",
    "depth32float",

    // "depth24unorm-stencil8" feature
    "depth24unorm-stencil8",

    // "depth32float-stencil8" feature
    "depth32float-stencil8",

    // BC compressed formats usable if "texture-compression-bc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "bc1-rgba-unorm",
    "bc1-rgba-unorm-srgb",
    "bc2-rgba-unorm",
    "bc2-rgba-unorm-srgb",
    "bc3-rgba-unorm",
    "bc3-rgba-unorm-srgb",
    "bc4-r-unorm",
    "bc4-r-snorm",
    "bc5-rg-unorm",
    "bc5-rg-snorm",
    "bc6h-rgb-ufloat",
    "bc6h-rgb-float",
    "bc7-rgba-unorm",
    "bc7-rgba-unorm-srgb",

    // ETC2 compressed formats usable if "texture-compression-etc2" is both
    // supported by the device/user agent and enabled in requestDevice.
    "etc2-rgb8unorm",
    "etc2-rgb8unorm-srgb",
    "etc2-rgb8a1unorm",
    "etc2-rgb8a1unorm-srgb",
    "etc2-rgba8unorm",
    "etc2-rgba8unorm-srgb",
    "eac-r11unorm",
    "eac-r11snorm",
    "eac-rg11unorm",
    "eac-rg11snorm",

    // ASTC compressed formats usable if "texture-compression-astc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "astc-4x4-unorm",
    "astc-4x4-unorm-srgb",
    "astc-5x4-unorm",
    "astc-5x4-unorm-srgb",
    "astc-5x5-unorm",
    "astc-5x5-unorm-srgb",
    "astc-6x5-unorm",
    "astc-6x5-unorm-srgb",
    "astc-6x6-unorm",
    "astc-6x6-unorm-srgb",
    "astc-8x5-unorm",
    "astc-8x5-unorm-srgb",
    "astc-8x6-unorm",
    "astc-8x6-unorm-srgb",
    "astc-8x8-unorm",
    "astc-8x8-unorm-srgb",
    "astc-10x5-unorm",
    "astc-10x5-unorm-srgb",
    "astc-10x6-unorm",
    "astc-10x6-unorm-srgb",
    "astc-10x8-unorm",
    "astc-10x8-unorm-srgb",
    "astc-10x10-unorm",
    "astc-10x10-unorm-srgb",
    "astc-12x10-unorm",
    "astc-12x10-unorm-srgb",
    "astc-12x12-unorm",
    "astc-12x12-unorm-srgb",
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUExternalTexture {
};
GPUExternalTexture includes GPUObjectBase;

dictionary GPUExternalTextureDescriptor : GPUObjectDescriptorBase {
    required HTMLVideoElement source;
    GPUPredefinedColorSpace colorSpace = "srgb";
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUSampler {
};
GPUSampler includes GPUObjectBase;

dictionary GPUSamplerDescriptor : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 32;
    GPUCompareFunction compare;
    [Clamp] unsigned short maxAnisotropy = 1;
};

enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat",
};

enum GPUFilterMode {
    "nearest",
    "linear",
};

enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always",
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;

dictionary GPUBindGroupLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutEntry> entries;
};

typedef [EnforceRange] unsigned long GPUShaderStageFlags;
[Exposed=(Window, DedicatedWorker)]
namespace GPUShaderStage {
    const GPUFlagsConstant VERTEX   = 0x1;
    const GPUFlagsConstant FRAGMENT = 0x2;
    const GPUFlagsConstant COMPUTE  = 0x4;
};

dictionary GPUBindGroupLayoutEntry {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;

    GPUBufferBindingLayout buffer;
    GPUSamplerBindingLayout sampler;
    GPUTextureBindingLayout texture;
    GPUStorageTextureBindingLayout storageTexture;
    GPUExternalTextureBindingLayout externalTexture;
};

enum GPUBufferBindingType {
    "uniform",
    "storage",
    "read-only-storage",
};

dictionary GPUBufferBindingLayout {
    GPUBufferBindingType type = "uniform";
    boolean hasDynamicOffset = false;
    GPUSize64 minBindingSize = 0;
};

enum GPUSamplerBindingType {
    "filtering",
    "non-filtering",
    "comparison",
};

dictionary GPUSamplerBindingLayout {
    GPUSamplerBindingType type = "filtering";
};

enum GPUTextureSampleType {
    "float",
    "unfilterable-float",
    "depth",
    "sint",
    "uint",
};

dictionary GPUTextureBindingLayout {
    GPUTextureSampleType sampleType = "float";
    GPUTextureViewDimension viewDimension = "2d";
    boolean multisampled = false;
};

enum GPUStorageTextureAccess {
    "write-only",
};

dictionary GPUStorageTextureBindingLayout {
    GPUStorageTextureAccess access = "write-only";
    required GPUTextureFormat format;
    GPUTextureViewDimension viewDimension = "2d";
};

dictionary GPUExternalTextureBindingLayout {
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;

dictionary GPUBindGroupDescriptor : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupEntry> entries;
};

typedef (GPUSampler or GPUTextureView or GPUBufferBinding or GPUExternalTexture) GPUBindingResource;

dictionary GPUBindGroupEntry {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};

dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;

dictionary GPUPipelineLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout> bindGroupLayouts;
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUShaderModule {
    Promise<GPUCompilationInfo> compilationInfo();
};
GPUShaderModule includes GPUObjectBase;

dictionary GPUShaderModuleCompilationHint {
    required GPUPipelineLayout layout;
};

dictionary GPUShaderModuleDescriptor : GPUObjectDescriptorBase {
    required USVString code;
    object sourceMap;
    record<USVString, GPUShaderModuleCompilationHint> hints;
};

enum GPUCompilationMessageType {
    "error",
    "warning",
    "info",
};

[Exposed=(Window, DedicatedWorker), Serializable, SecureContext]
interface GPUCompilationMessage {
    readonly attribute DOMString message;
    readonly attribute GPUCompilationMessageType type;
    readonly attribute unsigned long long lineNum;
    readonly attribute unsigned long long linePos;
    readonly attribute unsigned long long offset;
    readonly attribute unsigned long long length;
};

[Exposed=(Window, DedicatedWorker), Serializable, SecureContext]
interface GPUCompilationInfo {
    readonly attribute FrozenArray<GPUCompilationMessage> messages;
};

dictionary GPUPipelineDescriptorBase : GPUObjectDescriptorBase {
    GPUPipelineLayout layout;
};

interface mixin GPUPipelineBase {
    GPUBindGroupLayout getBindGroupLayout(unsigned long index);
};

dictionary GPUProgrammableStage {
    required GPUShaderModule module;
    required USVString entryPoint;
    record<USVString, GPUPipelineConstantValue> constants;
};

typedef double GPUPipelineConstantValue; // May represent WGSL’s bool, f32, i32, u32.

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;
GPUComputePipeline includes GPUPipelineBase;

dictionary GPUComputePipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStage compute;
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;
GPURenderPipeline includes GPUPipelineBase;

dictionary GPURenderPipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUVertexState vertex;
    GPUPrimitiveState primitive = {};
    GPUDepthStencilState depthStencil;
    GPUMultisampleState multisample = {};
    GPUFragmentState fragment;
};

enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip",
};

dictionary GPUPrimitiveState {
    GPUPrimitiveTopology topology = "triangle-list";
    GPUIndexFormat stripIndexFormat;
    GPUFrontFace frontFace = "ccw";
    GPUCullMode cullMode = "none";

    // Requires "depth-clip-control" feature.
    boolean unclippedDepth = false;
};

enum GPUFrontFace {
    "ccw",
    "cw",
};

enum GPUCullMode {
    "none",
    "front",
    "back",
};

dictionary GPUMultisampleState {
    GPUSize32 count = 1;
    GPUSampleMask mask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
};

dictionary GPUFragmentState : GPUProgrammableStage {
    required sequence<GPUColorTargetState> targets;
};

dictionary GPUColorTargetState {
    required GPUTextureFormat format;

    GPUBlendState blend;
    GPUColorWriteFlags writeMask = 0xF;  // GPUColorWrite.ALL
};

dictionary GPUBlendState {
    required GPUBlendComponent color;
    required GPUBlendComponent alpha;
};

typedef [EnforceRange] unsigned long GPUColorWriteFlags;
[Exposed=(Window, DedicatedWorker)]
namespace GPUColorWrite {
    const GPUFlagsConstant RED   = 0x1;
    const GPUFlagsConstant GREEN = 0x2;
    const GPUFlagsConstant BLUE  = 0x4;
    const GPUFlagsConstant ALPHA = 0x8;
    const GPUFlagsConstant ALL   = 0xF;
};

dictionary GPUBlendComponent {
    GPUBlendOperation operation = "add";
    GPUBlendFactor srcFactor = "one";
    GPUBlendFactor dstFactor = "zero";
};

enum GPUBlendFactor {
    "zero",
    "one",
    "src",
    "one-minus-src",
    "src-alpha",
    "one-minus-src-alpha",
    "dst",
    "one-minus-dst",
    "dst-alpha",
    "one-minus-dst-alpha",
    "src-alpha-saturated",
    "constant",
    "one-minus-constant",
};

enum GPUBlendOperation {
    "add",
    "subtract",
    "reverse-subtract",
    "min",
    "max",
};

dictionary GPUDepthStencilState {
    required GPUTextureFormat format;

    boolean depthWriteEnabled = false;
    GPUCompareFunction depthCompare = "always";

    GPUStencilFaceState stencilFront = {};
    GPUStencilFaceState stencilBack = {};

    GPUStencilValue stencilReadMask = 0xFFFFFFFF;
    GPUStencilValue stencilWriteMask = 0xFFFFFFFF;

    GPUDepthBias depthBias = 0;
    float depthBiasSlopeScale = 0;
    float depthBiasClamp = 0;
};

dictionary GPUStencilFaceState {
    GPUCompareFunction compare = "always";
    GPUStencilOperation failOp = "keep";
    GPUStencilOperation depthFailOp = "keep";
    GPUStencilOperation passOp = "keep";
};

enum GPUStencilOperation {
    "keep",
    "zero",
    "replace",
    "invert",
    "increment-clamp",
    "decrement-clamp",
    "increment-wrap",
    "decrement-wrap",
};

enum GPUIndexFormat {
    "uint16",
    "uint32",
};

enum GPUVertexFormat {
    "uint8x2",
    "uint8x4",
    "sint8x2",
    "sint8x4",
    "unorm8x2",
    "unorm8x4",
    "snorm8x2",
    "snorm8x4",
    "uint16x2",
    "uint16x4",
    "sint16x2",
    "sint16x4",
    "unorm16x2",
    "unorm16x4",
    "snorm16x2",
    "snorm16x4",
    "float16x2",
    "float16x4",
    "float32",
    "float32x2",
    "float32x3",
    "float32x4",
    "uint32",
    "uint32x2",
    "uint32x3",
    "uint32x4",
    "sint32",
    "sint32x2",
    "sint32x3",
    "sint32x4",
};

enum GPUVertexStepMode {
    "vertex",
    "instance",
};

dictionary GPUVertexState : GPUProgrammableStage {
    sequence<GPUVertexBufferLayout?> buffers = [];
};

dictionary GPUVertexBufferLayout {
    required GPUSize64 arrayStride;
    GPUVertexStepMode stepMode = "vertex";
    required sequence<GPUVertexAttribute> attributes;
};

dictionary GPUVertexAttribute {
    required GPUVertexFormat format;
    required GPUSize64 offset;

    required GPUIndex32 shaderLocation;
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUCommandBuffer {
};
GPUCommandBuffer includes GPUObjectBase;

dictionary GPUCommandBufferDescriptor : GPUObjectDescriptorBase {
};

interface mixin GPUCommandsMixin {
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});

    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        GPUSize64 size);

    undefined copyBufferToTexture(
        GPUImageCopyBuffer source,
        GPUImageCopyTexture destination,
        GPUExtent3D copySize);

    undefined copyTextureToBuffer(
        GPUImageCopyTexture source,
        GPUImageCopyBuffer destination,
        GPUExtent3D copySize);

    undefined copyTextureToTexture(
        GPUImageCopyTexture source,
        GPUImageCopyTexture destination,
        GPUExtent3D copySize);

    undefined clearBuffer(
        GPUBuffer buffer,
        optional GPUSize64 offset = 0,
        optional GPUSize64 size);

    undefined writeTimestamp(GPUQuerySet querySet, GPUSize32 queryIndex);

    undefined resolveQuerySet(
        GPUQuerySet querySet,
        GPUSize32 firstQuery,
        GPUSize32 queryCount,
        GPUBuffer destination,
        GPUSize64 destinationOffset);

    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;
GPUCommandEncoder includes GPUCommandsMixin;
GPUCommandEncoder includes GPUDebugCommandsMixin;

dictionary GPUCommandEncoderDescriptor : GPUObjectDescriptorBase {
};
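
A non-normative sketch of recording a copy command and submitting the finished command buffer; device, srcBuffer (created with COPY_SRC usage), and dstBuffer (created with COPY_DST usage) are assumed to exist.

// Illustrative 256-byte buffer-to-buffer copy (assumes device, srcBuffer, dstBuffer).
const encoder = device.createCommandEncoder();
encoder.copyBufferToBuffer(srcBuffer, 0, dstBuffer, 0, 256);
device.queue.submit([encoder.finish()]);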

dictionary GPUImageDataLayout {
    GPUSize64 offset = 0;
    GPUSize32 bytesPerRow;
    GPUSize32 rowsPerImage;
};

dictionary GPUImageCopyBuffer : GPUImageDataLayout {
    required GPUBuffer buffer;
};

dictionary GPUImageCopyTexture {
    required GPUTexture texture;
    GPUIntegerCoordinate mipLevel = 0;
    GPUOrigin3D origin = {};
    GPUTextureAspect aspect = "all";
};

dictionary GPUImageCopyTextureTagged : GPUImageCopyTexture {
    GPUPredefinedColorSpace colorSpace = "srgb";
    boolean premultipliedAlpha = false;
};

dictionary GPUImageCopyExternalImage {
    required (ImageBitmap or HTMLCanvasElement or OffscreenCanvas) source;
    GPUOrigin2D origin = {};
    boolean flipY = false;
};
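
The following non-normative sketch fills out these dictionaries for a buffer-to-texture copy of a 256×256 rgba8unorm image; encoder, stagingBuffer (COPY_SRC usage), and texture (COPY_DST usage) are assumed, and bytesPerRow is 256 texels × 4 bytes, which already satisfies the 256-byte row alignment.

// Illustrative buffer-to-texture copy (assumes encoder, stagingBuffer, texture).
encoder.copyBufferToTexture(
  { buffer: stagingBuffer, offset: 0, bytesPerRow: 256 * 4, rowsPerImage: 256 },
  { texture: texture, mipLevel: 0, origin: [0, 0, 0] },
  [256, 256, 1]);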

interface mixin GPUProgrammablePassEncoder {
    undefined setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                      optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);

    undefined setBindGroup(GPUIndex32 index, GPUBindGroup bindGroup,
                      Uint32Array dynamicOffsetsData,
                      GPUSize64 dynamicOffsetsDataStart,
                      GPUSize32 dynamicOffsetsDataLength);
};
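
The two setBindGroup() overloads differ only in how dynamic offsets are passed; the following non-normative sketch shows both forms, assuming a pass encoder named pass and a bindGroup whose layout declares one buffer binding with hasDynamicOffset set.

// Illustrative setBindGroup calls (assumes pass and bindGroup; offsets are in bytes).
pass.setBindGroup(0, bindGroup, [256]);          // sequence overload

const offsets = new Uint32Array([256]);
pass.setBindGroup(0, bindGroup, offsets, 0, 1);  // typed-array overload: start index 0, one offset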

interface mixin GPUDebugCommandsMixin {
    undefined pushDebugGroup(USVString groupLabel);
    undefined popDebugGroup();
    undefined insertDebugMarker(USVString markerLabel);
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUComputePassEncoder {
    undefined setPipeline(GPUComputePipeline pipeline);
    undefined dispatch(GPUSize32 x, optional GPUSize32 y = 1, optional GPUSize32 z = 1);
    undefined dispatchIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);

    undefined endPass();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUCommandsMixin;
GPUComputePassEncoder includes GPUDebugCommandsMixin;
GPUComputePassEncoder includes GPUProgrammablePassEncoder;

enum GPUComputePassTimestampLocation {
    "beginning",
    "end",
};

dictionary GPUComputePassTimestampWrite {
    required GPUQuerySet querySet;
    required GPUSize32 queryIndex;
    required GPUComputePassTimestampLocation location;
};

typedef sequence<GPUComputePassTimestampWrite> GPUComputePassTimestampWrites;

dictionary GPUComputePassDescriptor : GPUObjectDescriptorBase {
    GPUComputePassTimestampWrites timestampWrites = [];
};
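
A non-normative sketch of a complete compute pass; device, computePipeline, and bindGroup are assumed to exist, and the workgroup count is purely illustrative.

// Illustrative compute pass (assumes device, computePipeline, bindGroup).
const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(computePipeline);
pass.setBindGroup(0, bindGroup);
pass.dispatch(64);                 // 64 × 1 × 1 workgroups
pass.endPass();
device.queue.submit([encoder.finish()]);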

interface mixin GPURenderEncoderBase {
    undefined setPipeline(GPURenderPipeline pipeline);

    undefined setIndexBuffer(GPUBuffer buffer, GPUIndexFormat indexFormat, optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined setVertexBuffer(GPUIndex32 slot, GPUBuffer buffer, optional GPUSize64 offset = 0, optional GPUSize64 size);

    undefined draw(GPUSize32 vertexCount, optional GPUSize32 instanceCount = 1,
              optional GPUSize32 firstVertex = 0, optional GPUSize32 firstInstance = 0);
    undefined drawIndexed(GPUSize32 indexCount, optional GPUSize32 instanceCount = 1,
                     optional GPUSize32 firstIndex = 0,
                     optional GPUSignedOffset32 baseVertex = 0,
                     optional GPUSize32 firstInstance = 0);

    undefined drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    undefined drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPURenderPassEncoder {
    undefined setViewport(float x, float y,
                     float width, float height,
                     float minDepth, float maxDepth);

    undefined setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
                        GPUIntegerCoordinate width, GPUIntegerCoordinate height);

    undefined setBlendConstant(GPUColor color);
    undefined setStencilReference(GPUStencilValue reference);

    undefined beginOcclusionQuery(GPUSize32 queryIndex);
    undefined endOcclusionQuery();

    undefined executeBundles(sequence<GPURenderBundle> bundles);
    undefined endPass();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUCommandsMixin;
GPURenderPassEncoder includes GPUDebugCommandsMixin;
GPURenderPassEncoder includes GPUProgrammablePassEncoder;
GPURenderPassEncoder includes GPURenderEncoderBase;

enum GPURenderPassTimestampLocation {
    "beginning",
    "end",
};

dictionary GPURenderPassTimestampWrite {
    required GPUQuerySet querySet;
    required GPUSize32 queryIndex;
    required GPURenderPassTimestampLocation location;
};

typedef sequence<GPURenderPassTimestampWrite> GPURenderPassTimestampWrites;

dictionary GPURenderPassDescriptor : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachment> colorAttachments;
    GPURenderPassDepthStencilAttachment depthStencilAttachment;
    GPUQuerySet occlusionQuerySet;
    GPURenderPassTimestampWrites timestampWrites = [];
};

dictionary GPURenderPassColorAttachment {
    required GPUTextureView view;
    GPUTextureView resolveTarget;

    required (GPULoadOp or GPUColor) loadValue;
    required GPUStoreOp storeOp;
};

dictionary GPURenderPassDepthStencilAttachment {
    required GPUTextureView view;

    required (GPULoadOp or float) depthLoadValue;
    required GPUStoreOp depthStoreOp;
    boolean depthReadOnly = false;

    required (GPULoadOp or GPUStencilValue) stencilLoadValue;
    required GPUStoreOp stencilStoreOp;
    boolean stencilReadOnly = false;
};

enum GPULoadOp {
    "load",
};

enum GPUStoreOp {
    "store",
    "discard",
};
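
The following non-normative sketch records a render pass that clears its single color attachment to opaque black by passing a GPUColor as loadValue, draws once, and ends the pass; device, renderPipeline, vertexBuffer, and textureView are assumed to exist.

// Illustrative render pass (assumes device, renderPipeline, vertexBuffer, textureView).
const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass({
  colorAttachments: [{
    view: textureView,
    loadValue: [0, 0, 0, 1],       // clear to opaque black; use 'load' to preserve contents
    storeOp: 'store',
  }],
});
pass.setPipeline(renderPipeline);
pass.setVertexBuffer(0, vertexBuffer);
pass.draw(3);                      // one triangle
pass.endPass();
device.queue.submit([encoder.finish()]);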

dictionary GPURenderPassLayout : GPUObjectDescriptorBase {
    required sequence<GPUTextureFormat> colorFormats;
    GPUTextureFormat depthStencilFormat;
    GPUSize32 sampleCount = 1;
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPURenderBundle {
};
GPURenderBundle includes GPUObjectBase;

dictionary GPURenderBundleDescriptor : GPUObjectDescriptorBase {
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPURenderBundleEncoder {
    GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUCommandsMixin;
GPURenderBundleEncoder includes GPUDebugCommandsMixin;
GPURenderBundleEncoder includes GPUProgrammablePassEncoder;
GPURenderBundleEncoder includes GPURenderEncoderBase;

dictionary GPURenderBundleEncoderDescriptor : GPURenderPassLayout {
    boolean depthReadOnly = false;
    boolean stencilReadOnly = false;
};
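
A non-normative sketch of recording a render bundle whose layout matches a single bgra8unorm color attachment, then replaying it inside a render pass; device, renderPipeline, vertexBuffer, and renderPass are assumed to exist, and the pass’s layout must be compatible with the bundle’s colorFormats.

// Illustrative render bundle (assumes device, renderPipeline, vertexBuffer, renderPass).
const bundleEncoder = device.createRenderBundleEncoder({
  colorFormats: ['bgra8unorm'],
});
bundleEncoder.setPipeline(renderPipeline);
bundleEncoder.setVertexBuffer(0, vertexBuffer);
bundleEncoder.draw(3);
const bundle = bundleEncoder.finish();

renderPass.executeBundles([bundle]);   // replay inside a compatible GPURenderPassEncoder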

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUQueue {
    undefined submit(sequence<GPUCommandBuffer> commandBuffers);

    Promise<undefined> onSubmittedWorkDone();

    undefined writeBuffer(
        GPUBuffer buffer,
        GPUSize64 bufferOffset,
        [AllowShared] BufferSource data,
        optional GPUSize64 dataOffset = 0,
        optional GPUSize64 size);

    undefined writeTexture(
        GPUImageCopyTexture destination,
        [AllowShared] BufferSource data,
        GPUImageDataLayout dataLayout,
        GPUExtent3D size);

    undefined copyExternalImageToTexture(
        GPUImageCopyExternalImage source,
        GPUImageCopyTextureTagged destination,
        GPUExtent3D copySize);
};
GPUQueue includes GPUObjectBase;
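
The following non-normative sketch writes CPU-side data into a buffer through the queue and waits for previously submitted work to complete; it assumes an async context, a GPUDevice named device, and a uniformBuffer created elsewhere with COPY_DST usage.

// Illustrative queue usage (assumes device and a COPY_DST-capable uniformBuffer).
const color = new Float32Array([1, 0, 0, 1]);
device.queue.writeBuffer(uniformBuffer, 0, color);
await device.queue.onSubmittedWorkDone();  // resolves once submitted work completes on the GPU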

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUQuerySet {
    undefined destroy();
};
GPUQuerySet includes GPUObjectBase;

dictionary GPUQuerySetDescriptor : GPUObjectDescriptorBase {
    required GPUQueryType type;
    required GPUSize32 count;
};

enum GPUQueryType {
    "occlusion",
    "timestamp",
};
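
For illustration only, the following non-normative sketch creates an occlusion query set, attaches it to a render pass descriptor, and resolves the result into a buffer; device, encoder, and renderPassDescriptor are assumed to be set up elsewhere.

// Illustrative occlusion query usage (assumes device, encoder, renderPassDescriptor).
const querySet = device.createQuerySet({ type: 'occlusion', count: 1 });
const resolveBuffer = device.createBuffer({
  size: 8,                                        // one 64-bit occlusion result
  usage: GPUBufferUsage.QUERY_RESOLVE | GPUBufferUsage.COPY_SRC,
});
renderPassDescriptor.occlusionQuerySet = querySet;
// Inside the pass: pass.beginOcclusionQuery(0); ...draws...; pass.endOcclusionQuery();
encoder.resolveQuerySet(querySet, 0, 1, resolveBuffer, 0);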

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUCanvasContext {
    readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;

    undefined configure(GPUCanvasConfiguration configuration);
    undefined unconfigure();

    GPUTextureFormat getPreferredFormat(GPUAdapter adapter);
    GPUTexture getCurrentTexture();
};

enum GPUCanvasCompositingAlphaMode {
    "opaque",
    "premultiplied",
};

dictionary GPUCanvasConfiguration {
    required GPUDevice device;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10;  // GPUTextureUsage.RENDER_ATTACHMENT
    GPUPredefinedColorSpace colorSpace = "srgb";
    GPUCanvasCompositingAlphaMode compositingAlphaMode = "opaque";
    GPUExtent3D size;
};
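
A non-normative sketch of configuring a canvas for WebGPU presentation; it assumes adapter and device were obtained earlier via navigator.gpu.requestAdapter() and adapter.requestDevice(), and that the page contains a canvas element.

// Illustrative canvas configuration (assumes adapter and device).
const canvas = document.querySelector('canvas');
const context = canvas.getContext('webgpu');
context.configure({
  device: device,
  format: context.getPreferredFormat(adapter),   // the adapter’s preferred presentation format
  compositingAlphaMode: 'opaque',
});
const currentTexture = context.getCurrentTexture();  // render into currentTexture.createView()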

enum GPUDeviceLostReason {
    "destroyed",
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUDeviceLostInfo {
    readonly attribute (GPUDeviceLostReason or undefined) reason;
    readonly attribute DOMString message;
};

partial interface GPUDevice {
    readonly attribute Promise<GPUDeviceLostInfo> lost;
};
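
As a non-normative illustration, an application can observe device loss through the lost promise; device is assumed to exist, and any recovery strategy is application-specific.

// Illustrative device-loss observation (assumes device).
device.lost.then((info) => {
  console.warn(`WebGPU device lost (${info.reason ?? 'not specified'}): ${info.message}`);
  // A real application might request a new device and recreate its resources here.
});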

enum GPUErrorFilter {
    "out-of-memory",
    "validation",
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUOutOfMemoryError {
    constructor();
};

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUValidationError {
    constructor(DOMString message);
    readonly attribute DOMString message;
};

typedef (GPUOutOfMemoryError or GPUValidationError) GPUError;

partial interface GPUDevice {
    undefined pushErrorScope(GPUErrorFilter filter);
    Promise<GPUError?> popErrorScope();
};
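
The following non-normative sketch shows the error scope pattern; it assumes an async context and a GPUDevice named device. The calls placed inside the scope are up to the application.

// Illustrative error scope usage (assumes device; runs inside an async function).
device.pushErrorScope('validation');
// ... calls whose validation errors should be captured go here ...
const error = await device.popErrorScope();   // resolves to null if no error was captured
if (error instanceof GPUValidationError) {
  console.error(`Validation error: ${error.message}`);
}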

[Exposed=(Window, DedicatedWorker), SecureContext]
interface GPUUncapturedErrorEvent : Event {
    constructor(
        DOMString type,
        GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict
    );
    readonly attribute GPUError error;
};

dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};

partial interface GPUDevice {
    [Exposed=(Window, DedicatedWorker)]
    attribute EventHandler onuncapturederror;
};
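
A non-normative sketch of a catch-all handler for errors not captured by any error scope; device is assumed to exist.

// Illustrative uncaptured-error handler (assumes device).
device.onuncapturederror = (event) => {
  console.error('Uncaptured WebGPU error:', event.error);
};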

typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;

typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;

typedef unsigned long GPUFlagsConstant;

dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;

dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;

dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;

dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    GPUIntegerCoordinate height = 1;
    GPUIntegerCoordinate depthOrArrayLayers = 1;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;
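
For illustration, each of these typedefs accepts either the sequence form or the dictionary form; the following non-normative pairs spell the same color and the same extent.

// Illustrative equivalent spellings of GPUColor and GPUExtent3D.
const clearColor = [0, 0, 0, 1];                       // sequence form
const clearColorDict = { r: 0, g: 0, b: 0, a: 1 };     // GPUColorDict form

const size = [256, 256, 1];                            // sequence form
const sizeDict = { width: 256, height: 256, depthOrArrayLayers: 1 };  // GPUExtent3DDict form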

Issues Index

define the host API distinct from the shader API
This section will need to be revised to support multiple queues.
The above should probably talk about GPU commands. But we don’t have a way to reference specific GPU commands (like dispatch) yet.
Define "ownership".
explain how to get from device to its "primary" GPUDevice.
Do we need to have a max per-pixel render target size?
Possibly replace this with PredefinedColorSpace, but note that doing so would mean new WebGPU functionality gets added automatically when items are added to that enum in the upstream spec.
Consider a path for uploading srgb-encoded images into linearly-encoded textures. [Issue #gpuweb/gpuweb#1715]
Need a robust example like the one in ErrorHandling.md, which handles all situations. Possibly also include a simple example with no handling.
Update here if an adaptersadded/adapterschanged event is introduced.
Finish defining multithreading API and add [Serializable] back to the interface. [Issue #gpuweb/gpuweb#354]
GPUDevice doesn’t really need the cross-origin policy restriction. It should be usable from multiple agents regardless. Once we describe the serialization of buffers, textures, and queues, the COOP+COEP logic should be moved in there.
define buffer (internal object)
Specify [[mapping]] in term of DataBlock similarly to AllocateArrayBuffer? [Issue #gpuweb/gpuweb#605]
[[usage]] is differently named from [[descriptor]].usage. We should make it consistent.
Finish defining multithreading API and add [Serializable] back to the interface. [Issue #gpuweb/gpuweb#354]
Explain what are a GPUDevice's [[allowed buffer usages]]. [Issue #gpuweb/gpuweb#605]
Add client-side validation that a mapped buffer can only be unmapped and destroyed on the worker on which it was mapped. Likewise getMappedRange can only be called on that worker. [Issue #gpuweb/gpuweb#605]
Handle error buffers once we have a description of the error monad. [Issue #gpuweb/gpuweb#605]
Do we validate that mode contains only valid flags?
Consider aligning mapAsync offset to 8 to match this.
define texture (internal object)
define mipmap level, array layer, aspect, slice (concepts)
share this definition with the part of the specification that describes sampling.
Allow for creating views with compatible formats as well.
add something on GPUAdapter(?) that gives an estimate of the bytes per texel of "stencil8", "depth24plus-stencil8", and "depth32float-stencil8".
Update this description with canvas.
Update this description with canvas.
Is this too restrictive?
explain how LOD is calculated and if there are differences here between platforms.
explain what anisotropic sampling is
Describe a "sample footprint" in greater detail.
describe how filtering interacts with comparison sampling.
consider making sampleType truly optional.
consider making format truly optional.
should this example and the note be moved to some "best practices" document?
Finish defining multithreading API and add [Serializable] back to the interface. [Issue #gpuweb/gpuweb#354]
Describe createShaderModule() algorithm steps.
Reference WGSL spec when it defines what a line is. [Issue #gpuweb/gpuweb#2435]
Describe compilationInfo() algorithm steps.
Specify this more properly once we have internal objects for GPUBindGroupLayout. Alternatively, only spec it as a new internal object that’s group-equivalent.
Define the reflection information concept so that this spec can interface with the WGSL spec and get information about what the interface is for a GPUShaderModule for a specific entry point.
link to a definition for "minimum buffer binding size" in the "reflection information".
This fills the pipeline layout with empty bindgroups. Revisit once the behavior of empty bindgroups is specified.
Better define using static use, etc.
need description of the render states.
should we validate that cullMode is none for points and lines?
define what "compatible" means for render target formats.
need a proper limit for the maximum number of color targets.
define the area of reach for "statically used" things of GPUProgrammableStage
how can this algorithm support depth/stencil formats that are added in extensions?
Describe createCommandEncoder() algorithm steps.
Enqueue attachment loads/clears.
specify the behavior of read-only depth/stencil
these dictionary definitions should be inside the image copies section.
Define images more precisely. In particular, define them as being comprised of texel blocks.
Define the exact copy semantics, by reference to common algorithms shared by the copy methods.
Define the copies with 1d and 3d textures. [Issue #gpuweb/gpuweb#69]
Define (and test) the encoding of color values into the various encodings allowed by copyExternalImageToTexture().
figure out how to handle overflows in the spec. [Issue #gpuweb/gpuweb#69]
Describe and enqueue the GPU command.
Does the term "image copy" include copyTextureToTexture?
define this as an algorithm with (texture, mipmapLevel) parameters and use the call syntax instead of referring to the definition by label.
Define the copies with 1d and 3d textures. [Issue #gpuweb/gpuweb#69]
Additional restrictions on rowsPerImage if needed. [Issue #gpuweb/gpuweb#537]
Define the copies with "depth24plus", "depth24plus-stencil8", and "stencil8". [Issue #gpuweb/gpuweb#652]
convert "Valid Texture Copy Range" into an algorithm with parameters, similar to "validating linear texture data"
Describe writeTimestamp() algorithm steps.
Describe resolveQuerySet() algorithm steps.
Resolve bikeshed conflict when using argumentdef with overloaded functions that prevents us from defining dynamicOffsets.
Add validation that, for buffer bindings that weren’t prevalidated with minBindingSize, the binding ranges are large enough for the shader’s minimum binding size requirements.
support for no attachments [Issue #gpuweb/gpuweb#503]
make it a define once we reference this from other places
define what "equals" means for GPURenderPassLayout here.
Allow for GPUs to use fixed-point or rounded viewport coordinates.
Enqueue the attachment stores/discards.
Describe the rest of the steps for createRenderBundleEncoder().
Specify more formally.
Specify more formally.
If an srgb-linear color space is added, explain here how it interacts.
Do the actual copy.
Describe onSubmittedWorkDone() algorithm steps.
Write normative text about timestamp value resets.
Because the timestamp query provides a high-resolution GPU timestamp, we need to decide what constraints, if any, are on its availability.
If added, canvas must also not be desynchronized.
Does compositingAlphaMode=opaque make this return opaque contents? [Issue #gpuweb/gpuweb#1847]
Unless high dynamic range is explicitly enabled for the canvas element.
Define pushErrorScope.
Define popErrorScope.
This attribute should be [SameObject]. (If GPUError becomes an interface then we can do this without resolving the WebIDL issue.) [Issue #whatwg/webidl#1077]
describe the transfers at the high level
describe the computing algorithm
fill out the section, using fragments
fill out the section
specify indirect commands better.
should this be defined more formally?
link to WGSL built-ins
provide more specifics here, if possible
link to interpolation qualifiers in WGSL
do we want to force-enable the "Standard sample locations" in Vulkan?
define the depth computation algorithm
specify the depth translation into viewport as well
specify that each rasterization point gets assigned an interpolated divisor(p) and framebufferCoords(n), as well as the other attributes.
reword the "goes into" part
fill out this section
define how this value is constructed.
describe the exact equations.
fill out this section
fill out the section
Define functionality when the "depth-clip-control" feature is enabled.
Define functionality when the "timestamp-query" feature is enabled.
Depending on the resolution of this issue, allow reading/sampling via texture_2d etc. in the table above and specify the behavior. (vec4<f32>(D, X, X, X)?) Update the note below which would become slightly outdated. [Issue #gpuweb/gpuweb#2094]