Copyright © 2026 World Wide Web Consortium. W3C® liability, trademark and document use rules apply.
As Large Language Models (LLMs) become increasingly synonymous with “AI” and are used by people within our community, we want to highlight ways in which LLMs can be useful or problematic when leveraged in standards work at W3C.
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C standards and drafts index.
This document summarizes the Advisory Board’s current thinking, as of 24 March 2026. New initiatives by the W3C Team on AI & the Web, various AI-related Working Groups such as the WebML WG and GPU for the Web WG, and the Web & AI Interest Group show that this conversation is timely.
This document was published by the Advisory Board as a Group Note using the Note track.
This Group Note is endorsed by the Advisory Board, but is not endorsed by W3C itself nor its Members.
The W3C Patent Policy does not carry any licensing requirements or commitments on this document.
This document is governed by the 18 August 2025 W3C Process Document.
So far, LLMs have been shown to benefit the standards process in several ways:
When paired with deep domain expertise, LLM coding assistants can help to build proof-of-concept level demos or write tests of specific web platform features. The ability to quickly generate and show a group many examples during a meeting helps verify concepts faster, reduces manual effort, and makes discussions more productive.
In addition to basic spellchecking and editing, which are especially helpful for our international community, LLMs can be asked questions about web standards to identify problems, gaps, and opportunities for improvement.
New standards work often needs new names. Those names should be easy to understand while still fitting platform conventions and avoiding conflicts with existing terms. Because LLMs are trained on a very large sample of human language, they are a useful tool for coming up with human-friendly names for novel concepts.
At the same time, using LLMs carries serious risks.
First, there is a possibility of copyright infringement. As LLMs may be trained on copyrighted materials, even well-meaning contributions could create associated liability.
There could also be security risks. Providing Member Confidential information to LLMs could result in data leakage (we suggest particular caution with free products).
LLM output can also be subtly incorrect. LLM-generated text and code may contain subtle falsehoods, which are hard to spot and difficult to debug.
If a specification concerns something that does not yet exist, like the decade-old debate of defining what the scope of a “view” is, LLMs are unlikely to be helpful, as they’re often trained on stale data that doesn’t necessarily incorporate the most recent discussions, especially if they took place over email or meetings.
Verbosity is a possible issue too. At W3C, we’ve always exchanged lengthy arguments, but if we automate writing them we could end up with more text that is less intentional and less meaningful. This can result in “second-hand burdening” that can thwart our community’s effectiveness.
W3C has a long tradition of scribing discussions manually. It lets us take a step back, and reflect on and interpret how a room understood a discussion. Fully outsourcing this to LLMs risks losing that tradition, and introduces risks of misattribution, biased attribution, factual incorrectness, and minutes padded with irrelevancies.
Lastly, there is a climate impact. Production and usage of LLMs have a climate footprint (including energy and water use) that is disproportionately high compared to the production and usage of alternatives, such as manually writing text. As the web platform is built with a commitment to sustainability in mind, it is worth considering this in the tools we use while developing the platform’s standards.
Ultimately, each individual is responsible for their own output. Whether or not an LLM is used is a matter of personal choice — the work produced remains the responsibility of the person behind it.
When LLMs are used, we suggest implementing these guardrails: