W3C

– DRAFT –
AI-generated software and Web standardization

25 March 2026

Attendees

Present
Anssi_Kostiainen, Dom, fscholz, Ian, martinA, mgifford2, MikeGifford, PaolaDm, pchampin, TiffanyBurtin, xiaoqian
Regrets
-
Chair
Dom
Scribe
dom

Meeting minutes

Slideset: https://www.w3.org/2026/Talks/dhm-ai-software/

<mgifford2> Copilot is also integrated into the GitHub environment with https://github.com/codespaces and https://docs.github.com/en/copilot/get-started/quickstart

<mgifford2> Even just GitHub's Accessibility team's Accessibility Scanner - github/accessibility-scanner

<anssik> One Human + One Agent = One Browser From Scratch

<mgifford2> Specs are the new code https://www.linkedin.com/pulse/specs-new-code-tyler-dane-lbz1c/ and https://medium.com/@aiforhuman/specs-are-the-new-code-2f3ede491324

<anssik> re One Browser that's another example, "Over three days they drove a single Codex CLI agent to build 20,000 lines of Rust that successfully renders HTML+CSS with no Rust crate dependencies at all"

<anssik> https://simonwillison.net/2026/Jan/27/one-human-one-agent-one-browser/

<mgifford2> https://conesible.de/wab/whitepaper_webaccessbench.pdf note that I don't think there was any advanced prompting in this study, so I don't think it tells us anything more instructive about standards.

Anssi: we had an AI-generated implementation of WebNN based on the spec, test suites, and code samples

<mgifford2> There was also the idea of including RDFa in all of our specifications so that they can be more semantically linked

Anssi: provided lots of new useful insights

<mgifford2> https://mgifford.github.io/ACCESSIBILITY.md

anssik: the Chrome implementation took 20+ engineers 3 years; the Rust implementation was done in a few weeks by one engineer and one agent

<TiffanyBurtin> May we have the link to the presentation?

anssik: spec-driven development is becoming the main way of driving AI assistants

anssik: in behavior-driven development, you describe expected behaviors as natural-language specs
… humans write the specs as primary artefacts
… agents then implement them; humans verify they meet expectations
… humans define what "correct" means, with business cases and other considerations (e.g. privacy, etc)

<mgifford2> I've been using AI to build tools like this for accessibility like https://mgifford.github.io/open-scans/reports.html


Mike: the change over the past 3 months in what's possible is striking
… gave pointers above to examples of making AI practices replicable en masse

<paola> such an interesting topic, and YES DOM we must learn how to work with AI/agents

anssi: how does the fact that the cost of developing software approaches zero impact our community?
… writing a browser engine from scratch didn't seem possible any more; now it may become achievable again
… can this completely change the pace of iteration and power balance on spec-ing/implementing?

<paola> The cost/effort will be allocated differently, but human expertise remains expensive to gather, and remains necessary

<mgifford2> Sorry, wifi turned off.

<mgifford2> Also how do we do it responsibly? Looking at the W3C's ethics statement and the web sustainability guidelines. I've also created https://mgifford.github.io/SUSTAINABILITY.md to try to help.

<paola> yes of course, everything is completely changing, but humans will have to continue to train, and human expertise costs a lot of effort

<paola> so when we go back to the code or documentation next morning we still have a lot of very high skilled work to do

<paola> evaluation, integration, training humans on what AI is doing and how, is going to be expensive

dom: story of the continued value of interoperability in a world of near-zero marginal cost of software development

anssi: broad adoption still needs wide consensus from humans

francois: specs today are not written as descriptions of the outcome of code; they're almost code themselves, with algorithmic steps
… to ensure interop
… these algorithms are actually hard to review, and take a lot of WG participants' time
… is this an opportunity to get back to a less algorithmic approach to spec writing?
… with the tools dealing with detecting interop issues

<paola> the role of humans is going to change, I think, from programmers to systems engineers of very complex and sophisticated tools; reviewing, deploying and using the code is going to require a lot of human effort. there is a shift, but AI can and must never operate without humans in the loop

mikeG: to use AI responsibly (incl. from sustainability perspective), we need to find ways to share the infrastructure responsibly
… I'm hoping we can feed web sustainability guidelines in how these AI tools build apps & pages - e.g. making it easier to build dark mode

<mgifford2> the keynote I mentioned https://events.drupal.org/chicago2026/session/keynote-alexandra-bell-security-implications-ai

<paola> so for example, we need to continue to develop tests and make sure the results of the testing and evaluations are valid; testing can also be automated, but only up to a point. @dom let me evaluate whether the existing accessibility guidelines written for humans are OK also for machines, that's an important question - I'll look into it

dom: do AI tools change the way we relate to the complexity of Web technologies, given their long legacy?

<mgifford2> https://almanac.httparchive.org/en/2025/accessibility

<paola> thank you @mgifford2 and @dom, I'll make a note of these points. look forward to the transcript

mgifford2: comparing to self-driving cars - leaving control to a machine creates anxiety initially
… the Web Almanac is a good source of data about how well our standards are adopted - incl. in terms of quality
… the most accessible TLDs are .gov, .edu, and ... .ai - which can be assumed to have a lot of AI-generated content
… it's only a matter of time before AI agents create better code following specs than humans do
… we need better measures of this

Roy: I tried to use AI agents to generate Web sites & apps - the paradigm of creating software is shifting
… there is another layer between developers and their apps: the agent that reads the spec
… maybe our specs need to consider how to help AI agents create better code, e.g. via skills
… harness engineering

<paola> Roy_Ruoxi in my experience, we cannot generalize, because every coding agent is really different and good at different things; as human engineers we must learn how to write the code specs so that the agent can just follow them; basically, the human must write the prompt so that the AI coding agent knows exactly what to do. humans must learn how to write the system specification so that the coding agent understands how to execute it

Anssi: we already see that natural language is a universal programming language
… we can define our specs in English, or in algorithmic prose, or likely both
… with agents, we could start in natural language (e.g. explainers), and ask an agent to propose more algorithmic versions for review

<paola> the human must develop the skill to build precise briefs

Anssi: or the reverse - although natural->algo feels more natural

mgifford: there is a lot of anxiety about AI - incl. in the sustainability IG and the accessibility community
… it's also unsettling open source communities
… it impacts people's jobs and livelihoods
… we need to be careful in how we approach these questions, both avoiding over-enthusiasm and over-avoidance

Minutes manually created (not a transcript), formatted by scribe.perl version 248 (Mon Oct 27 20:04:16 2025 UTC).

Diagnostics

Succeeded: s/@@@/harness

Maybe present: Anssi, anssik, francois, mgifford, Mike, mikeG, Roy

All speakers: Anssi, anssik, dom, francois, mgifford, mgifford2, Mike, mikeG, Roy

Active on IRC: anssik, breakout-bot, dom, fscholz, Ian, martinA, mgifford2, paola, pchampin, Roy_Ruoxi, tidoust, TiffanyBurtin, xiaoqian