Meeting minutes
dom: the focus for this session is how to structure the conversation on AI agents as they emerge in the world of web browsers: what impact have we identified, and how should we prepare for it?
… I'm trying to understand the impact of AI on the web
dom: I'll give a quick intro on the impact of AI agents on the web
… Anssi will present the work of the Web ML CG/WG
… and I'll invite broader discussion
… a term that has emerged in the WG is browser agents
… in 2024 I wrote a report on the impact of AI systems on the web
… I made a follow-up on GitHub
<Jem> Do we have the url for the slides?
dom: content will produce challenges (monetization, etc)
… you probably need to know how many people are interested in your content
… analytics is also another area
… putting an agent in front of that can be very useful
… for instance, putting an agent in front of a shopping site will simplify the whole process
… there are interesting questions on how users delegate to the agent
… talking about transactions, the question of risks/liability can get tricky
… you are creating technical and business challenges
… it creates an environment with new questions
… the other topic my report addresses is AI agents as web user agents
… an agent is embedded/attached to the browser
… this raises a huge number of questions
… what are the web user agent's duties? what are the expectations?
… the level of trust of end users depends on regulation, etc
… understanding what adding a web user agent implies
… there are a lot of interesting questions on how we can keep the security invariants in this area
… people are seeing an increase in crawler traffic
… an agent wouldn't load just a single page
… so it's worth taking this into consideration
… what are also the new interoperability needs emerging from this?
… WebMCP is looking at how we can bring MCP into web pages
… NLWeb brings a more semantic approach
… if you look at AI agents today (bots), there's value in having the agents expose richer interfaces
… MCP-UI uses web technologies to bring content into an AI system
… the final topic in the report is around the developer experience
… coding agents can become the primary reason for adoption
… understanding what we could do with coding agents is worth discussing
… the documentation CG is having a discussion tomorrow
… we welcome feedback on the report
… Roy and I are looking at organizing a workshop
… we think there's a real opportunity but we need signals from the community to know if this is useful
… workshops are a good opportunity to get people involved
<Igarashi> +1 to physical workshop
Web Machine Learning CG/WG
<dom> my slides are at https://
Slideset: https://
Anssi: "Sir Tim Berners-Lee doesn't think AI will destroy the web"
… we started in 2018, exploring the space through prototyping
… people started asking when they could use it
… 7 years later, I can say that time is soon
… on average it takes about 6 years from idea to shipping
… in 2020, we had a W3C workshop that attracted 400 participants
… we had a good discussion
… in 2021, we created the Web ML WG
… the idea was to focus on well-scoped problems and actually ship things
… since then, we've been pushing the work on the standard track
… we have been exploring other features (built-in API or task based API)
… just yesterday, we had a meeting and reached consensus @1
… huge thanks to the participants
… we also care about the risks and we pushed the ethical principles document
[Anssi comparing the WebNN API and Built-in AI APIs]
Anssi: the WebNN API is low level while the Built-in AI API is higher level
… Built-in API are simpler but there's a tradeoff
[Agentic browsers vs WebMCP]
… with WebMCP, you decide what is exposed. The important thing here is who has the agent
… agentic browsers take a screenshot and send it to the cloud. You can imagine it burns a lot of tokens (CO2)
… with WebMCP, it's a programmatic API, so it's lightweight and produces less CO2
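[Illustrative sketch of the WebMCP model described above: the page exposes a structured "tool" an in-browser agent can call directly, instead of the agent screenshotting and clicking. The registration entry point (`navigator.modelContext.registerTool`) and the tool-descriptor fields are assumptions for illustration only; the proposal is still in incubation and the real API shape may differ.]

```javascript
// Plain application logic the tool delegates to.
const cart = [];
function addToCart(sku, quantity) {
  cart.push({ sku, quantity });
  return { ok: true, items: cart.length };
}

// Hypothetical WebMCP-style tool descriptor: a name, a description,
// a JSON Schema for inputs, and a handler the agent can invoke.
const addToCartTool = {
  name: "add_to_cart",
  description: "Add a product to the shopping cart",
  inputSchema: {
    type: "object",
    properties: {
      sku: { type: "string" },
      quantity: { type: "integer", minimum: 1 },
    },
    required: ["sku", "quantity"],
  },
  // The agent calls this instead of driving the UI pixel by pixel.
  execute: ({ sku, quantity }) => addToCart(sku, quantity),
};

// Guarded so the snippet is inert where no such agent API exists.
if (typeof navigator !== "undefined" && navigator.modelContext?.registerTool) {
  navigator.modelContext.registerTool(addToCartTool);
}
```

The tool handler is ordinary page JavaScript, which is what makes the approach lightweight compared with screenshot-driven agents.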
… Web ML is really for everyone
… I'd like to thank all the participants and the ecosystem leaders
David: we use the OpenAI APIs, and we found that the more calls we make, the more risk we have of lag
Anssi: we have 2 open source projects for WebMCP
@@: we have an agent that can connect to your model
@3
Anssi: the agentic API is the most developed but is currently in incubation
@@: you talked about UI. Can you talk about standardization?
Dom: the discussion on MCP-UI is done in a fairly collaborative manner
… MCP-UI offers the option to generate an iframe where the content provider can bring their own UI. The agent can detect when the user clicks.
<anssik> webmachinelearning/
Dom: it's about integrating interactive UI component
… either the work is done by the content provider or by the agent, based on the information
<Sun> can you share the slide link if possible anssik?
Dom: I don't believe there's any standardization work yet
… for the iframe piece, MCP-UI identified some standardization needs
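[Illustrative sketch of the MCP-UI iframe piece Dom describes: a tool result carrying an embedded HTML resource that the host renders in a sandboxed iframe, with user actions reported back via postMessage. The field names (the `ui://` URI scheme, `text/html` mimeType) follow one reading of MCP-UI conventions and may not match the spec exactly.]

```javascript
// Build a tool result that embeds provider-supplied HTML UI.
function makeHtmlResource(uri, html) {
  if (!uri.startsWith("ui://")) {
    throw new Error("UI resources are assumed to use ui:// URIs");
  }
  return {
    type: "resource",
    resource: { uri, mimeType: "text/html", text: html },
  };
}

// Host side (browser only): render the resource in an isolated
// iframe and surface the intents the embedded UI posts on clicks.
function renderInIframe(result, container, onIntent) {
  const frame = document.createElement("iframe");
  frame.sandbox = "allow-scripts"; // keep the provider UI sandboxed
  frame.srcdoc = result.resource.text;
  container.appendChild(frame);
  window.addEventListener("message", (event) => {
    if (event.source === frame.contentWindow) onIntent(event.data);
  });
}

// Example resource: a button whose click the agent can observe.
const result = makeHtmlResource(
  "ui://shop/checkout",
  "<button onclick=\"parent.postMessage({intent:'buy'},'*')\">Buy</button>"
);
```

The sandboxed iframe is what lets the content provider keep control of its own UI while the agent only sees the structured intents.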
Roy: for generative UI, the Chinese IG will discuss this, as two Chinese companies have interest in the topic
Anssi: about MCP-UI, there's a WG. I'd love to figure out how to collaborate with the MCP community
… we have members of the Web ML WG that are also part of the MCP community
Dom: does anyone have thoughts about the workshop?
gkok (from netflix): we've been thinking about how we should approach these changes
… it's hard for us to know what we should focus on
… I'd love to have a better understanding. We are trying to understand the protocols
dom: the real challenge is that there's so much happening that the best we can do is to experiment
… I'm struggling exactly the same way
DavidFazio: my company is one of the 3 partners with the Michigan University
… our role in the AI space with them is to incorporate AI in the university environment
… I'm interested to know how we can use this in our project
<alanbuxey> ... Helix Opportunity - https://
gkok: what do you need to develop the report?
<Roy_Ruoxi> https://
dom: I'm hoping, if the Web and AI IG gets chartered, we can get more clarity
chrisp: curious about the assumptions you are making with WebNN?
dom: it's not made to run a predefined model; you will need to bring your own model
… the built-in API comes with pre-built models
… at the moment, the only assumption is that they are fit for specific tasks
… there's also the prompt API that's just an interface with @4
Anssi: vision models are key use cases we are targeting
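[Illustrative sketch of the "bring your own model" point Dom makes about WebNN: you assemble a compute graph yourself rather than calling a prebuilt model. The API names (`navigator.ml.createContext`, `MLGraphBuilder`) follow the WebNN spec drafts, only exist in supporting browsers, and may still change, so the browser path is guarded; the plain function shows what the tiny graph computes.]

```javascript
// Pure reference of what the graph below computes: C = relu(A + B).
function reluAdd(a, b) {
  return a.map((x, i) => Math.max(0, x + b[i]));
}

// Browser-only sketch following WebNN draft API shapes; returns
// null where navigator.ml is unavailable (e.g. Node, older browsers).
async function buildGraph() {
  if (typeof navigator === "undefined" || !navigator.ml) return null;
  const context = await navigator.ml.createContext();
  const builder = new MLGraphBuilder(context);
  const desc = { dataType: "float32", shape: [4] };
  const A = builder.input("A", desc); // you define the model yourself
  const B = builder.input("B", desc);
  const C = builder.relu(builder.add(A, B));
  return builder.build({ C });
}
```

This is the low-level end of the spectrum Anssi contrasted with the Built-in AI APIs, which ship with pre-built, task-specific models instead.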