The Update robots.txt standards Community Group has been proposed by Hans Petter Blindheim:
Robots.txt is currently based on opting out of what you do not want your website to be a part of.
This is hard to maintain (almost a full-time job right now) if you do not want your website's content to be used for, e.g., training AI, market research (e.g. price robots), non-search-engine databases, and more.
This proposal is to update the type of instructions robots.txt should support so that it is instead treated as an opt-in, where you can give instructions based on the intent of robots rather than via a wildcard or in granular detail.
For example, an opt-in could apply to all robots that seek to update, process, or maintain websites for search-engine databases, while not granting permission to use scraped data for AI purposes (that should have its own Agent-group).
Also, the absence of instructions should be treated as not having opted in, and for robots working on behalf of AI there might need to be additional instructions (e.g. max-snippet, and whether you require a citation if your content is used to provide an answer).
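To make the idea concrete, here is a purely hypothetical sketch of what intent-based, opt-in instructions might look like; the group has not defined any syntax, and every directive below (Agent-group, Max-snippet, Require-citation) is an illustrative invention, not proposed wording:

```
# Hypothetical opt-in robots.txt — illustrative syntax only

# Opt in to search-engine indexing; with no matching Agent-group,
# a robot would have no permission at all.
Agent-group: search-indexing
Allow: /

# Extra conditions for robots answering on behalf of AI,
# per the proposal's max-snippet and citation ideas.
Agent-group: ai-answering
Max-snippet: 160
Require-citation: yes

# No Agent-group for AI training is declared, so under an opt-in
# model that use would simply not be permitted.
```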
You are invited to support the creation of this group. Once the group has a total of five supporters, it will be launched and people can join to begin work. In order to support the group, you will need a W3C account.
Once launched, the group will no longer be listed as “proposed”; it will be in the list of current groups.
If you believe that there is an issue with this group that requires the attention of the W3C staff, please send us email at email@example.com
W3C Community Development Team