Proposed Group: AI Safety Governance & Interoperability Community Group
The AI Safety Governance & Interoperability Community Group has been proposed by Amir Hameed Mir:
As software agents and AI-driven services are increasingly deployed on the Web, there is no agreed-upon mechanism for these systems to express, discover, or negotiate safety-related constraints. This gap limits interoperability and complicates the integration of safety-related guarantees into Web architecture.
This Community Group aims to address that gap by exploring safety metadata and discovery, safety negotiation protocols, and verifiable inference metadata.
This Community Group will not define or recommend AI ethics or governance policies; develop model training or alignment standards; create certification, auditing, or compliance frameworks; or define identity systems or replace existing identity standards.
The Group encourages contributions from individuals with expertise in Web standards and protocol design, AI/ML systems, information security and applied cryptography, and Semantic Web and linked data technologies.
This group may publish Specifications.
You are invited to support the creation of this group. Once the group has a total of 5 supporters, it will be launched and people can join to begin work. To support the group, you will need a W3C account.
Once launched, the group will no longer be listed as “proposed”; it will be in the list of current groups.
If you believe that there is an issue with this group that requires the attention of the W3C staff, please email us at site-comments@w3.org.
Thank you,
W3C Community Development Team