Agentic Integrity Verification Specification Community Group

AI agents increasingly perform consequential actions on behalf of humans: browsing websites, submitting forms, executing code, and making purchasing decisions. When things go wrong (an agent takes an unintended action, produces an incorrect output, or is used in a regulated context), there is currently no agreed-upon way to prove what the agent actually did, in what sequence, and whether the record has been tampered with. Existing observability tools (such as OpenTelemetry and LangSmith) log agent behavior but provide no cryptographic guarantees of completeness or authenticity.

Regulatory frameworks are beginning to require audit trails for AI systems (EU AI Act Article 19, ISO/IEC 42001, NIST AI RMF), but none prescribes a format for those trails. This group will explore open formats for cryptographic proof of AI agent sessions: portable, self-verifiable records that any party can verify independently, without network access or external infrastructure.
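The group's deliverables are not yet defined, so purely as an illustration of what "self-verifiable without external infrastructure" can mean (this is not AIVS itself, and all names below are hypothetical), here is a minimal hash-chained session log: each record commits to the hash of the previous record, so any edit, deletion, or reordering of events invalidates every later link, and anyone holding the log can check it offline.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record's predecessor

def append_event(chain, event):
    """Append an event record that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = {"event": event, "prev": prev_hash}
    # Canonical JSON (sorted keys) so verifiers recompute identical bytes.
    body_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append({"event": event, "prev": prev_hash, "hash": body_hash})
    return chain

def verify(chain):
    """Recompute every link; returns False on any tampering or reordering."""
    prev_hash = GENESIS
    for record in chain:
        expected = hashlib.sha256(
            json.dumps({"event": record["event"], "prev": prev_hash},
                       sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected or record["prev"] != prev_hash:
            return False
        prev_hash = record["hash"]
    return True

log = []
append_event(log, {"action": "navigate", "url": "https://example.org"})
append_event(log, {"action": "submit_form", "fields": ["email"]})
assert verify(log)

log[0]["event"]["url"] = "https://tampered.example"  # retroactive edit
assert not verify(log)
```

A real format would additionally need a signature over the final link (to bind the chain to an identity) and a completeness claim; this sketch only shows the tamper-evidence property.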

The Agentic Integrity Verification Specification (AIVS) v1.0 is intended to serve as a starting point for discussion within the group, but is not intended to constrain the group's discussions or decisions about future deliverables.

Shortname
aivs


Leadership

Chairs
  • Erik Newton
  • Ben Stone

Links

Mailing List
public-aivs