
Community & Business Groups

AI KR (Artificial Intelligence Knowledge Representation) Community Group

The overall goal/mission of this community group is to explore the requirements, best practices and implementation options for the conceptualization and specification of domain knowledge in AI.

We plan to place particular emphasis on identifying and representing AI facets and various aspects (technology, legislation, ethics, etc.) with the purpose of facilitating knowledge exchange and reuse.

The proposed outcomes could therefore be instrumental to research and the advancement of science and inquiry, as well as to raising general public awareness to enable learning and participation.

Proposed outcomes:

  • A comprehensive list of open access resources in both AI and KR (useful to teaching and research)
  • A set of metadata derived from these resources
  • A concept map of the domain
  • A natural language vocabulary to represent various aspects of AI
  • One or more encodings/implementations/machine-readable versions of the vocabulary, e.g. for chatbot Natural Language Understanding and Natural Language Generation
  • Methods for KR management, especially Natural Language Learning / Semantic Memory
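
As an illustration only, a resource metadata record and a toy concept map could be represented as plain data structures. Every field name and value below is a hypothetical sketch, not a group deliverable:

```python
# Hypothetical metadata record for one open-access AI/KR resource.
# The schema (title, access, topics, facets) is illustrative only.
resource = {
    "title": "Example open textbook on knowledge representation",
    "access": "open",
    "topics": ["knowledge representation", "ontology"],
    "facets": {"technology": True, "legislation": False, "ethics": False},
}

# Toy concept map: each concept maps to directly related concepts.
concept_map = {
    "artificial intelligence": ["machine learning", "knowledge representation"],
    "knowledge representation": ["ontology", "semantic memory"],
}

def related(concept):
    """Return the concepts directly related to the given concept."""
    return concept_map.get(concept, [])

print(related("knowledge representation"))  # ['ontology', 'semantic memory']
```

A real deliverable would more likely use an established vocabulary format such as SKOS or RDF; the point here is only that both the metadata and the concept map are explicit, machine-traversable structures.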

WHO SHOULD JOIN: researchers and practitioners with an interest in developing AI KR artifacts (ontology, machine learning, markup languages)

Editable doc.


Note: Community Groups are proposed and run by the community. Although W3C hosts these conversations, the groups do not necessarily represent the views of the W3C Membership or staff.

final reports / licensing info

  • aikrcgfirstreportfinalvaug22 (Licensing commitments)

drafts / licensing info

  • First Draft Report


Toward A Web Standard for Explainable AI?

David Gunning at DARPA called the wave of proliferation in intelligent, autonomous systems “a torrent of Artificial Intelligence”. Among the problems with this surge of AI are reliability, transparency, and accountability.

If ordinary lives are likely to depend on highly engineered technical systems, then users, as well as the general public, should be able to have some grasp of the workings behind such powerful machines.

The so-called AI black box raises the concern of systems that are very powerful yet well beyond public scrutiny.

DARPA addresses these concerns with ‘Explainable AI’ (note: link opens a PDF), a program that reportedly aims to create a suite of machine learning techniques that produce explainable models while maintaining prediction accuracy, and that enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners. Initial outcomes were expected around November 2018.

At the end of the program, says DARPA, the final delivery will be a toolkit library consisting of machine learning and human-computer interface modules that could be used to develop future explainable AI systems. Eventually the tools would be available for further refinement and transition into defense or commercial applications.

Observers say that it is not just the algorithms that should be transparent to support explainability, but also the data powering the machines and the logic supporting the models; and that the tradeoff between transparency and performance could become an overhead.

These points bring up further issues with the explainability of ML: AI depends on appropriately represented knowledge and conceptual modelling, and on high-level knowledge representation and explicit reasoning. Machine learning techniques, generally speaking, as we know them today, exist largely at the computational execution level, not at the system or knowledge modelling level.
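
To make that distinction concrete, compare an opaque execution-level scorer with an explicit, knowledge-level representation that can report the reasoning behind its output. This is a deliberately simplified sketch, not a description of any particular system; the rules, features, and thresholds are invented for illustration:

```python
# Execution-level model: learned weights produce a score, but the
# numbers themselves carry no human-readable rationale.
weights = [0.7, -1.2, 0.4]

def opaque_score(features):
    return sum(w * x for w, x in zip(weights, features))

# Knowledge-level model: rules are explicit symbolic structures, so
# the system can return both a decision and the rule that justifies it.
rules = [
    ("loan denied", lambda facts: facts["income"] < 20000),
    ("loan approved", lambda facts: facts["income"] >= 20000),
]

def explainable_decision(facts):
    for conclusion, condition in rules:
        if condition(facts):
            return conclusion, f"matched rule for '{conclusion}'"

decision, explanation = explainable_decision({"income": 15000})
print(decision, "-", explanation)  # loan denied - matched rule for 'loan denied'
```

The opaque scorer and the rule base may compute the same decisions, but only the explicit representation supports the kind of explanation the XAI program calls for.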

Should Explainable AI be applied to more general artificial intelligence, such as (but not restricted to) AGI, rather than just machine learning, which is only one of many ways of implementing AI solutions?

Should web-facing explainable AI benefit from a web standard, so that developers and users could identify explainable AI, or even automate its validation through the use of schemas and parsers, and check that algorithms comply with explainability criteria?
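
No such schema exists today; the following is a minimal sketch of what a machine-readable explainability declaration and a validating parser might look like. The field names and required set are entirely hypothetical:

```python
# Hypothetical set of fields an XAI web schema might require an
# algorithm's publisher to declare. Invented for illustration only.
REQUIRED_FIELDS = {"model_type", "explanation_method", "audience"}

def validate_xai_declaration(declaration):
    """Check a declaration against the hypothetical schema.
    Returns (ok, missing_fields)."""
    missing = REQUIRED_FIELDS - declaration.keys()
    return (not missing, sorted(missing))

ok, missing = validate_xai_declaration({
    "model_type": "decision tree",
    "explanation_method": "rule trace",
    "audience": "end user",
})
print(ok, missing)  # True []
```

A real standard would presumably be expressed in an existing web technology such as JSON Schema or RDF/SHACL, so that generic tooling could perform this check; the sketch only shows that compliance with declared explainability criteria is mechanically checkable once a schema is agreed.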

An open W3C community group called AIKR (Artificial Intelligence Knowledge Representation) was started around the time DARPA brought up XAI.

The community’s aim at this stage is still exploratory: to gather thoughts and inputs, based on the assumption that explicit and shared knowledge representation is a necessary requirement for any kind of explainability, and that, above all, it is a public affair that should be discussed in an open forum and be open for consultation.

Typically, W3C standards define an Open Web Platform for application development that enables developers to build rich interactive experiences, and although the boundaries of the platform continue to evolve, Web standards are technical specifications and guidelines developed through a process designed to maximize consensus about the technical content, to ensure high technical and editorial quality, and to earn endorsement by W3C and the broader community.

Since the Web is increasingly controlled by automated routines, many of its intelligent functions are likely to be powered by various layers of AI, some more explainable than others.

Could an XAI web standard support DARPA’s vision of explainability, and ensure that the machines running the web, which has become so essential to every aspect of life for most of us, remain transparent and accountable?

JOIN THIS IMPORTANT DISCUSSION

Call for Participation in AI KR (Artificial Intelligence Knowledge Representation) Community Group

The AI KR (Artificial Intelligence Knowledge Representation) Community Group has been launched; its mission and proposed outcomes are described above.


In order to join the group, you will need a W3C account. Please note, however, that W3C Membership is not required to join a Community Group.

This is a community initiative. This group was originally proposed on 2018-07-02 by Paola Di Maio. The following people supported its creation: Paola Di Maio, Michael Johnson, Andrea Perego, brandon whitehead, Roman Evstifeev. W3C’s hosting of this group does not imply endorsement of the activities.

Read more about how to get started in a new group and good practice for running a group.

We invite you to share news of this new group in social media and other channels.

If you believe that there is an issue with this group that requires the attention of the W3C staff, please email us at site-comments@w3.org.

Thank you,
W3C Community Development Team