David Gunning at DARPA called the wave of proliferation in intelligent, autonomous systems “a torrent of Artificial Intelligence”. Among the problems raised by this surge of AI are reliability, transparency, and accountability.
If ordinary lives are likely to depend on highly engineered technical systems, then users, as well as the general public, should be able to have some grasp of the workings behind such powerful machines.
The AI “black box”, as it is often called, raises the concern of systems that are very powerful yet well beyond public scrutiny.
DARPA addresses these concerns with ‘Explainable AI’ (note: the link opens a PDF), a program that reportedly aims to create a suite of machine learning techniques supported by explainable models that maintain prediction accuracy while enabling human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners. Initial outcomes are expected around November 2018.
At the end of the program, says DARPA, the final delivery will be a toolkit library consisting of machine learning and human–computer interface software modules that could be used to develop future explainable AI systems. Eventually, the tools would be made available for further refinement and transition into defense or commercial applications.
Observers say that it’s not just the algorithms that should be transparent to support explainability, but also the data powering the machines and the logic supporting the models, and that the trade-off between transparency and performance could become an overhead.
These points bring up further issues with the explainability of ML: AI depends on appropriately represented knowledge and conceptual modelling, and on high-level knowledge representation and explicit reasoning. Machine learning techniques as we know them today, generally speaking, exist largely at the computational execution level, not at the system or knowledge modelling level.
Should Explainable AI be applied to artificial intelligence more generally (such as, but not restricted to, AGI), rather than just to machine learning, which is only one of many ways of implementing AI solutions?
Should web-facing explainable AI benefit from a web standard, so that developers and users could identify, or even automate through the use of schemas and parsers, the validation of explainable AI, and check that algorithms comply with explainability criteria?
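To make the idea concrete, here is a minimal sketch of what schema-driven validation might look like. No such standard exists today: the descriptor fields and their names below are purely hypothetical assumptions for illustration, not part of any W3C or DARPA specification.

```python
import json

# Hypothetical explainability descriptor a site might publish alongside a
# deployed model. All field names here are illustrative assumptions.
DESCRIPTOR = json.loads("""
{
    "model": "loan-scoring-v2",
    "explanation_method": "feature attribution",
    "training_data_summary": "anonymised loan applications, 2010-2016",
    "human_readable_rationale": true
}
""")

# Fields (and types) a hypothetical XAI schema might require.
REQUIRED_FIELDS = {
    "model": str,
    "explanation_method": str,
    "training_data_summary": str,
    "human_readable_rationale": bool,
}

def validate(descriptor):
    """Return a list of compliance problems; an empty list means the
    descriptor satisfies the (hypothetical) schema."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in descriptor:
            problems.append(f"missing field: {field}")
        elif not isinstance(descriptor[field], expected_type):
            problems.append(f"wrong type for field: {field}")
    return problems

print(validate(DESCRIPTOR))  # prints [] — the descriptor is compliant
```

A real standard would of course need far richer vocabulary (provenance, explanation formats, audit trails), but the point is that a machine-readable schema would let browsers, crawlers, and auditors check compliance automatically, much as parsers validate HTML or structured data today.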
An open W3C community group called AIKR, which stands for Artificial Intelligence Knowledge Representation, was started around the time DARPA brought up XAI.
The community’s aim at this stage is still exploratory: to gather thoughts and input, based on the assumption that explicit, shared knowledge representation is a necessary requirement for any kind of explainability and, above all, that it is a public affair that should be discussed in an open forum and be open for consultation.
Typically, W3C standards define an Open Web Platform for application development that enables developers to build rich interactive experiences, even as the boundaries of the platform continue to evolve.
Web standards are technical specifications and guidelines developed through a process designed to maximize consensus about the technical content, to ensure high technical and editorial quality, and to earn endorsement by W3C and the broader community.
Since the Web is increasingly controlled by automated routines, many of its intelligent functions are likely to be powered by various layers of AI, some more explainable than others.
Could an XAI web standard support DARPA’s vision of explainability, and ensure that the machines running the Web, which has become so essential to every aspect of life for most of us, remain transparent and accountable?