The Defense Advanced Research Projects Agency (DARPA) formulated the explainable artificial intelligence (XAI) program in 2015 with the goal of enabling end users to better understand, trust, and effectively manage artificially intelligent systems. The 4-year XAI research program began in 2017. Now, as XAI comes to an end in 2021, it is time to reflect on what succeeded, what failed, and what was learned. This article summarizes the goals, organization, and research progress of the XAI program.

1 CREATION OF XAI

Dramatic success in machine learning has created an explosion of new AI capabilities. Continued advances promise to produce autonomous systems that perceive, learn, decide, and act on their own. These systems offer tremendous benefits, but their effectiveness will be limited by the machine's inability to explain its decisions and actions to human users. This issue is especially important for the United States Department of Defense (DoD), which faces challenges that require the development of more intelligent, autonomous, and reliable systems. XAI will be essential for users to understand, appropriately trust, and effectively manage this emerging generation of artificially intelligent partners.

The problem of explainability is, to some extent, the result of AI's success. In the early days of AI, the predominant reasoning methods were logical and symbolic. These early systems reasoned by performing some form of logical inference on (somewhat) human-readable symbols. They could generate a trace of their inference steps, which could then become the basis for explanation.

Neuro-Symbolic Computing revisits this idea by pairing learned (sub-symbolic) representations with symbolic reasoning; the Neuro-Symbolic Concept Learner of Mao et al. is a representative example. An image classifier learns to extract sub-symbolic (numerical) representations from image or text segments. Then, every sub-symbolic representation is associated with a human-understandable symbol. Finally, a symbolic reasoner checks the embedding similarities of the symbol representations. The training continues until the accuracy of the reasoner's output is maximized by updating the representations.

Neuro-Symbolic Concept Learner (Figure by Mao et al.)

An AI model can be (i) white-box or (ii) black-box. A white-box model is explainable by design; therefore, it does not require additional capabilities to be explainable. A black-box model is not explainable by itself; therefore, to make it explainable, we have to adopt several techniques to extract explanations from the inner logic or the outputs of the model.

Black-box models can be made explainable with:
- Model Properties: demonstrating specific properties of the model or its predictions, such as (a) sensitivity to attribute changes or (b) identification of the components of the model (e.g., neurons or nodes) responsible for a given decision.
- Local Logic: a representation of the inner logic behind a single decision or prediction.
- Global Logic: a representation of the entire inner logic.

Therefore, here is the diagram showing the sub-categories of AI models in terms of their explainability:

Figure 7. Sub-categories of AI models in terms of their explainability.
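To make these approaches a bit more concrete, here is a minimal sketch of the first two ideas: a global sensitivity check (a model property) and a single-prediction probe (local logic). It is not code from this series: it assumes scikit-learn is available, the dataset is synthetic, and the random-forest model and the one-feature perturbation are stand-ins chosen purely for illustration.

```python
# Minimal sketch: extracting explanations from a black-box model.
# Assumptions: scikit-learn is available; the data and model are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# A "black-box" model trained on synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Model properties: global sensitivity to attribute changes, measured by
# shuffling one feature at a time and watching how much the score drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: permutation importance = {imp:.3f}")

# Local logic: how the predicted probability for a single instance reacts
# when one attribute is nudged, approximating the logic behind that one decision.
instance = X[0:1].copy()
baseline = model.predict_proba(instance)[0, 1]
for i in range(X.shape[1]):
    perturbed = instance.copy()
    perturbed[0, i] += X[:, i].std()  # shift feature i by one standard deviation
    delta = model.predict_proba(perturbed)[0, 1] - baseline
    print(f"feature {i}: local effect on P(class=1) = {delta:+.3f}")
```

Global logic is usually approached differently, for example by fitting an interpretable surrogate (such as a single decision tree) to the black-box model's predictions.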
There are four main classes of white-box designs, including rule-based and case-based learning systems:
- Rule-based learning systems: algorithms that learn logical rules from data, such as inductive logic programming, decision trees, etc.
- Case-based learning systems: algorithms based on case-based reasoning. They make use of examples, cases, precedents, and/or counter-examples to explain system outputs.
- Embedded symbols & extraction systems: more biology-inspired algorithms such as Neuro-Symbolic Computing.

Let's say our model needs to learn a recipe for how to make an apple pie. We have the recipes for blueberry pie, cheesecake, shepherd's pie, and a plain cake. While the rule-based learning approach tries to come up with a set of general rules for making all types of desserts (i.e., an eager approach), the case-based learning approach generalizes the information exactly as needed to cover the particular task. Therefore, it would look for the most similar desserts to apple pie in the available data. Then, it would try to customize those recipes with small variations.

Rule-Based vs Case-Based Learning Algorithm Comparison: A Case-based Explanation Example (Figure by Author; images from Unsplash)

Final Notes

1 - Briefly covered the differences and similarities between XAI and NSC
2 - Defined and compared black-box and white-box models
3 - Approaches to make black-box models explainable (model properties, local logic, global logic)
4 - Compared Rule-based Explanations and Case-based Explanations and exemplified them

In the upcoming parts of this series, we will have hands-on examples of these methods. In the next post, we will cover the libraries and technologies available in the market for explainability work and will use some of these libraries to extract explanations from black-box models as well as white-box models.
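As a small preview of those hands-on examples, here is a minimal sketch of the eager-versus-lazy contrast behind the apple-pie example. It assumes scikit-learn; the tiny four-dessert dataset, its binary features, and the "fruit pie" label are invented purely for illustration and do not come from the original post.

```python
# Minimal sketch: an eager rule-based learner vs. a lazy case-based learner.
# Assumptions: scikit-learn is available; the dessert data is made up for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neighbors import NearestNeighbors

desserts = ["blueberry pie", "cheesecake", "shepherd's pie", "plain cake"]
features = ["has_fruit", "has_crust", "is_baked", "is_creamy"]
X = np.array([
    [1, 1, 1, 0],  # blueberry pie
    [0, 1, 1, 1],  # cheesecake
    [0, 1, 1, 0],  # shepherd's pie
    [0, 0, 1, 0],  # plain cake
])
y = np.array([1, 0, 0, 0])  # 1 = fruit pie, 0 = not a fruit pie

# Rule-based (eager): derive general rules covering all known desserts up front.
tree = DecisionTreeClassifier().fit(X, y)
print(export_text(tree, feature_names=features))

# Case-based (lazy): keep the known recipes and, when apple pie arrives,
# retrieve the most similar ones and adapt them (nearest neighbors stand in
# for case-based retrieval here).
apple_pie = np.array([[1, 1, 1, 0]])
nn = NearestNeighbors(n_neighbors=2).fit(X)
_, idx = nn.kneighbors(apple_pie)
print("most similar known desserts:", [desserts[i] for i in idx[0]])
```

The decision tree commits to explicit rules before it ever sees an apple pie, while the nearest-neighbor model defers the work to query time and explains its answer by pointing at the most similar known recipes, which is exactly the look-up-and-adapt behavior described in the example above.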