More than Just Reptiles: Exploring the Iguanas Toolkit for XAI Beyond Black Box Models


“AI thinking” Source: Created by author using Dall-E

Balancing complexity and transparency for effective decision making

As more and more industries adopt machine learning as part of their decision-making processes, an important question arises: How can we trust models whose reasoning we cannot understand, and how can we confidently make high-stakes decisions based on their output?

For applications in safety-critical, heavy-asset industries, where errors can lead to disastrous outcomes, a lack of transparency can be a major roadblock to adoption. This is where model interpretability and explainability are becoming increasingly important.

Think of models along a spectrum of understandability: complex deep neural networks occupy one end, while transparent rule-based systems reside on the other. In many cases, it is just as important for a model’s output to be interpretable as for it to be accurate.

Interpretability vs. Accuracy. Source: created by author

In this blog post, we’ll explore a method for automatically generating rule sets directly from data, which enables building a decision support system that is fully transparent and interpretable. Of course, not every problem can be solved satisfactorily by such simple models. However, initiating any modeling endeavor with a simple baseline model offers several key advantages (a concrete sketch of such a rule set follows the list below):

  • Swift Implementation: Quick setup to establish a foundational model
  • Comparative Reference: A benchmark for evaluating more advanced techniques
  • Human-Understandable Insights: Basic explainable models yield valuable human-interpretable insights
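
To make the idea of a transparent rule set concrete, here is a minimal sketch of how such rules can be represented and applied. The feature names and thresholds are hypothetical and purely illustrative; the point is that each rule is a plain, human-readable condition, and a flagged row can be explained simply by listing the rules that fired.

```python
import pandas as pd

# Hypothetical rules: each one is a human-readable condition string
# that pandas can evaluate directly against the data.
rules = {
    "HighTempLowPressure": "temperature > 85 and pressure < 1.2",
    "VibrationSpike": "vibration > 0.9",
}

# Toy sensor readings, purely for illustration.
df = pd.DataFrame({
    "temperature": [90, 70, 88],
    "pressure": [1.0, 1.5, 1.1],
    "vibration": [0.2, 0.95, 0.4],
})

# Each rule becomes a binary column; the "explanation" for a flagged
# row is simply the list of rules that evaluated to True.
for name, condition in rules.items():
    df[name] = df.eval(condition)

print(df)
```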

To my fellow data science practitioners reading this post: I acknowledge that this method resembles simply fitting a decision tree model. However, as you continue reading, you’ll see that it is tailored to mimic how humans create rules, which makes its output easier to interpret than that of a typical decision tree model (something that often proves difficult in practice).
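
For contrast, the sketch below (using scikit-learn rather than the Iguanas API) shows what raw decision tree output looks like. Even a deliberately shallow tree is rendered as nested if/else branches that exhaustively partition the feature space, which is noticeably harder to scan than a flat list of standalone rules.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a deliberately shallow tree so the printed structure stays small.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the tree as nested if/else conditions; compare
# this against the flat, standalone rules in the earlier sketch.
print(export_text(tree, feature_names=list(X.columns)))
```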


