Neural Prototype Trees: Explainable Image Classification

by Nakul Upadhya, Jun 2023


Learning Leaf Distributions

In a standard decision tree, the label of a leaf is determined by the samples that end up in that leaf, but in a soft tree, the leaf distributions are part of the global learning problem. However, the authors noticed that jointly learning the leaf distributions and the prototypes with gradient descent yields inferior classification results. To rectify this, they leveraged a derivative-free strategy to obtain an update scheme for the leaf probabilities:

Figure 4: Update scheme. c_l^t is the class distribution in leaf l at epoch t, y is the true label, ŷ is the prediction, and π_l is the path probability of that leaf. (Figure from Nauta et al. 2021 [1])
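To make this concrete, here is a minimal PyTorch-style sketch of the update described in the caption above. The function and argument names are mine, chosen for illustration; this is not the ProtoTree repo's API.

```python
import torch

def update_leaf_distribution(leaf_dist, path_prob, y_onehot, y_pred, eps=1e-12):
    """Derivative-free leaf update (sketch, following the caption's notation).

    leaf_dist: (K,)   current class distribution c_l^t of leaf l
    path_prob: (B,)   path probabilities pi_l(x) for each sample in the batch
    y_onehot:  (B, K) one-hot true labels y
    y_pred:    (B, K) model predictions yhat(x)
    """
    # c_l^{t+1} is proportional to: sum over the batch of c_l^t * y * pi_l(x) / yhat(x)
    contrib = leaf_dist.unsqueeze(0) * y_onehot * path_prob.unsqueeze(1) / (y_pred + eps)
    new_dist = contrib.sum(dim=0)
    return new_dist / new_dist.sum()  # renormalize so it stays a distribution
```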

This update scheme was interleaved with mini-batch gradient descent (used to learn the prototypes and the convolution parameters) to create an efficient learning procedure.
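A rough outline of one epoch of this interleaved procedure might look like the sketch below, which reuses the `update_leaf_distribution` function from the previous snippet and assumes a hypothetical `model` object exposing its leaves and path probabilities (again, illustrative names, not the actual repo API):

```python
import torch
import torch.nn.functional as F

def train_epoch(model, dataloader, optimizer):
    """One epoch: gradient descent on prototypes/CNN, derivative-free leaves."""
    for x, y in dataloader:
        y_pred = model(x)                          # predicted class probabilities
        loss = F.nll_loss(torch.log(y_pred + 1e-12), y)
        optimizer.zero_grad()
        loss.backward()                            # gradients reach the prototypes
        optimizer.step()                           # and convolution parameters only
        with torch.no_grad():                      # leaves: no gradients involved
            y_onehot = F.one_hot(y, y_pred.size(1)).float()
            for leaf in model.leaves:              # hypothetical model attributes
                leaf.dist = update_leaf_distribution(
                    leaf.dist, model.path_prob(x, leaf), y_onehot, y_pred)
```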

Pruning

To aid interpretability, the authors also introduced a pruning mechanism. If a leaf contains an effectively uniform distribution, it has little discriminating power, so it is better to prune it: smaller trees are easier to read and interpret. Mathematically, the authors define a threshold t and remove all leaves whose highest class probability is at most t (max(c_l) ≤ t). If all leaves in a subtree are removed, that subtree and its associated prototypes can be removed as well, yielding a more compact tree. Typically, t = 1/K + ε, where K is the number of classes and ε is a small tolerance. A rough sketch of how such pruning could be implemented follows (using a minimal illustrative node class, not the ProtoTree repo's data structures).
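```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    """Minimal illustrative tree node (not the ProtoTree repo's class)."""
    dist: Optional[List[float]] = None   # class distribution c_l (leaves only)
    left: Optional["Node"] = None
    right: Optional["Node"] = None

    def is_leaf(self):
        return self.left is None and self.right is None

def prune(node, threshold):
    """Remove leaves with max(c_l) <= threshold; collapse emptied subtrees."""
    if node.is_leaf():
        return node if max(node.dist) > threshold else None
    node.left = prune(node.left, threshold)
    node.right = prune(node.right, threshold)
    if node.left is None and node.right is None:
        return None                      # whole subtree (and its prototype) goes
    if node.left is None:
        return node.right                # bypass the now-redundant decision node
    if node.right is None:
        return node.left
    return node
```

With K classes, one would call something like `prune(root, 1/K + 1e-3)` to drop near-uniform leaves while keeping every leaf that actually favors some class.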

Figure 5: Pruning visualization. (Figure from Nauta et al. 2021 [1])

Performance

Figure 6: Mean accuracy and standard deviations. ProtoTree ens. is an ensemble of 3 or 5 prototype trees. (Figure from Nauta et al. 2021 [1])

The authors benchmarked their method on the CUB-200-2011 (birds) and Stanford Cars datasets against other interpretable image-recognition methods (such as ones with attention-based interpretability). They found that a relatively small ensemble of shallow trees (heights 9 and 11) came close to state-of-the-art accuracy.

Interpretable deep-learning image classifiers offer a number of advantages over black-box models. They can help to build trust, improve debugging, and explain predictions. Additionally, they can be used to explore the data and learn more about the relationships between different features.

Overall, Neural Prototype Trees are a promising new approach to trustworthy image recognition. A doctor is more likely to trust a cancer-detection model if they can inspect the image characteristics the model is looking at. These prototype trees can even be augmented with mechanisms like attention to increase accuracy further!

  1. GitHub repository for the Neural Prototype Tree: https://github.com/M-Nauta/ProtoTree
  2. If you’re interested in Explainable Machine Learning and AI, consider giving me a follow: https://medium.com/@upadhyan.

References

[1] M. Nauta, R. van Bree, C. Seifert. Neural Prototype Trees for Interpretable Fine-Grained Image Recognition. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

[2] C. Chen, O. Li, C. Tao, A.J. Barnett, J. Su, C. Rudin. This Looks Like That: Deep Learning for Interpretable Image Recognition. 33rd Conference on Neural Information Processing Systems (NeurIPS), 2019.


