An Analysis of the Explanatory Capacity of Committees of Explainers
Deep Learning; Image; Explainable Artificial Intelligence (XAI)
In recent years, Artificial Intelligence has established itself as an essential tool while raising new challenges, among them the explainability of AI model decisions, which is crucial for making models more transparent and aligned with ethical and regulatory demands. However, visual explainability has advanced without a well-defined consensus among researchers, and one of the main open challenges is identifying the best techniques and evaluation metrics. This question remains underexplored for ensemble-type explainers applied to convolutional neural networks. This work therefore proposes to evaluate ensemble explainers, both existing ones and exploratory implementations, using a systematic evaluation methodology based on a broad set of metrics together with clustering techniques. The approach seeks to identify the best ensemble explainers, and how to construct them, in order to improve model interpretability.
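The abstract does not specify how the ensemble explainers are built; a common construction, and a plausible reading here, is to combine the attribution maps of several individual explainers into one consensus map. The sketch below illustrates that idea under stated assumptions: each explainer (e.g., Grad-CAM, LIME, SHAP) is assumed to produce a 2-D saliency map for the same input image, the maps are min-max normalized to make them comparable, and the ensemble is a weighted mean. All function names and the placeholder maps are illustrative, not the paper's actual method.

```python
import numpy as np

def normalize_map(saliency: np.ndarray) -> np.ndarray:
    """Scale a saliency map to [0, 1] so maps from different explainers are comparable."""
    smin, smax = saliency.min(), saliency.max()
    if smax == smin:
        return np.zeros_like(saliency)
    return (saliency - smin) / (smax - smin)

def ensemble_explanation(saliency_maps, weights=None) -> np.ndarray:
    """Combine per-explainer saliency maps into one consensus map via a weighted mean."""
    maps = np.stack([normalize_map(m) for m in saliency_maps])
    if weights is None:
        weights = np.ones(len(maps))            # uniform committee by default
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()           # normalize so the result stays in [0, 1]
    return np.tensordot(weights, maps, axes=1)  # weighted average over the explainer axis

if __name__ == "__main__":
    # Random placeholders stand in for real Grad-CAM / LIME / SHAP outputs on one image.
    rng = np.random.default_rng(0)
    gradcam_map = rng.random((224, 224))
    lime_map = rng.random((224, 224))
    shap_map = rng.random((224, 224))
    combined = ensemble_explanation([gradcam_map, lime_map, shap_map])
    print(combined.shape, float(combined.min()), float(combined.max()))
```

Per-map normalization before averaging is the key design choice: without it, an explainer whose raw attributions have a larger dynamic range would dominate the committee regardless of its weight.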