Abstract
Machine learning is now an essential and widely used tool in industry. To evaluate its reliability, well-known metrics are broadly applied, but they focus on how precise, accurate, or sensitive a model is. These metrics, however, offer no view of the consistency or stability of the predictions, that is, of how reliable the model actually is, which could be assessed if the reasons behind its predictions were understood. In the present work, we propose a novel method, applicable to image classifiers, that enables an objective, visual understanding of the rationale behind a prediction.