Understanding AI-driven systems has become fundamental, particularly when these systems are employed for critical decision-making, as is the case in cybersecurity. In this regard, explainability has been widely advocated as a cornerstone for comprehending such models, thereby enhancing trust and accountability in data-driven systems. Building on the successful use case of a risk exposure assessment framework that aims to proactively reduce an organization's attack surface, we propose an explainable proxy founded on generating systematic evaluations of explanations. The proposed framework offers a swift and dependable method for assessing explanations, tailored specifically to the cybersecurity domain.