
Project Snapshot | XAIface – Measuring and Improving Explainability for AI-Based Face Recognition


by IT on 11-06-2025

By Naima Bousnina, Paulo Lobato Correia & Fernando Pereira 

As face recognition technologies become increasingly integrated into our daily lives, from unlocking smartphones to enhancing security systems, concerns about fairness, transparency, and accountability have become more pressing than ever. The XAIface project (“Measuring and Improving Explainability for AI-Based Face Recognition”) was launched in direct response to these challenges, aiming to build AI systems that are not only high-performing, but also explainable, fair, and legally sound.

At its core, XAIface confronts a fundamental problem: modern face recognition systems, especially those based on deep learning, operate largely as ‘black boxes’. While their accuracy can be impressive, users, and often even developers, lack insight into how decisions are made. This opacity raises serious concerns about bias, discrimination, and misuse.

XAIface addressed these issues by exploring how facial representations can be disentangled from sensitive demographic attributes, such as gender, age, or ethnicity. This helps to reduce bias and increase fairness without compromising recognition performance. A key challenge is striking the right balance between performance and interpretability, a well-known trade-off in AI. To navigate this, the project developed model-agnostic, post-hoc explainability tools that can be applied across different face recognition models without altering their core architectures.
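The project's own methods are detailed in its publications; purely as a minimal illustration of what "model-agnostic and post-hoc" means in practice, the sketch below probes an arbitrary black-box verification function by occluding image patches and recording the drop in the match score. The names `verify`, `probe`, and `reference` are hypothetical placeholders, not XAIface APIs:

```python
import numpy as np

def occlusion_heatmap(verify, probe, reference, patch=16, stride=8, fill=0.0):
    """Model-agnostic occlusion sensitivity map for face verification.

    `verify(img_a, img_b)` is any black-box similarity function returning a
    score; the matcher itself is never modified (post-hoc).
    """
    h, w = probe.shape[:2]
    baseline = verify(probe, reference)
    heat = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = probe.copy()
            occluded[y:y + patch, x:x + patch] = fill  # mask one patch
            drop = baseline - verify(occluded, reference)
            heat[y:y + patch, x:x + patch] += drop     # big drop = important region
            counts[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(counts, 1)
```

Because the technique only needs similarity scores for perturbed inputs, it applies unchanged to any face recognition model, which is the essence of the model-agnostic approach described above.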

A key innovation was the creation of visual explanation tools that generate intuitive heatmaps, showing which parts of the face most influenced the model’s decision. These are complemented by a human-centered evaluation protocol, where users compare visual outputs in a pairwise fashion to assess clarity and usefulness. This approach allowed the project to derive statistically sound, user-driven metrics for explainability, a rare yet crucial dimension in current AI evaluation frameworks.
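The article does not name the statistical model behind these user-driven metrics; one standard way to turn pairwise preferences into a statistically grounded ranking is the Bradley-Terry model, sketched below under that assumption. The win counts are illustrative, not project data:

```python
import numpy as np

def bradley_terry(wins, iters=200, tol=1e-9):
    """Fit Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i, j] = number of times explanation method i was preferred over j.
    Returns one strength score per method; higher means clearer to users.
    """
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        p_new = np.empty(n)
        for i in range(n):
            num = wins[i].sum()  # total wins of method i
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(n) if j != i)
            p_new[i] = num / den if den > 0 else p[i]
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# Hypothetical example: users compared three explanation methods pairwise.
votes = np.array([[0, 12, 15],
                  [8,  0, 11],
                  [5,  9,  0]])
print(bradley_terry(votes))  # relative clarity scores, summing to 1
```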

Beyond the technical scope, XAIface also acknowledged the legal and regulatory landscape surrounding biometric technologies. The project contributed to developing guidelines that align with data protection and anti-discrimination laws, addressing pressing issues like informed consent, transparency of automated decisions, and fairness across demographic groups.

To support both the research community and broader society, XAIface developed a comprehensive suite of open-source tools, evaluation protocols, and demonstrator applications. These include:

  • Custom benchmarks for evaluating the trade-offs between explainability and recognition accuracy;
  • Protocols for assessing face recognition under image compression, which is critical for deployment in bandwidth-constrained environments (a minimal sketch of this idea follows the list);
  • A web-based demonstrator, showcasing explainable face verification in action and providing an interactive platform for users to explore the project’s innovations.
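The project's exact protocols are available through its public outputs; as a hedged sketch of the idea behind the compression item above, the snippet below re-encodes probe images at decreasing JPEG quality and tracks the mean similarity between genuine pairs. The `embed` function, the quality levels, and the use of cosine similarity are assumptions for illustration:

```python
from io import BytesIO
import numpy as np
from PIL import Image

def verification_under_compression(embed, img_pairs, qualities=(90, 50, 10)):
    """Measure how JPEG compression degrades face-verification similarity.

    `embed(img)` is any face embedding function (hypothetical placeholder);
    `img_pairs` is a list of (probe, reference) RGB PIL images of the same person.
    """
    def jpeg(img, q):
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=q)  # lossy re-encode
        buf.seek(0)
        return Image.open(buf).convert("RGB")

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    results = {}
    for q in qualities:
        scores = [cosine(embed(jpeg(p, q)), embed(r)) for p, r in img_pairs]
        results[q] = float(np.mean(scores))
    return results  # mean genuine-pair similarity per JPEG quality level
```

Plotting these scores against quality level shows how quickly a given matcher loses reliability as bandwidth constraints tighten, which is what such a protocol is designed to expose.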

Instituto de Telecomunicações (IT) played a central role in the project, contributing to the technical development of explainability methods, human-centered evaluations, and system-level performance analysis.

XAIface released several public outputs, including datasets, software libraries, scientific publications, and video explanation tools. These resources aim to promote a more responsible and transparent approach to AI-driven face recognition, empowering developers, researchers, regulators, and the general public alike.

Ultimately, XAIface represented a step forward in designing face recognition systems that can be trusted and understood, paving the way for AI technologies that respect both human values and legal frameworks.

 

More on this project: https://www.it.pt/Projects/Index/4762