
Project: Measuring and Improving Explainability for AI-based Face Recognition

Acronym: XAIface
Main Objective:
Face recognition has become a key technology in our society, frequently used in multiple applications, while also raising privacy concerns. As face recognition solutions based on artificial intelligence (AI) become widespread, it is critical to fully understand and explain how these technologies work in order to make them more effective and better accepted by society. In this project, we focus on analysing the factors that influence the final decision of an AI-based face recognition system, as an essential step to understand and improve the underlying processes involved. The scientific approach pursued in the project is designed so that it is also applicable to other use cases, such as object detection and pattern recognition tasks, in a wider set of applications.
Thanks to the interdisciplinary nature of the consortium, the outcomes of XAIface will affect many fields and can be summarized as follows: (i) develop clear legal guidelines on the use and design of AI-based face recognition, following a privacy-by-design approach; (ii) disentangle demographic information (age, gender, ethnicity) from the overall face representation, both to understand the impact of such traits on face recognition and to develop demographic-free face recognition; (iii) address fairness and non-discrimination issues by de-biasing during training; (iv) optimize the tradeoff between interpretability and performance; (v) create tools to assess and measure the performance of AI-based face recognition systems and to explain their decisions; (vi) analyse the impact of image coding, to better understand how future AI-based coding solutions may differ from a recognition-explainability point of view. The achieved results will feed into the implementation of an end-to-end face recognition system for studying the impact of the various system processes on recognition performance and explainability. This will provide a case study on how to perform explainability analysis with the tools provided by the project.
Reference: CHIST-ERA/0003/2019
Funding: FCT
Approval Date: 24-02-2021
Start Date: 01-05-2021
End Date: 30-04-2024
Team: Fernando Manuel Bernardo Pereira, Paulo Luis Serras Lobato Correia, João Miguel Duarte Ascenso, Naima Bousnina
Groups: Multimedia Signal Processing
Partners: EPFL / MMSPG, EURECOM / Digital Security, JOANNEUM RESEARCH / DIGITAL, Univ. Vienna
Local Coordinator: Fernando Manuel Bernardo Pereira