The Centre for Responsible AI's overall mission is to revolutionize the AI landscape responsibly and to develop the next generation of AI products that are transparent, fair, and trustworthy. It addresses crucial AI use cases through advanced machine learning and natural language processing approaches:
Distillation and Model Compression: Explore efficient strategies to compress large pre-trained models (e.g., BERT, GPT-3) without compromising accuracy; investigate one-model-fits-many strategies using adaptors and prompting, while quantifying the energy efficiency of production systems in terms of carbon emissions.
Explainable AI and Causal Inference: Develop self-explanatory deep models enhancing interpretability and stability; create a generic framework for post-hoc interpretability, allowing users to fine-tune explanations based on characteristics like size, complexity, vocabulary, and domain.
Reliability and Robustness of ML Models: Research strategies providing reliability measures, including confidence scores and uncertainty quantification; develop formal uncertainty quantification methods to streamline model development and quantify decision uncertainty.
Robust and Trustworthy Language and Translation Technologies: Address critical mistakes in NLP systems, ensuring robustness in healthcare and finance applications; develop "Responsible MT" systems for healthcare, translation quality estimation, and prevention of critical mistakes.
Brain-Computer Interfaces with Pretrained Language Models: Propose an innovative approach for language input using brain-computer interfaces and pre-trained neural language models; focus on controlled language generation for efficient communication and device control in rehabilitation scenarios.
Context-Aware MT for Conversational Data: Extend context-aware MT research to conversational data, incorporating dialogue history, metadata, and multimodal information; mitigate translation challenges related to ambiguity and gender bias in conversational contexts.
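To make the distillation line of work above concrete, here is a minimal sketch of the standard knowledge-distillation objective (temperature-softened KL divergence between teacher and student outputs, after Hinton et al.). This is an illustrative formulation, not the Centre's specific method; the function names and toy logits are hypothetical.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T produces softer distributions."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between the temperature-softened teacher and student
    distributions, scaled by T^2 as is conventional in distillation."""
    p = softmax(teacher_logits, T)  # soft targets from the large teacher
    q = softmax(student_logits, T)  # compressed student's predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

# A student that exactly matches the teacher incurs (near-)zero loss:
teacher = [4.0, 1.0, 0.5]
print(distillation_loss(teacher, teacher, T=2.0))  # ~0.0
```

In practice the student is trained on a weighted mix of this soft-target loss and the usual hard-label cross-entropy.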
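The post-hoc interpretability work above can be illustrated with one of the simplest model-agnostic explanation techniques: occlusion (leave-one-out) attribution, which scores each input feature by how much the model's output changes when that feature is replaced by a baseline. This is a generic sketch under assumed names (`predict`, `occlusion_attribution`), not the framework developed in the project.

```python
import numpy as np

def occlusion_attribution(predict, x, baseline=0.0):
    """Model-agnostic post-hoc explanation: attribute to each feature
    the drop in the model's output when that feature is occluded."""
    x = np.asarray(x, dtype=float)
    base_score = predict(x)
    scores = np.empty_like(x)
    for i in range(x.size):
        x_occ = x.copy()
        x_occ[i] = baseline       # replace one feature with the baseline
        scores[i] = base_score - predict(x_occ)
    return scores

# Toy linear model: the weights reappear as the attributions.
w = np.array([2.0, -1.0, 0.0])
predict = lambda v: float(w @ v)
print(occlusion_attribution(predict, [1.0, 1.0, 1.0]))  # [ 2. -1.  0.]
```

Because it only needs black-box access to `predict`, the same procedure applies to any classifier or regressor, which is what makes such explanations tunable to user-facing characteristics like size and vocabulary.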
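As a small illustration of the reliability measures mentioned above, the sketch below derives a confidence score and a predictive-entropy uncertainty estimate from stochastic forward passes (e.g., Monte Carlo dropout samples). It assumes the samples are already probability vectors; the function name and data are hypothetical.

```python
import numpy as np

def predictive_uncertainty(prob_samples):
    """From stochastic forward-pass samples of class probabilities,
    return the mean prediction, a confidence score (top-class
    probability), and the predictive entropy as an uncertainty measure."""
    probs = np.asarray(prob_samples, dtype=float)
    mean = probs.mean(axis=0)                          # averaged prediction
    confidence = float(mean.max())                     # top-class probability
    entropy = float(-(mean * np.log(mean + 1e-12)).sum())
    return mean, confidence, entropy

# Agreeing samples -> high confidence, low entropy:
samples = [[0.90, 0.10], [0.85, 0.15], [0.95, 0.05]]
mean, conf, ent = predictive_uncertainty(samples)
```

Thresholding such scores is one common way to decide when a model's decision should be deferred to a human, which is central to deploying ML responsibly in domains like healthcare and finance.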
At the Centre for Responsible AI, the team is committed to advancing AI responsibly and ethically, ensuring transparency and trust in the development of innovative technologies.
The Centre for Responsible AI is a large collaborative initiative involving 11 startups (Unbabel, Feedzai, Sword Health, Apres, Automaise, Emotai, NeuralShift, Priberam, Visor.ai, YData, and YooniK), 8 research centers in Lisbon, Porto, and Coimbra (Champalimaud Foundation, CISUC, Fraunhofer Portugal AICOS, FEUP, INESC-ID, IST, and IT), and several partners from the healthcare (Bial, Centro Hospitalar São João, Luz Saúde), tourism (Pestana Group), and retail (Sonae) sectors.
The participation of Instituto de Telecomunicações in this project is centered on the following research activities: Energy-Efficient and Sustainable AI; Transparent, Fair, and Explainable AI; Language Technologies and Embodied Human-AI Interaction; and Multilingual and Contextualized Conversational AI.
The project is funded by the PRR (Portugal's Recovery and Resilience Plan), with a total budget of € 78 000 000.00 and an IT local budget of € 1 503 276.00.
The Halo project is embedded within the Centre for Responsible AI. See also: https://www.it.pt/News/NewsPost/4956
Project official website: https://centerforresponsible.ai/