LIMPID
Leveraging Interpretable Machines for Performance
Improvement and Decision

A three-year interdisciplinary research program conducted by Télécom Paris and IDEMIA, funded in part by the French National Research Agency (ANR)

A three-year interdisciplinary research program on trustworthy AI

"The European approach for AI aims to promote Europe’s innovation capacity in the area of AI while supporting the development and uptake of ethical and trustworthy AI across the EU economy"   European Commission, 2020

Artificial Intelligence is developing fast yet is still in its infancy. Its potential is considerable: it can help in health, security, industrial processes and much more. However, before its use can be recommended to citizens, or relied upon by professionals, AI needs a trust framework, much as medicines have today, with protocols that give doctors, insurers and, ultimately, patients the confidence to use them.

According to the ethics guidelines of the European High-Level Expert Group on AI, trustworthy AI requires, among other things, reliability, robustness, fairness and explainability. Social and legal acceptability can be achieved only if the quality of the algorithms can be demonstrated, including their reliability, robustness, degree of local interpretability, absence of discrimination, false-positive rate, and level of human control and oversight.

A new generation of machine learning tools

The goal of LIMPID is to contribute to the design of a new generation of machine learning tools oriented towards trustworthiness rather than performance alone. The project delivers trustworthy AI by design for innovative video analytics in two use cases with strong legal requirements.

Both use cases demand citizens' full confidence and accountability, which makes the legal and ethical requirements particularly challenging. By developing technical solutions for reliability, robustness, fairness and explainability in these use cases while articulating the corresponding legal and ethical requirements, LIMPID sets a best-practice benchmark for trustworthy AI in many other kinds of applications.

3 research themes of trustworthiness

LIMPID addresses the main pillars of trustworthiness in AI along three axes: reliability & robustness, fairness, and explainability. These axes are intertwined and are studied through two use cases.

Research notes

Some of our scientific publications and position papers on topics related to trustworthy AI are also presented here in a form intended for a general audience.

Team & Partners

LIMPID (Leveraging Interpretable Machines for Performance Improvement and Decision) is a three-year interdisciplinary research program that started in December 2020. It is conducted by Télécom Paris and IDEMIA and funded in part by the French National Research Agency (ANR, Project 20-CE23-0028). IDEMIA is the scientific coordinator of the project.