Each use case is expected eventually to require a government order, or perhaps even a specific law. This can happen only if the quality of the algorithms can be demonstrated, including their reliability, robustness, degree of local interpretability, absence of discrimination, false-positive rate, and level of human control and oversight. The LIMPID project helps formulate the legal and ethical requirements for reliability, fairness, and local explainability, and then designs or adapts tools to meet the identified requirements. Each proposed technical solution is refined iteratively based on feedback from the legal specialists.
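Two of the criteria above, the false-positive rate and the absence of discrimination, lend themselves to a simple quantitative check: comparing false-positive rates across demographic groups. A minimal sketch follows; the function names and data layout are illustrative assumptions, not part of any LIMPID tool:

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute the false-positive rate per group.

    `records` is a list of (group, y_true, y_pred) tuples
    with binary labels (0 = negative, 1 = positive).
    """
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, y_true, y_pred in records:
        if y_true == 0:
            neg[group] += 1
            if y_pred == 1:
                fp[group] += 1
    # Only groups with at least one actual negative get a rate.
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

def fpr_gap(rates):
    """Largest difference in false-positive rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Example: group "A" is wrongly flagged half the time, group "B" never.
records = [("A", 0, 1), ("A", 0, 0), ("B", 0, 0), ("B", 0, 0)]
rates = false_positive_rates(records)  # {"A": 0.5, "B": 0.0}
gap = fpr_gap(rates)                   # 0.5
```

A legal requirement could then be phrased as a threshold on such a gap, which is the kind of bridge between legal text and measurable quantity that the project iterates on.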
Drawing on international benchmarks and on examples from use cases similar to ours, LIMPID develops, and discusses with institutional stakeholders, an initial set of legal and ethical requirements that could reasonably be expected to apply to an image recognition tool in our specific use cases. These requirements are refined as we experiment with different approaches to reliability and in response to stakeholder input. At the end of the project, we will propose a final list of requirements and perform a gap analysis against the technical reliability solutions.