Demonstrate the reliability of your artificial intelligence systems

To meet the trust, compliance and reliability needs of artificial intelligence solutions, LNE offers services for evaluating algorithms and systems that embed AI. Its AI Assessment Laboratory (LE.AI) comprises several testing platforms that make it possible to validate the operation of an AI system, secure its use, improve its performance and ensure its ethical behaviour.

An AI system is a piece of hardware or a computer program that can make decisions, issue recommendations, adapt to new situations and learn from data in order to perform specific tasks, either autonomously or with assistance.


Why and when should the AI features of an intelligent system be qualified?

In order to optimize its operation and energy consumption (frugality), to secure its use and to explain its decision-making process (explainability with a view to acceptability), it is essential to qualify the various AI functionalities of the intelligent system, as well as the evaluation methods used, before it is put into operation.

Analyzing the performance and robustness of AI features helps assess their reliability in performing the requested task and in meeting the regulatory requirements for bringing your product to market.

Validating an AI system provides the opportunity to ensure that the model and the data used are reliable and interpretable, yielding a high-quality, robust analysis.

Thanks to the structured analysis methods developed by our technical experts and adapted to your product, which put the technical functioning of the AI to the test, it becomes possible to challenge the AI's decision-making process and to identify every possible improvement.

 

Integrating this approach as early as possible in the integration or development phase of your product, in order to identify the improvements to be made, limits development costs and also provides reassurance about the performance of the AI model used in your system, so that the right decisions can be made.

This performance analysis is essential in the most critical areas of activity, such as health, surveillance or defence, home to the high-risk systems defined in the AI Act.

Thanks to the test reports issued by LNE, you will be able to document, for example, the robustness and explainability of the algorithms developed or used.

 

An AI system qualification is part of your product's risk management analysis in order to:

  • Attest to its reliability
  • Identify and mitigate biases by determining their cause
  • Define the security of the algorithm so that it can be used in critical sectors
  • Meet integrators' and users' need for trust, and thus reduce complaints about inappropriate responses

Validating the operation of an AI system through methodical qualification thus makes it possible to provide a body of evidence (documentation) and to bring you into compliance with standards, as may be required under the MDR, the Machinery Directive or the AI Act.

Going through a third-party organization helps strengthen the confidence of end users and regulators in the AI system developed, and facilitates its deployment to market.

A comprehensive offer:

To counter cyberattacks, LNE also offers to carry out a security audit of your AI system in order to identify its various vulnerabilities and strengthen its security.

Setting up process certification for your AI and/or ISO 42001 certification is another way to meet regulatory requirements, attesting to the implementation of a quality process and controlled organizational management.

AI Assessment and AI Act

The AI Act aims to ensure that AI systems used in Europe meet standards of safety, transparency, protection of fundamental rights and reliability commensurate with the level of risk they pose. Standardization aims to mitigate the risks inherent in AI and make AI trustworthy.

The approach proposed by LNE aims to cover the various requirements of the AI Act in order to document and explain the functioning of the AI system and facilitate its validation with a notified body.

Data: quality and environmental impact

In order to reduce the energy impact of developing AI systems, it is necessary to use the right amount of evaluation or training data and to optimize the size of the models used. However, reducing the amount of data must not degrade the quality of the result.
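As a rough illustration, a learning-curve check can show how much training data a model actually needs before quality plateaus. The sketch below is a hypothetical example using scikit-learn; it is not LNE's evaluation tooling.

```python
# Hypothetical frugality check: measure how validation quality evolves
# as the amount of training data increases. If the curve plateaus early,
# a smaller training set (and the energy it saves) may be enough.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)

# Cross-validated accuracy at increasing fractions of the dataset.
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5,
)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:5d} training samples -> validation accuracy {score:.3f}")
```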

The quality and representativeness of the data used during the validation of AI are therefore essential, and must be controlled.

Thanks to their expertise in the field, our technical experts can guide you in using, at the right level, controlled and relevant data that will allow you to obtain a quality, reliable and robust analysis.

 

By doing so, you will also contribute to reducing your energy consumption in this area.

What solutions can be used to qualify an AI system?

Your algorithm or AI system can be autonomous (software, application, etc.) or integrated into a piece of hardware or a device (medical device, personal assistance robot, smart camera, etc.).

As a third-party organization, LNE has been working since 2008, through its AI Assessment Laboratory (LE.AI), on the deployment of technical solutions and the development of methods for qualifying AI systems, in order to secure their use and build confidence.

Image: LE.AI laboratory platforms for artificial intelligence evaluation

Within LE.AI, you have access to a range of services that allow you to run simulations or immerse your system in a simulated dynamic reality, at different levels of realism. Depending on your validation or support needs, our experts will guide you to the most appropriate solution.

 

How can you ensure the reliability of an evaluation of an AI system?

LNE supports end users, integrators and developers of AI systems during the different phases of product development.

Thanks to our experts' knowledge of data completeness and representativeness, we can support you in developing, or adjusting, your analysis methods and the evaluation plan for your AI system.

This analysis can be completed by carrying out:

  • An assessment of the performance and robustness of your system's AI features in order to validate them (a minimal robustness probe is sketched after this list)
  • An analysis of the qualification of the learning or assessment database (real or synthetic)
  • Technical assistance on defining the metrics to be used
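By way of illustration only, and not as LNE's actual protocol, the sketch below shows one elementary form of robustness probing: adding increasing input noise to a trained classifier and observing how its accuracy degrades.

```python
# Minimal robustness probe (illustrative only): accuracy of a trained
# classifier under increasing Gaussian input noise. Real qualification
# campaigns use domain-specific perturbations and influencing factors.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SVC().fit(X_train, y_train)
rng = np.random.default_rng(0)

for sigma in (0.0, 0.5, 1.0, 2.0, 4.0):
    noisy = X_test + rng.normal(0.0, sigma, X_test.shape)
    acc = model.score(noisy, y_test)
    print(f"noise sigma={sigma:>4}: accuracy={acc:.3f}")
# A steep drop at low noise levels flags a robustness weakness to
# investigate before deployment.
```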

Attentive to regulatory developments and able to answer your questions about the various regulations that apply to your AI system, we offer:

  • Support in the interpretation of regulatory and normative requirements via a conformity assessment (e.g. literature review according to the requirements of the AI Act) and by answering your specific questions
  • An assessment of the learning process, or of an AI functionality, in relation to AI certification needs
  • A literature and methodological analysis of an AI feature according to its application domain (influencing factors, learning protocol and limits, evaluation protocol and defined metrics, etc.)
  • An analysis of the cybersecurity of your AI system

Catalogue or tailor-made training courses can also be offered, depending on the practical questions you need to answer.

Examples of products qualified by LNE: video protection systems, air traffic management, voice assistants, medical devices integrating AI, etc.

Please note:

  1. Metrics are the reference measurements of an evaluation and help trace the origin of any identified underperformance, so it is essential that they are well defined according to the AI system and its usage (see the sketch after this list).
  2. The data from the product will also need to be qualified and annotated to evaluate the product as accurately as possible.
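As a hypothetical illustration of why metric choice matters, the sketch below uses scikit-learn's classification report on toy annotation labels: a single accuracy figure would hide which class is failing, while per-class precision and recall point at the origin of the underperformance.

```python
# Illustrative sketch: well-chosen metrics localise underperformance.
# A single accuracy number hides which classes fail; per-class precision
# and recall give a concrete lead for diagnosis. Toy data, hypothetical.
from sklearn.metrics import classification_report

# Ground-truth annotations vs. system outputs.
y_true = ["car", "car", "person", "person", "person", "bike", "bike"]
y_pred = ["car", "person", "person", "person", "car", "bike", "car"]

print(classification_report(y_true, y_pred, zero_division=0))
# The report shows the system confuses "car" with other classes,
# something an aggregate accuracy score would not reveal.
```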

Technical expertise in general-purpose generative AI

With in-depth expertise in audio, text, images and videos, our technical team can meet your various needs for the qualification and validation of your AI system:

  • Civil applications in automatic speech understanding or transcription, machine translation, diarization and named-entity detection
  • Dual-use applications of voice comparison for forensics, and translation
  • Speech recognition, character recognition (OCR, text content analysis)
  • Recognition of images, people, objects and shapes in TV/video documents
  • Anomaly detection

LNE has tools* to evaluate automatic speech transcription systems, and can support you in the development and integration of general-purpose generative AI.

Our skills extend across all sectors of activity that require the secure use of AI: health, energy, defence and surveillance, among others.

*tools:

  • Datomatic for data organization and building a baseline
  • Evalomatic to test the relevance of software by comparing its results to the references provided (a simplified view of such reference-based scoring is sketched below)
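For automatic speech transcription, reference-based scoring typically reduces to an edit distance between the system hypothesis and a human reference. The sketch below computes a word error rate (WER) this way; it is a simplified illustration, not the internals of Datomatic or Evalomatic.

```python
# Simplified sketch of reference-based scoring for speech transcription:
# word error rate (WER) via Levenshtein distance over words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance between word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # ~0.167
```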

How do you qualify a robot with embedded AI?

LNE has unique infrastructures that allow you to validate scenarios involving your AI model embedded in a hardware device.

Through a simulation or immersion in a virtual environment, you collect data that can be used to identify any weaknesses and explore the functional limits of your solution.
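As a purely hypothetical sketch of this idea, the code below sweeps two scenario parameters in a toy simulator and records the conditions under which a perception function fails; clusters of failures outline the functional limits. The `run_scenario` function stands in for a real simulation platform and a real AI model.

```python
# Hypothetical sketch: sweeping simulated scenario parameters to map the
# functional limits of an embedded perception function.
import itertools

def run_scenario(lighting_lux: float, distance_m: float) -> bool:
    # Toy stand-in for a simulator plus detector: detection succeeds when
    # enough light reaches the sensor relative to the squared distance.
    return lighting_lux / (distance_m ** 2) > 1.0

failures = [
    (lux, dist)
    for lux, dist in itertools.product((10, 100, 1000), (2, 5, 10, 20))
    if not run_scenario(lux, dist)
]
print(failures)  # failing (lighting, distance) pairs mark the limits
```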

Simulation will save you time on data collection and optimize your development costs.

Typical products evaluated: intelligent sensors and surveillance cameras, drones, robots with 2D or 3D cameras, Lidar, Sonar, mobile devices (autonomous vehicles, personal assistance, etc.), etc.

 

Possible subsidies

Did you know? Subsidies are possible

Any SME, VSE or start-up based in Europe that develops innovative solutions and needs to design, qualify or certify a system using AI is potentially eligible for grants under the European TEF (Testing and Experimentation Facilities) programme.

This subsidy takes the form of an attractive discount directly applied to the price of the services offered by LNE.

LNE, an expert committed to the implementation of trusted AI

  • 15+ years of experience and 1,500+ evaluations of AI systems in fields as sensitive as medicine, defence or autonomous vehicles
  • Independent third-party organization fully committed to building a trust framework for AI systems
  • A wide range of services to validate AI systems
  • Development of the first standard for the certification of AI processes to ensure that solutions are developed and brought to market in accordance with a set of best practices
  • Coordinator of the French partners of three European TEF (Testing and Experimentation Facilities) projects

INESIA: National Institute for the Evaluation and Security of Artificial Intelligence

With the creation of INESIA, France joins the network of 10 AI Safety Institutes around the world and, thanks to its recognized skills in the field, LNE is one of the four national players in AI evaluation and safety. The institute is led by the SGDSN[1] and the DGE[2], and will be technically coordinated by LNE, ANSSI[3], Inria[4] and PEReN[5].

Its main missions will centre on assessing the security of AI models and systems, covering several dimensions, from the performance and reliability of the AI model to national security issues, with a particular focus on model security and on the risks qualified as "systemic" by the European AI Act. The institute will also support the implementation of AI regulation.


[1] General Secretariat for Defence and National Security - [2] General Directorate for Enterprise - [3] National Agency for Information Systems Security - [4] National Institute for Research in Digital Science and Technology - [5] Digital Regulation Expertise Centre

Learn more

Check out our Expert Reviews: