
To meet the trust, compliance and reliability needs of artificial intelligence solutions, LNE offers services for evaluating algorithms and systems embedding AI. Its AI Assessment Laboratory (LE.IA) is made up of several testing platforms that make it possible to validate the operation of an AI system, secure its use, improve its performance, and ensure its ethical behaviour.
An AI system is hardware or a computer program that can make decisions, issue recommendations, adapt to new situations, and learn from data in order to perform specific tasks autonomously or with assistance.
To optimize its operation and energy consumption (frugality), to secure its use and to explain its decision-making process (explainability, with a view to acceptability), it is essential to qualify the various AI functionalities of the intelligent system, as well as the evaluation methods used, before it is put into operation.
Analyzing the performance and robustness of AI features helps assess the system's reliability in performing the requested task and in meeting the regulatory requirements for bringing your product to market.
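For illustration, a minimal sketch of one common robustness check follows: comparing a model's accuracy on clean inputs with its accuracy on noise-perturbed inputs. The model, data and noise level below are hypothetical placeholders, not LNE's actual protocol.

```python
# Minimal robustness-check sketch (illustrative only): compare accuracy on
# clean inputs with accuracy after Gaussian noise is added to the inputs.
import random

def accuracy(model, inputs, labels):
    return sum(model(x) == y for x, y in zip(inputs, labels)) / len(labels)

def perturb(x, noise=0.1):
    return [v + random.gauss(0, noise) for v in x]

# Hypothetical threshold model over 1-D features; ground truth is taken
# from the clean decision rule, so clean accuracy is 1.0 by construction.
model = lambda x: int(sum(x) > 0.5)
inputs = [[random.random()] for _ in range(1000)]
labels = [model(x) for x in inputs]

clean = accuracy(model, inputs, labels)
noisy = accuracy(model, [perturb(x) for x in inputs], labels)
print(f"clean accuracy: {clean:.3f}, accuracy under noise: {noisy:.3f}")
```

A drop between the two figures indicates inputs near the decision boundary whose classification flips under small perturbations, which is exactly what a robustness analysis aims to expose.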
Validating an AI system ensures that the model and the data used are reliable and interpretable, yielding a high-quality, robust analysis.
Thanks to the structured analysis methods developed by our technical experts and adapted to your product, which put the technical functioning of the AI to the test, it becomes possible to challenge the AI's decision-making process and identify every possible improvement.
Integrating this approach as early as possible in the development or integration phase of your product, in order to identify the improvements to be made, limits development costs and provides assurance about the performance of the AI model used in your system, helping you make the right decision.
This performance analysis is essential in the most critical areas of activity, such as the high-risk systems defined in the AI Act in health, surveillance or defence.
Thanks to the test reports issued by LNE, you will be able to document, for example, the robustness and explainability of the algorithms you develop or use.
Qualification of an AI system forms part of your product's risk-management analysis.
Validating the operation of an AI system through methodical qualification thus provides a body of evidence (documentation) and brings you into compliance with standards such as those required by the MDR, the Machinery Directive or the AI Act.
Going through a third-party organization strengthens the confidence of end users and regulators in the AI system you develop, and facilitates its deployment to market.
A complete offering:
To counter cyberattacks, LNE also offers to carry out a security audit of your AI system in order to identify its various vulnerabilities and strengthen its security.
Setting up process certification of your AI and/or ISO 42001 certification is another way to meet regulatory requirements, attesting to the implementation of a quality process and controlled organizational management.
The AI Act aims to ensure that AI systems used in Europe meet standards of safety, transparency, protection of fundamental rights and reliability commensurate with the level of risk they pose. Standardization aims to mitigate the risks inherent in AI and make AI trustworthy.
The approach proposed by LNE aims to cover the various requirements of the AI Act in order to document and explain the functioning of the AI system and facilitate its validation with a notified body.
To reduce the energy impact of developing AI systems, it is necessary to use the right amount of evaluation or training data and to optimize the size of the models used. However, reducing the amount of data must not compromise the quality of the result.
The quality and representativeness of the data used during the validation of AI are therefore essential, and must be controlled.
Thanks to their field expertise, our technical experts can guide you in using, at the right level, controlled and relevant data that will give you a reliable, robust, high-quality analysis.
By doing so, you will also contribute to reducing your energy consumption in this area.
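As one concrete illustration of such data control, the sketch below compares an evaluation set's class distribution against the distribution expected in operation, flagging under-represented classes before metrics computed on that set are trusted. The classes, counts and threshold are invented for the example; this is not LNE tooling.

```python
# Minimal sketch (illustrative only): check that an evaluation set's class
# distribution matches the expected operating distribution.
from collections import Counter

def distribution_gap(labels, expected):
    """Per-class gap between observed label frequency and the expected
    operating distribution (both expressed as fractions summing to 1)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: counts.get(cls, 0) / total - target
            for cls, target in expected.items()}

# Hypothetical defect-detection evaluation set and assumed field conditions.
eval_labels = ["ok"] * 940 + ["defect"] * 60
expected = {"ok": 0.90, "defect": 0.10}

for cls, gap in distribution_gap(eval_labels, expected).items():
    flag = "UNDER-REPRESENTED" if gap < -0.02 else "ok"
    print(f"{cls}: observed - expected = {gap:+.3f} ({flag})")
```

Here the "defect" class is under-represented relative to the assumed operating conditions, so accuracy measured on this set would overstate real-world performance.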
Your algorithm or AI system can be stand-alone (software, application, etc.) or integrated into hardware or a device (medical device, personal-assistance robot, smart camera, etc.).
As a third-party organization, LNE has been working since 2008, through its AI Assessment Laboratory (LE.IA), on deploying technical solutions and developing methods for qualifying AI systems, in order to secure their use and build confidence.
Within LE.IA, you have access to a range of services for carrying out simulations or immersing your system in a simulated dynamic reality, with different levels of realism. Depending on your validation or support needs, our experts will guide you towards the most appropriate solution.
LNE supports end users, integrators and developers of AI systems during the different phases of product development.
Thanks to our experts' expertise in data completeness and representativeness, we can support you in developing or adjusting your analysis methods and the evaluation plan for your AI system.
This analysis can be supplemented by carrying out:
Attentive to regulatory developments and able to answer your questions about the various regulations that apply to your AI system, we offer:
Catalogue or tailor-made training courses can be offered, depending on the practical questions you need answered.
Examples of products qualified by LNE: video protection systems, air traffic management, voice assistants, medical devices integrating AI, etc.
With in-depth expertise in audio, text, images and video, our technical team can meet your various needs for the qualification and validation of your AI system.
LNE has tools to evaluate automatic speech transcription systems, and can support you in the development and integration of general-purpose generative AI.
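For context on what such an evaluation measures, below is a minimal sketch of word error rate (WER), the standard metric for automatic speech transcription, computed via word-level edit-distance alignment. The example sentences are invented, and this is not the LNE tool itself.

```python
# Minimal WER sketch (illustrative only): word-level Levenshtein distance
# between a reference transcript and a system hypothesis, normalized by
# the reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical transcription pair: one substitution out of five words.
print(wer("the system passed the test", "the system past the test"))  # 0.2
```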
Our skills extend across all sectors of activity that require the secure use of AI, such as health, energy, defence and surveillance.
LNE has unique infrastructures that allow you to validate scenarios for your AI model embedded in a hardware device.
Through a simulation or immersion in a virtual environment, you collect data that can be used to identify any weaknesses and explore the functional limits of your solution.
Simulation will save you time on data collection and optimize your development costs.
Typical products evaluated: intelligent sensors and surveillance cameras, drones, robots with 2D or 3D cameras, Lidar, Sonar, mobile devices (autonomous vehicles, personal assistance, etc.), etc.
The "LE.IA Simulation" platform allows the device to be tested in simulation, according to several scenarios, in order to evaluate its performance.
In the case of a modelled robot whose movements are simulated, only the algorithm is evaluated, without access to a control loop or to the robot's data-processing time, for example.
Benefits:
The platform "LE. AI Immersion" allows the real-world device to be put in a virtual environment in order to test its servo and decision-making characteristics in a given environment (HiL platform -hardware in the loop-). The robot is placed at the heart of a simulation projected on a 300° screen with a motion capture system. This data is integrated into the simulator in real time, so that the robot's digital twin follows the same movements.
Benefits:
Examples of tested performance
The "LE.IA Action" platform allows the device to be put in a real-life execution situation.
This modular platform covers several areas of robotics. In particular, it has a climatic chamber to validate the robot's operation at different temperatures, and tracking cameras to measure its movements.
The modules designed for displacement tests (rough terrain, labyrinths, etc.) and handling tests (handles, valves, 6D positioning) are built to NIST (National Institute of Standards and Technology) specifications.
Benefits: testing in a controlled reference environment to assess the robot's physical capabilities, with reproducibility and repeatability.
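As a simple illustration of how repeatability can be quantified in such a controlled environment, the sketch below reports the standard deviation of a measurement repeated across trials. The trial values are invented for the example.

```python
# Repeatability sketch (illustrative only): spread of a repeated measurement
# under identical controlled conditions, reported as a 1-sigma figure.
from statistics import mean, stdev

# Hypothetical repeated positioning runs, in millimetres.
trials_mm = [1002.1, 999.8, 1001.4, 1000.6, 998.9]

print(f"mean:          {mean(trials_mm):.1f} mm")
print(f"repeatability: {stdev(trials_mm):.2f} mm (1-sigma over {len(trials_mm)} trials)")
```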
Examples of tested performance
The robot performance that can be tested relates to:
Any SME, VSE or start-up based in Europe that develops innovative solutions and needs to design, qualify or certify a system using AI is potentially eligible for grants under the European TEF (Testing and Experimentation Facilities) programme.
This subsidy takes the form of an attractive discount directly applied to the price of the services offered by LNE.
With the creation of INESIA, France joins the network of 10 AI Safety Institutes around the world, and, thanks to its recognized skills in the field, LNE is one of the four national players in AI evaluation and safety. The institute is led by the SGDSN[1] and the DGE[2], and will be technically coordinated by LNE, ANSSI[3], Inria[4] and PEReN[5].
Its main missions will centre on assessing the security of AI models and systems, which covers several dimensions, from the performance and reliability of the AI model to national-security issues, with a particular focus on model security and on the risks classified as "systemic" by the European AI Act. The institute will also support the implementation of AI regulation.
[1] General Secretariat for Defence and National Security - [2] General Directorate for Enterprise - [3] National Agency for Information Systems Security - [4] National Institute for Research in Digital Science and Technology - [5] Digital Regulation Expertise Centre