Neural network validation

Adopting AI means proving to your clients and stakeholders that risks can be managed. That is why we provide you with tools to ensure proper validation and documentation.

Improve your neural network validation

Moving a neural network to production is no easy task. Performance and reliability must be accounted for, along with proper documentation for regulatory and standardization bodies when necessary. Today, validation of neural networks cannot be only about testing; it also has to be about proving. Once the design and training phases are done, proper validation takes place. With the Saimple tool you can easily:

1- Specify against what your neural network should be robust

Deploying a neural network in a real-world scenario means exposing it to various conditions of use. The aim of training is to obtain a system that can adapt its behavior to changing conditions and maintain its performance. But a system is not meant to be exposed to every possible condition. The system needs to be specified on a domain, together with a list of possible perturbations to which it is expected to be exposed. Once this is done, validation needs to test the performance of the system in every such case.

Modeling perturbations can be challenging. They are usually not simple mathematical functions that anyone can write down; they require an expertise that only a few of your experts have. Even then, modeling them mathematically is no easy task.

With Saimple you can define your own noise using classical noise-generation libraries and test the robustness of your network against it.
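As an illustration of what such a specification can look like, here is a minimal sketch in Python using only NumPy: a custom Gaussian perturbation is defined, and a decision is empirically checked for stability under it. The `toy_classifier` stands in for a real trained network, and every name and interface here is hypothetical for illustration; this is not Saimple's API.

```python
import numpy as np

def gaussian_noise(x, sigma=0.1, seed=0):
    """Additive Gaussian perturbation of standard deviation sigma (a hypothetical noise model)."""
    rng = np.random.default_rng(seed)
    return x + rng.normal(0.0, sigma, size=x.shape)

def toy_classifier(x):
    """Stand-in for a trained network: class 1 if the mean activation is positive."""
    return int(np.mean(x) > 0)

def is_robust(classify, x, perturb, n_samples=100):
    """Empirically check that the decision is unchanged on n_samples perturbed copies of x."""
    reference = classify(x)
    return all(classify(perturb(x, seed=s)) == reference for s in range(n_samples))

x = np.ones(16)  # an input the toy classifier labels as class 1
print(is_robust(toy_classifier, x,
                lambda x, seed=0: gaussian_noise(x, sigma=0.1, seed=seed)))
# True: the decision survives this noise level on all sampled perturbations
```

Note that such sampling only ever checks a finite number of perturbed points; it specifies the perturbation, but does not by itself prove robustness over the whole domain.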

2- Speed-up your validation plan

Any validation plan benefits from more automation. Validation is an iterative process where tests can either be generated or set in advance. When dealing with a finite combination of cases, your tests can be rolled out by generating every possible permutation. But when dealing with an arbitrarily large number of permutations, tests are harder to generate efficiently, since they need to achieve both good coverage and good sampling without taking too much time.

Saimple works directly on whole domains, which can contain an arbitrary number of points. While direct testing would need millions of evaluations and still be insufficient, abstract interpretation can validate the whole area at once. The whole process can be driven through scripts, which makes it suitable for your continuous integration pipeline.
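To make the contrast with point-wise testing concrete, the sketch below propagates an entire input box through a tiny two-layer ReLU network using interval arithmetic, one simple form of abstract interpretation. The weights and bounds are made up for illustration, and Saimple's actual analysis is of course more sophisticated; the point is that a single propagation bounds the output over infinitely many inputs at once.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through y = W x + b.
    Each output bound uses the sign of each weight (standard interval arithmetic)."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps bounds to bounds."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Tiny illustrative network: one hidden ReLU layer, one linear output.
W1 = np.array([[1.0, -1.0], [0.5, 0.5]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.0])

# Input region: the whole box [0.9, 1.1] x [-0.1, 0.1], not a sample of points.
lo, hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(lo, hi)  # the output lies in [1.2, 1.8] for every input in the box
```

Since the computed lower bound is strictly positive, the sign of the output is proven over the entire region in one pass, where sampling could only ever check finitely many points.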

3- Detect regressions sooner and faster

Continuous delivery is an increasingly common paradigm of software development. It is even more relevant in the context of AI, since AI systems face changing conditions that require frequent adjustment. Since their environment continuously produces new data to train on, it is crucial to adapt the product quickly and ship it as soon as possible. However, frequent modifications to black-box systems can introduce severe regressions that can ultimately jeopardize the entire system and impact the company. To avoid these risks, it is important to run your validation plan on each new version.

With Saimple you can not only automate your tests but also compare results from one version to the next. Finding a change in the robustness properties of your neural network has never been simpler. You will also know early on if your neural network changes its decision-making on the same data.

Each validation has a timestamp, and you can compare at any moment the evolution of both the robustness and the explainability of your neural network.
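The kind of timestamped, version-to-version comparison described above can be sketched in plain Python as follows; `record_run` and `find_regressions` are hypothetical helper names invented for this example, not Saimple functions.

```python
from datetime import datetime, timezone

def record_run(version, metrics):
    """Attach a UTC timestamp to a validation run (metrics: name -> score, higher is better)."""
    return {"version": version,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "metrics": metrics}

def find_regressions(previous, current, tolerance=0.0):
    """Report every metric whose score dropped by more than `tolerance` between two runs."""
    return {name: (previous["metrics"][name], score)
            for name, score in current["metrics"].items()
            if name in previous["metrics"]
            and previous["metrics"][name] - score > tolerance}

v1 = record_run("1.0", {"robustness_radius": 0.12, "accuracy": 0.97})
v2 = record_run("1.1", {"robustness_radius": 0.08, "accuracy": 0.98})
print(find_regressions(v1, v2))  # {'robustness_radius': (0.12, 0.08)}
```

Run inside a continuous-integration job, such a check fails the build as soon as a new version degrades a tracked robustness metric, even while headline accuracy improves.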

4- Document your neural network robustness

Future regulations and standards will include provisions requiring the system manufacturer to demonstrate the robustness of its system. Where neural networks are involved, this robustness can be asserted either through testing or through formal proof. While testing will be enough in some cases, formal proof will help tackle examiner objections and ensure a smooth acceptance.

Producing the correct documentation is easy with Saimple. The whole process of robustness assessment using formal methods is currently being standardized in ISO/IEC 24029-2. Saimple will be the very first tool to natively implement the standard. Using Saimple you can trace every validation step and generate appropriate documentation for any quality process you plan to implement for your AI.

5- Document the explainability of neural networks decisions

Explainability of your neural network is a crucial step for market access in many industrial sectors. But explaining the decisions taken by your neural network can be difficult, depending on whom you are explaining them to. Some justifications can be purely statistical, others more formal, but in any case you need to build a strong justification case.

Statistics are useful to reflect the amount of testing you performed, but they give very little insight into the inner mechanics of the neural network. Providing explainability justifications can be invaluable to demonstrate the validation that has been done, but also to help any examiner understand your system.
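As one example of a justification that goes beyond aggregate statistics, the sketch below computes a finite-difference sensitivity, a crude saliency measure, showing which input features drive a score. The linear `score` model and all names are invented for illustration and are independent of Saimple's own explainability methods.

```python
import numpy as np

def saliency(f, x, eps=1e-4):
    """Finite-difference sensitivity of the scalar score f to each input feature of x."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return grad

# Toy scalar score: a fixed linear model whose weights are the ground-truth relevance.
w = np.array([2.0, 0.0, -1.0])

def score(x):
    return float(w @ x)

x = np.array([1.0, 1.0, 1.0])
print(saliency(score, x))  # approximately [2. 0. -1.]: recovers the weights
```

On a real network the same idea identifies which pixels or features the decision actually depends on, which is exactly the kind of evidence an examiner can inspect.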

Saimple is specifically designed to help both your data scientists and your quality engineers document your model. Each explainability test can be exported and packaged for inclusion in your study. This documentation can be generated at each step of your validation process, ensuring the traceability and correctness of your work.

Numalis

We are an innovative French software company providing tools and services to make your neural networks reliable and explainable.

Contact us
