Adopting AI means proving to your clients and stakeholders that its risks can be managed. That is why we provide you with tools to ensure proper validation and documentation.
Moving a neural network to production is no easy task. Performance and reliability must be accounted for, along with proper documentation for regulatory and standardization bodies when required. Today, validating a recurrent neural network (RNN) or a convolutional neural network (CNN) cannot only be about testing; it also has to be about proving. Once the design and training phases are done, proper validation takes place, and the Saimple tool supports every step of it.
Whatever the type of neural network (feedforward, Bayesian or convolutional), deploying one in a real-world scenario means exposing it to varied conditions of use. The aim of training is to obtain a system that can adapt its behavior to changing conditions while maintaining its performance. But no system is meant to face every possible condition. The system must be specified over an operational domain, together with the list of perturbations it is expected to be confronted with. Once that is done, validation must test the performance of the system in every one of those cases.
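As an illustration, such a domain can be captured as simple bounds around a nominal input. The sketch below is a minimal, tool-agnostic example in Python; the names (`nominal`, `epsilon`) are purely illustrative and do not come from any Saimple API.

```python
import numpy as np

# Nominal input: a normalized 28x28 grayscale image (illustrative values).
nominal = np.random.default_rng(0).random((28, 28))

# Operational domain: every image within an L-infinity distance of
# epsilon from the nominal input, clipped to valid pixel values.
epsilon = 0.05
lower = np.clip(nominal - epsilon, 0.0, 1.0)
upper = np.clip(nominal + epsilon, 0.0, 1.0)

# The pair (lower, upper) describes the whole set of inputs the system
# is expected to handle, not just a finite list of test samples.
print("domain width:", float((upper - lower).max()))
```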
Modeling perturbations can be challenging. They are rarely simple, off-the-shelf mathematical functions. Capturing them usually requires expertise that only a few of your specialists have, and even then, writing them down mathematically is no easy task.
With Saimple you can define your own noise using classical noise-generation libraries and test the robustness of your neural network against it.
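For instance, a perturbation model can be built with ordinary scientific Python libraries before being fed to a robustness analysis. The sketch below uses NumPy only; the `predict` argument stands in for your trained model and is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def gaussian_noise(batch, sigma=0.1):
    """Additive Gaussian noise, a classical perturbation model."""
    return np.clip(batch + rng.normal(0.0, sigma, batch.shape), 0.0, 1.0)

def salt_and_pepper(batch, amount=0.02):
    """Randomly force a fraction of the pixels to 0 or 1."""
    noisy = batch.copy()
    mask = rng.random(batch.shape) < amount
    noisy[mask] = rng.integers(0, 2, mask.sum()).astype(batch.dtype)
    return noisy

def accuracy_under_noise(predict, inputs, labels, noise_fn):
    """Share of predictions that survive the perturbation."""
    preds = predict(noise_fn(inputs))
    return float(np.mean(preds == labels))
```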
Any artificial neural network validation plan can benefit from more automation in its construction. Validation is an iterative process in which tests can be either generated or set in advance. When dealing with a finite set of cases, the tests can be rolled out by generating every possible combination. But when the number of possible inputs is arbitrarily large, tests are harder to generate efficiently, since they must achieve both good coverage and good sampling without taking too much time.
Saimple works directly on whole domains, which can contain an arbitrary number of points. Where direct testing would need millions of evaluations and still be insufficient, abstract interpretation can validate the whole area at once. The entire process can be driven through scripts, which makes it suitable for your continuous integration pipeline.
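A minimal sketch of the underlying idea, assuming a toy fully connected network: interval arithmetic propagates the lower and upper bounds of a whole input box through the layers in a single pass, instead of sampling individual points from it. This is a generic illustration of abstract interpretation with intervals, not Saimple's internal implementation.

```python
import numpy as np

def interval_linear(lo, up, W, b):
    """Propagate an input box [lo, up] through y = W @ x + b exactly."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ up + b, W_pos @ up + W_neg @ lo + b

def interval_relu(lo, up):
    """ReLU is monotone, so it maps bounds to bounds."""
    return np.maximum(lo, 0.0), np.maximum(up, 0.0)

# Toy network: one hidden layer, two outputs (class scores).
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

x = rng.random(4)
lo, up = x - 0.05, x + 0.05          # the whole perturbation box
lo, up = interval_relu(*interval_linear(lo, up, W1, b1))
lo, up = interval_linear(lo, up, W2, b2)

# If one class's lower bound beats the other's upper bound, every
# point of the box is classified the same way: the domain is validated.
print("score bounds:", list(zip(lo.round(2), up.round(2))))
```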
Continuous delivery is an increasingly common paradigm of software development. It is all the more relevant for AI systems, which face changing conditions that require frequent adjustments. Since their environment continuously produces new data to train on, it is crucial to adapt the product quickly and ship it as soon as possible. However, frequent modifications to black-box systems can introduce severe regressions that may ultimately jeopardize the entire system and impact the company. To avoid these risks, it is important to run your validation plan at each new version.
With Saimple you can automate your tests, but you can also compare one version of your neural network against another. Detecting a change in the robustness properties of your artificial neural network has never been simpler, and you will know early on if your feedforward network, or any other network type, changes its decision-making process on the same data.
Each validation is timestamped, so you can track at any moment how both the robustness and the explainability of your neural network evolve.
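As a sketch of what such a regression gate can look like in a continuous integration script (the report layout and metric names are hypothetical, not a Saimple output format):

```python
def compare_runs(report_v1, report_v2, tolerance=0.01):
    """Flag robustness metrics that degraded between two versions."""
    regressions = {}
    for metric, old in report_v1["metrics"].items():
        new = report_v2["metrics"].get(metric)
        if new is not None and new < old - tolerance:
            regressions[metric] = (old, new)
    return regressions

# Hypothetical reports produced by two timestamped validation runs.
v1 = {"timestamp": "2023-03-01T10:00:00Z",
      "metrics": {"verified_ratio": 0.97, "noise_accuracy": 0.93}}
v2 = {"timestamp": "2023-03-08T10:00:00Z",
      "metrics": {"verified_ratio": 0.91, "noise_accuracy": 0.94}}

bad = compare_runs(v1, v2)
if bad:
    raise SystemExit(f"robustness regression detected: {bad}")
```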
Future regulations and standards will include provisions requiring system manufacturers to demonstrate the robustness of their systems. Where artificial neural networks are involved, this robustness can be asserted either through testing or through formal proof. While testing will be enough in some cases, formal proof can be used to address examiner objections and ensure smooth acceptance.
Producing the correct documentation is easy with Saimple. The process of robustness assessment using formal methods is currently being standardized as ISO/IEC 24029-2, and Saimple will be the very first tool to natively implement the standard. Using Saimple you can trace every validation step and generate appropriate documentation for any quality process you plan to implement for your AI.
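A minimal sketch of what a traceable validation log can look like; the record layout here is illustrative only, not the format mandated by ISO/IEC 24029-2 nor the one produced by Saimple.

```python
import json
from datetime import datetime, timezone

def log_validation_step(path, step, result):
    """Append a timestamped validation record, building an audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "result": result,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_validation_step("validation_log.jsonl", "gaussian_noise_0.1",
                    {"verified": True, "domain": "L_inf ball, eps=0.05"})
```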
The explainability of your artificial neural network is a crucial step toward market access in many industrial sectors. But explaining the decisions taken by your neural network can be difficult, depending on who you are explaining them to. Some justifications can be purely statistical, others more formal; in any case you need to build a strong justification case.
Statistics are useful to reflect the number of tests you performed, but they give very little insight into the inner mechanics of the different types of neural networks. Explainability justifications can be invaluable to demonstrate the validation that has been done, and also to improve any examiner's understanding of your system.
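One simple way to look past aggregate statistics is an input sensitivity map: nudge each input feature and observe how the model's score moves. The sketch below uses finite differences on a black-box scoring function; `predict_score` and the toy weighted-sum model are hypothetical stand-ins, and this generic scheme is for illustration only.

```python
import numpy as np

def sensitivity_map(predict_score, x, delta=1e-3):
    """Finite-difference sensitivity of a scalar score to each input."""
    base = predict_score(x)
    grad = np.zeros_like(x)
    for i in np.ndindex(x.shape):
        bumped = x.copy()
        bumped[i] += delta
        grad[i] = (predict_score(bumped) - base) / delta
    return grad

# Illustrative black-box score: a weighted sum standing in for a model.
w = np.linspace(-1.0, 1.0, 16)
score = lambda x: float(x @ w)

relevance = sensitivity_map(score, np.ones(16))
print("most influential feature:", int(np.abs(relevance).argmax()))
```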
Saimple is specifically designed to help both your data scientists and your quality engineers document your model. Each explainability test can be exported and packaged for inclusion in your study. This documentation can be generated at each step of your validation process, demonstrating the traceability and correctness of your work.