Once your product moves into production, the performance and reliability of your AI must be documented. Classical performance scores alone may not be enough to inspire trust in your product or to support discussions with regulatory or certification bodies.
Numalis is at the forefront of AI standardization, working within the ISO group on AI to define what the robustness of neural networks means; see more about our work here (work in progress).
Saimple allows your engineers to quickly build documentation and benefit from reproducible benchmarks of robustness and explainability. With its built-in export features, any Saimple result can be exported, so you can quickly assemble the charts or graphics you need for your commercial or regulatory documentation.
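As an illustration, assuming a robustness result has been exported as a CSV file, a few lines of Python are enough to turn it into a report-ready chart. The file name and column names below are illustrative assumptions, not Saimple's actual export format:

```python
# Illustrative only: assumes a robustness result was exported as a CSV file
# with "label" and "robustness_score" columns. The file name and column
# names are hypothetical, not the tool's documented export schema.
import csv
import matplotlib.pyplot as plt

labels, scores = [], []
with open("saimple_export_robustness.csv", newline="") as f:
    for row in csv.DictReader(f):
        labels.append(row["label"])
        scores.append(float(row["robustness_score"]))

# Simple bar chart suitable for commercial or regulatory documentation.
plt.figure(figsize=(8, 4))
plt.bar(labels, scores, color="steelblue")
plt.ylabel("Robustness score")
plt.title("Per-class robustness (exported benchmark)")
plt.tight_layout()
plt.savefig("robustness_report.png", dpi=150)
```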
Continuous integration is an important practice to ensure the quality of your future product throughout its development cycle. Testing early and at every step will help you avoid regressions in your training and integration. Constructing tests that are both relevant and sufficient to guarantee the quality of your product is not easy. Each time you use Saimple to test something, you can document and archive that test. From there you can build your own robustness and explainability test bench with Saimple and launch it on any test server. Using a simple batch-run methodology, you can launch every test you have at any time and look for regressions automatically, as sketched below. Batches of tests made with Saimple can be integrated into your continuous integration framework to add robustness and explainability functional tests on top of the others.
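The following sketch shows what such a batch run could look like, under loudly stated assumptions: the `saimple` command-line call, the JSON result layout, and the `robustness_score` field are placeholders for illustration, not a documented interface. The idea is simply to re-run every archived test and compare the new scores against a recorded baseline:

```python
# Hypothetical batch-run sketch: each test case is assumed to be executed
# through a command-line call ("saimple" here is a placeholder command, not
# a documented CLI) that writes a JSON result containing a
# "robustness_score" field. Scores from a previous release serve as the
# baseline; any drop beyond the tolerance is flagged as a regression.
import json
import subprocess
from pathlib import Path

TEST_CASES = ["case_brightness.json", "case_noise.json", "case_contrast.json"]
TOLERANCE = 0.02  # allowed drop in robustness score before flagging a regression

def run_case(case: str) -> float:
    """Run one analysis and return its robustness score (placeholder CLI)."""
    out = Path("results") / case
    subprocess.run(["saimple", "run", "--config", case, "--output", str(out)], check=True)
    return json.loads(out.read_text())["robustness_score"]

Path("results").mkdir(exist_ok=True)
baseline = json.loads(Path("baseline_scores.json").read_text())

regressions = []
for case in TEST_CASES:
    score = run_case(case)
    base = baseline.get(case, 0.0)
    if score < base - TOLERANCE:
        regressions.append((case, base, score))

if regressions:
    for case, old, new in regressions:
        print(f"REGRESSION in {case}: {old:.3f} -> {new:.3f}")
    raise SystemExit(1)  # non-zero exit code fails the CI job
print("All robustness tests passed, no regression detected.")
```

A non-zero exit code is what most CI servers use to mark a job as failed, so the regression check stops the pipeline automatically and the offending commit is caught early.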
Testing every aspect of an AI system is crucial. With Saimple, you can automate robustness and explainability testing within your current continuous integration framework, for example by exposing the archived results as ordinary functional tests, as in the sketch below.
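One possible way to plug the batch results into an existing test suite is a thin pytest wrapper; the file names, the `robustness_score` field, and the baseline file are the same illustrative assumptions as above, not a prescribed integration:

```python
# Hypothetical pytest wrapper so robustness results appear as ordinary
# functional tests in the CI framework. It re-reads the JSON results
# produced by the batch run above; all names are illustrative.
import json
from pathlib import Path

import pytest

CASES = ["case_brightness.json", "case_noise.json", "case_contrast.json"]
BASELINE = json.loads(Path("baseline_scores.json").read_text())

def load_score(case: str) -> float:
    return json.loads((Path("results") / case).read_text())["robustness_score"]

@pytest.mark.parametrize("case", CASES)
def test_robustness_has_not_regressed(case):
    # Fails the pipeline if a case drops below its recorded baseline.
    assert load_score(case) >= BASELINE[case] - 0.02
```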