Reliable predictive maintenance
Predictive maintenance is a major opportunity to improve your operating margin and build long-term relationships with your customers, provided you build trustworthy systems with adequate guarantees.
Maintenance is all about trust: trust in the process, trust in certification, trust in material quality, trust in the skill of every worker… Moving from preventive maintenance towards predictive maintenance methods is a major technological shift. Reducing maintenance costs while increasing the overall availability of the equipment are among the benefits that predictive maintenance tools promise.
But unlike preventive maintenance, which is based on experience accumulated over time, predictive maintenance technology is relatively new and does not yet have an extensive track record.
The question, then, is how to adopt such a technology while providing sufficient guarantees to earn that trust.
Build optimized predictive maintenance software
Saimple can help your organization save time during the AI design process and manage the risks associated with AI by providing guarantees about its behavior.
During the training phase, Saimple can:
- Help you find a suitable neural network architecture;
- Verify that your network is learning the same expertise you already have.
During the validation phase, Saimple can:
- Help you design realistic variations of your inputs to challenge your neural network;
- Construct guarantees on both the predictions and the RUL estimate.
Let’s consider a project involving neural networks that aims to build a predictive maintenance AI to optimize a maintenance bay for a car rental company. The company has years of practical experience and data from its connected vehicles. To maximize the availability of the fleet, it wants to perform maintenance actions only when they will be useful, instead of following a predetermined schedule. A team of data scientists is tasked with building a predictive model; after some investigation they decide to use machine learning, and in particular recurrent neural networks. Let us now show how Saimple helps them all along the way, from the design to the validation of their model.
Choose and adapt an architecture
Choosing an architecture is critical for system performance, but it’s important to ensure early on that it’s robust enough for your needs. Recurrent Neural Networks (RNNs) are widely used by data scientists for predictive maintenance tools, even though they’re less intuitive and harder to train than image processing networks. This is because it’s possible to guide the design of an image processing architecture by knowing the kind of features to be recognized. In contrast, a predictive maintenance system must discover new features in raw data that humans may not be able to understand.
Selecting and adapting an architecture is a trial-and-error process, which can be time-consuming and costly, as training RNNs can be complex. Saimple helps engineers improve the outcome of this process.
Each modification to the architecture can impact the neural network’s robustness. Detecting early on which modifications cause a setback in robustness saves a lot of time overall and avoids costly rollbacks of the architecture once it’s been trained, integrated, and almost put into production.
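To make this concrete, here is a minimal sketch of the kind of recurrent architecture the team might start from: a small LSTM regressor that reads a multivariate sensor series and outputs a single RUL value. The layer sizes, the sensor count, and the choice of PyTorch are assumptions made for the example, not a prescribed design.

```python
# Minimal sketch (assumption): a small LSTM regressor for RUL estimation in
# PyTorch. The sensor count and layer sizes are placeholders, not a recommendation.
import torch
import torch.nn as nn

class RulEstimator(nn.Module):
    def __init__(self, n_sensors: int = 8, hidden_size: int = 64):
        super().__init__()
        # Recurrent layer reads the multivariate sensor time series.
        self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)
        # Regression head maps the last hidden state to a single RUL value.
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, time_steps, n_sensors)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict RUL from the final time step

model = RulEstimator()
dummy_batch = torch.randn(4, 100, 8)      # 4 series, 100 time steps, 8 sensors
print(model(dummy_batch).shape)           # torch.Size([4, 1])
```

Each architectural change (more layers, a different cell, another hidden size) can then be re-evaluated against the same robustness checks before committing to a long training run.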
Confront your neural network with your expertise
Saimple can help engineers confront what the neural network has learned with their own expertise. It shows how each input drives the decision and which ones are decisive.

(Fig. Saimple evaluation of inputs to detect an anomaly)
For example, a data scientist in charge of training can select specific time series that are representative of a well-known problem and see which parts of the input the network is considering. They can then use their expertise to verify that the events highlighted by the network are indeed related to the problem it is supposed to predict. Two cases are possible:
- When the AI and the expert are aligned, it’s possible to design a test to ensure this will hold in future versions of the network.
- When they are not, the expert can investigate whether the system’s finding is genuinely useful (and so expand their own expertise!) or whether it is a false alarm, in which case the training set needs to be adjusted so the network unlearns it.
In either case, your project moves forward faster, whether through a better-quality network or through stronger documentation and testing.
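To make the idea concrete, here is an illustrative stand-in, not Saimple’s own analysis (which the document later describes as formal): a plain gradient-saliency check in PyTorch that highlights which time steps of the input most influence the model’s output. It reuses the RulEstimator sketched earlier; the function name and the choice of attribution method are our assumptions.

```python
# Illustrative stand-in (assumption): gradient saliency to see which time steps
# drive the model's output. This is not Saimple's method, only a simple way to
# mimic the idea of confronting the network's focus with expert knowledge.
import torch
import torch.nn as nn

def input_saliency(model: nn.Module, series: torch.Tensor) -> torch.Tensor:
    """Return |d output / d input| for every time step and sensor."""
    x = series.clone().requires_grad_(True)     # shape (1, time_steps, n_sensors)
    model(x).sum().backward()
    return x.grad.abs().squeeze(0)              # shape (time_steps, n_sensors)

# Example using the RulEstimator sketched earlier (any torch model would do):
saliency = input_saliency(model, torch.randn(1, 100, 8))
top_steps = saliency.sum(dim=1).topk(5).indices   # 5 most influential time steps
print(top_steps)
```

The expert can then compare these highlighted time steps with the events they would themselves consider decisive.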
Explore your neural network’s behavior on realistic scenarios
Saimple helps you accelerate and strengthen your testing by validating entire sets of realistic scenarios at once, scenarios that your engineers can set up very easily.
Testing is an essential part of the validation process for any neural network. Some of the data in your possession can be used as a test dataset instead of as a training set. This dataset should be diverse and representative, while also being sufficiently different from the training set. Creating such a dataset is not easy, and manually skimming through every time series to select some is not practical. What your engineers need is a way to easily test hypotheses about how the network would react to changes in its inputs.

(Fig. Detection of false and true anomalies by an expert)
Instead of testing one input at a time, Saimple allows your data scientists to explore many variations of their inputs all at once. For example, they can start from a single baseline time series and then model a variety of modifications, such as higher noise, changes in frequency, or slower or faster growth of the trend. This allows engineers to quickly discover how much the neural network’s decision can change depending on the magnitude of the modification.
Using a simple script interface, engineers can stress test the neural network as much as they want and validate its performance as much as possible. Once they are confident in the neural network’s behavior, it can move to the next phase of the production process.
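As a rough sketch of what such scripted variations could look like, the NumPy snippet below builds a grid of modified signals (extra noise, shifted frequency, slower or faster trend growth) from one baseline series. The signal shape, the parameter ranges, and the helper name are illustrative assumptions.

```python
# Minimal sketch (assumption): generating the kinds of variations mentioned in
# the text (extra noise, frequency shift, faster or slower trend) with NumPy,
# so a whole family of scenarios can be produced from one baseline signal.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)

def make_variant(noise_level: float, freq_scale: float, trend_scale: float) -> np.ndarray:
    signal = trend_scale * 0.5 * t + np.sin(2.0 * np.pi * 0.5 * freq_scale * t)
    return signal + rng.normal(0.0, noise_level, size=t.shape)

# Grid of variations around the baseline scenario.
variants = [make_variant(n, f, g)
            for n in (0.0, 0.05, 0.1)     # noise amplitude
            for f in (0.9, 1.0, 1.1)      # frequency scaling
            for g in (0.8, 1.0, 1.2)]     # trend growth scaling
print(len(variants))                      # 27 scenarios from one baseline
```

Each of these scenarios can then be submitted to the model, individually or as an entire set.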
Construct guarantees on your neural network’s predictions
Saimple allows you to quickly study whether a prediction is a black swan event (i.e., unlikely or unrealistic) that may need further analysis, or whether the prediction holds even when the inputs are varied. This allows your data scientists to quickly construct guarantees on the predictions.
Neural networks can be used to predict how a signal will change over the next period of time. This can be very useful for detecting a potential threshold violation in the near future and taking action to prevent it. However, the accuracy of neural network predictions can vary widely, and sometimes the slightest change in the inputs can lead to a radically different outcome.
When faced with such a system, it can be difficult to know whether the prediction is alarmist or not. One technique is to try changing the input slightly to see if the prediction holds. But knowing what to change and how to change it can be challenging and time-consuming.
Here’s how Saimple can help:
Saimple allows you to explore many variations of your inputs all at once. This allows you to quickly identify which inputs are most important to the prediction and how much the prediction can change depending on the magnitude of the modification.
For example, you could again start from a single baseline time series and then model a variety of modifications, such as higher noise, changes in frequency, or slower or faster growth of the trend. You could also explore different combinations of modifications.
By exploring a wide range of inputs, you can gain a better understanding of the neural network’s behavior and identify any potential pitfalls. This allows you to construct more reliable guarantees on the predictions.

For example, suppose the neural network is predicting the pressure on some device and its forecast is that the safety threshold will be violated within a few minutes (Fig. 1).

With Saimple it is possible to check whether this prediction holds under variations of the inputs: perhaps, whatever the variation, the threshold will still be violated (Fig. 2).

Or perhaps, with a slight change of the inputs, the neural network predicts the opposite and the system would still operate below the threshold (Fig. 3). Such an automated process can help to quickly prioritize and process the alarms raised by the AI.
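The sketch below approximates this check empirically: a hypothetical predict_pressure model is run on many slightly perturbed versions of the observed signal, and we test whether the "threshold will be violated" conclusion survives all of them. Saimple reasons over entire input sets at once; this sampling-based version only illustrates the question being asked, and every name and constant in it is an assumption.

```python
# Empirical stand-in (assumption): does the predicted threshold violation
# survive small perturbations of the input signal? All names and constants
# below are placeholders, and `predict_pressure` stands in for the real network.
import numpy as np

SAFETY_THRESHOLD = 8.0            # placeholder pressure limit
rng = np.random.default_rng(1)

t = np.linspace(0.0, 10.0, 200)
measured = 0.6 * t + 0.3 * np.sin(2.0 * np.pi * 0.5 * t)   # observed pressure so far

def predict_pressure(series: np.ndarray) -> np.ndarray:
    """Hypothetical forecast of the next 60 steps (stand-in for the network)."""
    return series[-1] + 0.05 * np.arange(1, 61)

# Family of slightly perturbed versions of the observed signal.
variants = [measured + rng.normal(0.0, eps, size=measured.shape)
            for eps in (0.01, 0.05, 0.1, 0.2)
            for _ in range(25)]

robust = all(predict_pressure(v).max() >= SAFETY_THRESHOLD for v in variants)
print("Alarm confirmed under every tested variation" if robust
      else "Alarm is input-sensitive: prioritize manual review")
```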
Construct guarantees on the RUL estimate
One of the most important metrics predictive maintenance can estimate is the Remaining Useful Life (RUL). It allows all of your maintenance to be planned and optimized in order to minimize costs and balance your workload planning. But to be useful, the RUL needs to come with optimistic and pessimistic estimates, so that an informed decision can be made depending on the level of risk that is accepted. Let’s assume that in this use case the accepted risk is very low, as the equipment should never become faulty at any moment. In that case the RUL estimate should be pessimistic in order to avoid failure at any cost.
Saimple can help secure the RUL estimate your predictive maintenance system provides, which strengthens your decisions and minimizes your risks.

(Fig. 1 – Result of many simulations of the RUL model)
Since neural networks can sometimes behave unpredictably, it is important to verify that the value estimated by the system is reasonable. To do so, variations of the inputs must be tested to see whether the decision remains stable. Constructing these variations is not easy and can be very time-consuming, since it is not known how many must be tested to reach a given level of confidence in the result (Fig. 1).

(Fig. 2 – Result of formal analysis of the RUL model leading to safe bounds covering all simulations)
To build a guaranteed conservative estimate, your engineers can use Saimple’s ability to check entire sets of variations around one input. By doing so you automatically obtain clear and definitive bounds on your estimate (Fig. 2). You can therefore be certain of the most pessimistic estimate your neural network could produce over that set of input variations. This drastically helps build confidence in the system’s RUL estimate and avoids costly misestimates.
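The following sketch contrasts the two figures in code: sampling variations (in the spirit of Fig. 1) only yields an empirical spread of RUL values, whereas a formal analysis (in the spirit of Fig. 2) returns a guaranteed lower bound, represented here by a placeholder value. The RUL model, the bound, and the lead-time logic are illustrative assumptions showing how a pessimistic bound would feed a maintenance decision.

```python
# Illustrative sketch (assumption): sampled RUL estimates vs. a guaranteed
# lower bound. `estimate_rul`, the bound, and the lead time are placeholders.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 200)
nominal = 0.5 * t + np.sin(2.0 * np.pi * 0.5 * t)   # nominal sensor signal

def estimate_rul(series: np.ndarray) -> float:
    """Hypothetical RUL model in hours (stand-in for the trained network)."""
    return float(200.0 - 8.0 * series.std())

# Fig. 1 style: sampling only gives an empirical spread of RUL values.
sampled = [estimate_rul(nominal + rng.normal(0.0, 0.1, size=nominal.shape))
           for _ in range(500)]
print(f"sampled RUL spread: {min(sampled):.1f} - {max(sampled):.1f} h")

# Fig. 2 style: a formal analysis would instead return a lower bound that is
# guaranteed for *every* input in the variation set, not just the sampled ones.
rul_lower_bound = 150.0           # placeholder for the formally derived bound

MAINTENANCE_LEAD_TIME = 120.0     # hours needed to schedule an intervention
if rul_lower_bound <= MAINTENANCE_LEAD_TIME:
    print("Schedule maintenance now: the pessimistic RUL is within the lead time")
else:
    print("Defer maintenance: even the pessimistic RUL exceeds the lead time")
```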
Related Use Cases
Measure robustness of data augmentation
Evaluation of model robustness for traffic sign classification
Identify bias in cancer cells detection from ultrasounds
Discover more Saimple applications to help you build reliable and trustworthy AI
Image processing
Neural networks are a game-changing technology for any use case involving image processing. It might not be easy, but it is clearly the way forward. Discover how you can use our technology to improve your image processing systems all along the way.
Data augmentation
Data is the fuel of any neural network training process. Without enough of it, your AI may not reach the expected performance. Data augmentation is one solution, but it is a tradeoff between the time you invest in it and the quality of the AI you obtain. Numalis can help you find the right tradeoff.