Now that we understand the value and limitations of threshold analysis from Part 1, we’ll examine what machine learning, and the systems built on it, mean for industrial analytics and for manufacturers.
Level 2: Anomaly Detection and Proactive Service
The next tier of data analysis is the application of machine learning algorithms to detect multivariate correlations: relationships among several data sets at once. With these algorithms, your reporting or observation system can make decisions in context and detect anomalies.
In threshold detection, all we have are high and low values, removed from context. In its initial phase, an algorithmic system learns what appropriate conditions look like in each setting, then sets its values accordingly. The system also learns other conditions that lead to an anomaly and that a simple high-low threshold alert would miss. An algorithm can establish the norm for expected behavior, apply that learning in the operational phase, and, in adaptive models, even refine its own learning over time.
Let’s put our oil pressure sensor in a large outdoor generator used to keep greenhouses operating. The generator’s usage profile in the summer, when it doesn’t need to work as hard, will be very different from its usage in the winter, when it runs longer and harder to maintain the same level of heat and light. The algorithm will learn how the generator is used seasonally, correlate that data to normal functionality, and issue an alert when, for example, oil pressure that typically runs low in the summer is suddenly high.
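To make the seasonal-baseline idea concrete, here is a minimal Python sketch of contextual anomaly detection. All readings, the season labels, and the three-sigma band are illustrative assumptions, not real generator data: the system learns a normal operating band per season from history, then flags readings that fall outside the band for their context.

```python
from statistics import mean, stdev

# Hypothetical oil-pressure history (PSI) for the greenhouse generator,
# grouped by season -- illustrative numbers only.
history = {
    "summer": [22.1, 21.8, 22.5, 21.9, 22.3, 22.0, 21.7, 22.4],
    "winter": [34.2, 35.0, 34.6, 33.9, 34.8, 34.4, 35.1, 34.3],
}

def learn_baselines(history, k=3.0):
    """Learn a (low, high) normal band per context from historical readings."""
    return {
        season: (mean(r) - k * stdev(r), mean(r) + k * stdev(r))
        for season, r in history.items()
    }

def is_anomaly(reading, season, baselines):
    """Flag a reading that falls outside the learned band for its context."""
    low, high = baselines[season]
    return not (low <= reading <= high)

baselines = learn_baselines(history)
print(is_anomaly(30.0, "summer", baselines))  # True: far above the summer norm
print(is_anomaly(34.0, "winter", baselines))  # False: routine in winter
```

Note that 30 PSI would pass a single fixed high-low threshold sized for winter operation; only the per-season baseline catches it as anomalous in summer.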
For manufacturers and OEMs, detecting anomalies means fewer false negatives and false positives. In the case of the higher-than-normal oil pressure, there may not be a problem yet, but there are indicators of abnormal operation. Applying situational rules for action allows manufacturers to deploy resources, such as technicians or software updates, proactively and efficiently. Proactive service can help manufacturers prevent asset downtime, and head off the costlier repairs that result when an anomaly is allowed to persist.
Looking into the future of data analytics in the service arena, predictive service will take on additional importance as the next tier of customer- and outcome-focused service. Here, machine learning can analyze sets of conditions together, along with historical operation data and historical asset failure data, to predict that a problem is likely to occur. In the example of our generator’s oil pressure, a predictive analysis would combine oil pressure, RPMs, temperature, and historical data to alert that when certain conditions occur together, there is a 90% chance of asset failure within 30 days. This allows manufacturers to tend to the asset, on-site or remotely, before a failure, maximizing asset uptime and preventing more complex repairs.
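One simple way a figure like “90% chance of failure within 30 days” could be produced is by counting co-occurrences in the failure history: of all the past periods in which this combination of conditions held, what fraction ended in failure? The sketch below is an illustrative Python toy under assumed data, not a production predictive model; the record counts and condition names are invented.

```python
# Hypothetical history: (pressure_high, temp_high, rpm_high, failed_in_30d).
# Counts chosen purely for illustration.
records = (
    [(True, True, True, True)] * 9       # all three conditions, then failure
    + [(True, True, True, False)] * 1    # all three conditions, no failure
    + [(True, False, False, False)] * 6  # pressure alone, benign
    + [(False, False, False, False)] * 40
)

def failure_probability(records, pressure_high, temp_high, rpm_high):
    """Empirical P(failure within 30 days | this combination of conditions)."""
    combo = (pressure_high, temp_high, rpm_high)
    matches = [r for r in records if r[:3] == combo]
    if not matches:
        return None  # no history for this combination
    return sum(1 for r in matches if r[3]) / len(matches)

# All three warning conditions together: 9 of 10 historical cases failed.
print(failure_probability(records, True, True, True))    # 0.9
print(failure_probability(records, False, False, False)) # 0.0
```

Real predictive-service systems use far richer models, but the principle is the same: the alert is grounded in how often these conditions preceded failure in the historical record.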
The last frontier (for now) of service-focused analytics is prescriptive analytics and prescriptive service recommendations. Here, manufacturers can close the loop between product design and product service. Drawing on the data used to detect anomalies and issue failure predictions, prescriptive analytics recommend the changes needed to stop root-cause failures from happening at all. Oil pressure that runs low or high under conditions that lead to eventual failure might trigger a prescriptive recommendation to change operating hours, switch to a different oil viscosity, or use a different oil filter. Prescriptive service is the ultimate outcome-based solution: maximum asset uptime with minimal service calls and costs.
Having a lot of data is ultimately meaningless without the ability to put it to use in service of your assets and customers. Service organizations that evolve from threshold analysis to anomaly detection can take their service business from an initial visibility phase to offering value-added options such as remote service and proactive and predictive maintenance, all of which result in better customer outcomes, more profitable service contracts, and longer customer relationships.
By detecting true anomalies, out-of-context signals that shouldn’t exist in a particular setting, manufacturers can take preventive action, proactively deploy a technician or service solution, or otherwise attend to the asset before it causes downtime. This becomes much more than asset monitoring; it turns analytics into actionable items and business results. In fact, predictive and prescriptive equipment maintenance may be the most important application of industrial analytics.
And after that? Information gleaned from anomaly detections in context can feed back into the research and development cycle. If manufacturers know how equipment operates in real-world conditions, they can adjust their designs accordingly. Add in the possibilities offered by edge and fog computing, and manufacturers can bring service decision-making even closer to the equipment itself.
Join the conversation on Twitter @PTC_SLM and use the hashtag #ServiceRev – how will your service organization use analytics and anomaly detection?