Predictive Maintenance
Future of Industrial Maintenance
A significant part of Industry 4.0 is the promise of predictive maintenance. Equipment in production lines represents a substantial investment for companies, and stopping a line can be extremely costly.
Maintenance done right prevents equipment malfunction and saves money. The idea of predictive maintenance is to monitor equipment and estimate the need for repairs based on monitoring data and machine usage.
We will consider several approaches to this technique and look at a few examples of real-world applications.
Predictive maintenance
The majority of maintenance today is preventive: production lines are stopped at fixed intervals rather than when it is actually necessary, which leads to costly downtime. Unplanned downtime also has effects beyond the direct downtime cost, such as lower production levels, labor overhead, and possible equipment loss and replacement. Fixed-interval maintenance does not take actual equipment usage and monitored parameters into account, so equipment can still break before its scheduled repair time.
The path to adopting predictive maintenance is not as complicated as it might seem, and with the help of an experienced team it can be implemented step by step:
- Start by identifying what can be measured and what kinds of insights would make it possible to predict maintenance needs and improve monitoring.
- Based on these, determine what data already exists and, where it is missing, what is needed to start collecting it.
- Start with the lowest-hanging fruit. Based on machine-learning and analytical insight, implement predictive maintenance in a single area or process. One of the most common implementation mistakes is trying to do too much too soon.
One of the big decisions to be made early on is where and how to process the data, and what limitations might exist in that respect.
Edge computing in the framework of predictive maintenance
When talking about IIoT, the Cloud and cloud computing are often cited as the main prerequisites for the transformation. Cloud computation makes it possible to process large amounts of data and build complex models. Cloud computing assumes that data is sent to cloud servers, where a decision engine processes it and produces actionable insights. However, it is often the case that the stability of the connection, the cost of delaying a decision, or cybersecurity implications make it impossible to rely on Cloud infrastructure for this purpose. This is where Edge computing comes into play. Edge computing refers to computing infrastructure located close to the sources of data: data collection and analytics happen at the source rather than at a centralized location. The Edge is not a new concept, but with the increase in computing power and the falling cost of Edge devices, it has become possible not only to collect data on Edge devices for later processing in the Cloud, but to process it at the source.
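As a minimal sketch of processing at the source, the loop below keeps a rolling window of recent readings on the (hypothetical) Edge device, raises alerts locally, and forwards only a compact summary over the possibly unreliable Cloud link. The window size, threshold, and signal are illustrative assumptions, not a prescribed design.

```python
import statistics

# Hypothetical Edge-side filter: decide locally, send only summaries upstream.
# WINDOW and ALARM_SIGMA are illustrative values, not recommendations.
WINDOW = 20
ALARM_SIGMA = 3.0  # flag readings more than 3 std devs from the window mean

def edge_filter(readings):
    """Return (alerts, summary) computed entirely on the Edge device."""
    alerts = []
    window = []
    for t, value in enumerate(readings):
        if len(window) >= WINDOW:
            mean = statistics.mean(window)
            stdev = statistics.stdev(window)
            if stdev > 0 and abs(value - mean) > ALARM_SIGMA * stdev:
                alerts.append((t, value))  # actionable insight, raised locally
        window.append(value)
        window = window[-WINDOW:]  # keep only the most recent readings
    # Only this compact summary needs the (possibly unstable) Cloud connection.
    summary = {"count": len(readings), "min": min(readings), "max": max(readings)}
    return alerts, summary

# Synthetic vibration signal around 1.0 with one anomalous spike at index 30.
data = [1.0 + 0.01 * (i % 5) for i in range(60)]
data[30] = 5.0
alerts, summary = edge_filter(data)
print(alerts)  # [(30, 5.0)]
```

The design choice here mirrors the trade-off in the text: raw data stays at the source, and only decisions and summaries cross the network.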
Examples of Edge computing in action include remote warehouses with unstable connections, sawmills, and vehicle fleets. In cases like these, the reliability and speed of the connection do not make the Cloud a viable option. Another example is large-scale manufacturing, where the price of delayed insight into machine operations would be prohibitively high.
However, the adoption of the Edge does not mean the Cloud is becoming obsolete for IIoT. A combination of Cloud and Edge computing allows companies to take full advantage of the massive amounts of data produced. The Edge plays the more prominent role in cases such as predictive maintenance and performance monitoring, while the Cloud handles operations that require more substantial computing power or higher data volumes, e.g., data from several plants or machine-learning model training. Analytical models created using Cloud processing power are then deployed to Edge devices, giving the asset operations team immediate insight regardless of the consistency of connectivity.
The field of machine learning has seen significant advances in the last few years. Among the most prominent techniques used in Edge analytics and predictive maintenance are anomaly detection, supervised and unsupervised learning, and pattern recognition.
A brief overview of Machine Learning in the field of Predictive Maintenance
The term anomaly detection is used more and more in the field of predictive maintenance. An anomaly, in this context, is an unexpected or abnormal event. It can manifest as unusual spikes or dips in tag values, anomalous time series, or artifacts in the data. Most equipment manufacturers specify the parameters their equipment should ideally run at. In real conditions, however, companies operate many machines at once; by collecting the data in a centralized location, they can spot overall tendencies, review dependencies, and assess the potential effects of machine interactions. This is where anomaly detection comes in.
As an example, an operator on the shop floor would see individual failures and could repair and address issues immediately; what they would miss, however, are the patterns that only show up when the overall equipment data is analyzed.
There are several principal strategies for detecting anomalies, ranging from relatively straightforward techniques to more complicated ones that require significant computing power.
Distance-based and cluster-based techniques determine deviations from a path or a cluster of acceptable values. For example, if the temperature and power are spiking while pressure and flow remain within normal ranges, something unexpected may be occurring.
A more complicated technique is building neural networks; they require more computational power and are harder to build. Unlike the previously mentioned techniques, however, they need little to no domain-specific knowledge. They do require large amounts of historical data, classified as normal or abnormal, to train models that achieve significant predictive accuracy. A neural network is essentially a computing system that "learns" to perform tasks by considering historical data. Such a system may have no prior knowledge of what each parameter means, i.e., pressure, temperature, or vibration; still, by processing historical data, it learns to predict anomalies.
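To make the idea concrete in the simplest possible case, the sketch below trains a single artificial neuron (logistic regression) on synthetic readings labeled normal or abnormal. It is given no notion of what "temperature" or "vibration" mean; it only learns a boundary from the labels. A real deployment would use a deeper network and far more history; the data, scaling constants, and learning rate here are all assumptions.

```python
import math

# Synthetic labeled history: (temperature_c, vibration_mm_s), label 0 = normal, 1 = abnormal.
normal = [(50 + i, 1.0 + 0.1 * i) for i in range(10)]
abnormal = [(80 + i, 4.0 + 0.1 * i) for i in range(10)]
data = [(p, 0) for p in normal] + [(p, 1) for p in abnormal]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def scale(p):
    # Rough centering/scaling (constants chosen for this synthetic data)
    # so that gradient descent behaves well.
    return ((p[0] - 65.0) / 20.0, (p[1] - 2.5) / 2.0)

# One neuron: two weights and a bias, trained by plain gradient descent
# on the logistic loss over the labeled historical data.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    for p, label in data:
        x = scale(p)
        pred = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = pred - label
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def predict(p):
    """True if the reading looks abnormal according to the trained neuron."""
    x = scale(p)
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5

print(predict((52, 1.1)))  # False: a normal-looking reading
print(predict((83, 4.2)))  # True: hot and vibrating
```

The same training loop, stacked into multiple layers and fed real plant history, is what the Cloud-trained, Edge-deployed models described earlier amount to.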
Real-world applications
A typical pulp mill can have over 4,000 industrial motors, most of them powering pumps that either mix material or move it from one place to another. The most critical parameters to monitor for industrial motor maintenance are vibration and temperature. Given that the motors often come from different manufacturers, simple visual monitoring on the floor would not be enough. In this example, every motor was fitted with sensors measuring temperature and vibration, and the readings were collected and sent to the Cloud. Data was collected at user-set intervals to keep energy consumption as low as possible, making the sensors completely autonomous. Each sensor was also equipped with GPS so that its location could be determined accurately.
The collected data was sent back to the workers' handheld devices, where they could see the current parameters and receive warnings if anything was going wrong. The information was also stored in a cloud database for analysis and as input to machine learning for predictive-maintenance purposes.
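A data flow like the one described above might look as follows. The payload shape, field names, and alarm levels are assumptions for illustration, not the actual system's interface (7.1 mm/s is an example ISO 10816-style vibration alarm level).

```python
# Illustrative alarm levels; real values depend on the motor class and manufacturer.
VIBRATION_LIMIT_MM_S = 7.1   # example ISO 10816-style vibration alarm level
TEMPERATURE_LIMIT_C = 90.0   # example winding-temperature alarm level

def make_reading(motor_id, temp_c, vibration_mm_s, lat, lon):
    """One sensor sample, sent to the Cloud at a user-set interval."""
    return {
        "motor_id": motor_id,
        "temperature_c": temp_c,
        "vibration_mm_s": vibration_mm_s,
        "gps": {"lat": lat, "lon": lon},  # each sensor carries a GPS fix
    }

def handheld_warnings(reading):
    """Warnings shown on the worker's handheld device for one reading."""
    warnings = []
    if reading["vibration_mm_s"] > VIBRATION_LIMIT_MM_S:
        warnings.append("vibration above alarm level")
    if reading["temperature_c"] > TEMPERATURE_LIMIT_C:
        warnings.append("temperature above alarm level")
    return warnings

r = make_reading("motor-0421", temp_c=95.2, vibration_mm_s=3.4, lat=61.5, lon=23.8)
print(handheld_warnings(r))  # the motor runs hot, but vibration is fine
```

The same readings, stored in bulk in the cloud database, become the labeled history that the machine-learning models are trained on.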
A biotech company noticed an increase in downtime due to blockages in pipes. The production line consisted of various machines, e.g., mixing tanks, centrifuges, and pumps. While each piece of equipment was working within its normal parameters, a series of events at several points led to increased viscosity that blocked the pipes. By introducing monitoring on all of the equipment and analyzing the overall process, the team identified the series of events that led to the viscosity increases and introduced alerts that fired before a blockage could occur. As a result, the operations team could perform maintenance only when necessary, instead of stopping the whole line after every run, which reduced downtime and increased production-line capacity.
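The kind of cross-machine rule such an analysis might produce can be sketched as below: each signal is individually within spec, but a specific combination of them precedes the viscosity increase. All names, limits, and thresholds here are invented for illustration.

```python
# Hypothetical precursor rule derived from overall process analysis.
# Each condition alone is within normal operating parameters; only the
# combination of all three was found to precede pipe blockages.

def viscosity_risk(mixer_rpm, tank_temp_c, pump_pressure_bar):
    """Alert when several individually-normal signals form the known precursor."""
    slow_mixing = mixer_rpm < 110          # in spec, but on the low side
    cool_batch = tank_temp_c < 36.0        # in spec, but on the low side
    rising_load = pump_pressure_bar > 4.5  # pump working harder than usual
    return slow_mixing and cool_batch and rising_load

print(viscosity_risk(105, 35.5, 4.8))  # True: alert before the blockage forms
print(viscosity_risk(105, 35.5, 4.0))  # False: only two of three conditions hold
```

Rules like this are exactly what per-machine monitoring misses, since no single machine ever leaves its normal range.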