The Four Necessary Steps to Predictive Asset Management

Tired of reacting to unexpected equipment failures, power outages, and unplanned downtime? If you are not actively transitioning from a reactive to a predictive asset management program, you are likely to deal with the same frustrations a year from now as you have in the past. Although hindsight will always be twenty-twenty, foresight has the potential to save your company millions in operating dollars.

Unfortunately for your company’s bottom line, it is against human nature to look for trouble when it doesn’t exist, creating a built-in “head-in-the-sand” response to the profit raiders that are always beating at the factory door. A Predictive Asset Management Model (PAMM) allows you to keep vigil over the potential danger signals coming from your equipment. Through this model, information is easily shared between staff members, improving communication and providing better feedback loops. Assets are ranked by potential risk, allowing for better preventive maintenance, planning, and parts support.

At the core of the predictive model is the data historian, faithfully recording your entire system’s process data. Once that data is reliably stored, your historian will allow you to create asset groups, monitor the related points, and carefully study the analytics that help you “look around the corners” and anticipate the equipment issues you can expect in the immediate future. The key application of predictive analytics in the power and energy sector is to flatten load curves, balance peak demand, and achieve optimum efficiency from generation sources. Successfully achieving these goals not only reduces overall cost, but also limits the need for new construction and infrastructure projects.

To shift your company into a more predictive asset management strategy, begin by following these four simple steps.

 

Step One – Gather Your Process Data

It is alarming to learn how many organizations do not accurately collect and store their process data. Even more upsetting is the number of businesses that retain only six to twelve months of historical process data.

A proper data historian should give you access to decades of your entire process data history using proprietary database technology. This is extremely difficult for a relational, SQL-based database. A typical power facility can easily exceed 100,000 data tags, and power distribution systems often have millions of points to monitor. Recording these volumes of data at one-second intervals would require write speeds that relational databases cannot sustain. Even if they could write at these speeds, managing the database would require full-time IT staff, burdening the very bottom line you are trying to improve.

Furthermore, what is the benefit of a data historian if retrieving information from it is slow and cumbersome? Retrieval speeds, or read speeds, are often overlooked when choosing a data historian. To load a trend chart with sixty days of one-second data for four points, you would need to read and present 20.7 million data points. At present, the Canary data historian can read 4.6 million points per second, allowing this chart to load in approximately five seconds. A relational data historian would struggle to recall the same information in less than thirty minutes.
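As a quick sanity check on these figures, the short sketch below (Python, purely illustrative) reproduces the arithmetic: the sustained write load for 100,000 one-second tags, and the time needed to read a sixty-day, four-tag trend at the retrieval rate quoted above.

```python
# Back-of-the-envelope math from the paragraphs above (illustrative only).
SECONDS_PER_DAY = 86_400

# Write load: 100,000 tags logged at one-second intervals.
tags = 100_000
writes_per_second = tags * 1          # 100,000 values every second
print(f"Sustained write load: {writes_per_second:,} values/sec")

# Read load: a trend chart of 4 tags over 60 days of one-second data.
points_to_read = 4 * 60 * SECONDS_PER_DAY
read_rate = 4_600_000                 # points/sec, the retrieval rate cited above
print(f"Points to read: {points_to_read:,}")                      # ~20.7 million
print(f"Approx. load time: {points_to_read / read_rate:.1f} s")   # ~4.5 seconds
```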

Most electric utilities and power facilities have data collection occurring in multiple locations. Substations, for example, can easily be incorporated into a centralized data historian, allowing your team to quickly monitor activity throughout the grid. Often, especially in rural utilities, these substations are checked manually only every few days and require staff to travel to the site. By monitoring the process data remotely, you can gain a better understanding of the entire system.

 

Step Two – Share the Data Throughout the Organization

Often, process data is made available only to control engineers. The consensus appears to be that only those who have control abilities need access to historical data. This could not be further from the truth. A more effective model is to provide as much data as possible to as many individuals as can benefit from it. Using a robust data historian with strong trending, alarming, and reporting software, you can effectively share process data without any worry about control access. The benefits of increasing your data’s reach are numerous. For instance, sharing analytics on transformer load can help utilities better understand abnormal loading patterns and the resulting deterioration in distribution transformer life, as well as confirm proper transformer sizing.

Increasing your process data availability also helps protect your organization from the “knowledge loss” that can occur as key personnel retire or leave the company. The more information that is shared across the company, the less chance of any one individual holding key knowledge that cannot be replaced or passed to others. In addition, sharing data helps prevent the dangerous mentality of “this is just how we do it.” Especially in asset management, that is a dangerous philosophy to adopt. Increased data availability will prompt more individuals to challenge the status quo of a typical reactive maintenance model.

The final benefit of sharing process data across the organization is a better sense of team. When more individuals are included, collaboration and cooperation soon follow. A group effort will be required in any organization if the bottom line is to be affected. In this application, it will likely take a team to determine the proper algorithms, such as regression analysis, that will need to be implemented to create a successful Predictive Asset Management Model. It will also take a team to make key decisions on which assets to monitor, and which upgrades and repairs are most prudent.
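As a hint of what such an algorithm might look like, here is a minimal regression sketch in Python. The tag values and the drift it finds are hypothetical; a real implementation would pull daily averages for a monitored tag from the historian.

```python
# A minimal sketch of the kind of regression analysis a team might start with:
# fit a straight line to daily averages of a single tag and inspect its slope.
# The values below are hypothetical placeholders, not real process data.
import numpy as np

daily_avg_temp = np.array([930.2, 930.9, 931.4, 932.0, 932.8, 933.1])  # degrees F
days = np.arange(len(daily_avg_temp))

slope, intercept = np.polyfit(days, daily_avg_temp, 1)
print(f"Trend: {slope:+.2f} degrees F per day")  # a persistent positive slope
                                                 # flags a slow drift worth review
```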

 

Step Three – Create Asset Categories

Alarming has become a standard tool in data historian offerings, but are you maximizing its potential? Most companies leverage alarming software as a notification service, setting tag limits and receiving text or email alerts if a limit is reached. Does this sound familiar? If so, you can liken this approach to making a grocery run in a Ferrari 458 Speciale, painfully inching along at thirty miles an hour the entire way. Will it get you to the market and back home? Sure, but you will never appreciate the full performance of its V8. Similarly, the Canary alarming software will certainly notify you of a high/low limit event, but using it only in this way neglects its powerful asset management capabilities.

First, identify your asset and the group of tags that will serve as performance indicators. For instance, if you wanted to manage a pump, you might monitor ten to twenty points, including vibrations, temperatures, flows, and pressures. Or you might choose to monitor a group of transformers, watching for voltage issues that could shorten a transformer’s life.

Ensure each tag is clearly labeled so you can easily identify and relate the tag to the asset. Then establish the normal operational thresholds for each data point. Note that these will probably be considerably “tighter” than typical notification points; focus less on critical values and more on ideal operating values. With the Canary software, you can now set alarm boundaries at the top and bottom of these ideal thresholds for each data point. You can also create logic rules within your alarm. For instance, you may only be worried about crankcase vibration if it reaches a certain level and maintains that level for more than five seconds.
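To make the logic concrete, the sketch below implements that kind of sustained-level rule in plain Python. The values, threshold, and hold time are hypothetical, and this is not Canary configuration syntax, only an illustration of the rule itself.

```python
# A sketch of the "sustained level" rule described above: alarm only when a
# reading stays above its threshold for a minimum duration.
def sustained_alarm(samples, threshold, hold_seconds, sample_rate_hz=1):
    """Return True if `samples` exceed `threshold` for at least `hold_seconds`."""
    needed = hold_seconds * sample_rate_hz
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= needed:
            return True
    return False

vibration = [0.18, 0.26, 0.27, 0.28, 0.29, 0.30, 0.31, 0.24]  # in/sec, hypothetical
print(sustained_alarm(vibration, threshold=0.25, hold_seconds=5))  # True
```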

Finally, decide to what degree tag alarms determine asset alarms. Do three separate temperature alarms cause your asset to alarm? Or would a combination of one pressure and one vibration alarm be a better indicator? You determine the logic and the rules. Remember, you are not adding notification services; these alarms run in the background and will not interrupt your process. If you have similar assets across the process, copy this template and apply it to them as well. Continue this process and construct asset and tag alarms for your entire asset portfolio.
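The sketch below illustrates one way such asset-level logic could be expressed, again in plain Python rather than Canary's own rule builder. The tag names and the specific rule (three temperature alarms, or one pressure plus one vibration alarm) simply mirror the examples above.

```python
# A sketch of asset-level rules built from individual tag alarm states.
def asset_in_alarm(tag_alarms):
    """`tag_alarms` maps tag name -> True/False alarm state."""
    temps = sum(tag_alarms[t] for t in tag_alarms if t.startswith("temp"))
    press = any(tag_alarms[t] for t in tag_alarms if t.startswith("press"))
    vibes = any(tag_alarms[t] for t in tag_alarms if t.startswith("vib"))
    # Asset alarms on three temperature alarms, or one pressure plus one vibration.
    return temps >= 3 or (press and vibes)

pump_01 = {"temp_bearing": True, "temp_case": True, "temp_oil": False,
           "press_discharge": True, "vib_crankcase": True}
print(asset_in_alarm(pump_01))  # True: one pressure and one vibration alarm
```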

 

Step Four – Start Small

Jim Collins, in his business book ‘Great by Choice’, outlined the importance of starting small. Through empirical testing, an organization can identify what works with small, calibrated tests before deploying a process across the entire organization. Collins called this concept “firing bullets, then cannonballs.” The idea is that bullets cost less, are low risk, and cause minimal distraction. Once on target with measured results and a proven history, the bullets should be replaced with a cannonball: the robust rollout of a new strategy with the full backing and support of the organization and all available resources powering it.

Apply this “bullet then cannonball” approach to begin your new predictive asset management program. To start with your entire system would be a mistake. Instead, start small. Choose a few asset groups and monitor them, comparing your results with the rest of the organization. For instance, you may choose to monitor 5,000 of your available 120,000 transformers for load versus capacity over the next three months. At the end of this period, employ the alarming software’s analytics and review your assets. Sort your identified assets by the number of alarms they received, then look deeper into each of your higher alarm count assets. Review the data points that define those assets and study the historical data to get a better sense of what may have gone wrong.
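The review itself is straightforward; the sketch below shows the sorting step with hypothetical asset IDs and alarm counts.

```python
# Sort monitored assets by alarm count and list the worst offenders for review.
# Asset IDs and counts are hypothetical.
alarm_counts = {"XFMR-0041": 17, "XFMR-0187": 3, "XFMR-0922": 28, "XFMR-1204": 0}

worst_first = sorted(alarm_counts.items(), key=lambda kv: kv[1], reverse=True)
for asset, count in worst_first[:3]:
    print(f"{asset}: {count} alarms in the review period")
# XFMR-0922 and XFMR-0041 would be the first assets to drill into.
```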

Use the time shift tool to look further back in your data and compare current trends with the same data from one or two years ago. A sudden change is easily identifiable; a slow, gradual change can be nearly impossible to perceive, and these slow, gradual changes are exactly what you are trying to identify. Often, time shifting thirty or sixty days does not help, simply because the shift is not extreme enough.

To illustrate this point, imagine that instead of transformers, you choose to monitor several CAT 3500 generators. A key indicator of engine performance is the exhaust gas temperature (EGT). Generally, during operation, these temperatures hover around 930 degrees Fahrenheit but have an acceptable variance of plus or minus fifty-five degrees. Maintaining acceptable exhaust temperatures is important to the overall health of these engines, so you decide to track them historically, comparing live data to historical data from thirty days prior.

If the exhaust temperatures began to increase by fifteen percent month over month, you would easily identify that trend visually. But what if they were increasing by only one-third of a percent each month? Would you be able to see that change, especially with a daily operational variance of nearly forty-five degrees? A small change of less than one percent would typically go unnoticed, resulting in no further analysis. However, there is likely an underlying issue that needs to be diagnosed and that may lead to machine downtime or future inefficiency.
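The arithmetic is easy to verify. Compounded, a one-third-of-a-percent monthly rise on a 930-degree baseline is only about three degrees in the first month, well inside the normal operating swing, yet it grows to more than seventy degrees over two years, which is exactly the long-horizon comparison discussed next.

```python
# Quick arithmetic behind this example: a one-third-of-a-percent monthly rise
# in EGT, compounded over two years, against a nominal 930 degrees F.
nominal_egt = 930.0          # degrees F
monthly_growth = 1 / 3 / 100 # one-third of one percent per month

after_one_month = nominal_egt * (1 + monthly_growth)
after_two_years = nominal_egt * (1 + monthly_growth) ** 24

print(f"After 1 month:   +{after_one_month - nominal_egt:.1f} F")  # ~3 F, invisible
print(f"After 24 months: +{after_two_years - nominal_egt:.1f} F")  # ~77 F, obvious
```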

Enter the importance of longer time shift intervals. By comparing that same EGT tag to data from two years earlier, you would see a variance of over seventy degrees. Even then, due to the operating fluctuation of fifty-five degrees, you may not take action. However, if you leveraged another tool, the Time Average Aggregate, you could smooth the EGT data. By comparing four hours of current EGT data with a sixty-second time average to four hours of EGT data from two years ago with that same sixty-second time average, you are much more likely to notice the resulting change. This long-term time shift practice is invaluable and should be implemented often.
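For illustration only, the sketch below mimics that idea with pandas: four hours of noisy one-second EGT data, resampled to sixty-second averages, compared against the same window from two years earlier. The data is synthetic and this is not the Canary API, just the underlying concept of a time-average comparison.

```python
# Smooth noisy one-second EGT data into sixty-second averages, then compare
# the current window with the same window from two years ago. Synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
seconds = 4 * 3600  # four hours of one-second samples

def egt_series(start, mean):
    idx = pd.date_range(start, periods=seconds, freq="s")
    return pd.Series(mean + rng.normal(0, 20, seconds), index=idx)

old = egt_series("2016-06-01", mean=930.0)   # two years ago
now = egt_series("2018-06-01", mean=1005.0)  # ~75 F hotter today (hypothetical)

old_avg = old.resample("60s").mean()
now_avg = now.resample("60s").mean()
print(f"Shift in 60-second averages: {now_avg.mean() - old_avg.mean():+.1f} F")
```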

Returning to the example of transformer monitoring, take all of the information gathered from the assets in alarm, the individual data points, and what is learned from time shifting, and make educated decisions. Adjust some of your alarm points as needed, and repeat the ninety-day monitoring process. Continue to reorder your tag groups, refine your operational thresholds, and adjust alarm rules until you are comfortable with the results.

Once the initial monitoring is complete, make transformer upgrades and replacements based on the findings. Spend the next six months comparing the performance of the initial small sample group of transformers to the performance of the rest of the transformers outside of that group.

How much manpower was saved? How many power outages could have been avoided? Would customer satisfaction have increased? What could this do to the bottom line?

Now that the Predictive Asset Management Model has been proven beneficial, it can be applied to all assets. Doing so will result in less paid overtime, fewer power outages, a healthier bottom line, and happier customers.

 

Conclusion

A Predictive Asset Management Model has many advantages, including the accurate forecasting and diagnosis of problems with key equipment and machinery. By properly gathering your process data and sharing it across your organization, you can begin monitoring asset groups and become proactive in maintenance, servicing, and repairs. Doing so will ensure you experience less unplanned downtime, stronger customer satisfaction ratings, and higher profit margins. However, a robust and capable data historian is the cornerstone of a successful Predictive Asset Management Model. Without easy access to your historical process data and an accessible set of powerful analytical tools, your company will be forced into reacting when it should have been predicting.

 

 