
How to Adopt a Predictive Analytic Solution


SAP recently posted an article titled "10 Myths About Predictive Analytics".  It is a great read and worth your time.  The article debunks the following myths about predictive analytics:
  1.  Predictive analytics is easy
  2.  Scientific evidence is proof
  3.  Only what you can measure matters
  4.  Correlation = causation
  5.  Predictions are perfect
  6.  Predictions are forever
  7.  You need a skilled consultant to implement predictive analytics
  8.  Predictive analytics is mostly a machine problem
  9.  Predictive analytics are expensive
  10. Insights = action
On a recent phone call, a distributor and I discussed a potential client.  This client manages a fleet of more than 100 large sea vessels and is looking for a solution to monitor the entire fleet, as well as to move their maintenance model away from interval-based and run-to-failure approaches toward a predictive or auto-prognostic model.
This particular client wants an "all-in-one" solution that will carry them from their current state, in which no historical process data is collected or shared, to operating under a full machine learning / predictive model.  The challenge is that the client is focused on an "out of the box" product that can offer them everything they want.
You may be chuckling to yourself because you have had interactions with a similar client, or perhaps you are seeking the same type of solution yourself.  I would suggest a different approach, one I commonly refer to as the "crawl-walk-run" plan.

The most basic first step is the collection of large amounts of process data.  The client's top priority needs to be a solution that records every sensor defining their many assets and stores the readings in a central database.  This database needs to be "loss-less", meaning the data is never degraded over time to save storage space.  For instance, if you collect a pressure reading at one-second intervals, a "loss-less" database keeps every individual reading for as long as you like.  Many data historians do not offer this feature.  Instead, after a few months, they turn one-second data into 60-second or 5-minute averages.  For some industries or uses, this might be acceptable.  However, if you plan to employ algorithms to scour your data looking for correlations, you don't want to feed them processed time averages.
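The cost of time-averaging can be shown with a small, purely illustrative sketch: a one-second pressure spike survives in the raw readings but all but vanishes once the data is rolled into an average.

```python
# Illustrative only: a 1-second pressure spike in five minutes of data.
raw = [100.0] * 300          # five minutes of 1-second readings (psi)
raw[150] = 180.0             # a single-second spike at the 150 s mark

avg = sum(raw) / len(raw)    # what a lossy historian might keep instead

print(max(raw))              # 180.0 -- the spike is obvious in raw data
print(round(avg, 2))         # 100.27 -- averaged, it all but disappears
```

An algorithm hunting for correlations would see the spike in the raw series and nothing at all in the averaged one.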

Once the data is being stored, the client must put together a plan for sharing that data around the organization.  Centralizing data is one thing, but if the data is not easy to access and report from, the information will stay locked in the database with very few individuals gaining any value from it.  To make this possible, the database must offer a variety of connections, including ODBC and custom APIs.  It is likely the process data will need to be combined with other data and moved into other systems.
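As a hypothetical illustration of why that matters, here is a minimal sketch, with made-up asset names and dates, of joining historian process data against a maintenance log, the kind of cross-system merge that open connectivity such as ODBC or an API makes possible:

```python
# Hypothetical example: joining historian process data with a
# maintenance log by asset ID. Asset names and dates are made up.
process_data = [
    {"asset": "PUMP-01", "avg_vibration": 0.42},
    {"asset": "PUMP-02", "avg_vibration": 0.91},
]
maintenance_log = {"PUMP-01": "2023-01-10", "PUMP-02": "2022-03-02"}

# Merge the two sources into one combined view per asset
combined = [
    {**row, "last_service": maintenance_log.get(row["asset"], "unknown")}
    for row in process_data
]
print(combined[1]["last_service"])   # 2022-03-02
```

Without an open connection into the historian, this kind of simple cross-reference would require manual exports.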

Once the data has been collected and effectively made available to all who can consume it, advanced analytics such as predictive maintenance schedules can be created.  Getting to this step, however, will take time.  Not because the organization will move slowly, nor because it will be difficult to collect and distribute the process data.  The reason this final step takes time is the nature of predictive analytics itself.  For the tools to function properly, they need years of process data to learn from.  Machine learning is only as good as the data it learns from, and adequate history is a requirement.

Therefore, I made this recommendation to our distributor: tell your client to focus on the first two parts of this process.  Today they should worry less about the analytic solutions that are available, and instead focus on finding a strong, reliable database capable of storing sub-second data from millions of sensors.  That database cannot be SQL-based and must be fast.  Recall speed will be of paramount importance to the future process and must be considered during selection.  Focusing on the best machine learning available today, instead of the best data historian available today, would be a mistake for several reasons.
  1. Technological advancements - most predictive analytic companies are less than five years old, which is a strong indicator that the market is quickly changing and advancing by the month.  The best solution today is unlikely to be the best solution in two or three years when they are ready to move forward.
  2. Pricing changes - like most new technologies, the price points of predictive analytics have already started to fall, and will drop further over time.  Locking in pricing today seems unnecessary when you can nearly guarantee the pricing model will change over the next two years.
  3. Volatile industry - as mentioned in the first point, this is a relatively young market and is full of start-ups.  It seems like an unnecessary risk to form a partnership now when it won't be necessary for another few years.  Given time, leaders will emerge and hold fast, giving a better indicator of who should (and should not) be partnered with.
As with any solution, always start with the basics first.  Find a historian that can handle the millions upon millions of data points you are going to point toward it.  Once the data is collected, then begin to focus on what you can learn and how the organization can benefit.


Monitoring Multiple Locations Through Corporate Mirroring


The Corporate Enterprise Solution

Canary software was designed for full enterprise solutions. The historian can scale to handle 25 million tags and hundreds of sites and remote logging locations, and it can be installed with redundancy that ensures both data integrity and security. The system is highly customizable, allowing each application to be designed for the specific customer's needs.

Redundancy and Communication Outages

The Canary Historian receives data through the Canary Logger and Store and Forward Service. Store and Forward is comprised of two components, a Sender Service and a Receiver Service. These two services communicate using Windows Communication Foundation (WCF) and all data is encrypted during communication.

The OPC server, Canary Logger, and Store and Forward Service can be installed on the same machine as the Canary Enterprise Historian or sit on their own independent machine and connect to several historians across multiple networks. If contact is lost between the Sender and Receiver Service, the Sender Service will cache data to local disk. When communications return, the cached data is transferred to the historian in time sequence order and removed from the Sender Service.
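The store-and-forward behavior described above can be sketched in a few lines of Python. This is a simplified illustration only; the class and method names are invented and do not reflect Canary's actual implementation.

```python
# Simplified sketch of the store-and-forward pattern: when the
# receiver is unreachable, samples are cached locally, then flushed
# in time-sequence order once the connection returns.
class Receiver:
    def __init__(self):
        self.online = True
        self.data = []           # stands in for the historian's store

    def store(self, timestamp, value):
        self.data.append((timestamp, value))

class SenderService:
    def __init__(self, receiver):
        self.receiver = receiver
        self.cache = []          # stands in for the local-disk cache

    def send(self, timestamp, value):
        if self.receiver.online:
            self.receiver.store(timestamp, value)
        else:
            self.cache.append((timestamp, value))

    def reconnect(self):
        self.receiver.online = True
        for timestamp, value in sorted(self.cache):  # time order
            self.receiver.store(timestamp, value)
        self.cache.clear()       # cached data removed after transfer

receiver = Receiver()
sender = SenderService(receiver)
receiver.online = False          # simulate a communication outage
sender.send(2, 10.5)
sender.send(1, 10.1)
sender.reconnect()               # cached data flushed in time order
print(receiver.data)             # [(1, 10.1), (2, 10.5)]
```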

Not only can one Sender Service connect to more than one historian, but multiple logging machines can also be configured and networked to one historian. These loggers can be set up across multiple networks to monitor remote sites as well. The Canary system does not limit the number of loggers used, and there is no additional charge for adding extra loggers as needed.

Data Mirroring

The Canary Enterprise Historian provides the mirroring of stored data on multiple site historians to provide high levels of data redundancy as well as to simplify data retrieval. The Mirror Service allows for both live data as well as daily batch uploads, and can be configured based on the DataSet the data is housed in.

A Corporate Enterprise Historian is not limited to pulling data through only the Canary Mirror Service. Since each individual Store and Forward Service can point to multiple historians, local site loggers can be configured to push data to both the local site historians as well as the corporate historian. This model allows for both site and corporate locations to receive real-time data as well as increases communication and database redundancy.
With appropriate security privileges, data reads can be made across any of the connected historians, site or corporate. Using Microsoft Windows security infrastructure, users can be given access to multiple data sets and tag values across multiple sites while still being restricted to other historians or tag groups.

System Capacity and Performance

When outlining capacity and performance at the Enterprise level it is important to distinguish between two different historian roles, site and corporate.
At the site or local level, an individual historian can log up to 1,000,000 tags. This is accomplished through a minimum of fifteen individual logging sessions, each communicating to the local historian through Canary’s Store and Forward Service.

At the historian or corporate level, a single Mirror Historian can support 25,000,000 tags. The Mirror Historian is updated daily from all local site historians. If live data is required, the Mirror Historian can be configured to handle specific live data feeds, determined by DataSet, while still receiving daily file updates. The live data can be pushed directly from the logging session at the local site, or it can be sent from the site historian. This flexibility allows each application to be customized for the client's individual needs and limitations, such as bandwidth, budget, and data requirements.

Axiom Version 11.1 Released


In a continuous effort to ensure that Canary software is best in class, we are pleased to announce the release of 11.1.0.  Several new features are debuting with 11.1.0, including the ability to connect the Views service to OPC HDA as well as OPC UA, event playback, and the saving of design templates.  Read further for the complete list of changes.

Data Historian Changes Made in Version 11.1.0

  • Canary Admin: changed Home tab scaling to better fill the area without having a scroll bar.
  • Canary Admin: added ability to save credentials for endpoints requiring username/password.
  • Security: fixed problem with user not recognized as part of Administrators group with User Access Control enabled.
  • Canary Admin: added more F1 help links for easier browsing.
  • Canary Admin: added keep-alive timer on home screen when it goes out of view to prevent the service from timing out.
  • Views Service: refactored to support specific Axiom functionality and the Views plug-ins. Rewrote existing customer’s Plug-in to new interface.
  • Views Service: fixed Axiom's history data issue causing no data when going into live mode.
  • Views API: added some new interfaces for retrieving data to support UA functionality.
  • Views Service: added new plug-ins for OPC HDA and OPC UA for displaying data from 3rd party servers in Axiom.
  • Historian: fixed issue with time extension on annotations coming through Store and Forward.
  • Security: fixed issue if server name in chart file does not match case of server name.

Axiom Changes Made to Version 11.1.0

  • Added preference template to AxiomTrend.
  • Added AxiomTrend layout option to load a group of charts into saved locations.
  • Added Playback option to AxiomTrend/AxiomView.
  • Added ability for an Administrator to perform file management in the ReadOnly folder.
  • AxiomChartConversion: fixed legend display property not converting as expected because some trend link files had -1 for a column visibility indicator.
  • AxiomView startup performance improvements.
  • ChartConversion update to handle Fluke HDA.
  • Added default template into the UserFiles\All ReadOnly.
  • Added "stop live mode" in the RemoveAllTrends method to keep the Core and Client in sync.
  • Changed the AxiomCore to retrieve the list of aggregates from the web service.
  • AxiomView: added new navigate property on button that allows switching screens without using script.
  • AxiomView: corrected UTC display issues for value bar and absolute time fields.
  • AxiomView: preserve time component when using chart date pickers.
  • Browser Client: "No Data" quality was not always being caught and would trend 0. Added different check for bad quality.
  • AxiomCore: corrected logic that was causing intermittent "No Data" values to appear in the client.
  • Fixed problem with "No Data" being displayed at the Live Edge while in Live Mode.
  • Updated help files to reflect the new Metro look of Axiom.
  • AxiomCore: corrected not being able to save annotation on trend with multiple aggregates.
  • AxiomView: correct live mode not being restored when undoing time change.
  • AxiomCore: correct web service license not releasing when Axiom chart shut down.
  • AxiomView: correct graphic storage error when running multiple instances.
  • Axiom Browser Client: value bar was not being re-positioned when trends were resized.
  • AxiomTrend: corrections to statistics Samples, %valid and algorithms.
  • Fixed problem of licenses reserved for Administrative groups not being used.
  • Updated Axiom Browser client to only consume 1 license for clients on multiple tabs.

Interested in trying a demo license of our product?


What Every Process Engineer Can Learn From The Goonies


For the 80's Kid Who Loves Process Data

As an 80’s kid myself, I doubt there is any movie I have watched more often than “The Goonies”. While recently watching the Goonie gang, I realized that this fabled childhood film has some serious crossover into my adult world of industrial process data. So read below and answer the question I can guarantee you’ve never been asked before… What can "The Goonies" teach me about my process?
"The Goonies", a Richard Donner film produced by Steven Spielberg


Chunk

One of the most lovable of all the Goonies, Chunk is the King of Exaggerations.  His stories are always over the top, and he makes it nearly impossible to know whether what he says is true.  Sure, he means well, but as a result of all his exaggerations, when he has something important to share, like when he needs to call the police and report the Fratelli clan, his call falls on deaf ears.

The same can hold true for your process, generally as a result of improperly set alarms.  Ask your SCADA operator how many nuisance alarms they receive on a regular basis, often because the system was not set up properly.  Alarms are useless if they aren't set to the correct thresholds and aren't notifying the right people.


Data

An innovator, Data has a solution for every potential roadblock.  Being chased across a log by a pair of crooks?  No problem, slick shoes to the rescue.  Falling down a huge shaft toward your death?  Pinchers of Power to the rescue!  The only issue is that Data's solutions are untested and his outcomes are rarely on target.

Pay attention because this can hold true for your process as well!  What are you currently assuming works, even though it has not been properly tested?  Before you make a change in your process, what measures do you and your team take to ensure that you have properly validated your results?  When testing a new procedure, do you start small first?  Remember, as we have said before in Step Four of our Predictive Asset Management article, shoot bullets, then cannonballs!


Mouth

Loud and rather obnoxious, it seems every group of friends had a "Mouth".  Always pushing the envelope, Mouth helped drive the Goonies forward when they might have wanted to quit, and his Spanish skills came in handy repeatedly.  Chances are, even today, you probably have a friend like Mouth.  Sometimes you can't remember exactly why you hang out with them, and often you wish you didn't.  The funny thing is, they have a habit of always coming through when you need them.

That's why we feel it is so important to record all of your process data, all of the time!  Too often companies will only save process data for a set period of time, or perhaps won't monitor all of their points and tags.  Learn from Mouth: you never know when something that seems unimportant will become crucial to your operation.  If it's there, record it!


Mikey

Unarguably the dreamer and leader of the group, Mikey was positive that finding One-Eyed Willie’s treasure was possible.  He showed a profound respect for One-Eyed Willie and found as much value in the adventure as he did in the gold.  For Mikey, it was all about the journey, and every member of the Goonies had to be involved for it to count.

Understanding how your group of tags interact together to complete your process is crucial.  To truly find your treasure, you have to better understand the relationship between your equipment and your production results.  A product like Axiom is key for understanding how your data interconnects and relates.


Brand

Mikey’s older brother, Brand, is convinced that the Goonies are chasing a myth.  Unlike his brother, Brand is willing to accept his present situation rather than make any changes to improve it.  In fact, the only time he tries to fix a situation is to avoid getting into too much trouble.

If you find yourself constantly repairing equipment after it breaks, chances are you are spending too much time reacting and need to shift towards a more predictive asset management model.  Don't wait for things to go wrong or simply accept that your process cannot improve.  Download our free whitepaper and follow these four steps to shift to a predictive asset management model!


Andy

Timid every step of the way, Andy is afraid of the challenge, but she learns to overcome her fears through small victories, like playing the skeleton organ.  Although she doesn't start out a Goonie, by the end of the adventure she is one of the gang.

Training new staff, especially control operators, can be a big job.  Using software that is quick to learn and simple to use can help reduce the training burden.  Read about the personal experience of one of our own staff members when they learned to use Axiom.  You can further help new operators by providing operation standards they can follow.  Using Axiom to set threshold limits, both high and low values, for key data points can be a great practice to help acclimate a new individual to the team.


Sloth

Anyone who would make a character judgment based on his appearance would miss one of the most protective and caring members of the Goonies.  Sloth had much more to offer than a quick glance would ever reveal.

Likewise, just looking at your data without context will never provide a clear picture of what is really happening.  Using tools like time-shifting, however, will give you a much better idea of how your current data compares historically, especially for small incremental changes that would be impossible to spot with the naked eye alone.

One-Eyed Willie

Mikey names Willie the original Goonie.  The pirate from the old legend found his treasure and then carefully kept anyone from it, refusing to share his knowledge with others.

To truly empower your organization and transform your process, make your data available to as many people as possible!  Axiom allows for many users, even off site consultants.  Allow others to look in on the process, to compare live and historical data, and run complex calculated trends.

Would you like to learn more about Canary software?  Try our software for free!


Know someone that loves the Goonies?  Share this article with them!


Monitor Your Process and Make Informed Decisions from Anywhere



Axiom Keeps Your Process Accessible

Every day you use mobile technology, sometimes to book a flight, arrange for a cab ride, or check your bank account. How often do you use it to monitor your process and ensure operations are running smoothly? If you can’t currently use your smartphone or tablet to quickly and easily view your process data, you should consider Canary’s Axiom solution.

Axiom is a multi-platform visualization tool that transforms time-series process data into usable information through trending and dynamic KPI displays. Because it is web-browser based, you can quickly log into Canary software and remotely monitor your entire process from practically anywhere. There are no applications to download or install; simply enter your username and password. Software updates occur automatically, requiring no additional IT support.

Since Axiom does not feature “control ability”, there is no need to worry about accidentally interfering with the process. Instead, you can feel confident knowing that you, management, consultants, and any other key staff now have the ability to watch operations even when they are not in the building. This can be especially helpful for companies with multiple sites and locations. No matter where you are, you can quickly and easily “look in” on the process at any facility and see what the operator sees. Axiom allows for custom-built displays that can identically match your process.
Trending software for your mobile device

Do more with your process data. Trend points, run calculations, compare several pump efficiencies, or compare current values against historic values. All are possible with Axiom, and can be achieved on a laptop, tablet, or smartphone. You already have the data, Axiom gives you the knowledge.


Smart SCADA Software: Logic Meets Process Data


Trending Tools for SCADA Systems

Too many operators and engineers rely solely on their SCADA and HMI displays for process feedback.  While these outlets may be suitable for live, "in the moment" process management, they are not useful for process review, especially if you have access to a process historian.  Just as game film is crucial to the success of an NFL quarterback, a solid data historian with strong trending and process data analytics is a must-have if you hope to better your industrial process.

Adding Logic to the Calculated Trend

We have previously written about the Calculated Trend tool and how it allows you to better compare multiple data points and study correlations.  If you have not begun to take advantage of this free Axiom tool, you should start doing so immediately.  Any proper data historian system should be equipped with a robust calculated trend tool.
If you need to watch a quick video on how to use this tool, take four minutes and watch this Canary University tutorial featuring the Calculated Trend.

Leveraging the IF Statement

To demonstrate the power of the IF statement, let's assume you are operating a piece of heavy machinery outdoors and are concerned about the summer heat and its potential effect on your equipment.  You have several data points you are measuring, including air temperature, humidity, wind speed, cooling fan operation, and an engine temperature point.
To begin, you decide to define a potential environmental threat.  You do so by declaring that your equipment is at risk when the air temperature is over 90 degrees, the humidity is above 85 percent, and the wind speed is below 5 miles per hour.  You decide that you would like to measure the frequency with which the cooling fan is operating (off equals a value of 0 and on equals a value of 1) during these "at-risk" periods.
Using the Calculated Trend tool and the IF statement, you can create a calculated trend with the following expression:

if([Cooling Fan Off/On]=0,if([Humidity]>85,if([Wind Speed]<5,if([Air Temperature]>89.9999,[Cooling Fan Off/On],'!NODATA!'),'!NODATA!'),'!NODATA!'),'!NODATA!')
Once entered, you overlay the calculated trend onto the Cooling Fan chart and color it yellow. You can now easily visually differentiate when the "at-risk" conditions are present and the cooling fan is not operating.
This is helpful, but with the power of the calculated trend tool, there is no need to stop there.  Let's go a step further and also overlay the gear temperature when it is above the 470-degree limit.  To do this, I created a new Calculated Trend, specifically designed to capture the gear temperature readings only when all three environmental factors are at risk and only when the gear temperature is over 470 degrees.  The formula looks like this:
if([Cooling Malfunction]=0, if([Gear Temperature]>470,[Gear Temperature],'!NODATA!' ) ,'!NODATA!')
I started the trend calculation and then dragged the new trend line up onto the Cooling Fan trend, setting the bottom scale of the Gear Temperature to 470 and locking it into place with the Cooling Fan's low scale of 0.  The end result is a visual overlay of Gear Temperature when at risk directly on top of Cooling Fan when it is not operating.  I added a limit with shading to make the trend stand out, and drilled down specifically on the time interval where the machinery was at risk.
If I wanted to take it one step further, I could again overlay this data directly on top of the Gear Temperature for further visual cues.  Note, this is not the same as a high limit because I am specifically only interested in the high limit of 470 when it also coincides with my environmental "at-risk" period.  Finally, I also added a time aggregate of the Gear Temperature at 2 minute intervals (orange) so I can also quickly compare the at risk Gear Temperature to the standard baseline.
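For readers more comfortable with code than with nested IF syntax, the first calculated trend above can be sketched as an equivalent Python function, with NODATA standing in for Axiom's '!NODATA!' marker:

```python
# Python rendering of the nested IF calculated trend. The thresholds
# match the example in the text; NODATA stands in for '!NODATA!'.
NODATA = None

def at_risk_fan_off(fan, humidity, wind_speed, air_temp):
    """Return the fan state only when all at-risk conditions hold
    and the cooling fan is off; otherwise return no data."""
    if fan == 0 and humidity > 85 and wind_speed < 5 and air_temp > 89.9999:
        return fan
    return NODATA

print(at_risk_fan_off(0, 90, 3, 95))   # 0 -- at risk and fan off
print(at_risk_fan_off(1, 90, 3, 95))   # None -- fan is running
```

Each '!NODATA!' branch in the Axiom expression corresponds to one failed condition here; collapsing the four nested IFs into a single `and` clause makes the intent easier to read.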
This is just a simple example of a way you can quickly use logic in a calculated trend.  How might you better understand your process if you applied similar concepts to your system?  Want to try Axiom for free?  Just let us know!


The Four Necessary Steps to Predictive Asset Management


Process Historians Empower a Predictive Asset Management Model

Tired of reacting to unexpected equipment failures, power outages, and unplanned downtime? If you are not actively transitioning from a reactive to a predictive asset management program, you are likely to deal with the same frustrations a year from now as you have in the past. Although hindsight will always be twenty-twenty, foresight has the potential to save your company millions in operating dollars.

Unfortunately, for your company’s bottom line, it is against human nature to look for trouble when it doesn’t exist, creating a built-in “head-in-the-sand” response to the profit raiders that are always beating at the factory door. A Predictive Asset Management Model (PAMM) allows you to keep vigil on the potential danger signals coming from equipment. Through this model, information is easily shared between staff members, increasing communications and providing better feedback loops. Assets are ranked by potential risk, allowing for better preventative maintenance, planning, and parts support.

At the core of the predictive model is the data historian, faithfully recording your entire system’s process data. Once reliably stored, your historian will allow you to create asset groups, monitor the related points, and carefully study the analytics that will help you “look around the corners” and see what equipment issues you can expect in the immediate future. The key application of predictive analytics in the power and energy sector is to flatten load curves, balance peak demand, and achieve optimum efficiency from generation sources. Successfully achieving these goals not only reduces overall cost, but also limits the need for new construction and infrastructure projects.

To shift your company into a more predictive asset management strategy, begin by following these four simple steps.

Step One – Gather Your Process Data

It is alarming to learn how many organizations do not accurately collect and store their process data. Even more upsetting is the number of businesses that only maintain a six to twelve-month historic process database.

A proper data historian should give you access to decades of your entire process data history using proprietary database technology. This is extremely difficult for a relational (SQL) database to achieve. A typical power facility can easily exceed 100,000 data tags, and power distribution systems often have millions of points to monitor. Recording these volumes of data at one-second intervals would require write speeds that are unavailable to relational databases. Even if they were able to write at these speeds, the database management would require full-time IT staff, burdening the same bottom line you are attempting to improve.

Furthermore, what is the benefit of a data historian if retrieving information from it is slow and cumbersome? Retrieval speeds, or read speeds, are often overlooked when choosing a data historian. To load a trend chart with sixty days of one-second data for four points, you would need to read and present 20.7 million data points. At present, the Canary data historian is capable of reading 4.6 million points per second, allowing this chart to be loaded in approximately five seconds. Relational based data historians would struggle to recall this same information in less than thirty minutes.
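The arithmetic behind that example is easy to verify:

```python
# Sixty days of one-second data for four tags, read back at
# 4.6 million points per second.
points = 60 * 24 * 60 * 60 * 4        # 20,736,000 data points
seconds_to_load = points / 4_600_000  # read time at 4.6M points/s

print(points)                     # 20736000
print(round(seconds_to_load, 1))  # 4.5
```

That works out to roughly four and a half seconds, consistent with the "approximately five seconds" figure above.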

Most electric utilities and power facilities have data collection occurring in multiple locations. Substations for example, can easily be incorporated into a centralized data historian, allowing your team to quickly monitor activity throughout the grid. Often, especially in rural utilities, these substations are only manually checked every few days, and require staff to travel to their location. By monitoring the process data remotely, a better understanding of the entire system can be achieved.

Step Two – Share the Data Throughout the Organization

Often process data is made available only to control engineers. The consensus appears to be that only those with control abilities need access to historical data. This could not be further from the truth. A more effective model is to provide as much data as possible to as many individuals as can benefit from it. Using a robust data historian with strong trending, alarming, and reporting software, you can effectively share process data without any worry about control access. The benefits of increasing your data’s reach are numerous. For instance, sharing analytics on transformer load can help utilities better understand abnormal loading patterns and the consequent deterioration of distribution transformer life, as well as help confirm proper transformer sizing.

Increasing your process data availability will also help protect your organization from the “knowledge loss” that can occur as key personnel retire or leave the company. The more information that is shared across the company, the less chance of any one individual holding key knowledge that cannot be replaced or passed to others. In addition, sharing data will help prevent the dangerous mentality of “this is just how we do it.” Especially in asset management, this is a dangerous philosophy to adopt. Increased data availability will cause more individuals to challenge the status quo of a typical reactive maintenance model.

The final benefit of sharing process data across the organization is a better sense of team. When more individuals are included, collaboration and cooperation soon follow. A group effort will be required in any organization if the bottom line is to be affected. In this application, it will likely take a team to determine the proper algorithms, like regression analysis, that need to be implemented to create a successful Predictive Asset Management Model. It will also take a team to make key decisions on which assets to monitor, and which upgrades and repairs are most prudent.

Step Three – Create Asset Categories

Alarming has become a standard tool available in data historian offerings, but are you maximizing its potential? Most companies leverage alarming software as a notification service, setting tag limits and receiving text or email alerts if that limit is reached. Does this sound familiar? If so, you can liken this approach to making a grocery run in a Ferrari 458 Speciale, painfully inching along at thirty miles an hour the entire way. Will it get you to the market and back home? Sure, but you will never appreciate all the performance of its nearly 600-horsepower V8. Similarly, the Canary alarming software will certainly notify you of a high/low limit event, but only using it in this application would neglect its powerful asset management capabilities.

First, identify your asset and the group of tags that will serve as performance indicators. For instance, if you wanted to manage a pump, you might monitor ten to twenty points, including vibrations, temperatures, flows, and pressures. Or, you might choose to monitor a group of transformers, watching for voltage issues that could shorten a transformer’s life.
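One way to picture an asset and its tag group is as a simple named structure. This is only an illustrative sketch; the asset and tag names are hypothetical and do not reflect any particular historian's naming convention or API.

```python
# Illustrative sketch: grouping historian tags under a named asset.
# Asset and tag names are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    tags: list = field(default_factory=list)

pump_01 = Asset(
    name="Pump-01",
    tags=[
        "Pump-01.Vibration.DriveEnd",
        "Pump-01.Vibration.NonDriveEnd",
        "Pump-01.Temperature.Bearing",
        "Pump-01.Flow.Discharge",
        "Pump-01.Pressure.Suction",
    ],
)
```

Keeping each asset's performance-indicator tags in one place like this makes the later steps, setting thresholds and rolling tag alarms up to asset alarms, much easier to manage.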

Ensure each tag is clearly labeled so you can easily identify and relate the tag to the asset. Then establish the normal operational thresholds for each data point. Note that these will probably be considerably “tighter” than typical notification points. Focus less on critical values and more on ideal operating values. With the Canary software, you can now set alarm boundaries at the top and bottom of these ideal thresholds for each data point. You can also create logic rules within your alarm. For instance, you may only be worried about crankcase vibration if it reaches a certain level and maintains that level for more than five seconds.
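The "sustained for more than five seconds" rule above can be expressed in a few lines of code. This is a minimal generic sketch of the idea, not the Canary software's actual rule engine; the limit and sample values are made up for illustration.

```python
# Sketch of a "sustained threshold" alarm rule: flag a tag only when its
# value stays above a limit continuously for a minimum duration.
def sustained_alarm(samples, limit, min_duration):
    """samples: list of (timestamp_seconds, value) pairs in time order.
    Returns True if the value exceeds `limit` continuously for at
    least `min_duration` seconds."""
    excursion_start = None
    for t, value in samples:
        if value > limit:
            if excursion_start is None:
                excursion_start = t
            if t - excursion_start >= min_duration:
                return True
        else:
            excursion_start = None  # excursion ended; reset the clock
    return False

# One-second vibration readings: a 3-second spike should not alarm,
# but a 7-second sustained excursion should.
spike = [(t, 0.9 if 10 <= t < 13 else 0.2) for t in range(20)]
sustained = [(t, 0.9 if 10 <= t < 17 else 0.2) for t in range(20)]
print(sustained_alarm(spike, 0.5, 5))      # False
print(sustained_alarm(sustained, 0.5, 5))  # True
```

Resetting the timer whenever the reading drops back inside the threshold is what filters out brief, harmless spikes while still catching a real sustained excursion.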

Finally, decide to what degree tag alarms determine asset alarms. Do three separate temperature alarms cause your asset to alarm? Or would a combination of one pressure and one vibration alarm be a better indicator? You determine the logic and the rules. Remember, you are not adding notification services; these alarms will run in the background and will not interrupt your process. If you have similar assets across the process, copy this template and apply it to them as well. Continue this process and construct asset and tag alarms for your entire asset portfolio.
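The roll-up from tag alarms to an asset alarm is just a logic rule over the tag states. The sketch below encodes the two example rules mentioned above (three temperature alarms, or one pressure plus one vibration alarm); the tag names and rule are illustrative assumptions, not a product feature.

```python
# Sketch of asset-level alarm logic rolled up from individual tag alarms.
def asset_in_alarm(tag_alarms):
    """tag_alarms: dict mapping tag name -> bool (tag currently in alarm).
    The asset alarms if three or more temperature tags alarm, or if a
    pressure tag and a vibration tag alarm together."""
    temps = sum(1 for tag, on in tag_alarms.items() if on and "Temp" in tag)
    pressure = any(on for tag, on in tag_alarms.items() if on and "Pressure" in tag)
    vibration = any(on for tag, on in tag_alarms.items() if on and "Vibration" in tag)
    return temps >= 3 or (pressure and vibration)

status = {
    "Pump-01.Temp.A": True,
    "Pump-01.Temp.B": True,
    "Pump-01.Temp.C": False,
    "Pump-01.Pressure.Suction": True,
    "Pump-01.Vibration.DriveEnd": True,
}
print(asset_in_alarm(status))  # True: pressure + vibration combination
```

Because the rule is just a function of tag states, the same template can be copied to every similar asset in the portfolio, exactly as described above.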

Step Four – Start Small

Jim Collins, in his business book ‘Great by Choice,’ outlined the importance of starting small. Through empirical testing, an organization can identify what works with small, calibrated testing methods before deploying the process across the entire organization. Collins called this concept “firing bullets, then cannonballs.” The idea is that bullets cost less, are low risk, and cause minimal distraction. Once on target, with measured results and a proven history, the bullets should be replaced with a cannonball. The cannonball is the robust rollout of a new strategy with the full backing and support of the organization and all available resources powering it.

Apply this “bullet then cannonball” approach to begin your new predictive asset management program. Starting with your entire system would be a mistake. Instead, start small. Choose a few
asset groups and monitor them, comparing your results with the rest of the organization. For instance, you may choose to monitor 5,000 of your available 120,000 transformers for load versus capacity over the next three months. At the end of this period, use the alarming software’s analytics to review your assets. Sort your identified assets by the number of alarms they received, then look deeper into each of your higher alarm count assets. Review the data points that define those assets and study the historical data to get a better sense of what may have gone wrong.
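The triage step, ranking assets by alarm count, is straightforward once the counts are exported. A minimal sketch, with made-up transformer IDs and counts standing in for the historian's alarm analytics output:

```python
# Sketch: rank monitored assets by alarm count to prioritize review.
# Asset IDs and counts are synthetic placeholders.
alarm_counts = {
    "XFMR-0042": 17,
    "XFMR-1103": 3,
    "XFMR-0871": 9,
    "XFMR-0009": 0,
}

# Highest alarm count first; these are the first candidates for a
# deeper look at their underlying data points.
ranked = sorted(alarm_counts.items(), key=lambda kv: kv[1], reverse=True)
for asset, count in ranked[:3]:
    print(asset, count)
```

The point is not the sorting itself but the workflow it enables: the handful of assets at the top of the list are where the historical data review should begin.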

Use the time shift tool to look further back in your data and compare current trends with the same data from one or two years ago. A sudden change is easily identifiable; however, a slow and gradual change can be nearly impossible to perceive, and these slow, gradual changes are exactly what you are trying to identify. Often, time shifting thirty or sixty days does not help, simply because the shift is not extreme enough.

To illustrate this point, imagine that instead of transformers, you choose to monitor several CAT 3500 generators. A key indicator of engine performance is the exhaust gas temperature (EGT). During operation, these temperatures generally hover around 930 degrees Fahrenheit but have an acceptable variance of plus or minus fifty-five degrees. Maintaining acceptable exhaust temperatures is important to the overall health of these motors, so you decide to track them historically, comparing live data to historical data from thirty days prior.
If the exhaust temperatures began to increase by fifteen percent month over month, you would easily identify that trend visually. But what if they were increasing by only one-third of a percent each month? Would you be able to see that change, especially with a daily operational variance of nearly forty-five degrees? A change of less than one percent would typically go unnoticed, resulting in no further analysis. However, there is likely an underlying issue that needs to be diagnosed and that may lead to machine downtime or future machine inefficiency.
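A quick back-of-envelope calculation shows why that tiny monthly drift matters over a long enough window. Compounding one-third of a percent per month from a 930-degree baseline:

```python
# Back-of-envelope check: a one-third-percent monthly increase,
# compounded over two years, from a 930 °F baseline.
base = 930.0
monthly_rate = 1 / 300  # one-third of a percent per month

after_24_months = base * (1 + monthly_rate) ** 24
drift_two_years = after_24_months - base
print(round(drift_two_years))  # roughly a 77 °F rise
```

A change invisible month to month accumulates into a shift well beyond the normal fifty-five-degree operating band, which is exactly why a longer time shift exposes it.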
Enter the importance of longer time shift intervals. By comparing that same EGT tag to data from two years earlier, you would see a variance of over seventy degrees. Even then, due to the operating fluctuation of fifty-five degrees, you may not take action. However, if you leveraged another tool, the Time Average Aggregate, you could smooth the EGT data. By comparing four hours of current EGT data with a sixty-second time average to four hours of EGT data from two years ago with that same sixty-second time average, you are much more likely to notice the resulting change. This long-term time shift practice is invaluable and should be implemented often.
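The smoothing step can be sketched in plain code. This is a generic illustration of a sixty-second time average applied to synthetic EGT data, not the historian's actual aggregate implementation; the drift and noise values are modeled on the numbers in the example above.

```python
# Sketch: smooth one-second EGT readings with a sixty-second time
# average, then compare current data to a two-year-old baseline.
# Values are synthetic, modeled on the ~930 °F example in the text.
import random

random.seed(1)

def time_average(samples, window):
    """Average one-per-second `samples` in consecutive `window`-second bins."""
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]

# Four hours of one-second readings: noisy +/- 55 °F swings around a
# mean that has drifted about 75 °F over two years.
two_years_ago = [930 + random.uniform(-55, 55) for _ in range(4 * 3600)]
current = [1005 + random.uniform(-55, 55) for _ in range(4 * 3600)]

old_smooth = time_average(two_years_ago, 60)
new_smooth = time_average(current, 60)

# Raw samples overlap heavily, but the smoothed series separate cleanly.
drift = sum(new_smooth) / len(new_smooth) - sum(old_smooth) / len(old_smooth)
print(round(drift))
```

Averaging knocks the noise down enough that the underlying shift, buried in the raw one-second data, becomes obvious when the two smoothed trends are laid side by side.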

Returning to the example of transformer monitoring, take all of the information gathered from the assets in alarm, the individual data points, and what is learned from time shifting, and make educated decisions. Adjust your alarm points as needed, and repeat the ninety-day monitoring process. Continue to reorder your tag groups, refine your operational thresholds, and adjust alarm rules until you are comfortable with the results.
Once the initial monitoring is complete, make transformer upgrades and replacements based on the findings. Spend the next six months comparing the performance of the initial small sample group of transformers to the performance of the rest of the transformers outside of that group.
How much manpower was saved? How many power outages could have been avoided? Would customer satisfaction have increased? What could this do to the bottom line?
Now that the Predictive Asset Management Model has been successfully confirmed to be beneficial, it can be applied to all assets. Doing so will result in a reduction of paid overtime, fewer power loss instances, a healthier bottom line, and happier customers.


A Predictive Asset Management Model has many advantages, including the accurate forecasting and diagnosis of problems with key equipment and machinery. By properly gathering your process data and sharing it across your organization, you can begin monitoring asset groups and become proactive in maintenance, servicing, and repairs. Doing so will ensure you experience less unplanned downtime, stronger customer satisfaction ratings, and higher profit margins. However, a robust and capable data historian is the cornerstone of a successful Predictive Asset Management Model. Without easy access to your historical process data and an accessible set of powerful analytical tools, your company will be forced into reacting when it should have been predicting.
