Three Ways To Get More From Your Historian

Your data historian holds more analytical potential today than you may realize. This process knowledge powerhouse can help you transform operations and fundamentally change the way time-series data is interpreted.  However, few companies have taken the necessary steps to actualize their data historian’s full potential.  Most engineers, supervisors, and operators are either working double-time to meet spikes in demand, or are handling duties outside their typical job description to reduce cost. The bottom line? You are likely too busy elsewhere to spend time mining the knowledge base waiting inside your process historian.


By implementing these three ideas, you can begin to better apply your historian’s capabilities and identify at-risk assets, increase efficiency, and lessen downtime.

 

Use Events for Asset Management

Most companies leverage alarming software as a notification service, setting tag limits and receiving text or email alerts when those limits are reached.  Does this sound familiar?  Similar to your SCADA alarming software, Canary Events can notify you of a high/low limit event, but using it only in this way neglects its powerful asset management capabilities.  Take your asset management to the next level by following this best practice.
 
First, construct and define your asset in the Canary Asset Model. For instance, if you wanted to manage a compressor, you might monitor ten to twenty points including vibrations, temperatures, flows, and pressures.
Next, establish the normal operational thresholds for each data point. Note that these will probably be considerably “tighter” than typical notification points. Focus less on critical values and more on ideal operating values. Within Canary Events you can create individual rules for an asset based on tag readings, defining the top and bottom of these ideal thresholds for each data point.
 
You can also create logic rules within Events. For instance, you may only be concerned about a flow reading if it reaches a certain level and holds there for more than 30 minutes. Or you may only want to start an event when both a temperature reading and a pressure reading exceed a certain limit.  You determine the logic and the rules. Remember, you are not adding notification services; these events will run in the background and will not interrupt your process.  Continue this process and construct events based on ideal thresholds for your entire asset portfolio.  Once you define the events for one asset, you can automatically apply them to all similar assets in your organization.
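Canary Events rules are built in the Events configuration screens rather than in code, but the logic behind them is worth seeing spelled out. Here is a minimal Python sketch, not the Canary API, of the rule types just described: a reading leaving its ideal band, a sustained breach, and a compound condition. The tag values, limits, and durations are all hypothetical.

```python
from datetime import timedelta

def outside_band(value, low, high):
    """Basic threshold rule: the reading has left its ideal operating band."""
    return value < low or value > high

def sustained_breach(samples, low, high, min_duration=timedelta(minutes=30)):
    """Duration rule: True if the tag stayed outside its band for min_duration.

    `samples` is an iterable of (timestamp, value) pairs for one tag.
    """
    breach_start = None
    for ts, value in samples:
        if outside_band(value, low, high):
            breach_start = breach_start or ts       # mark start of the breach
            if ts - breach_start >= min_duration:
                return True
        else:
            breach_start = None                     # back in band, reset
    return False

def compound_event(temp, psi, temp_limit=930.0, psi_limit=150.0):
    """Compound rule: start an event only when BOTH limits are exceeded."""
    return temp > temp_limit and psi > psi_limit
```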
 
Now the easy part: just wait.  After 90 days, use the Excel Add-in to review all events and make comparisons.  Look for trends or noticeable patterns that have developed.  Which assets operated outside of threshold for extended periods of time but stayed under the typical alarm point?  What adjustments need to be made to operations, or what maintenance steps can be taken to return these assets to their ideal running status?
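If you prefer scripting to spreadsheets, the same 90-day review can be sketched in a few lines of pandas. Assume, hypothetically, that the event list has been exported to CSV with Asset, Tag, Start, and End columns; the file name and column names here are illustrative, not a Canary export format.

```python
import pandas as pd

# Hypothetical export of 90 days of events, saved from the Excel Add-in
# as CSV with columns: Asset, Tag, Start, End.
events = pd.read_csv("events_last_90_days.csv", parse_dates=["Start", "End"])
events["Hours"] = (events["End"] - events["Start"]).dt.total_seconds() / 3600

# Total time each asset spent outside its ideal operating band.
summary = (events.groupby("Asset")["Hours"]
                 .agg(total_hours="sum", event_count="count")
                 .sort_values("total_hours", ascending=False))
print(summary.head(10))  # the assets most in need of attention
```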
 

Use Calculated Trends to Monitor Efficiency and Cost Savings

Canary's dashboard and trending tool, Axiom, contains a powerful feature you may not be using.  The Calculate feature gives you the ability to configure complex calculations involving multiple tags and mathematical operations. Often this feature is used to convert temperatures, estimate pump efficiency and chemical use, and better guide the process. However, one of the most beneficial uses of this tool involves stepping outside of the Operations frame of mind and thinking more like an accountant.
 
Every month, quarter, and year end, the CFO, controller, or plant accountant runs a series of mathematical equations specifically designed to identify profitability, efficiency, and return on investment. The results are shared with CEOs, VPs, and upper management, and reviewed in offices and boardrooms. How often do these results make it to the control operator’s desk? Probably never.  Equip your control and operation engineers with accounting insights to unlock this best practice.
 
Using the Canary calculated trend feature, efficiency calculations can be added directly to the trend charts for each piece of operating equipment in your facility. You can easily and quickly transform every control room into a real-time profit monitoring center. The best part? This requires very little time and no financial investment.

The calculated trend tool can be launched from any Axiom trending chart and comes preloaded with a variety of mathematical operations, including Minimum/Maximum, Absolute Value, Sine, Cosine, Tangent, Arcsine, Arccosine, Arctangent, Square Root, and many others. Trends loaded onto the chart can also be included in any formula, and you are not limited in character length.
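As a concrete illustration, here is the kind of math a calculated trend might carry: a standard pump efficiency formula, expressed in Python rather than Axiom's expression syntax. The function name and sample values are hypothetical; only the physics is standard (hydraulic power equals density × gravity × flow × head).

```python
RHO = 998.0   # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def pump_efficiency(flow_m3s, head_m, power_kw):
    """Hydraulic power delivered (rho * g * Q * H) over power drawn."""
    hydraulic_kw = RHO * G * flow_m3s * head_m / 1000.0
    return hydraulic_kw / power_kw

# e.g. 0.05 m^3/s at 40 m of head while drawing 28 kW -> roughly 70%
print(f"{pump_efficiency(0.05, 40.0, 28.0):.1%}")
```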
 
Operations Supervisor for the City of Boca Raton, Mike Tufts, has seen this work firsthand at their water and wastewater facility. As he explains, “This is very useful for predicting costs and allocating the proper budget and funding for chemicals and pacing based on the flows. With the Canary software we closely estimate chemical amounts that will be used based on flow volume. We know exactly how much production is put out, and what they were using at the time, and we have a tighter and improved number for budgeting and purchasing for the next year, the next month, and even the next contract.”
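The budgeting math Mr. Tufts describes can be sketched with the classic water-treatment dosing formula (pounds per day = flow in MGD × dose in mg/L × 8.34). The dose, flow, and price below are hypothetical placeholders, not Boca Raton's figures.

```python
def chemical_cost_per_day(flow_mgd, dose_mg_per_l, price_per_lb):
    """lbs/day = flow (MGD) * dose (mg/L) * 8.34, then multiply by unit price."""
    lbs_per_day = flow_mgd * dose_mg_per_l * 8.34
    return lbs_per_day * price_per_lb

# e.g. 12 MGD at a 6 mg/L dose, with chemical priced at $0.85/lb
print(f"${chemical_cost_per_day(12.0, 6.0, 0.85):,.2f} per day")
```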
 
Once the calculated trend is created, it appears on the trend chart and will continue to calculate whenever that chart is loaded. You can then use that calculated trend inside future calculated trends as well, which is helpful, for example, when calculating pump efficiencies. If you choose, you can also write the calculated trend back into your data historian as a permanent tag.
 

Time Shifting Makes the Invisible Visible

Everyone knows data historians provide visualization of both real-time and historical data. But how deep into your historical data do you dig, and how often? Do you generally look back in periods of days, or weeks? How often do you compare real-time data to historical data from six months ago, or even six years ago, and is there an inherent benefit in doing so? Look further back into your historical data when making real-time comparisons to unlock this final best practice.
 
Time shifting is certainly not new to historians, but it is a feature that’s rarely used to its full potential. For those not familiar, time shifting allows live data trends to be stacked directly on top of historical data trends, and it is a great tool for comparing a current data point to itself from a previous time period. This matters because we can easily become accustomed to the data around us and miss small but significant changes. The problem is often worse for experienced staff: they develop a preset sense of where certain values should fall and are more prone to miss small changes that are nearly indistinguishable unless viewed as a comparison.
 
For instance, recall the adage about frogs and hot water. The myth states that if you throw a frog into a pot of boiling water it will quickly jump out; however, if you start the frog in a pot of cool water and slowly increase the temperature, the frog will fail to notice the gradual change, eventually cooking. Your ability to interpret data can be very similar. A sudden change is easily identifiable, but a slow and gradual change can be nearly impossible to perceive, and these slow, gradual changes are exactly what we are trying to identify. Often time shifting does not help, simply because the time shift is not extreme enough. To illustrate this point, imagine you are monitoring the exhaust gas temperature (EGT) of a set of CAT 3500 generators. Generally, during operation, these temperatures hover around 930 degrees Fahrenheit but have a variance of +/- thirty-five degrees. Because maintaining acceptable exhaust temperatures is important to the overall health of these motors, you decide to track them historically, comparing live data to historical data from thirty days prior.
 
If the exhaust temperatures began to increase by fifteen percent month over month, you would easily identify that trend visually. But what if they were increasing by only one-third of a percent each month? Would you be able to see that change, especially with a daily operational variance of nearly seventy degrees? A change of less than one percent would typically go unnoticed, resulting in no further analysis. However, there is likely an underlying issue that needs to be diagnosed, one that may lead to machine downtime or future machine inefficiency.
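To put numbers on it, here is the arithmetic using the figures from the example above (the drift rate is the hypothetical one-third of a percent per month): the monthly creep is a few degrees, buried inside a seventy-degree normal swing.

```python
BASE = 930.0        # typical EGT, deg F (from the example above)
BAND = 35.0         # +/- operational variance, deg F
DRIFT = 1.0 / 300   # one-third of a percent per month

monthly_step = BASE * DRIFT
print(f"monthly drift: {monthly_step:.1f} deg F")      # ~3.1 deg F
print(f"daily swing:   {2 * BAND:.0f} deg F")          # 70 deg F band
print(f"drift is {monthly_step / (2 * BAND):.1%} of the normal swing")
```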
 
Enter the importance of longer time shift intervals. By comparing that same EGT tag to data from two years earlier, you would see a variance of over twenty degrees. An allowable temperature fluctuation of +/- thirty-five degrees may still hide the issue, but by applying a Time Average aggregate you would plainly see the variance.  Likewise, if you compared twenty-four hours of current EGT data with a sixty-second time average to twenty-four hours of EGT data from two years ago with that same sixty-second time average, you would be much more likely to notice the resulting change.
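A small simulation makes the point. The sketch below generates two synthetic 24-hour periods of one-second EGT samples, offset by roughly the twenty degrees from the example, then applies sixty-second time averages so the offset stands out from the +/- thirty-five degree noise. The data is fabricated for illustration; it is not from a Canary system.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def day_of_egt(base_temp, start):
    """Hypothetical 24 h of 1 s EGT samples around base_temp, +/-35 deg F."""
    idx = pd.date_range(start, periods=24 * 3600, freq="s")
    return pd.Series(base_temp + rng.uniform(-35, 35, len(idx)), index=idx)

today = day_of_egt(950.0, "2024-01-01")          # current, drifted data
two_years_ago = day_of_egt(930.0, "2022-01-01")  # the comparison period

# 60 s time averages smooth the +/-35 deg band so the offset shows plainly.
today_avg = today.resample("60s").mean()
past_avg = two_years_ago.resample("60s").mean()
print(round(today_avg.mean() - past_avg.mean(), 1))  # ~20.0 deg F
```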
 
Certainly it is not always possible to use data from several years ago, as many factors can and will change. However, as a best practice, the further back you can reach, the higher the likelihood of identifying gradual variance in your data. You can also use secondary and tertiary trends to increase the validity of these comparisons. For instance, when comparing EGT tags, you may also need to include ambient air temperature and load tags (among others) to account for other potential contributing factors.
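One way to fold those secondary trends into the comparison, sketched here with synthetic data and NumPy rather than any Canary feature, is to regress EGT against ambient temperature and load and trend the residual; whatever drift remains is more likely to be machine-related. All coefficients and arrays below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical aligned samples exported from the historian.
ambient = rng.uniform(50, 95, 500)    # ambient air temperature, deg F
load = rng.uniform(0.6, 1.0, 500)     # fraction of rated load
egt = 700 + 0.8 * ambient + 200 * load + rng.normal(0, 5, 500)

# Least-squares fit of EGT on the known contributing factors.
X = np.column_stack([np.ones_like(ambient), ambient, load])
coef, *_ = np.linalg.lstsq(X, egt, rcond=None)

# The residual is EGT with ambient and load effects removed; trend this
# year over year instead of the raw tag.
residual = egt - X @ coef
print(coef.round(2), residual.std().round(2))
```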
 
Incorporate these time shift observations into your regular schedule. Create and save charts in the Axiom trending software that allow you to quickly view time shift comparisons on a monthly basis. These saved template charts should be preloaded with all necessary tags, formatted so the trends are banded and scaled together, and have time average aggregates already applied. Don’t stop with basic tag monitoring; follow through with the previous best practice and monitor calculated efficiency trends as well.
 

Make It Easy To Use Your Time-Series Data

Using your time-series data to make better decisions doesn’t have to be hard! At Canary, we believe your database should do the heavy lifting for you.

Try Canary
