
Axiom Version 11.1 Released

8:29 AM

In a continuous effort to ensure that Canary software is best in class, we are pleased to announce the release of 11.1.0.  Several new features are debuting with 11.1.0, including the ability to connect the Views service to OPC HDA as well as OPC UA, event playback, and the saving of design templates.  Read further for the complete list of changes.

Data Historian Changes Made in Version 11.1.0

  • Canary Admin: changed Home tab scaling to better fill the area without having a scroll bar.
  • Canary Admin: added ability to save credentials for endpoints requiring username/password.
  • Security: fixed problem with user not recognized as part of Administrators group with User Access Control enabled.
  • Canary Admin: added more links to F1 help to improve browsing.
  • Canary Admin: added a keep-alive timer on the home screen when it goes out of view to prevent the service from timing out.
  • Views Service: refactored to support specific Axiom functionality and the Views plug-ins. Rewrote an existing customer's plug-in to the new interface.
  • Views Service: fixed Axiom's history data issue causing no data when going into live mode.
  • Views API: added some new interfaces for retrieving data to support UA functionality.
  • Views Service: added new plug-ins for OPC HDA and OPC UA for displaying data from 3rd party servers in Axiom.
  • Historian: fixed issue with time extension on annotations coming through Store and Forward.
  • Security: fixed issue when the server name in a chart file does not match the case of the actual server name.

Axiom Changes Made to Version 11.1.0

  • Added preference template to AxiomTrend.
  • Added AxiomTrend layout option to load a group of charts into saved locations.
  • Added Playback option to AxiomTrend/AxiomView.
  • Added ability for an Administrator to perform file management in the ReadOnly folder.
  • AxiomChartConversion: legend display property was not converting as expected because some trend link files had -1 for a column visibility indicator.
  • AxiomView startup performance improvements.
  • ChartConversion update to handle Fluke HDA.
  • Added a default template to the UserFiles\All ReadOnly folder.
  • Added "stop live mode" in the RemoveAllTrends method to keep the Core and Client in sync.
  • Changed the AxiomCore to retrieve the list of aggregates from the web service.
  • AxiomView: added new navigate property on button that allows switching screens without using script.
  • AxiomView: corrected UTC display issues for value bar and absolute time fields.
  • AxiomView: preserve time component when using chart date pickers.
  • Browser Client: "No Data" quality was not always being caught and would trend as 0. Added a different check for bad quality.
  • AxiomCore: corrected logic that was causing intermittent "No Data" values to appear in the client.
  • Fixed problem with "No Data" being displayed at the Live Edge while in Live Mode.
  • Updated help files to reflect the new Metro look of Axiom.
  • AxiomCore: corrected not being able to save annotation on trend with multiple aggregates.
  • AxiomView: correct live mode not being restored when undoing time change.
  • AxiomCore: correct web service license not releasing when Axiom chart shut down.
  • AxiomView: correct graphic storage error when running multiple instances.
  • Axiom Browser Client: value bar was not being re-positioned when trends were resized.
  • AxiomTrend: corrections to statistics Samples, %valid and algorithms.
  • Fixed problem of licenses reserved for Administrative groups not being used.
  • Updated Axiom Browser client to only consume 1 license for clients on multiple tabs.
  • Security: fixed problem with user not recognized as part of Administrators group with User Access Control enabled.

Interested in trying a demo license of our product?


What Every Process Engineer Can Learn From The Goonies

12:28 PM

For the 80's Kid Who Loves Process Data

As an '80s kid myself, I doubt there is any movie I have watched more often than Richard Donner's "The Goonies". While recently watching the Goonie gang, I realized that this fabled childhood film has some serious crossover into my adult world: industrial process data. So read below and answer the question I can guarantee you've never been asked before… What can "The Goonies" teach me about my process?
"The Goonies", a Richard Donner film produced by Steven Spielberg

Chunk

One of the most lovable of all the Goonies, Chunk is the King of Exaggerations.  His stories are always over the top and he makes it nearly impossible to know if what he says is true or not. Sure he means well, but as a result of all his exaggerations, when he has something important to share, like in the movie when he needs to call the police and report the Fratelli clan, his call falls on deaf ears.

The same can hold true for your process, generally as a result of improperly set alarms.  Ask your SCADA operator how many alarms they receive on a regular basis; the count is often high simply because the system was not set up properly.  Alarms are useless if they aren't set to the correct thresholds and routed to the right people.

Data

An innovator, Data has a solution for every potential roadblock.  Being chased across a log by a pair of crooks?  No problem, slick shoes to the rescue.  Falling down a huge shaft towards your death?  Pinchers of Power to the rescue!  The only issue is that Data's solutions are untested and his outcomes are rarely on target.

Pay attention because this can hold true for your process as well!  What are you currently assuming works, even though it has not been properly tested?  Before you make a change in your process, what measures do you and your team take to ensure that you have properly validated your results?  When testing a new procedure, do you start small first?  Remember, as we have said before in Step Four of our Predictive Asset Management article, shoot bullets, then cannonballs!

Mouth

Loud and rather obnoxious, it seems every group of friends had a "Mouth".  Always pushing the envelope, Mouth helped push the Goonies when they might have wanted to quit, and his Spanish skills came in handy repeatedly.  Chances are, even today, you probably have a friend like Mouth.  Sometimes you can't remember exactly why you hang out with them, and often you wish you didn't.  The funny thing is, they have a habit of always coming through when you need them.

That's why we feel it is so important to record all of your process data, all of the time!  Too often companies will only save process data for a set period of time, or perhaps won't monitor all of their points and tags.  Learn from Mouth: you never know when something that seems unimportant will become crucial to your operation.  If it's there, record it!

Mikey

Unarguably the dreamer and leader of the group, Mikey was positive that finding One-Eyed Willie’s treasure was possible. He showed a profound respect for One-Eyed Willie and found as much value in the adventure as he did the gold.  For Mikey, it was all about the journey and every member of the Goonies had to be involved for it to count.

Understanding how your group of tags interact together to complete your process is crucial.  To truly find your treasure, you have to better understand the relationship between your equipment and your production results.  A product like Axiom is key for understanding how your data interconnects and relates.

Brandon

Mikey's older brother, Brandon is convinced that the Goonies are chasing a myth. Unlike his brother, Brandon is willing to accept his present situation rather than make any changes to improve it.  In fact, the only time he tries to fix a situation is to avoid getting into too much trouble.

If you find yourself constantly repairing equipment after it breaks, chances are you are spending too much time reacting and need to shift towards a more predictive asset management model.  Don't wait for things to go wrong or simply accept that your process cannot improve.  Download our free whitepaper and follow these four steps to shift to a predictive asset management model!

Andy

Timid every step of the way, Andy is afraid of the challenge but learns to overcome her fears through small victories, like playing the skeleton organ.  Although she doesn't start out as a Goonie, by the end of the adventure she is one of the gang.

Training new staff, especially control operators, can be a big job.  Using software that is quick to learn and simple to use can help reduce the training burden.  Read about the personal experience of one of our own staff members when they learned to use Axiom.  You can further help new operators by providing operation standards they can follow.  Using Axiom to set threshold limits, both high and low values, for key data points can be a great practice to help acclimate a new individual to the team.

Sloth

Anyone who would make a character judgment based on his appearance would miss one of the most protective and caring members of the Goonies.  Sloth had much more to offer than a quick glance would ever reveal.

Likewise, just looking at your data without context will never provide a clear picture of what is really happening.  Using tools like time-shifting will however give you a much better idea of how your current data compares historically, especially for small incremental changes that would be impossible to determine by the naked eye alone.

One-Eyed Willie

Mikey names Willie the original Goonie.  The pirate from the old legend found his treasure and then carefully kept everyone from it, refusing to share his knowledge with others.

To truly empower your organization and transform your process, make your data available to as many people as possible!  Axiom allows for many users, even off site consultants.  Allow others to look in on the process, to compare live and historical data, and run complex calculated trends.

Would you like to learn more about Canary software?  Try out our software for free!


Know someone that loves the Goonies?  Share this article with them!


Monitor Your Process and Make Informed Decisions from Anywhere

11:04 AM


Axiom Keeps Your Process Accessible

Every day you use mobile technology, sometimes to book a flight, arrange for a cab ride, or check your bank account. How often do you use it to monitor your process and ensure operations are running smoothly? If you can’t currently use your smartphone or tablet to quickly and easily view your process data, you should consider Canary’s Axiom solution.

Axiom is a multi-platform visualization tool that transforms time-series process data into usable information through trending and dynamic KPI displays. Web browser based, you can quickly log into Canary software and remotely monitor your entire process from practically anywhere. There are no applications to download or install, just simply enter your username and password. Software updates occur automatically, requiring no additional IT support.

Since Axiom does not feature "control ability," there is no need to worry about accidentally interfering with the process. Instead, you can feel confident knowing that you, management, consultants, and any other key staff now have the ability to watch operations even when they are not in the building. This can be especially helpful for companies with multiple sites. No matter where you are, you can quickly and easily "look in" on the process at any facility and see what the operator sees. Axiom allows for custom-built displays that can identically match your process.

Do more with your process data. Trend points, run calculations, compare several pump efficiencies, or compare current values against historic values. All are possible with Axiom and can be achieved on a laptop, tablet, or smartphone. You already have the data; Axiom gives you the knowledge.


Smart SCADA Software: Logic Meets Process Data

12:37 PM

Trending Tools for SCADA Systems

Too many operators and engineers rely on their SCADA and HMI displays for process feedback.  While these outlets may be suitable for live, "in the moment" process management, they are not useful for process review, especially if you have access to a process historian.  Just as game film is crucial to the success of an NFL quarterback, a solid data historian with strong trending and process data analytics is a must-have if you hope to better your industrial process.

Adding Logic to the Calculated Trend

We have previously written about the Calculated Trend tool and how it allows you to better compare multiple data points and study correlations.  If you have not begun to take advantage of this free Axiom tool, you should start doing so immediately.  Any proper data historian system should be equipped with a robust calculated trend tool.
If you need to watch a quick video on how to use this tool, take four minutes and watch this Canary University tutorial featuring the Calculated Trend.

Leveraging the IF Statement

To demonstrate the power of the IF statement, let's assume you are operating a piece of heavy machinery outdoors and are concerned about the summer heat and its potential effect on your equipment.  You have several data points you are measuring, including air temperature, humidity, wind speed, cooling fan operation, and a gear temperature point.
To begin, you decide to define a potential environmental threat.  You do so by declaring that your equipment is at risk when the air temperature is over 90 degrees, the humidity is above 85 percent, and the wind speed is below 5 miles per hour.  You decide that you would like to measure the frequency with which the cooling fan is operating (off equals a value of 0 and on equals a value of 1) during these "at-risk" periods.
Using the Calculated Trend tool and the IF statement, you can create a calculated trend with the following expression:

if([Cooling Fan Off/On]=0,if([Humidity]>85,if([Wind Speed]<5,if([Air Temperature]>89.9999,[Cooling Fan Off/On],'!NODATA!'),'!NODATA!'),'!NODATA!'),'!NODATA!')
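Spelled out, the nested IFs are simply a logical AND of the four conditions. A minimal Python sketch (illustrative names only, not the Axiom API) shows the same per-sample evaluation:

```python
# Evaluate the nested-IF calculated trend for a single sample.
# NODATA stands in for Axiom's '!NODATA!' marker; all names here
# are illustrative, not part of the Axiom API.
NODATA = None

def at_risk_fan_off(fan, humidity, wind_speed, air_temp):
    """Return the fan value (0) only when the fan is off AND all
    three environmental 'at-risk' conditions hold; else NODATA."""
    if fan == 0 and humidity > 85 and wind_speed < 5 and air_temp > 89.9999:
        return fan
    return NODATA

# Hot, humid, still air with the fan off -> the trend plots 0
print(at_risk_fan_off(0, 90, 3, 95))   # 0
# Fan running -> no data point is plotted
print(at_risk_fan_off(1, 90, 3, 95))   # None
```

Each inner IF in the Axiom expression corresponds to one condition in the `and` chain; the innermost value, `[Cooling Fan Off/On]`, is what gets plotted when every condition passes.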
Once entered, you overlay the calculated trend onto the Cooling Fan chart and color it yellow. You can now easily visually differentiate when the "at-risk" conditions are present and the cooling fan is not operating.
This is helpful, but with the power of the calculated trend tool, there is no need to stop there.  Let's go a step further and also overlay the gear temperature when it is above the 470-degree limit.  To do this, I created a new Calculated Trend, specifically designed to capture gear temperature readings only when all three environmental factors are at risk and only when the gear temperature is over 470 degrees.  The formula looks like this:
if([Cooling Malfunction]=0, if([Gear Temperature]>470,[Gear Temperature],'!NODATA!' ) ,'!NODATA!')
I started the trend calculation and then dragged the new trend line up onto the Cooling Fan trend, setting the bottom scale of the Gear Temperature to 470 and locking it into place with the Cooling Fan's low scale of 0.  The end result is a visual overlay of Gear Temperature when at risk directly on top of Cooling Fan when it is not operating.  I added a limit with shading to make the trend stand out, and drilled down specifically on the time interval where the machinery was at risk.
If I wanted to take it one step further, I could again overlay this data directly on top of the Gear Temperature for further visual cues.  Note, this is not the same as a high limit because I am specifically only interested in the high limit of 470 when it also coincides with my environmental "at-risk" period.  Finally, I also added a time aggregate of the Gear Temperature at 2 minute intervals (orange) so I can also quickly compare the at risk Gear Temperature to the standard baseline.
This is just a simple example of a way you can quickly use logic in a calculated trend.  How might you better understand your process if you applied similar concepts to your system?  Want to try Axiom for free?  Just let us know!


The Four Necessary Steps to Predictive Asset Management

11:52 AM

Process Historians Empower a Predictive Asset Management Model

Tired of reacting to unexpected equipment failures, power outages, and unplanned downtime? If you are not actively transitioning from a reactive to a predictive asset management program, you are likely to deal with the same frustrations a year from now as you have in the past. Although hindsight will always be twenty-twenty, foresight has the potential to save your company millions in operating dollars.

Unfortunately, for your company’s bottom line, it is against human nature to look for trouble when it doesn’t exist, creating a built-in “head-in-the-sand” response to the profit raiders that are always beating at the factory door. A Predictive Asset Management Model (PAMM) allows you to keep vigil on the potential danger signals coming from equipment. Through this model, information is easily shared between staff members, increasing communications and providing better feedback loops. Assets are ranked by potential risk, allowing for better preventative maintenance, planning, and parts support.

At the core of the predictive model is the data historian, faithfully recording your entire system’s process data. Once reliably stored, your historian will allow you to create asset groups, monitor the related points, and carefully study the analytics that will help you “look around the corners” and see what equipment issues you can expect in the immediate future. The key application of predictive analytics in the power and energy sector is to flatten load curves, balance peak demand, and achieve optimum efficiency from generation sources. Successfully achieving these goals not only reduces overall cost, but also limits the need for new construction and infrastructure projects.

To shift your company into a more predictive asset management strategy, begin by following these four simple steps.

Step One – Gather Your Process Data

It is alarming to learn how many organizations do not accurately collect and store their process data. Even more upsetting is the number of businesses that only maintain a six to twelve-month historic process database.

A proper data historian should give you access to decades of your entire process data history using proprietary database technology. This is extremely difficult for a SQL-based relational database. A typical power facility can easily exceed 100,000 data tags, and power distribution systems often have millions of points to monitor. To record these volumes of data at one-second intervals would require write speeds that are unavailable to relational databases. Even if they were able to write at these speeds, the database management would require full-time IT staff, burdening the same bottom line you are attempting to protect.

Furthermore, what is the benefit of a data historian if retrieving information from it is slow and cumbersome? Retrieval speeds, or read speeds, are often overlooked when choosing a data historian. To load a trend chart with sixty days of one-second data for four points, you would need to read and present 20.7 million data points. At present, the Canary data historian is capable of reading 4.6 million points per second, allowing this chart to be loaded in approximately five seconds. Relational based data historians would struggle to recall this same information in less than thirty minutes.
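The arithmetic behind that example is easy to verify; a quick sketch using the figures quoted above:

```python
# Data volume for the example chart: 4 points at one-second
# resolution over 60 days.
points = 4
seconds = 60 * 24 * 60 * 60          # 60 days of one-second samples
total_values = points * seconds      # 20,736,000, i.e. about 20.7 million

read_rate = 4_600_000                # values/second, the quoted Canary read speed
load_time = total_values / read_rate # about 4.5 seconds

print(total_values)          # 20736000
print(round(load_time, 1))   # 4.5
```

At roughly 4.5 seconds of pure read time, the "approximately five seconds" chart-load figure above checks out, with a small margin for rendering.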

Most electric utilities and power facilities have data collection occurring in multiple locations. Substations for example, can easily be incorporated into a centralized data historian, allowing your team to quickly monitor activity throughout the grid. Often, especially in rural utilities, these substations are only manually checked every few days, and require staff to travel to their location. By monitoring the process data remotely, a better understanding of the entire system can be achieved.

Step Two – Share the Data Throughout the Organization

Often process data is made available only to control engineers. The consensus appears to be that only those who have control abilities need access to historical data. This could not be farther from the truth. A more effective model is to provide as much data as possible, to as many individuals that can benefit from it. Using a robust data historian with strong trending, alarming, and reporting software, you can effectively share process data without any worry about control access. The benefits from increasing your data’s reach are numerous. For instance, sharing analytics on transformer load can help utilities better understand abnormal loading patterns and consequent deterioration of the life of distribution transformers as well as help confirm proper transformer sizing.

The idea of increasing your process data availability will assist your organization and help protect it from the “knowledge loss” that can occur as key personnel retire or leave the company. The more information that is shared across the company, the less chance of any one individual holding key knowledge that cannot be replaced or passed to others. In addition, the sharing of data will also help prevent the dangerous mentality of “this is just how we do it.” Especially in asset management, this can be a dangerous adoptive philosophy. Increased data availability will cause more individuals to challenge the status quo of a typical reactive maintenance model.

The final benefit of sharing process data across the organization is a better sense of team. When more individuals are included, collaboration and cooperation are soon to follow. A group effort will be required in any organization if the bottom line is to be affected. In this application, it will likely take a team to determine the proper algorithms, like regression analysis, that will need to be implemented to create a successful Predictive Asset Management Model. It will also take a team to make key decisions on which assets to monitor, and which upgrades and repairs are most prudent.

Step Three – Create Asset Categories

Alarming has become a standard tool available in data historian offerings, but are you maximizing its potential? Most companies leverage alarming software as a notification service, setting tag limits and receiving text or email alerts if that limit is reached. Does this sound familiar? If so, you can liken this approach to making a grocery run in a Ferrari 458 Speciale, painfully inching along at thirty miles an hour the entire way. Will it get you to the market and back home? Sure, but you will never appreciate all the performance of its 597-horsepower V8. Similarly, the Canary alarming software will certainly notify you of a high/low limit event, but using it only in this way would neglect its powerful asset management capabilities.

First identify your asset and the group of tags that will serve as performance indicators. For instance, if you wanted to manage a pump, you may monitor ten to twenty points including vibrations, temperatures, flows, and pressures. Or, you may choose to monitor a group of transformers, watching for voltage issues that may shorten a transformer’s life.

Ensure each tag is clearly labeled so you can easily identify and relate the tag to the asset. Then establish what the normal operational thresholds are for each data point. Note, these will probably be considerably “tighter” than typical notification points. Focus less on critical values and more on ideal operating values. With the Canary software, you can now set alarm boundaries at the top and bottom of these ideal thresholds for each data point. You can also create logic rules within your alarm. For instance, you may only be worried about crankcase vibration if it reaches a certain level and maintains that level for more than 5 seconds.

Finally, decide to what degree tag alarms determine asset alarms. Do three separate temperature alarms cause your asset to alarm? Or maybe a combination of one pressure and one vibration alarm would be a better indicator. You determine the logic and the rules. Remember, you are not adding notification services, these alarms will run in the background and will not interrupt your process. If you have similar assets across the process, copy this template and apply to them as well. Continue this process and construct asset and tag alarms for your entire asset portfolio.
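A rule like "vibration must hold a level for more than 5 seconds" amounts to a consecutive-sample check. The sketch below is a hypothetical illustration of that logic in Python, not the Canary alarm configuration syntax:

```python
def sustained_alarm(samples, threshold, hold_seconds):
    """True if the value stays at or above threshold for more than
    hold_seconds consecutive one-second samples."""
    run = 0
    for value in samples:
        run = run + 1 if value >= threshold else 0
        if run > hold_seconds:
            return True
    return False

# Crankcase vibration in mm/s, sampled once per second (made-up values).
vibration = [2, 9, 9, 9, 9, 9, 9, 3]

print(sustained_alarm(vibration, 8, 5))  # True: level held 6 s, more than 5
print(sustained_alarm(vibration, 8, 6))  # False: held only 6 s, not more than 6
```

A brief spike resets the run counter, so transient readings never trip the alarm; only a genuinely sustained condition does.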

Step Four – Start Small

Jim Collins, in his business book 'Great by Choice', outlined the importance of starting small. Through empirical testing, an organization can identify what works with small, calibrated testing methods prior to deploying the process across the entire organization. Collins coined this concept as "firing bullets, then cannonballs." The idea is that bullets cost less, are low risk, and cause minimal distraction. Once on target with measured results and a proven history, the bullets should be substituted with a cannonball. The cannonball is the robust rollout of a new strategy with the full backing and support of the organization and all available resources powering it.

Apply this "bullet then cannonball" approach to begin your new predictive asset management program. To start with your entire system would be a mistake. Instead, start small. Choose a few asset groups and monitor them, comparing your results with the rest of the organization. For instance, you may choose to monitor 5,000 of your available 120,000 transformers for load versus capacity over the next three months. At the end of this period, employ the alarming software's analytics and review your assets. Sort your identified assets by the number of alarms they received, then look deeper into each of your higher alarm count assets. Review the data points that define those assets and study the historical data to get a better sense of what may have gone wrong.

Use the time shift tool to look further back in your data and compare these current trends with the same data from one or two years ago. A sudden change is easily identifiable, however, a slow and gradual change can be nearly impossible to perceive, and these slow gradual changes are exactly what you are trying to identify. Often time shifting thirty or sixty days does not help, simply because the time shift is not extreme enough.

To illustrate this point, imagine that instead of transformers, you choose to monitor several CAT 3500 generators. A key indicator of engine performance is the exhaust gas temperature (EGT). Generally, during operation, these temperatures hover around 930 degrees Fahrenheit but have an acceptable variance of +/- fifty-five degrees. It is important to the overall health of these motors that you continue to maintain acceptable exhaust temperatures so you decide to track these historically, comparing their live data to historical data from thirty days prior.
If the exhaust temperatures began to increase by fifteen percent month over month, you would easily identify that trend visually. But what if they were increasing by only one-third of a percent each month? Would you be able to see that change, especially with a daily operational variance of nearly forty-five degrees? A small change of less than one percent would typically go unnoticed, resulting in no further analysis. However, there is likely an underlying issue that needs to be diagnosed and that may lead to machine downtime or future machine inefficiency.
Enter the importance of longer time shift intervals. By comparing that same EGT tag to data from two years earlier, you would see a variance of over seventy degrees. Even then, due to the operating fluctuation of fifty-five degrees, you may not take action. However, if you leveraged another tool, the Time Average Aggregate, you could smooth the EGT data. By comparing four hours of current EGT data with a sixty-second time average to four hours of EGT data from two years ago with that same sixty-second time average, you are much more likely to notice the resulting change. This long-term time shift practice is invaluable and should be implemented often.
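The Time Average Aggregate is, in effect, a windowed mean. The sketch below uses synthetic EGT values (a deterministic swing standing in for real fluctuation) to show why smoothing exposes a small shift that the raw samples hide:

```python
def time_average(samples, window):
    """Mean of each non-overlapping window, e.g. 60 one-second
    samples -> one smoothed value (a sixty-second time average)."""
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]

# A deterministic +/-40 F swing stands in for normal EGT fluctuation.
noise = [(-1) ** i * 40 for i in range(120)]
old_egt = [930 + n for n in noise]   # two years ago, centered at 930 F
new_egt = [937 + n for n in noise]   # today, centered 7 F higher

print(time_average(old_egt, 60))  # [930.0, 930.0]
print(time_average(new_egt, 60))  # [937.0, 937.0]
# Raw samples overlap heavily (890-970 vs 897-977), but the smoothed
# series separate cleanly, exposing the gradual shift.
```

On a chart, the two raw trends would sit on top of one another; the two sixty-second averages plot as clearly separated lines, which is exactly the effect the long-term time shift comparison relies on.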

Returning to the example of transformer monitoring, take all of the information gathered from the assets in alarm, the individual data points, and what is learned from time shifting, and make educated decisions. Adjust some of your alarm points as need be, and repeat the ninety-day monitoring process again. Continue to reorder your tag groups, refine your operational thresholds, and adjust alarm rules until you feel comfortable with the results.
Once the initial monitoring is complete, make transformer upgrades and replacements based on the findings. Spend the next six months comparing the performance of the initial small sample group of transformers to the performance of the rest of the transformers outside of that group.
How much manpower was saved? How many power outages could have been avoided? Would customer satisfaction have increased? What could this do to the bottom line?
Now that the Predictive Asset Management Model has been successfully confirmed to be beneficial, it can be applied to all assets. Doing so will result in a reduction of paid overtime, fewer power loss instances, a healthier bottom line, and happier customers.


A Predictive Asset Management Model has many advantages, including the accurate forecasting and diagnosis of problems with key equipment and machinery. By properly gathering your process data and sharing it across your organization, you can begin monitoring asset groups and become proactive in maintenance, servicing, and repairs. Doing so will ensure you experience less unplanned downtime, stronger customer satisfaction ratings, and higher profit margins. However, a robust and capable data historian is the cornerstone of a successful Predictive Asset Management Model. Without easy access to your historical process data and an accessible set of powerful analytical tools, your company will be forced into reacting when it should have been predicting.


Digital Marketing Webinar: Blogging

10:15 AM

Optimizing your Blog for Strong Organic Search Results

Blogging should be a strategic practice that your business takes very seriously.  If done correctly, blogging will help you increase your organic search results for the keywords that matter, help identify your company as a problem solver to your customers, and give you an industry relevance that can be difficult to ascertain through a website alone.

Watch Blogging for Business and learn how to implement a strategy that will drive more traffic to your website without having to hire another agency or marketing firm.

Follow these best practices anytime you blog.  In fact, download them, print them, and tape them to the side of your monitor!

Great Online Resources

Keyword Density Tool (Free)

Search Rank Tool (Free)

Learn more about Organic Search Results and SEO!

If you would like a copy of the PowerPoint from the webinar, just request it below.


A Guide to the Best Data Historian Software: A Review of the Canary Historian Versus Rockwell FactoryTalk and OSIsoft Pi

1:22 PM

A Personal Review and Comparison of Data Historian Software

When a company decides to make a capital investment in data historian software, the choice can often be overwhelming.  Searching through the complete list of data historians in the process database family will reveal nearly a dozen products in the marketplace, not to mention the open source options an IT team may attempt.  By definition, data historian software should be
- highly scalable
- accessible
- non-relational (no SQL)
- secure with redundancy
The personal preference of the end user will greatly guide which data historian software they prefer, but a few important features, such as scalability, speed, reliability, and overall system pricing (both initial and recurring), should be heavily considered.

To make the process easier, a direct comparison between the Canary software and both FactoryTalk and Pi has been compiled for your review.  Although subjects like ease of installation, training, and overall simplicity of use cannot be addressed here, hopefully this comparison will identify the strengths and weaknesses of each product.


FactoryTalk SE Historian

Rockwell offers a suite of tools built around their FactoryTalk Historian, with trending tools offered through their VantagePoint software.  Below is a quick introduction to their data historian software as well as a quick trend mock-up.

OSIsoft Pi

An industry leader, OSIsoft has offered a data historian software solution for over 30 years.  Installed in over 100 countries, the Pi System is familiar to most and is composed of the Pi Server, the Pi Interface, and Pi Clients.  Below is a whiteboard illustration of the Pi System.

Canary Data Historian

Created 31 years ago, the Canary Data Historian is built on a proprietary database and offers industry-leading speed and reliability.  Used in 26 countries with over 18,000 installs, Canary offers an easy-to-use, quick-to-install solution.  TrendLink, Canary's legacy trending tool, has recently been replaced with Axiom, the Canary trending and analytics solution.  Axiom offers many useful tool sets, all available with a simple right click.

System Size and Limitations

Rockwell FactoryTalk
As you can see from the FactoryTalk product page, only 500,000 tags can be monitored on a server, with a top write speed of 100,000 tags per second.  Also of note is the limit of only 50,000 tags per interface, with a maximum write speed of 25,000 tags per second.

OSIsoft Pi

OSIsoft does not make system capacity or write speed information publicly available.  Their online brochure does mention that millions of tags can be stored and that tens of thousands of points can be written at sub-second intervals.

Canary Data Historian

In 2014, Canary President Gary Stern and the lead engineering team conducted a complete study, testing the Canary system to capacity.  The following performance benchmarks were found:

Maximum Tag Count: 25 million plus
Maximum Write Speed: 2.8 million data points per second (10,000 tag system; a 100,000 tag system tested 18% slower)

Read full study here.

Data Visualization and Trending

FactoryTalk VantagePoint allows the Rockwell user to visualize their historical data on multiple platforms and see custom graphs alongside trends and data points.  Billed as a manufacturing business intelligence solution that integrates all data into a single information management system, VantagePoint seems to have some strengths as well as weaknesses.  Its strength lies in its ability to be customized to show users the data they want to see.  However, for heavy trending analysis, the system seems rather limited.

OSIsoft Pi

OSIsoft's Coresight offering seems to be a stronger contender than Rockwell for trending abilities.  The traditional trending software allows a user to overlay several trends and compare data on the fly as well as historically without much fuss.  Coresight also offers multi-platform viewing as well as a level of customization that includes graphs and tables overlaid with trending.

The Canary data historian comes to life with Axiom, its trending and visualization package, which features tools designed to let the user quickly and easily compare historical data trends against current live data.  Tools such as calculated trending, annotations, high/low limits, time shifting, and statistical aggregates give the Canary user exceptional data comparison capabilities.  Axiom is also multi-platform: it can be viewed on desktop, smartphone, or tablet, and because it is server based, it requires no individual installation or app download.
AxiomView provides a custom interface that lets the user monitor the entire process and include the needed trend graphs on one screen.  Each trend graph is fully operational, independent of the rest of the screen, and can be shared with anyone on staff.  Because no control is offered, information is easily shared without concern for operational hazards.


Pricing

Neither Rockwell nor OSIsoft publishes online pricing guides, making this final category difficult to compare.  However, Canary Data Historian pricing can gladly be provided.
A smaller system of 250 data tags with an Axiom license is available for under $2,950.  As the tag count increases, the price per tag decreases drastically.  For instance, a 5,000 tag bundle featuring 5 Axiom client licenses sells for $34,950.  Better yet, for under $99,500 a complete multi-site system featuring corporate monitoring, 35,000 tags, and 10 Axiom licenses can be purchased.

5 Reasons to Try the Canary Data Historian Software

1. No Data Loss

At Canary, we believe you need access to all of your data, forever. That’s why we never discard any of it. Other data historian software “compresses” its archive, creating gaps or voids in your time series data and resulting in the loss of time, value, and quality parameters (TVQs).  Canary's data historian software does not do this; instead, it permanently keeps all of your time series and process data.  To reduce the impact on storage space we use intelligent engineering techniques, but we never get rid of any TVQs!
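To make the TVQ concept concrete, here is a minimal sketch (not Canary's actual implementation, which is proprietary) contrasting a lossless archive that keeps every TVQ with the deadband-style "compression" that other historians use, which silently discards samples. The `LosslessArchive` and `deadband_filter` names are illustrative only; 192 is the standard OPC "good" quality code.

```python
from collections import namedtuple

# A TVQ is a time, value, quality triple -- the atomic unit of historian data.
TVQ = namedtuple("TVQ", ["time", "value", "quality"])

class LosslessArchive:
    """Keeps every TVQ it is given; nothing is ever discarded."""
    def __init__(self):
        self._tvqs = []

    def append(self, tvq):
        self._tvqs.append(tvq)

    def __len__(self):
        return len(self._tvqs)

def deadband_filter(tvqs, band):
    # Typical lossy "compression": drop any sample whose value is within
    # `band` of the last kept value, losing TVQs (and their timestamps).
    kept = []
    for tvq in tvqs:
        if not kept or abs(tvq.value - kept[-1].value) > band:
            kept.append(tvq)
    return kept

# 100 slowly rising samples, all with "good" (192) quality.
samples = [TVQ(t, 20.0 + 0.01 * t, 192) for t in range(100)]

archive = LosslessArchive()
for s in samples:
    archive.append(s)

print(len(archive))                        # 100 -- every TVQ retained
print(len(deadband_filter(samples, 0.5)))  # far fewer -- samples discarded
```

The gaps left by the deadband filter are exactly the "voids" described above: once a TVQ is dropped, its timestamp and quality can never be recovered.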

2. Axiom Offers Superior Trending Software

Axiom trending software is built around the concept of quickly and simply offering analytical data in an easy to use program. This idea sounds like it should be easy to accomplish, but in reality, it is quite difficult. By listening to our customer's personal needs and requests, we built Axiom to offer the features you need and will actually use.

8 Things Axiom Offers We Know You Will Love:
  • Available on multiple platforms, from desktop to smartphone
  • ClickOnce, server-centric installation; the software updates automatically
  • Calculated trends with the option to write them back to the historian
  • Charts that show real-time data alongside historical data
  • Time shifts and interval averages that can be displayed on any trend
  • Personal flexibility in arranging the display layout and formatting
  • Staff can be fully trained with 20 minutes of review
  • High and low limits with color and shading options as well as limit lines

3. Reliable Beyond Belief

Canary data historian software is the definition of reliable: solid software that just works.  It is specifically engineered to run without constant supervision, so if you aren't already using it, ask yourself, “Why am I wasting money on the ridiculously expensive maintenance contract I'm currently paying another vendor?” Canary data historian software has a long life and a proven history of reliability; some of our oldest systems have been running for more than 15 years, more than a century in “computer years”.  Our annual CustomerCare fees are extremely affordable and include constant product upgrades, training, and continued support as needed.

4. Extreme Performance - It's Crazy Fast!

Let's talk speed.  The Canary data historian can record sub-second data down to 100 nanoseconds!  Not only can we handle fast data, we can handle large volumes of fast data.  Our system has been tested to 4,000,000 TVQs per second.  Now that's fast! Even more impressive, we can retrieve at a rate of 9,000,000 TVQs per second.

So how do we do it?  It's all in our proprietary database.  Canary’s approach does not try to “pump up” a general-purpose database and then “shoehorn” the data into it. Relational databases were not designed to handle the 24/7 data loads that occur in process monitoring. Generally speaking, our data historian is at least 10 times faster than any general-purpose database for these applications.
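Why does a purpose-built store beat a relational database here? Process data arrives in time order, so a historian can simply append fixed-width TVQ records to a file with no indexes, joins, or transaction overhead. Canary's actual file format is not public; the sketch below only illustrates the general append-only pattern, and the `append_tvq`/`read_tvqs` helpers and field layout are assumptions for the example.

```python
import io
import struct

# Each TVQ packed as fixed-width binary: 8-byte timestamp (ns since epoch),
# 8-byte float value, 2-byte quality code -- 18 bytes total, little-endian.
TVQ_FORMAT = "<qdH"
TVQ_SIZE = struct.calcsize(TVQ_FORMAT)

def append_tvq(stream, time_ns, value, quality):
    """Append one TVQ record; writes are sequential, never updates in place."""
    stream.write(struct.pack(TVQ_FORMAT, time_ns, value, quality))

def read_tvqs(data):
    """Decode a byte buffer of back-to-back TVQ records."""
    return [struct.unpack_from(TVQ_FORMAT, data, offset)
            for offset in range(0, len(data), TVQ_SIZE)]

# In-memory stand-in for an archive file.
buf = io.BytesIO()
append_tvq(buf, 1_700_000_000_000_000_000, 72.5, 192)
append_tvq(buf, 1_700_000_000_000_000_100, 72.6, 192)

for record in read_tvqs(buf.getvalue()):
    print(record)
```

Because every record is the same size and already sorted by time, reading a time range is a simple offset calculation and a sequential scan, which is the kind of workload where a flat layout outruns a general-purpose relational engine.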

5. Incredible Customer Support

Data historian software is the one thing we do.  We don't produce hardware, and we don't function as a control system integrator; we produce lightning-fast, ultra-reliable data historian software. This singular focus enables and drives us to provide outstanding customer support and service. Outstanding support is consistently one of the top reasons our customers tell us they picked us.  Read the personal review below by Mike Tufts, a long-time Canary data historian software user:
"We are very pleased with the Canary product. It is very efficient. We have been collecting 20,000 tags at one second intervals since 2006. It still runs flawlessly. We are just now taking the first 10 years of data offline to store it to the side, not because we have to, but just because we can. We haven’t had any issues with databases. Canary’s technical support is by far one of the best software providers' technical support that we deal with. There is always someone available immediately if or when there is ever any issue."
- Mike Tufts, Control Supervisor, City of Boca Raton Florida
We always respond with knowledgeable data historian and trending software support. You don't get the runaround; "I'm sorry, that's not my department" is not in our vocabulary.  Our support staff is the definition of efficient.  They do not operate from a phone script, and they will never use a troubleshooting manual.  They are an integral part of our software development family and understand our software from top to bottom.  Anytime you contact customer support, your call comes directly to our corporate office in central Pennsylvania.  Have a question that is more suitable for one of our software engineers?  Our customer support department works directly alongside our engineering and development team.  In fact, Canary is one of the few software companies worldwide that handles all of its own customer support completely in-house.  Want to see for yourself?  Just call (814) 793-3770, or email customer support and see how fast we reply!

If you would like a live demo of either Canary's data historian or plant historian software, as well as our data trending software, Axiom, easily schedule below.

A Guide to the Best Data Historian Software

Read More