
The Benefit of Local Logging

Nov 30, 2017

The ninth edition of a weekly Question and Answer column with Gary Stern, President and Founder of Canary Labs. Have a question you would like me to answer? Email askgary@canarylabs.com  


Dear C.S.,

While you certainly could keep the Logger in the cloud, we usually recommend logging data at the local source rather than logging it remotely. By local source, we mean installing the Canary Logger and Sender Service on the same machine as the OPC server. Although the purpose of cloud applications can vary, there are two major benefits to logging locally.

First is the advantage of the Canary Store and Forward Service, which comprises two components: the Sender and the Receiver. The Sender Service is designed to move information to the Receiver, which is installed local to the Canary Data Historian. If contact is lost between the Sender and Receiver Services, the Sender Service will cache data to local disk. When communications return, the cached data is transferred to the historian in time-sequence order and removed from the Sender Service. This prevents data loss due to network issues and also allows you to take the historian offline for version updates or maintenance as needed.
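
To make the pattern concrete, here is a minimal sketch of the store-and-forward idea in Python. The class and method names are hypothetical illustrations of the technique, not Canary's actual implementation, which also spools the cached batches to local disk.

```python
from collections import deque

class SenderSketch:
    """Illustrative store-and-forward sender (hypothetical, not Canary's API).

    Failed deliveries are cached oldest-first, then drained to the
    receiver in time-sequence order once the link returns.
    """

    def __init__(self, receiver):
        self.receiver = receiver   # stand-in for the Receiver Service client
        self.backlog = deque()     # oldest-first cache of tag-value batches

    def send(self, batch):
        self.backlog.append(batch)
        while self.backlog:
            oldest = self.backlog[0]
            try:
                # A real sender would transmit over the network here.
                self.receiver.write(oldest)
            except ConnectionError:
                return             # link is down: keep caching, try again later
            self.backlog.popleft() # delivered, so remove it from the cache
```
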
Secondly, when installed locally, the Logger communicates with the OPC server via COM. When installed remotely, it must use DCOM. DCOM requires a dynamic port range, making it firewall unfriendly. You should always pick your IT battles wisely; this is one you would probably not win.

Your question does bring up an interesting side point. The IIoT push is driving more and more "non-traditional" Canary installs. We are working with many clients that are leveraging cloud solutions to grab just a small handful of data points from a multitude of sites. Overall, you must look at your individual application and decide how important the data is, whether there are potential security risks, and whether additional hardware costs are justified. If your largest concern is hardware cost, you might be interested to know that we have successfully logged data from small Linux devices like the Raspberry Pi.

We are here to offer further support as needed and would love to talk through your individual needs.

Sincerely,
Gary Stern
President and Founder
Canary Labs


Have a question you would like me to answer?  Email askgary@canarylabs.com


The Process of Upgrading TrendLink to Axiom

Nov 26, 2017

The eighth edition of a weekly Question and Answer column with Gary Stern, President and Founder of Canary Labs. Have a question you would like me to answer? Email askgary@canarylabs.com  




Dear Cali,

Good question, and one that we actually get quite a lot. Yes, Axiom and TrendLink are both sold apart from our data historian. To your second question, the upgrade process from a TrendLink license to Axiom is simple. Canary offers a discounted Axiom crossgrade license that converts an existing TrendLink license to Axiom. To qualify, your company needs to be an active member of Canary CustomerCare. Otherwise, you would simply purchase a new copy of Axiom.

Once the license has been obtained, simply install Axiom and connect to the existing Canary data historian. Assuming you are running the most current version of the historian software, Axiom will install without interrupting your logging or historian software. Both Axiom and TrendLink can connect to the same data historian as well.

Existing TrendLink chart files can easily be converted to Axiom chart files using a conversion tool that Canary offers at no additional charge. If you need help, we will gladly walk you through the process; again, at no charge.

Axiom has a very similar feel to TrendLink, and most staff are able to make the switch without needing additional training. For Axiom crossgrade customers, we allow a full 90 days after Axiom installation before requesting that you surrender your existing TrendLink licenses. We have found this very helpful when on-boarding old guys like me who don't like change.

Sincerely,

Gary Stern
President and Founder
Canary Labs


Have a question you would like me to answer?  Email askgary@canarylabs.com

Take Time for Thanksgiving

Nov 21, 2017

The seventh edition of a weekly Question and Answer column with Gary Stern, President and Founder of Canary Labs. Have a question you would like me to answer? Email askgary@canarylabs.com           
             

Dear Gary Stern,

How thoughtful of you to ask!  Yesterday while making a cup of my notoriously bad coffee (although I imagine you would find it quite delicious), I heard a sound clip from the TV that caused me to pause.  The national news was covering the last White House press briefing before the holiday break, and White House Press Secretary Sarah Sanders was at the podium.  I heard her tell the entire press corps that before she would answer any of their questions, they must first state what they were thankful for.

I sat down and watched. Over the next five minutes, I noticed the entire atmosphere of the room change. What is often an edgy or even hostile environment was magically transformed into a room filled with warmth and, dare I say, appreciation! So, inspired by Mrs. Sanders, I ask you to answer the same question. What are you thankful for?

Myself, I am thankful for my wife Anne, our six wonderful children, their spouses, and our new grandson Jay.  I am thankful for my church family, our community, and this beautiful part of Central Pennsylvania I call home.  I am thankful to live in a country that has afforded me the opportunity to create a business that helps men and women like yourself succeed in theirs.  I am thankful for each of my employees, not just for their performance and diligence, but that they have chosen to align their lives with mine.

Let the magic of a thankful heart transform your holiday. Happy Thanksgiving, everyone.

Sincerely,

Gary Stern
President and Founder
Canary Labs

Have a question you would like me to answer?  Email askgary@canarylabs.com

One Million Tags on a Single Server

Nov 17, 2017

Recently, Canary has received several questions regarding the maximum capacity of our Unlimited solution; specifically, exactly how many tags can be stored on a single server. Although we have addressed this question before, we thought it would be a good time to run a new test.

Most customers considering an Unlimited license fit one of two scenarios. First, they have a facility with a large number of tags (more than 100,000) and users (more than 25) and do not want to worry about the future cost of adding tags or users. The Unlimited license is perfect for this use case. A second likely scenario is a large network of facilities with a strong network infrastructure. By taking advantage of Canary's unlicensed Logging Service, each remote site could have multiple loggers collecting data and migrating it to a centralized, unlimited server. Users at each site would simply connect across the network and use tools like Axiom and the Excel Add-in remotely. As long as the network connection from site to site is reliable, Canary functions well in this environment, especially with our Store and Forward technology, which caches logged data in the event of a network outage.

To test these scenarios, we created a local server with one million tags. The tags were logged in twenty separate 50,000-tag logging sessions. Each log session featured a unique tag resolution, ranging from one-second data to five-minute data. The change rates were varied across the logging sessions as well, to best represent real-world applications. As you can see from the trend chart below, the average number of tag values (TVQs) written per second over the past seven-day period has been between 40,000 and 44,000 (orange trend). The blue trend represents the maximum number of TVQs per second in thirty-minute increments. Quite a few periods saw peaks of over 300,000 TVQs per second being logged.

[Trend chart: average TVQs per second (orange) and maximum TVQs per second in thirty-minute increments (blue) over a seven-day period]


The server has had no complications handling this amount of data, nor was it expected to. CPU usage has been very light: other than a one-time peak of 49.6%, the historian's average CPU usage has been less than 3%.
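
As a quick sanity check on the TVQ figures above, using only the numbers quoted in this post:

```python
# 1,000,000 tags averaging ~42,000 tag values (TVQs) written per second
# works out to one new value roughly every 24 seconds per tag, which is
# consistent with sessions ranging from one-second to five-minute
# resolution once change-rate filtering thins the writes.
tags = 1_000_000
avg_tvq_per_sec = 42_000   # midpoint of the observed 40,000 to 44,000
print(f"Average write interval per tag: {tags / avg_tvq_per_sec:.1f} s")  # ~23.8 s
```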

We have previously tested much larger tag counts with success; however, few if any customers have ever approached tens of millions of tags on a single server. We feel very comfortable recommending one, two, or three million tags on a single server, assuming a variation of tag resolution and change rate.






Should You Upgrade TrendLink to Axiom?

Nov 15, 2017

The sixth edition of a weekly Question and Answer column with Gary Stern, President and Founder of Canary Labs. Have a question you would like me to answer? Email askgary@canarylabs.com  


Dear Indy,

I can appreciate the difficulty of your decision. If your organization is like the other 5,000+ companies using TrendLink, then you have found it to be a hard-working and reliable tool for viewing years of sensor data. It can be hard to justify trying something else just because it is new!

We created Axiom four years ago to address two major concerns. First, we saw a growing trend of IT security moving away from DCOM applications, so Axiom uses the WCF protocol instead. Axiom can securely connect to multiple historians across networks and keep those IT guys happy.

Secondly, we wanted to build a tool that would fully function within a modern web browser. The computers we carry in our pockets have made it possible to always be connected, and we wanted a platform that works as fluidly on a smartphone as it does on a desktop. Browser capability also makes it possible to move off the Windows platform and use Axiom on Linux and Apple products.

In addition to security and browser developments, Axiom also adds the following functionality:

Cloud Friendly - Because Axiom is DCOM free, it can connect to the Canary hosted platform, Canary Cloud.

Calculate Trends Ad Hoc - Imagine creating custom trends on the fly using calculations and equations involving other existing trends; this is now a standard feature with Axiom.

Event and Asset Mode -  You can now look at your data based on pre-built events or by defined assets.

These are a few of the many benefits, but you also asked for the negatives. The centralized historian requires a 64-bit platform, so you may need to upgrade some hardware. Axiom has nearly all the same functionality as TrendLink, except you cannot change the trend orientation to vertical as you can with TrendLink.

In the end, change is always hard, and you are sure to have a few users grumble because something looks different today than it did yesterday. But relax: the upgrade is simple, and all your existing trend charts can be converted to work with Axiom. All in all, this is a simple upgrade that will give you a lot of upside!

Sincerely,


Gary Stern
President and Founder
Canary Labs


Have a question you would like me to answer?  Email askgary@canarylabs.com

Three Ways To Get More From Your Historian

Nov 8, 2017

Your data historian holds more analytical potential today than you may realize. This process knowledge powerhouse can help you transform operations and fundamentally change the way time-series data is interpreted.  However, few companies have taken the necessary steps to actualize their data historian’s full potential.  Most engineers, supervisors, and operators are either working double-time to meet spikes in demand, or are handling duties outside their typical job description to reduce cost. The bottom line? You are likely too busy elsewhere to spend time mining the knowledge base waiting inside your process historian.

By implementing these three ideas, you can begin to better apply your historian’s capabilities and identify at-risk assets, increase efficiency, and lessen downtime.

Use Events for Asset Management

Most companies leverage alarming software as a notification service, setting tag limits and receiving text or email alerts if that limit is reached. Does this sound familiar? Similar to your SCADA alarming software, Canary Events can notify you of a high/low limit event, but using it only in this way would neglect its powerful asset management capabilities. Take your asset management to the next level by following this best practice.

First, construct and define your asset in the Canary Asset Model. For instance, if you wanted to manage a compressor, you might monitor ten to twenty points, including vibrations, temperatures, flows, and pressures.

Next, establish the normal operational thresholds for each data point. Note, these will probably be considerably "tighter" than typical notification points. Focus less on critical values and more on ideal operating values. Within Canary Events you can create individual rules for an asset based on tag readings, defining the top and bottom of these ideal thresholds for each data point.

You can also create logic rules within Events. For instance, you may only be worried about a flow level if it reaches a certain level and maintains that level for more than 30 minutes. Or you may only want to start an event when both a temperature reading and a pressure reading exceed certain limits; a sketch of both rule types follows below. You determine the logic and the rules. Remember, you are not adding notification services; these events will run in the background and will not interrupt your process. Continue this process and construct events based on ideal thresholds for your entire asset portfolio. Once you define the events for one asset, you can automatically apply them to all similar assets in your organization.
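
To illustrate the two rule types just described, here is a minimal sketch in Python. Canary Events is configured through its own interface, so these functions are only hypothetical stand-ins for that logic:

```python
def sustained_flow_event(samples, limit, hold_seconds=30 * 60):
    """Duration rule: True only once the flow has exceeded `limit`
    continuously for `hold_seconds`. `samples` is a list of
    (unix_timestamp, value) pairs in ascending time order."""
    breach_start = None
    for ts, value in samples:
        if value > limit:
            if breach_start is None:
                breach_start = ts            # breach just began
            if ts - breach_start >= hold_seconds:
                return True                  # sustained for 30+ minutes
        else:
            breach_start = None              # reset on any in-range sample
    return False

def compound_event(temp, pressure, temp_limit, pressure_limit):
    """Logic rule: start an event only when both readings exceed
    their limits at the same time."""
    return temp > temp_limit and pressure > pressure_limit
```
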
Now the easy part: just wait. After 90 days, use the Excel Add-in to review all events and make comparisons. Look for trends or noticeable patterns that have developed. Which assets operated outside of threshold for extended periods of time but stayed under the typical alarm point? What adjustments need to be made to operations, or what maintenance steps can be taken, to return these assets to their ideal running status?

Use Calculated Trends to Monitor Efficiency and Cost Savings

Canary's dashboard and trending tool, Axiom, contains a powerful feature you may not be using. The Calculate feature gives you the ability to configure complex calculations involving multiple tags and mathematical operations. Often this feature is used to convert temperatures, estimate pump efficiency and chemical use, and better guide the process. However, one of the most beneficial uses of this tool involves stepping outside of the Operations mind frame and thinking more like an accountant.

Every month, quarter, and year end, the CFO, controller, or plant accountant runs a series of mathematical equations specifically designed to identify profitability, efficiency, and return on investment. The results are shared with CEOs, VPs, and upper management, and reviewed in offices and boardrooms. How often do these results make it to the control operator's desk? Probably never. Equip your control and operations engineers with accounting insights to unlock this best practice.

Using the Canary calculated trend feature, efficiency calculations can be added directly to the trend charts for each piece of operating equipment in your facility. You can easily and quickly transform every control room into a real-time profit monitoring center. The best part: this requires very little time and no financial investment.

The calculated trend tool can be launched from any Axiom trending chart and comes preloaded with a variety of mathematical operations, including but not limited to Minimum/Maximum, Absolute Value, Sine, Cosine, Tangent, Arcsine, Arccosine, Arctangent, and Square Root. Trends loaded onto the chart can also be included in any formula, and you are not limited in character length.
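
To make this concrete, here is a sketch of the kind of expression a calculated trend might evaluate. The tag values and the pump-efficiency formula are illustrative examples, not output from Axiom itself:

```python
def pump_efficiency(flow_gpm, head_ft, power_kw, specific_gravity=1.0):
    """Hypothetical efficiency trend: hydraulic power delivered divided
    by electrical power drawn. Hydraulic horsepower for water is
    (flow [gpm] * head [ft] * SG) / 3960."""
    hydraulic_hp = (flow_gpm * head_ft * specific_gravity) / 3960
    hydraulic_kw = hydraulic_hp * 0.7457   # convert horsepower to kilowatts
    return hydraulic_kw / power_kw         # e.g. 0.75 means 75% efficient

# Sample readings from three existing tags at one moment in time:
print(f"{pump_efficiency(flow_gpm=1200, head_ft=150, power_kw=45):.0%}")
```

In Axiom, the same expression would be entered once as a Calculate expression and evaluated continuously against the live tags.
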
Operations Supervisor for the City of Boca Raton, Mike Tufts, has seen this work firsthand at their water and wastewater facility. As he explains, “This is very useful for predicting costs and allocating the proper budget and funding for chemicals and pacing based on the flows. With the Canary software we closely estimate chemical amounts that will be used based on flow volume. We know exactly how much production is put out, and what they were using at the time, and we have a tighter and improved number for budgeting and purchasing for the next year, the next month, and even the next contract.”

Once the calculated trend is created, it appears on the trend chart and will continue to calculate whenever that chart is loaded. You can then use that calculated trend inside future calculated trends as well, which is helpful, for example, when calculating pump efficiencies. If you choose, you can also write the calculated trend back into your data historian as a permanent tag.

Time Shifting Makes the Invisible Visible

Everyone knows data historians provide visualization of both real-time and historical data. But how deep into your historical data do you dig, and how often? Do you generally look back in periods of days or weeks? How often do you compare real-time data to historical data from six months ago, or even six years ago, and is there an inherent benefit in doing so? Look further back into your historical data when making real-time comparisons to unlock this final best practice.

Time shifting is certainly not new to historians, but it is a feature that's rarely used to its full potential. For those not familiar, time shifting allows live data trends to be stacked directly on top of historical data trends and is a great tool for comparing a current data point to itself from a previous time period. This is an important feature because we easily become accustomed to the data around us and miss small but significant changes. This is often a larger problem for experienced staff, as they develop a preset knowledge of where they expect certain values to fall and are more prone to miss small changes in the data that are nearly indistinguishable if not viewed as a comparison.

For instance, recall the adage about frogs and hot water. The myth states that if you throw a frog into a pot of boiling water it will quickly jump out; however, if you start the frog in a pot of cool water and slowly increase the temperature, the frog will fail to notice the gradual change and eventually cook. Your ability to interpret data can be very similar. A sudden change is easily identifiable, but a slow and gradual change can be nearly impossible to perceive, and these slow, gradual changes are exactly what we are trying to identify. Often time shifting does not help, simply because the time shift is not extreme enough. To illustrate this point, imagine you are monitoring the exhaust gas temperature (EGT) of a set of CAT 3500 generators. Generally, during operation, these temperatures hover around 930 degrees Fahrenheit but have a variance of +/- thirty-five degrees. It is important to the overall health of these motors that you maintain acceptable exhaust temperatures, so you decide to track them historically, comparing live data to historical data from thirty days prior.

If the exhaust temperatures began to increase by fifteen percent month over month, you would easily identify that trend visually. But what if they were increasing by only one-third of a percent each month? Would you be able to see that change, especially with a daily operational variance of nearly seventy degrees? A change of less than one percent would typically go unnoticed, resulting in no further analysis. However, there is likely an underlying issue that needs to be diagnosed and that may lead to machine downtime or future machine inefficiency.

Enter the importance of longer time shift intervals. By comparing that same EGT tag to data from two years earlier, you would see a variance of over twenty degrees. An allowable temperature fluctuation of +/- thirty-five degrees may still hide the issue, but by applying a Time Average aggregate you would plainly see the variance. If you compared twenty-four hours of current EGT data with a sixty-second time average to twenty-four hours of EGT data from two years ago with that same sixty-second time average, you are much more likely to notice the resulting change.
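
Here is a sketch of that comparison, assuming the EGT samples are available as plain (datetime, value) pairs; in practice Axiom performs the time shift and the Time Average aggregate for you:

```python
from datetime import timedelta

def time_average(samples, window_seconds=60):
    """Average (datetime, value) samples into fixed sixty-second buckets,
    smoothing out the +/- thirty-five degree operational variance."""
    buckets = {}
    for ts, value in samples:
        key = int(ts.timestamp()) // window_seconds
        buckets.setdefault(key, []).append(value)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

def shifted_delta(current, historical, shift=timedelta(days=730)):
    """Shift the two-year-old series forward so its buckets line up with
    today's, then subtract; a persistent positive delta is the gradual
    drift the raw, noisy trends would hide."""
    cur = time_average(current)
    hist = time_average([(ts + shift, v) for ts, v in historical])
    return {k: cur[k] - hist[k] for k in cur.keys() & hist.keys()}
```
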
Certainly it is not always possible to use data from several years ago, as many factors can and will change. However, as a best practice, the further back you can reach, the higher the likelihood of identifying gradual variance in your data. You can also use secondary and tertiary trends to increase the validity of these comparisons. For instance, when comparing EGT tags, you may also need to include ambient air temperature and load tags (among others) to better determine any other potential mitigating factors.

Incorporate these time shift observations into your regular schedule. Create and save charts in the Axiom trending software that allow you to quickly view time shift comparisons on a monthly basis. These saved template charts should be preloaded with all necessary tags, formatted with the trends banded and scaled together, and have time average aggregates already applied. Don't stop with basic tag monitoring; follow through with the previous best practice and additionally monitor calculated efficiency trends.

Avoiding Daylight Savings Time Data Issues

Nov 7, 2017

The fifth edition of a weekly Question and Answer column with Gary Stern, President and Founder of Canary Labs. Have a question you would like me to answer? Email askgary@canarylabs.com                        

Dear DSTressed,

This past Sunday morning, Daylight Saving Time ended at 2:00 AM and the clocks rolled back an hour, giving us all some extra rest. However, for data process historians and engineers like yourself, this semi-annual phenomenon can be anything but relaxing! In fact, as you have learned, the Daylight Saving Time (DST) scenario is one of the most difficult situations for a data historian to handle. Let me explain for others not as familiar, while I answer your question.
  
Imagine you were to search for data at 1:35 AM on the Sunday DST ends. How would the computer know whether you meant the first or the second 1:35 AM of the morning? The Canary Historian solves this problem by storing all data with UTC time stamps. UTC stands for Coordinated Universal Time and is the basis for civil time today. Note, this is a time standard, not a time zone.

Until 1972, Greenwich Mean Time (GMT), or Zulu Time, was the same as Universal Time. Since then, GMT is no longer a time standard but instead a time zone used by a few countries in Africa and Western Europe, including the UK during winter and Iceland year round. Neither UTC nor GMT ever changes for Daylight Saving Time. However, some of the countries that use GMT switch to different time zones during their DST period. For example, the United Kingdom is not on GMT all year; it uses British Summer Time, which is one hour ahead of GMT, during the summer months.

Since all Canary data values are time stamped using UTC, we do not compensate for DST transitions. Instead, it is the responsibility of client applications, like Axiom or the Excel Add-in, that read the data from the historian to properly interpret the UTC time stamp. The application simply applies the correct time zone, along with the appropriate DST rules for the time of year, to display the time stamp in the correct local time. Seems like a simple solution, right? Canary has successfully used this strategy since 1993 and it works great!
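
Python's standard library can demonstrate the client-side conversion described here. This sketch is not Canary's code, but it shows why UTC storage resolves the ambiguous 1:35 AM:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

eastern = ZoneInfo("America/New_York")

# Two distinct UTC instants that both display as 1:35 AM local time on
# the morning US clocks fell back (November 5, 2017):
first = datetime(2017, 11, 5, 5, 35, tzinfo=timezone.utc)   # 1:35 AM EDT
second = datetime(2017, 11, 5, 6, 35, tzinfo=timezone.utc)  # 1:35 AM EST

for stamp in (first, second):
    local = stamp.astimezone(eastern)
    print(stamp.isoformat(), "->", local.strftime("%Y-%m-%d %H:%M %Z"))

# Stored as UTC, the two instants never collide; only the local rendering
# repeats, which is exactly why the historian keeps UTC time stamps.
```
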
Many other historians still use a local time stamp when storing data values. I recently saw one vendor recommend their users shut the historian off and not collect data during the DST transition window. Not a very nice solution to such a simple problem.

I smile knowing our clients never have a firestorm of issues related to DST transitions; I hope that helps them sleep as peacefully as I do.
Sincerely,

Gary Stern
President and Founder
Canary Labs


Have a question you would like me to answer?  Email askgary@canarylabs.com