
One Million Tags on a Single Server

Nov 17, 2017

Recently, Canary has received several questions regarding the maximum capacity of our Unlimited solution, specifically how many tags can be stored on a single server. Although we have addressed this question before, we thought it would be a good time to run a new test.

Most customers considering an Unlimited license fit one of two scenarios. First, they have a facility with a large number of tags (more than 100,000) and users (more than 25) and do not want to worry about the future cost of adding tags or users. The Unlimited license is perfect for this use case. The second likely scenario is a large network of facilities with a strong network infrastructure. By taking advantage of Canary's unlicensed Logging Service, each remote site could have multiple loggers collecting data and migrating it to a centralized, unlimited server. Users at each site would simply connect across the network and use tools like Axiom and the Excel Add-in remotely. As long as the site-to-site network connection is reliable, Canary functions well in this environment, especially with our Store and Forward technology, which caches logged data in the event of a network outage.

To test these scenarios, we created a local server with one million tags. The tags were logged in twenty separate 50,000-tag logging sessions. Each logging session featured a unique tag resolution, ranging from one-second data to five-minute data. The change rates were also varied across the logging sessions to better represent a real-world application. As you can see from the trend chart below, the average number of tag values (TVQs) written per second over the past seven-day period has been between 40,000 and 44,000 (orange trend). The blue trend represents the maximum number of TVQs per second in thirty-minute increments. Quite a few periods saw peaks of more than 300,000 TVQs per second being logged.


The server has had no complications handling this amount of data, nor was it expected to. CPU usage has been very light; other than a one-time peak of 49.6%, the historian's average CPU usage has been less than 3%.
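Put another way, the arithmetic behind those write rates is straightforward: at the reported average, each of the one million tags logs a new value roughly every 24 seconds once resolution and change rate are combined. A quick sketch (the 42,000 figure is simply the midpoint of the reported range, not an exact test measurement):

```python
TOTAL_TAGS = 1_000_000
AVG_TVQ_PER_SEC = 42_000   # midpoint of the reported 40,000-44,000 range

# Implied average interval between logged values per tag once scan
# resolution and change rate are combined.
seconds_per_value = TOTAL_TAGS / AVG_TVQ_PER_SEC
print(f"~{seconds_per_value:.0f} seconds between logged values per tag")  # ~24
```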

We have previously tested much larger tag counts successfully; however, few if any customers have ever approached tens of millions of tags on a single server. We feel very comfortable recommending one, two, or three million tags on a single server, assuming a variation of tag resolution and change rate.






Should You Upgrade TrendLink to Axiom?

Nov 15, 2017

The sixth edition of a weekly Question and Answer column with Gary Stern, President and Founder of Canary Labs. Have a question you would like me to answer? Email askgary@canarylabs.com  


Dear Indy,

I can appreciate the difficulty of your decision.  If your organization is like the other 5,000+ companies that are using TrendLink, then you have found it to be a hard working and reliable tool for viewing years of sensor data.  It can be hard to try something else just because it is new!

We created Axiom four years ago to address two major concerns. First, we saw a growing trend among IT security teams to move away from DCOM applications, so Axiom uses the WCF protocol instead. Axiom can securely connect to multiple historians across networks and keep those IT folks happy.

Secondly, we wanted to build a tool that functions fully within a modern web browser. The computers we carry in our pockets have made it possible to always be connected, so we wanted a platform that works as fluidly on a smartphone as it does on a desktop. Browser capability also makes it possible to move off the Windows platform and use Axiom on Linux and Apple products.

In addition to security and browser developments, Axiom also adds the following functionality:

Cloud Friendly - Because it is DCOM free, Axiom can connect to the Canary hosted platform, Canary Cloud.

Calculate Trends Ad Hoc - Imagine creating custom trends on the fly using calculations and equations involving other existing trends; this is now a standard feature of Axiom.

Event and Asset Mode -  You can now look at your data based on pre-built events or by defined assets.

These are a few of the many benefits, but you also asked for the negatives. The centralized historian requires a 64-bit platform, so you may need to upgrade some hardware. Axiom has nearly all the same functionality as TrendLink, except that you cannot change the trend direction to vertical as you can with TrendLink.

In the end, change is always hard, and you are sure to have a few users grumble because something looks different today than it did yesterday. But relax: the upgrade is simple, and all your existing trend charts can be converted to work with Axiom. All in all, this is a simple upgrade that will give you a lot of upside!

Sincerely,


Gary Stern
President and Founder
Canary Labs


Have a question you would like me to answer?  Email askgary@canarylabs.com

Three Ways To Get More From Your Historian

Nov 8, 2017

Your data historian holds more analytical potential today than you may realize. This process knowledge powerhouse can help you transform operations and fundamentally change the way time-series data is interpreted.  However, few companies have taken the necessary steps to actualize their data historian’s full potential.  Most engineers, supervisors, and operators are either working double-time to meet spikes in demand, or are handling duties outside their typical job description to reduce cost. The bottom line? You are likely too busy elsewhere to spend time mining the knowledge base waiting inside your process historian.

By implementing these three ideas, you can begin to better apply your historian’s capabilities and identify at-risk assets, increase efficiency, and lessen downtime.

Use Events for Asset Management

Most companies leverage alarming software as a notification service, setting tag limits and receiving text or email alerts if a limit is reached. Does this sound familiar? Like your SCADA alarming software, Canary Events can notify you of a high/low limit event, but using it only in this way neglects its powerful asset management capabilities. Take your asset management to the next step by following this best practice.
First construct and define your asset in the Canary Asset Model. For instance, if you wanted to manage a compressor, you may monitor ten to twenty points including vibrations, temperatures, flows, and pressures.
Next, establish what the normal operational thresholds are for each data point. Note, these will probably be considerably “tighter” than typical notification points. Focus less on critical values and more on ideal operating values. Within Canary Events you can create individual rules for an asset based on tag readings, defining the top and bottom of these ideal thresholds for each data point. 
You can also create logic rules within Events. For instance, you may only be worried about a flow if it reaches a certain level and maintains that level for more than 30 minutes. Or you may only want to start an event when both a temperature reading and a pressure reading exceed a certain limit. You determine the logic and the rules. Remember, you are not adding notification services; these events run in the background and will not interrupt your process. Continue this process and construct events based on ideal thresholds for your entire asset portfolio. Once you define the events for one asset, you can automatically apply them to all similar assets in your organization. (A rough sketch of this kind of rule logic follows these steps.)
Now the easy part: just wait. After 90 days, use the Excel Add-in to review all events and make comparisons. Look for trends or noticeable patterns that have developed. Which assets operated outside of their thresholds for extended periods of time but stayed under the typical alarm point? What adjustments need to be made to operations, or what maintenance steps can be taken to return these assets to their ideal running status?
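If it helps to picture what such a rule looks like, here is a minimal sketch in plain Python (not Canary Events configuration; the tag names, thresholds, and 30-minute hold time are purely illustrative) that combines the two example conditions above: a sustained flow level plus temperature and pressure limits.

```python
from datetime import datetime, timedelta

# Illustrative thresholds only -- real rules are configured in Canary Events.
FLOW_LIMIT = 120.0      # e.g. gallons per minute
TEMP_LIMIT = 180.0      # degrees F
PRESSURE_LIMIT = 95.0   # psi
HOLD = timedelta(minutes=30)

def event_active(samples):
    """samples: list of (timestamp, flow, temp, pressure) tuples, oldest first.

    True when flow has stayed above its limit for at least 30 minutes
    AND both temperature and pressure currently exceed their limits."""
    above_since = None
    active = False
    for ts, flow, temp, pressure in samples:
        above_since = (above_since or ts) if flow > FLOW_LIMIT else None
        sustained = above_since is not None and ts - above_since >= HOLD
        active = sustained and temp > TEMP_LIMIT and pressure > PRESSURE_LIMIT
    return active

# Hypothetical usage: 31 one-minute samples with flow held high throughout.
now = datetime(2017, 11, 17, 8, 0)
history = [(now + timedelta(minutes=i), 130.0, 185.0, 97.0) for i in range(31)]
print(event_active(history))  # True -- flow sustained 30 min, temp and pressure high
```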

Use Calculated Trends to Monitor Efficiency and Cost Savings

Canary's dashboard and trending tool, Axiom, contains a powerful feature you may not be using.  The Calculate feature gives you the ability to configure complex calculations involving multiple tags and mathematical operations. Often this feature is used to convert temperatures, estimate pump efficiency and chemical use, and better guide the process. However, one of the most beneficial uses of this tool involves stepping outside of the Operations mind frame and thinking more like an accountant.
Every month, quarter, and year end, the CFO, controller, or plant accountant runs a series of mathematical equations specifically designed to identify profitability, efficiency, and return on investment. The results are shared with CEOs, VPs, and upper management and reviewed in offices and boardrooms. How often do these results make it to the control operator's desk? Probably never. Equip your control and operation engineers with accounting insights to unlock this best practice.
Using the Canary calculated trend feature, efficiency calculations can be added directly to the trend charts for each piece of operating equipment in your facility. You can easily and quickly transform every control room into a real-time profit monitoring center. The best part, this requires very little time, and no financial investment.
The calculated trend tool can be launched from any Axiom trending chart and comes preloaded with a variety of mathematical operations, including but not limited to Minimum/Maximum, Absolute Value, Sine, Cosine, Tangent, Arcsine, Arccosine, Arctangent, and Square Root. Trends loaded onto the chart can also be included in any formula, and you are not limited in character length.
Operations Supervisor for the City of Boca Raton, Mike Tufts, has seen this work first hand at their water and wastewater facility. As he explains, “This is very useful for predicting costs and allocating the proper budget and funding for chemicals and pacing based on the flows. With the Canary software we closely estimate chemical amounts that will be used based on flow volume. We know exactly how much production is put out, and what they were using at the time, and we have a tighter and improved number for budgeting and purchasing for the next year, the next month, and even the next contract.”
Once the calculated trend is created, it appears on the trend chart and will continue to calculate whenever that chart is loaded. You can then use that calculated trend inside future calculated trends, which is helpful, for example, when calculating pump efficiencies. If you choose, you can also write the calculated trend back into your data historian as a permanent tag.
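As a rough illustration of the kind of formula involved (written here in plain Python with made-up tag values rather than in the Axiom calculate dialog; the tag names are hypothetical), a pump efficiency trend built from three existing trends might look like this:

```python
# Hypothetical snapshot values for three existing trends.
flow_gpm = 450.0        # e.g. Pump01.Flow
head_ft = 120.0         # e.g. Pump01.Head
input_power_kw = 15.5   # e.g. Pump01.MotorPower

# Standard water horsepower formula (specific gravity of 1.0), not a
# Canary-specific calculation.
water_hp = (flow_gpm * head_ft) / 3960.0
input_hp = input_power_kw * 1.341          # convert kW to horsepower
efficiency_pct = 100.0 * water_hp / input_hp

print(f"Pump efficiency: {efficiency_pct:.1f}%")   # roughly 66%
```

In Axiom the same expression would simply reference the live trends, so the efficiency updates continuously as the chart refreshes.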

Time Shifting Makes the Invisible Visible

Everyone knows data historians provide visualization of both real-time and historical data. But how deep into your historical data do you dig, and how often? Do you generally look back in periods of days or weeks? How often do you compare real-time data to historical data from six months ago, or even six years ago, and is there an inherent benefit in doing so? Look further back into your historical data when making real-time comparisons to unlock this final best practice.
Time shifting is certainly not new to historians, but it is a feature that is rarely used to its full potential. For those not familiar, time shifting allows live data trends to be stacked directly on top of historical data trends and is a great tool for comparing a current data point to itself from a previous time period. This matters because we easily become accustomed to the data around us and miss small but significant changes. It is often a bigger problem for experienced staff: they develop a preset sense of where certain values should fall and are more prone to miss small changes that are nearly indistinguishable unless viewed side by side.
For instance, recall the adage about frogs and hot water. The myth states that if you throw a frog into a pot of boiling water it will quickly jump out; however, if you start the frog in a pot of cool water and slowly increase the temperature, the frog will fail to notice the gradual change and eventually cook. Your ability to interpret data can be very similar. A sudden change is easily identifiable, but a slow, gradual change can be nearly impossible to perceive, and these slow, gradual changes are exactly what we are trying to identify. Often time shifting does not help simply because the shift is not extreme enough. To illustrate this point, imagine you are monitoring the exhaust gas temperature (EGT) of a set of CAT 3500 generators. During operation, these temperatures generally hover around 930 degrees Fahrenheit but have a variance of +/- thirty-five degrees. Maintaining acceptable exhaust temperatures is important to the overall health of these engines, so you decide to track them historically, comparing live data to historical data from thirty days prior.
If the exhaust temperatures increased by fifteen percent month over month, you would easily identify that trend visually. But what if they increased by only one-third of a percent each month? Would you be able to see that change, especially with a daily operational variance of nearly seventy degrees? A change of less than one percent would typically go unnoticed, resulting in no further analysis. However, there is likely an underlying issue that needs to be diagnosed and that may lead to machine downtime or future machine inefficiency.
Enter the importance of longer time-shift intervals. By comparing that same EGT tag to data from two years earlier, you would see a variance of over twenty degrees. An allowable temperature fluctuation of +/- thirty-five degrees may still hide the issue, but by applying a Time Average aggregate you would plainly see the variance. If you compared twenty-four hours of current EGT data with a sixty-second time average to twenty-four hours of EGT data from two years ago with that same sixty-second time average, you are much more likely to notice the resulting change.
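For readers who like to see the mechanics, here is a small Python sketch using synthetic data (the drift rate and noise band are hypothetical, not real EGT readings) showing how a sixty-second time average applied to both the current window and the two-year-old window exposes a slow drift that the raw, noisy traces would hide:

```python
import random

random.seed(1)
BASELINE_F = 930.0       # typical EGT from the example above
NOISE_F = 35.0           # +/- operational variance
DRIFT_F_PER_MONTH = 1.0  # hypothetical slow drift -- the thing we want to catch

def egt_sample(months_ago):
    """One synthetic one-second EGT reading from `months_ago` months back."""
    drift = DRIFT_F_PER_MONTH * (24 - months_ago)   # drift accumulated since then
    return BASELINE_F + drift + random.uniform(-NOISE_F, NOISE_F)

def time_average(samples, window):
    """Average consecutive `window`-sample blocks, e.g. 60 samples = 60 s."""
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]

DAY = 24 * 60 * 60                                    # one day of one-second data
current = [egt_sample(0) for _ in range(DAY)]         # live window
two_years_ago = [egt_sample(24) for _ in range(DAY)]  # time-shifted window

avg_now = time_average(current, 60)
avg_then = time_average(two_years_ago, 60)
offset = sum(avg_now) / len(avg_now) - sum(avg_then) / len(avg_then)
print(f"Average offset between the two windows: {offset:.1f} F")  # about 24 F
```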
Certainly it is not always possible to use data from several years ago, as many factors can and will change. However, as a best practice, the further back you can reach, the higher the likelihood of identifying gradual variance in your data. You can also use secondary and tertiary trends to increase the validity of these comparisons. For instance, when comparing EGT tags, you may also want to include ambient air temperature and load tags (among others) to rule out other potential contributing factors.
Incorporate these time shift observations into your regular schedule. Create and save charts in the Axiom trending software that let you quickly view time shift comparisons on a monthly basis. These saved template charts should be preloaded with all necessary tags, formatted so the trends are banded and scaled together, and have time average aggregates already applied. Don't stop with basic tag monitoring; follow through with the previous best practice and also monitor calculated efficiency trends.

Avoiding Daylight Savings Time Data Issues

Nov 7, 2017

The fifth edition of a weekly Question and Answer column with Gary Stern, President and Founder of Canary Labs. Have a question you would like me to answer? Email askgary@canarylabs.com                        

Dear DSTressed,

This past Sunday morning Daylight Saving Time ended at 2:00 AM and the clocks rolled back an hour, giving us all some extra rest. However, for process data historians and engineers like yourself, this semi-annual phenomenon can be anything but relaxing! In fact, as you have learned, the Daylight Saving Time (DST) scenario is one of the most difficult situations for a data historian to handle. Let me explain for others who are not as familiar while I answer your question.
  
Imagine searching for data from 1:35 AM on the Sunday that DST ends. How would the computer know whether you meant the first 1:35 AM or the second 1:35 AM of the morning? The Canary Historian solves this problem by storing all data with UTC time stamps. UTC stands for Coordinated Universal Time and is the basis for civil time today. Note that this is a time standard, not a time zone.
Until 1972, Greenwich Mean Time (GMT), or Zulu Time, was the same as Universal Time. Since then, GMT is no longer a time standard but instead a time zone used by a few countries in Africa and Western Europe, including the UK during winter and Iceland year round. Neither UTC nor GMT ever changes for Daylight Saving Time. However, some of the countries that use GMT switch to different time zones during their DST period. For example, the United Kingdom is not on GMT all year; it uses British Summer Time, which is one hour ahead of GMT, during the summer months.
Since all Canary data values are time stamped using UTC, we do not compensate for DST transitions. Instead, it is the responsibility of the client applications reading data from the historian, like Axiom or the Excel Add-in, to properly interpret the UTC time stamp. The application simply applies the correct time zone, along with the appropriate DST rules for the time of year, to display the time stamp in the correct local time. Seems like a simple solution, right? Canary has used this strategy successfully since 1993 and it works great!
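To make that concrete, here is a minimal Python sketch (using the standard zoneinfo library and the US Eastern time zone purely as an example, not Canary client code) showing how two distinct UTC timestamps stored by a historian both display as "1:35 AM" local time on the fall-back morning once the client applies the time zone and DST rules:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo   # standard library, Python 3.9+

eastern = ZoneInfo("America/New_York")   # example zone only

# Two distinct UTC timestamps stored by the historian. Both display as
# 1:35 AM local time on the fall-back morning (November 5, 2017 in the US).
first_135 = datetime(2017, 11, 5, 5, 35, tzinfo=timezone.utc)   # 1:35 AM EDT
second_135 = datetime(2017, 11, 5, 6, 35, tzinfo=timezone.utc)  # 1:35 AM EST

for ts in (first_135, second_135):
    local = ts.astimezone(eastern)
    print(ts.isoformat(), "->", local.strftime("%Y-%m-%d %I:%M %p %Z"))

# Stored as UTC, the two samples never collide; only their display differs.
```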
Many other historians still store data values with a local time stamp. I recently saw one vendor recommend that their users shut the historian off and not collect data during the DST transition window. Not a very nice solution to such a simple problem.

I smile knowing our clients never have a firestorm of issues related to DST transitions; I hope that helps them sleep as peacefully as I do.
Sincerely,

Gary Stern
President and Founder
Canary Labs


Have a question you would like me to answer?  Email askgary@canarylabs.com

Speed Capabilities of the Canary Logger

Oct 29, 2017

The fourth edition of a weekly Question and Answer column with Gary Stern, President and Founder of Canary Labs. Have a question you would like me to answer? Email askgary@canarylabs.com                        

Dear Speedy,

The maximum speed of our system depends a lot on the quality of the OPC server you plan to connect us to. For some OPC servers, the maximum reliable speed might be 50 milliseconds, while for others it might be 250 milliseconds. Some servers with a limited number of tags might go all the way down to 30 milliseconds for short bursts of data. Other questions that need to be answered are the total number of tags, the number of high-speed tags (less than 500 milliseconds), and how often those tags log (are they continuous, or do they log in bursts?).

If you plan to log hundreds of thousands of tags and the majority change four times per second (every 250 milliseconds), that is probably not realistic unless everything has been configured perfectly on some very high-end hardware, although I doubt that is what you want to do. Perhaps we should schedule a call with one of our engineers to discuss this particular project?
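To put that scenario in perspective, here is the back-of-envelope arithmetic (the 200,000-tag figure below is simply an assumption standing in for "hundreds of thousands"):

```python
# Rough write rate implied by the scenario above.
tags = 200_000                  # assumption standing in for "hundreds of thousands"
updates_per_sec_per_tag = 4     # one change every 250 milliseconds

total_values_per_sec = tags * updates_per_sec_per_tag
print(f"{total_values_per_sec:,} values per second, sustained")  # 800,000
```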

One final note: from a performance standpoint, I am confident in saying that if Pi can log the data, we definitely can, and probably on lower-end hardware.

Sincerely,

Gary Stern
President and Founder
Canary Labs

Have a question you would like me to answer?  Email askgary@canarylabs.com


Smart Pipeline to Debut in Norway

Oct 24, 2017


A team of engineers in Norway has announced the completion of a large collaborative effort to outfit an offshore oil pipeline with thousands of sensors. This new "smart pipeline" will be able to provide real-time data to operators both on the ship and back on shore. Technology like this will greatly assist in increasing efficiency, eliminating downtime, and adhering to stronger safety standards.

As more "smart solutions" appear across numerous industries, data collection will become essential.  Canary is working hard to provide affordable industrial software that specializes in the storing of sensor data.  Read more about the smart pipeline below.

Greenlight For The World's First Intelligent Oil Pipelines

Electronics installed in Norwegian oil pipelines have been tested both at sea and in transport vessel reeling simulations. All that now remains is to install them offshore.  In recent years, researchers at SINTEF have been developing oil pipelines that can provide real-time condition monitoring reports by means of transmitting data to shore. This has been achieved in collaboration with industrial partners Bredero Shaw, Force Technology, Siemens Subsea and ebm-papst.

Last autumn, 200 meters of pipeline were laid in Orkanger harbor to find out whether the electronics would survive being submerged and whether the sensors would succeed in transmitting data onshore. "The tests were successful", says SINTEF Project Manager Ole Øystein Knudsen. Since then the researchers have carried out so-called "reeling tests" to investigate whether the electronics remain intact when the pipeline is reeled onto drums prior to transport offshore. "Pipes are stretched and deformed during such tests, and because the electronics are vulnerable to bending, some of the sensors were destroyed", says Knudsen. "But now that we know what happened we can make some small modifications to better protect the electronics", he says.

Need for real-time information


The SmartPipe project has been active since 2006, when the Research Council of Norway and a number of oil companies joined forces to find the approximately 25 million needed to fund the research program. As oil production moves into ever deeper and more environmentally sensitive waters, the pipelines carrying the hot well stream to a production platform have to be in good condition.
Instead of basing condition monitoring on safety measures and inspections made every five years or so, the aim of this project has been to obtain continuous real-time information that will enable an entirely new approach to checking pipeline status.

Belts packed with electronics


SmartPipe pipelines carry out condition monitoring in real time. This is achieved by installing belts around the pipelines packed with a multitude of sensors which measure pipe wall thickness, tension, temperature and vibration.  The sensor belts are located at 24-meter intervals along the length of the pipeline. A thick insulating layer of polypropylene covers the outside of the steel pipe construction, and this is where the electronics are concealed. It is also through this layer that wireless data transmissions can be sent either onshore or to the production platform.

US interest


After a year of tests, the project is now moving into its pilot phase.  Knudsen says that the project has been visited by an American oil company with whom he is having negotiations.  "The company contacted us following the Gulf of Mexico accident", he says. "Initially, they started their own project because they anticipated the future introduction of stricter pipeline monitoring regulations. But when they discovered that SmartPipe had come further down the road, they contacted us. We think this could be a commercial winner", says Knudsen.

From regulatory to real-time monitoring


The researchers see a number of benefits of the new pipelines. Since many pipelines also carry produced water from the reservoir, they are vulnerable to corrosion. This can be counteracted by adding small concentrations of inhibitor substances. However, errors in concentrations may occur and it may be some time before they are discovered. This may mean that a pipeline has to be decommissioned earlier than planned. Current pipeline condition monitoring by means of inspections and checks is also expensive.

The new system will make it possible to identify errors at an early stage and make adjustments.  Another important consideration is the monitoring of free-span sections of pipeline. In areas of undulating seabed, free-span sections may start to swing in response to marine currents.  "The new pipes mean that we can measure fatigue development and thus get accurate estimates of pipeline lifetimes", says Knudsen.

Read more at: https://phys.org/news/2015-03-green-world-intelligent-oil-pipelines.html#jCp

Using the Asset Model to Better Structure Tags

Oct 23, 2017

The third edition of a weekly Question and Answer column with Gary Stern, President and Founder of Canary Labs. Have a question you would like me to answer? Email askgary@canarylabs.com



Dear Lost,

Thanks for writing. The problem you face is common in most industries. Larger organizations often follow more complex naming structures in order to organize vast numbers of tags across many locations. However, that doesn't mean it has to be a daily chore for you or other staff to grab the data you need!

I think you need to use our Asset Model, a free product included with our Enterprise Historian. The Asset Model allows you to "reshape your browse tree" by modifying tag organization and tag labeling without affecting the tag name or location. To accomplish this, you use regular expressions to create both Model Rules and Asset Rules within the Asset Model.

Model Rules allow you to reshape your browse tree without actually changing the names of the tags.  For instance, one recent client used an alphanumeric code at the end of tag names to represent the type of asset the tag was associated with, as well as the physical location of the asset.  They created Model Rules that simplified the naming structure and replaced the alphanumeric code with common names.  So a tag named "24TE_1220C_Comp_Outboard" in the historian can also be found within the Asset Model as "Lake Charles.Compressors.1220.Outboard Bearing".
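Purely to illustrate the idea (this is generic Python regular-expression code, not actual Asset Model rule syntax, and the pattern and lookup tables are hypothetical), a single pattern can map a raw tag name like the one above to a friendlier asset path:

```python
import re

# Hypothetical pattern for tag names shaped like "24TE_1220C_Comp_Outboard":
# a site code, a unit number, the equipment type, and a measurement point.
pattern = re.compile(r"^(?P<site>\d{2})TE_(?P<unit>\d{4})C_Comp_(?P<point>\w+)$")

SITE_NAMES = {"24": "Lake Charles"}            # illustrative lookup tables
POINT_NAMES = {"Outboard": "Outboard Bearing"}

def friendly_path(tag):
    """Map a raw historian tag name to a human-readable asset path."""
    m = pattern.match(tag)
    if not m:
        return tag   # leave tags that don't match the pattern alone
    site = SITE_NAMES.get(m["site"], m["site"])
    point = POINT_NAMES.get(m["point"], m["point"])
    return f"{site}.Compressors.{m['unit']}.{point}"

print(friendly_path("24TE_1220C_Comp_Outboard"))
# -> Lake Charles.Compressors.1220.Outboard Bearing
```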

Asset Rules allow you to group common tags based on the asset they represent. For example, if a group of 60 sensors all belong to a compressor, you can organize them into an asset called "Compressor". An asset can have subgroups, allowing you to designate assets within an asset. For instance, a compressor may have groups of tags that represent an interstage cooler, a water pump, and a motor. These rules can be applied universally and can create hundreds of assets without requiring hours of work.

Here is a quick video that will help demonstrate the concept.

When browsing for tags inside the trending tool Axiom, you can either browse your historian, where tags present themselves based on their default organization and naming structure, or browse the Asset Model, which shows your newly created browse tree and assets. Axiom gives you the ability to quickly locate and load trends for assets, as well as compare multiple assets on a single chart. You can see more here.

Hope this helps save you time and a bit of daily frustration!

Sincerely,

Gary Stern
President and Founder
Canary Labs

Have a question you would like me to answer?  Email askgary@canarylabs.com

