
Using the Asset Model to Better Structure Tags

Oct 23, 2017

The third edition of a weekly Question and Answer column with Gary Stern, President and Founder of Canary Labs. Have a question you would like me to answer? Email askgary@canarylabs.com



Dear Lost,

Thanks for writing.  The problem you face is commonplace in most industries.  Larger organizations often follow more complex naming structures in order to best organize vast amounts of tags across many locations.  However, that doesn't mean it has to be a daily chore for you or other staff to grab the data you need!

I think you need to use our Asset Model, a free product that is included with our Enterprise Historian.  Asset Model allows you to "reshape your browse tree" by modifying tag organization and tag labeling without affecting the tag name or location.  To accomplish this, you use regular expressions to create both Model and Asset Rules within our Asset Model.

Model Rules allow you to reshape your browse tree without actually changing the names of the tags.  For instance, one recent client used an alphanumeric code at the end of tag names to represent the type of asset the tag was associated with, as well as the physical location of the asset.  They created Model Rules that simplified the naming structure and replaced the alphanumeric code with common names.  So a tag named "24TE_1220C_Comp_Outboard" in the historian can also be found within the Asset Model as "Lake Charles.Compressors.1220.Outboard Bearing".
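To make the idea concrete, here is a rough sketch in Python of what a regex-driven Model Rule does. The pattern and lookup tables are hypothetical, not Canary's actual rule syntax:

    import re

    # Hypothetical sketch of a regex-driven Model Rule; the pattern and
    # lookup tables are illustrative, not Canary's actual rule syntax.
    SITES = {"24": "Lake Charles"}
    ASSET_TYPES = {"Comp": "Compressors"}
    POSITIONS = {"Outboard": "Outboard Bearing"}

    def model_rule(tag):
        # "24TE_1220C_Comp_Outboard" -> "Lake Charles.Compressors.1220.Outboard Bearing"
        m = re.match(r"(\d{2})TE_(\d+)C_(\w+?)_(\w+)$", tag)
        if not m:
            return tag  # unmatched tags keep their original place in the tree
        site, unit, asset, position = m.groups()
        return f"{SITES[site]}.{ASSET_TYPES[asset]}.{unit}.{POSITIONS[position]}"

    print(model_rule("24TE_1220C_Comp_Outboard"))

Running this on the example tag prints the friendly path while the underlying historian tag name stays untouched.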

Asset Rules allow you to group common tags based on the asset they represent.  For example, if a group of 60 sensors all belong to a compressor you can organize them into an asset called "Compressor".  The asset can have subgroups allowing you to designate assets within an asset.  For instance, a compressor may have a group of tags that represent an interstage cooler, a water pump, and a motor.  These rules can be applied universally, and create hundreds of assets without requiring hours of work.
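Conceptually, an Asset Rule is a grouping pass over tag names. A toy Python sketch, with illustrative tag names and an illustrative grouping scheme:

    from collections import defaultdict

    # Hypothetical sketch of an Asset Rule: one grouping pass turns flat tag
    # names into nested assets. Tag names and the scheme are illustrative.
    tags = [
        "Comp1.InterstageCooler.Temp", "Comp1.InterstageCooler.Pressure",
        "Comp1.WaterPump.Flow", "Comp1.Motor.Current", "Comp2.Motor.Current",
    ]

    assets = defaultdict(lambda: defaultdict(list))
    for tag in tags:
        compressor, subasset, point = tag.split(".")
        assets[compressor][subasset].append(point)

    # One rule applied to every matching tag yields the whole browse tree:
    for comp, subs in assets.items():
        print(comp)
        for sub, points in subs.items():
            print(f"  {sub}: {points}")

The same rule applied across the whole tag list is what lets hundreds of assets appear without hours of manual work.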

Here is a quick video that will help demonstrate the concept.

When browsing for tags inside the trending tool Axiom, you have the choice to browse your Historian where the tags will present themselves based on their default organization and naming structure, or you can browse across the Asset Model which will allow you to see your newly created browse tree and assets.  Axiom gives you the ability to quickly locate and load trends for assets, as well as compare multiple assets on a single chart.  You can see more here.

Hope this helps save you time and spares you a bit of daily frustration!

Sincerely,

Gary Stern
President and Founder
Canary Labs

Have a question you would like me to answer?  Email askgary@canarylabs.com



Machine Learning: 10 Things You Should Know

Oct 20, 2017

As you can imagine, we speak with a lot of clients and potential customers that are very interested in machine learning. I remember the first time I was exposed to the concept: what an incredible step in technology! My mind was racing with the possibilities, and frankly, still is.

We have met incredibly intelligent men and women working for forward-thinking companies like SparkCognition, Mnubo, and Seeq, all of which are in the machine learning and predictive analytics space.  Canary continues to focus on collecting and storing data so that you can partner with companies like these to transform your process and reduce downtime!

Recently a colleague emailed me the Quora Q&A below and I thought it would be a fitting piece to share.  I particularly love the focus on how valuable good data is and the requirements of having a lot of it.  Seems to fit rather nicely with Canary's focus on providing years of unaltered sensor data.  Enjoy the article and don't worry, SkyNet is still not active....


Question: What Should Everyone Know About Machine Learning?

Answer by Daniel Tunkelang, who has led machine learning projects at Endeca, Google, and LinkedIn.

As someone who often finds himself explaining machine learning to non-experts, I offer the following list as a public service announcement.

1. Machine learning means learning from data

AI is a buzzword. Machine learning lives up to the hype: there are an incredible number of problems that you can solve by providing the right training data to the right learning algorithms. Call it AI if that helps you sell it, but know that AI is a buzzword that can mean whatever people want it to mean.


2. Machine learning is about data and algorithms, but mostly data

There’s a lot of excitement about advances in machine learning algorithms, and particularly about deep learning. But data is the key ingredient that makes machine learning possible. You can have machine learning without sophisticated algorithms, but not without good data.

3. Unless you have a lot of data, you should stick to simple models

Machine learning trains a model from patterns in your data, exploring a space of possible models defined by parameters. If your parameter space is too big, you’ll overfit to your training data and train a model that doesn’t generalize beyond it. A detailed explanation requires more math, but as a rule you should keep your models as simple as possible.
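A toy illustration of the rule, assuming nothing beyond NumPy: a straight line and a degree-7 polynomial are both fit to ten noisy samples of a linear signal, and the flexible model wins on training error but generalizes worse:

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 10)
    y_train = 2 * x_train + rng.normal(0, 0.1, 10)  # a linear signal plus noise
    x_test = np.linspace(0, 1, 100)
    y_test = 2 * x_test

    for degree in (1, 7):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")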

4. Machine learning can only be as good as the data you use to train it

The phrase “garbage in, garbage out” predates machine learning, but it aptly characterizes a key limitation of machine learning. Machine learning can only discover patterns that are present in your training data. For supervised machine learning tasks like classification, you’ll need a robust collection of correctly labeled, richly featured training data.

5. Machine learning only works if your training data is representative

Just as a fund prospectus warns that “past performance is no guarantee of future results”, machine learning should warn that it’s only guaranteed to work for data generated by the same distribution that generated its training data. Be vigilant of skews between training data and production data, and retrain your models frequently so they don’t become stale.
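A minimal sketch of the kind of skew check this implies; the feature, numbers, and threshold are made up for illustration:

    import statistics

    # Toy skew check: flag drift when a feature's production mean moves far
    # from its training mean. Data and threshold are illustrative only.
    def drifted(train, prod, tolerance=2.0):
        mu, sigma = statistics.mean(train), statistics.stdev(train)
        return abs(statistics.mean(prod) - mu) > tolerance * sigma

    train_temps = [68, 70, 71, 69, 70, 72, 70]
    prod_temps = [80, 82, 81, 83, 79, 84, 82]
    print(drifted(train_temps, prod_temps))  # True -> time to retrain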

6. Most of the hard work for machine learning is data transformation

From reading the hype about new machine learning techniques, you might think that machine learning is mostly about selecting and tuning algorithms. The reality is more prosaic: most of your time and effort goes into data cleansing and feature engineering, that is, transforming raw features into features that better represent the signal in your data.
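For instance, a hypothetical sketch of feature engineering on one raw record (the fields and transformations are invented for illustration):

    from datetime import datetime

    # Hypothetical feature engineering: raw fields become features that
    # better expose the signal (hour of day, weekend flag, deviation from
    # a nominal temperature).
    def engineer_features(raw):
        ts = datetime.fromisoformat(raw["timestamp"])
        return {
            "hour": ts.hour,
            "is_weekend": ts.weekday() >= 5,
            "temp_delta": raw["temp_f"] - 70.0,
        }

    print(engineer_features({"timestamp": "2017-10-20T14:30:00", "temp_f": 82.5}))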

7. Deep learning is a revolutionary advance, but it isn’t a magic bullet

Deep learning has earned its hype by delivering advances across a broad range of machine learning application areas. Moreover, deep learning automates some of the work traditionally performed through feature engineering, especially for image and video data. But deep learning isn’t a silver bullet. You can’t just use it out of the box, and you’ll still need to invest significant effort in data cleansing and transformation.

8. Machine learning systems are highly vulnerable to operator error

With apologies to the NRA, “Machine learning algorithms don’t kill people; people kill people.” When machine learning systems fail, it’s rarely because of problems with the machine learning algorithm. More likely, you’ve introduced human error into the training data, creating bias or some other systematic error. Always be skeptical, and approach machine learning with the discipline you apply to software engineering.

9. Machine learning can inadvertently create a self-fulfilling prophecy

In many applications of machine learning, the decisions you make today affect the training data you collect tomorrow. Once your machine learning system embeds biases into its model, it can continue generating new training data that reinforces those biases. And some biases can ruin people’s lives. Be responsible: don’t create self-fulfilling prophecies.


10. AI is not going to become self-aware, rise up, and destroy humanity

A surprising number of people (cough) seem to be getting their ideas about artificial intelligence from science fiction movies. We should be inspired by science fiction, but not so credulous that we mistake it for reality. There are enough real and present dangers to worry about, from consciously evil human beings to unconsciously biased machine learning models. So you can stop worrying about SkyNet and “superintelligence”.

There’s far more to machine learning than I can explain in a top-10 list. But hopefully this serves as a useful introduction for non-experts.


"What should everyone know about machine learning?" originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world.



Real-Time Data Powers Water/Wastewater Management

Oct 17, 2017

Water and wastewater utilities across the United States find themselves in very similar situations: plagued by aging infrastructure, handicapped by tight budgets, and unable to access live data.  A recent article (below) published last month by Pumps&Systems discusses how IIoT can be leveraged to provide more visibility to small and medium-sized municipalities.  Access to data that has previously been unavailable will help smart professionals target specific infrastructure needs without wasting resources on infrastructure that does not pose an immediate threat.

Canary could be a perfect solution for the small to medium-sized municipality that needs access to remote data on a budget.  The Canary Cloud offers a monthly/quarterly subscription (starting at $195/mo) without any large capital investment.  Learn more about the Canary Cloud to see if this is the right system for your utility.



IIoT & Real-Time Data are Transforming Water Management

Published: Pumps&Systems, 9/26/17
Authors: John Fryer, Craig Resnick


No one understands the poor state of the water and wastewater infrastructure in many U.S. cities and municipalities better than public works professionals. Due to tight budgets and a lack of broad public recognition of the problem, needed improvements have been deferred for years or even decades in some cases.

The problem isn’t limited to prominent public cases such as Flint, Michigan. Studies have revealed water losses between source and destination as high as 46 percent. Losses of this magnitude are clearly unsustainable over the long term. Yet a “rip and replace” of the existing water and wastewater systems is not feasible, promised federal infrastructure investments notwithstanding.

How can cities and towns protect the quality and availability of their public water and wastewater systems within their budget and resource constraints? Increased use of data analytics and IIoT is playing a key role in answering this question, a role that will only grow in scope and importance.

The use of data in managing water and wastewater systems is nothing new. Public works professionals have long relied on test data from water samples and other manually collected metrics to monitor their product and the efficiency of their distribution systems. But this data is limited and retrospective. Results provide only a snapshot of what was happening at a particular moment in the past. And the data is rarely analyzed in aggregate, missing the opportunity to identify subtle trends that could provide early warning of developing problems.

A New Paradigm

The IIoT changes this paradigm. Installing sensors at critical control points linked to data aggregation and analytics systems enables continuous monitoring, measurement and analysis of a wide range of parameters, from water quality to flow rates to equipment performance, which delivers insights in near real time. The advantages are significant.

Consider the water loss problem. By placing sensors at key distribution points to monitor and analyze flow data, operators can accurately pinpoint problem areas and target their scarce resources only on those sections requiring repair or upgrades. If a new leak develops, operators can be alerted to the flow problem in seconds, allowing faster response to minimize loss and the risk of an outage. Just as significant, analyzing data from across the water or wastewater infrastructure over time provides insights that help municipalities make more informed long-term capital planning decisions.
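As a simple sketch of that alerting logic (the readings and the loss threshold are hypothetical):

    # Hypothetical leak alert: flag a distribution segment when more than 5%
    # of metered inflow goes unaccounted for. Readings are illustrative.
    LOSS_THRESHOLD = 0.05

    def check_segment(flow_in_gpm, flow_out_gpm):
        loss = (flow_in_gpm - flow_out_gpm) / flow_in_gpm
        return loss > LOSS_THRESHOLD, loss

    alert, loss = check_segment(flow_in_gpm=1200.0, flow_out_gpm=1080.0)
    if alert:
        print(f"Possible leak: {loss:.0%} of inflow unaccounted for")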

High-Value Applications

IIoT can transform the management of water and wastewater systems. But it also increases the need to protect the data. Ensuring continuous, uninterrupted data availability is a critical success factor for tapping the full potential of real-time analytics for high-value applications.

Safety and Compliance

Real-time, continuous monitoring and analytics give public works professionals the ability to identify and respond to quality issues proactively, protecting public safety. This data also provides a rich historical record to support compliance documentation. Any interruption in the flow of this data, however, could lead to operational issues that might affect supply, pressure or other critical performance issues. If data is lost, this gap could lead to a regulatory compliance violation, resulting in a fine.

Predictive Maintenance

Continuous monitoring and analytics take asset performance management (APM) to new heights. Instead of waiting until pumps or valves fail, sensors gather data on vibration and other subtle performance variations and feed it into analytics engines, detecting early signs of problems, thereby avoiding unscheduled downtime. With lead time on replacements often stretching to weeks, knowing in advance when a piece of equipment requires overhaul or is nearing end-of-life is crucial to avoiding a process interruption.
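A minimal sketch of the early-warning idea, flagging a pump whose latest vibration reading runs well above its historical baseline (all values hypothetical):

    import statistics

    # Hypothetical early-warning check: flag a pump when its latest vibration
    # reading runs well above the historical baseline. Values are made up.
    baseline = [0.12, 0.11, 0.13, 0.12, 0.12]  # in/sec during normal operation
    recent = [0.14, 0.17, 0.21, 0.26, 0.31]    # creeping upward

    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)

    if recent[-1] > mean + 3 * stdev:
        print(f"Schedule an overhaul: vibration {recent[-1]} in/sec exceeds "
              f"baseline {mean:.2f} +/- {stdev:.3f}")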

Analytics also provide insights that enable municipalities to repair or replace only what actually needs to be replaced, optimizing use of financial resources. Any interruption in this data flow, however, could effectively prevent operators from knowing the condition of key components, possibly leading to unscheduled downtime.

Remote Monitoring

Remote monitoring allows fewer people to monitor many more assets, resulting in labor savings that can quickly show a return on investment for these projects. This is key, especially as older employees begin to retire and finding qualified talent to replace them can be challenging.

In addition, the ability to monitor systems using devices that people already have, such as smartphones and tablets, saves municipalities from purchasing dedicated devices for remote monitoring. Uninterrupted remote availability is essential in these settings, as it ensures that systems can be continuously monitored even with a lack of on-site staff.

Water Conservation

An example of the opportunity of the IIoT is the potential to integrate weather data, including temperature and precipitation trends, into plant management analytics. This can provide predictive insights to dramatically improve water allocation and wastewater processing. These insights enable a more proactive approach to declaring or lifting bans on lawn watering or filling pools, or agricultural water distribution. Improving conservation efforts enables municipalities to avoid costly expansions of capacity. Uninterrupted data makes it possible.

Laying the Groundwork

As more public works professionals recognize how real-time data analytics delivers value in real-world applications, more municipalities will take their first steps on the IIoT journey. Certainly, developments such as the Smart City Initiative are encouraging urban centers to adopt new, intelligent monitoring and automation technologies to improve both the efficiency and safety of public services. As data analytics become more mainstream in public works, these capabilities will eventually migrate down to mid-size and even smaller communities.

As public works and municipal leaders plan their IIoT road map, it is important to make the right investments now to ensure the greatest payback. Investing in data systems that provide the high availability required for continuous monitoring and analytics is critical.

Equally important is making sure any new data infrastructure is simple to operate and serviceable, given the limited IT resources typical of many public works departments. The right decisions today will position public works departments to reap the benefits of intelligent water and wastewater systems tomorrow.



How Unlimited is Unlimited?

Oct 16, 2017

The second edition of a weekly Question and Answer column with Gary Stern, President and Founder of Canary Labs. Have a question you would like me to answer? Email askgary@canarylabs.com



Dear Max,

That's a great question.... without a simple answer.  We reference this package as unlimited, but as we all know, there is always a limit.... though we doubt you will ever find ours.  Let me explain.

A Canary Historian is designed to gather time-series data for many different industries with hundreds of applications.  Every user has different data with different parameters, and the Historian is optimized to work in each of these situations.  For instance, one user may be collecting only 50,000 tags at one-second scan intervals, with each tag value changing on average once every other scan.  Another individual may be collecting 500,000 tags at ten-second scan intervals with an average tag change only once every five scans.  Now factor in the performance of the hardware that is housing our platform.  As you can see, there are simply too many variables, most of which are constantly changing.  As I tell the staff all the time, "real data is messy".
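For a feel for the arithmetic, here is a quick back-of-envelope comparison of those two workloads (a sketch, not a sizing tool):

    # Back-of-envelope write rates for the two example workloads above.
    def writes_per_second(tags, scan_interval_s, changes_per_scan):
        return tags / scan_interval_s * changes_per_scan

    print(writes_per_second(50_000, 1, 0.5))    # 25,000 values/sec
    print(writes_per_second(500_000, 10, 0.2))  # 10,000 values/sec

Even before hardware differences enter the picture, the smaller tag count produces the heavier write load, which is why no single number answers the question.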

So how do we claim an unlimited server when we have no way to know exactly how you will ask it to perform?  We simply over-engineer everything.  Internally, we have tested our database in two different "overload" scenarios.  In the first, we logged a million tags and changed all million tag values every second.  The database ran without issue.  In the second, we tested a single machine with 25 million tags, with a much higher scan time and a much lower change rate.  Again, the database performed.  Obviously, both of these tests were conducted using higher-end hardware; I used my top-of-the-line desktop machine for one of them.  You can read more about other aspects of our performance testing here if you like.

Here is the takeaway... we have purposely developed a database that should be able to handle millions of tags on a single server, and provided a licensing model that allows you to add tags and give anyone in your organization access to the data without worrying about future costs.  For extreme use cases we will gladly make recommendations on both server capacity and hardware; just let us know your specific situation.

Sincerely,

Gary Stern
President and Founder

Have a question you would like me to answer?  Email askgary@canarylabs.com


Municipality-Friendly Smart City Tech on a Budget

Oct 11, 2017

Often local municipalities lag behind on the tech curve because of inadequate funding and fewer resources. The IoT revolution can be different. Centered on low-cost, simple-to-deploy solutions, IoT will allow small-to-medium-sized municipalities to streamline data collection and gain better asset visibility. With a finger on the "pulse" of their data, operators and supervisors will make key cost-saving decisions more often.
However, this will require a plan and the right partners.  Municipalities will rely heavily on strong system integrators and engineering firms to help them leverage the available technology and work within their budget.  Hosted solutions that connect hundreds of affordable, remote smart sensors, will be extremely valuable.  To be affordable, these solutions need to only cost a few hundred dollars a month and need to eliminate up-front capital expenses.
In a recent article written by Ken Briodagh, Editorial Director for IoT Evolution Magazine, two different municipalities are profiled that are using remote data loggers to collect data from sites that typically have zero visibility.  By leveraging LoRa network technology, both are now able to see real-time data and make better operational decisions.  Read more in a copied version of the article below.

Canary has developed several connectors to leverage the LoRaWAN, a Low Power Wide Area Network (LPWAN) specification intended for wireless battery operated sensors in a regional network.  Coupled with our hosted Cloud Historian and Axiom visualization tools, we can offer affordable IoT solutions with little to no upfront software expense.

Curious to learn more?  Schedule a time to speak with us regarding your data needs by email or call us directly at (814) 793-3770.

How Municipalities are Making IoT Work for Them

Written by Ken Briodagh, Editorial Director
Published by "IoT Evolution Magazine"
It’s always exciting to talk about how the biggest cities in the world are implementing new smart city strategies. Chicago, London, Singapore, and many others around the world are taking leadership roles in the global IoT, with test cases and economic development strategies focused on the data and technology that is designed to improve citizens’ daily lives and increase the cities’ efficiency and services.
And, although that leadership is needed, smaller municipalities have pressing needs that might be addressed with IoT and smart city technology, but far less budget with which to implement them. Those challenges aren't stopping the progress of the most forward-thinking communities. Instead, they are forcing them to be more creative and strategic.
In Ontario, Canada's Region of Waterloo, the municipal government is working with eleven-x, operator of a Canadian coast-to-coast LoRaWAN low-power wide-area network, on a new smart city development project. In what they are calling the first IoT application of its kind in Canada, Waterloo and eleven-x are testing real-time automated data collection from the region's water supply production and monitoring wells. eleven-x's network is designed to enable connectivity with low-cost devices and has been tasked with providing real-time communication of the status of key parameters for managing the region's primary water supply sources.
“The value we gain in having well water data at our fingertips is tremendous for us in terms of decision-making,” said Eric Hodgins, manager of hydrogeology and source water for the Region of Waterloo. “The technology may allow us to connect our wells directly with our water operations management system and give us the ability to advance the way we monitor and manage this crucial resource.”
About 75 percent of the water supply for the region is derived from groundwater through a system of 132 large production wells extracting water from local sand, gravel, and rock aquifers. The region automatically captures data from a network of 585 monitoring wells to assess any impacts and provide information to manage its water supply sources. However, the data is only collected manually several times over the course of a year, which results in delays in getting status information for each of the supply wells. These delays restrict the decision-making ability of the region in terms of managing its water supply sources.
Eleven-x is enhancing its network with data logging devices to enable automated measuring and tracking of well water levels and temperature from select production and monitoring wells. Water data will be collected and communicated automatically on an hourly basis. Additionally, a newly added capability of event-driven real-time alerts based on pre-determined parameters, such as significant level fluctuations, will also be tested.
“Real-time updates on our wells will give us a better understanding of what is happening with our water supply sources and could really improve this key service we provide to the residents of our region,” said Nancy Kodousek, the director of water services for the Region of Waterloo. “The opportunity to reduce our costs is a real bonus.”
Far to the south, scientists for the City of Lakeland, Fla., have been manually monitoring lake levels to prevent flooding. That has been a time-consuming, resource-draining task, especially during the nearly four-month rainy season.
“Maintaining balanced water levels is critical to avoid flooding in residential areas and conserve enough water for the dry season,” said Laurie Smith, manager of the lakes and stormwater division for the City of Lakeland. “Our technician has to drive back and forth between 11 lakes and make sure the levels don’t get too high.”
To monitor water levels more efficiently, the city has turned to Sensus, a Xylem brand, to help it leverage IoT technology and advanced connectivity. The Lakeland team has deployed the Sensus FlexNet system to create remote water monitoring stations at two lakes using the Sensus Smart Gateway sensor interface.
With Smart Gateway and the FlexNet system, the City of Lakeland’s scientists are now able to collect water level data remotely in real time. Technicians can identify when lakes are at risk for flooding and drive directly to the affected lakes to open or close the installed flood control structures, saving time and operational costs.
With the successful pilot of the solution at two of the city’s lakes, the team now looks forward to deploying remote monitoring at the remaining nine lakes.

Ken Briodagh is a writer and editor with more than a decade of experience under his belt. He is in love with technology and if he had his druthers would beta test everything from shoe phones to flying cars.
Edited by Ken Briodagh

Transform IIoT Data Into Actionable Insight

Oct 4, 2017

Every day the machinery and equipment that surrounds you produces more and more data. The IIoT is moving forward, whether you are embracing it or not. Earlier in the year, Forbes published an article that estimated the IoT market would see more than $267 billion in purchasing dollars by 2020! As more industries and operations embrace IIoT technology, the challenge they will face is turning this massive data pool into knowledge that will help them run more efficiently and return a better ROI.
Canary hopes to help solve this problem by offering simple software that easily collects data from all IIoT sources without requiring database management.  The article referenced below discusses some interesting ways you can save time and money, assuming you have access to all the necessary data!

Factories of the Future Save Energy with IoT

Written by Megan Ray Nichols, IoT Science Writer
Published by "IoT Evolution World", https://tinyurl.com/y9uak9dk

The Industrial Internet of Things (IIoT) is quickly becoming the new standard for data collection, data analysis and hardware connectivity in manufacturing. Although much of the technology driving IIoT is still in its earliest stages of development and implementation, its potential to transform the industry has already earned IIoT the nickname of Industry 4.0. Not only does it lead to increased productivity, efficiency and competitiveness, but it can also help you trim some of your operational costs.

How to Save Money with IIoT

Some manufacturers shy away from IIoT due to the initial investment of time and money.  While there is a lot of planning and work involved, you can take advantage of many different options to increase your operational efficiency and lower your day-to-day costs.
Utilize Smart Meters: 
Although there are a lot of complaints about smart meters installed in the home, they do provide numerous benefits, especially to the modern factory or manufacturing plant. Smart meters provide almost real-time reporting on metrics like total energy consumption and the total cost you're paying for electricity. They can even identify grid shortcomings and other inefficiencies, which ultimately lets service providers improve their infrastructure in the future.
One manufacturer, located on the East Coast, manufactures enough smart meters to cut electricity usage by almost 2 million megawatt hours on an annual basis. This is more than enough to provide power for 52,000 homes throughout an entire year.
Smart meters even help the environment. The decrease in overall electricity consumption and demand eliminates the need for new power plants while simultaneously reducing our nation’s dependency on the old, inefficient power plants we currently maintain. Consumers end up paying less while minimizing their carbon footprints and helping to preserve our environment.
Identify Hardware Inefficiencies:
The typical manufacturing plant features a lot of different hardware and machinery. Unexpected malfunctions are not only costly, but they could be hazardous to the safety of any employees on the factory floor. Not only does IIoT identify hardware inefficiencies and potential risks, but next-gen systems automatically shut down machines to perform predictive maintenance and prevent even longer downtimes.
Many factories utilize large-scale steam traps on a day-to-day basis. Just one malfunctioning steam trap wastes as much as $30,000 per year, but IIoT monitoring – via wireless acoustic transmitters – lets employees monitor this hardware more directly and efficiently.
Others rely on backup generators to provide electricity in the event of a sudden power outage. It’s sometimes difficult to determine the right size generator for your factory, but the data collected through IIoT makes it easier. Not only does it provide you with figures regarding your actual power consumption, but some systems can identify wasted power, too.
Control the Temperature:
Many homeowners use smart thermostats to moderate the temperature in their homes, but these devices are useful in the factory, too. According to OSHA, the ideal temperature when working indoors is between 68 and 76 degrees Fahrenheit with humidity ranging from 20% to 60%. These standards are easy to achieve with smart thermostats and other connected, IIoT-driven devices.
Whether you’re on the factory floor or working within a formal office setting, the interior temperature has the potential to impact productivity. According to a recent report, half of all respondents are unhappy with their workplace temperature multiple times per month. The same report suggests that most employees expect improvements in both productivity and morale when they’re given control over the interior temperature at their job.
You can also moderate your factory’s temperature by running some equipment during off-peak power hours. Smart appliances already do this in consumer households around the nation, so it makes sense for factories and plants to follow suit. If a particular machine tends to run hot and heats up the entire facility, consider operating it during the nightshift. Apart from lessening the load on your local power grid, this also spares employees from working around hot machinery on those warm summer days.
Maximize Your Savings with Next-Gen Hardware:
Despite the usefulness and efficiency of IIoT, there are still some traditionalists that are set in their ways. Once they begin to realize how much they can save – in the way of operational costs and by mitigating downtime and lost productivity – many embrace IIoT with open arms. Those who hesitate to implement IIoT might find it difficult to compete in the coming years. 

Avoiding Performance Loss Over Time

Oct 3, 2017

The first edition of a weekly Question and Answer column with Gary Stern, President and Founder of Canary Labs. Have a question you would like me to answer? Email askgary@canarylabs.com


Dear Curious,

As historical data is added to an SQL database, the database adds more rows to its tables. As a table within the database continues to grow, with more and more rows, it needs more memory and disk space to load and search for data, resulting in slower and slower performance.  When an SQL database gets too slow, the DBA needs to roll off some older data.  The data is either abandoned, “rolled up” to a lower resolution (losing data), or placed offline (no longer easily accessible).
To better handle both performance and resources, Canary uses a custom, non-SQL database designed to handle the high volumes of data typical of historian applications without requiring a DBA.  To achieve the highest performance, Canary incorporates both “time and dataset segmentation” into our design.
The overall database is physically stored in pieces called datasets (logical groupings of commonly collected tags) and, within a dataset, by time-range segments.  Segmentation control, what we call “rollover”, is normally set to daily, but there are various other options.  The historian keeps track of all these database segments, moving portions of them in and out of memory as necessary (based on demand).  This allows us to manage an extremely large database without needing a large and expensive server.  It also keeps performance consistent regardless of size.  We have customers with 20 years of raw, unchanged data “on-line” that is immediately and quickly accessible.
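Conceptually, the scheme looks something like the following sketch; the file layout and extension are illustrative, not Canary's actual on-disk format:

    from datetime import date, timedelta

    # Conceptual sketch of daily "rollover": each dataset's history lives in
    # per-day segments, so a query touches only the segments in its range.
    # The file layout and extension are illustrative, not Canary's format.
    def segment_path(dataset, day):
        return f"{dataset}/{day.isoformat()}.dat"

    def segments_for_range(dataset, start, end):
        return [segment_path(dataset, start + timedelta(days=i))
                for i in range((end - start).days + 1)]

    # Three days of "Compressors" data means opening just three segments,
    # no matter how many years of history are kept online:
    print(segments_for_range("Compressors", date(2017, 10, 1), date(2017, 10, 3)))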

It is worth noting, however, that there are two sides to the performance issue, “reading” and “writing”, and it goes back to the underlying design of how the data is stored.  Some designs can write data quickly, but when they read data, the performance is terrible.  The design can instead be optimized for data retrieval, but then writing the data is more difficult and much slower, limiting the number of tags and the amount of data that a single server can handle.
Here, time and experience favor Canary.  The “Historian Core” is a fourth generation design that began in 1987.  Each new generation incorporated the field operating experience from customers and leveraged new technological improvements.  We continue to make many improvements to the Historian service for easier access, security, administration, etc. to ensure that all of Canary’s surrounding products are built around a rock-solid and time-tested solution.
So what does this mean to you?  Simply put, the Canary historian performs.  Canary tests the Historian’s writing performance by requiring a single machine to write one million tags with each tag value changing every second.  For reading performance, we require the Historian to return more than four million TVQs (Timestamp, Value, Quality) per second.  This raw horsepower means you never compromise performance and always have your data accessible.
Another issue affecting performance is the amount of storage required.  We believe Canary has the best compression and lowest storage needs of any historian on the market; we have never seen anything better.  Besides the ability to “dead-band” the incoming data at the logger level, the historian uses a format that eliminates redundant information.  And since all Canary compression is “lossless”, the data that goes in is exactly the same as the data that comes out, an important feature when later using analytical tools such as predictive analysis.
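For illustration, a minimal sketch of logger-level dead-banding (the band size and values are made up):

    # Hypothetical logger-level dead-band: a new value is stored only when it
    # moves more than the band away from the last stored value.
    def deadband_filter(values, band=0.5):
        stored, last = [], None
        for v in values:
            if last is None or abs(v - last) > band:
                stored.append(v)
                last = v
        return stored

    print(deadband_filter([70.0, 70.2, 70.4, 71.1, 71.2, 69.9]))
    # -> [70.0, 71.1, 69.9]  (small flutters around the last value are dropped)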

I know this is a lengthy reply to what seems like a relatively simple question, but it’s really quite a complicated issue.
Sincerely,

Gary Stern
President and Founder

Have a question you would like me to answer?  Email askgary@canarylabs.com

