Edge or Cloud? The Six Factors That Determine Where to Put Workloads
It’s as predictable as continental drift: every few decades, the center of gravity in computing shifts from the center to the edge and back again, driven by changing demands and technological advances.
From 1948 to the 1970s, centralized mainframes ruled because that’s all there was. Then Intel introduced the 4004 in 1971, which brought processing power to desktops: distributed edge computing was born. In 1993 the Mosaic browser (whose creators went on to found Netscape) put the web within everyone’s reach. Sure, the browser caused PC sales to explode, but it had an even bigger impact on the evolution of what we could do with aggregated computing power. Companies like Amazon and Google became global giants while data centers grew to be the size of the Pentagon. The cloud seemed poised to take over everything.
Then came the fail whale. Remember how Twitter used to regularly fail because users overwhelmed it? That was the first of a growing number of signs that trying to run everything from a central point wasn’t going to work.
And now we have IoT. Not only will the Internet of Things generate staggering quantities of data, much of this data will have to be acted on in rapid-fire fashion. IDC estimates that 40% of IoT data will be captured, processed and stored pretty much where it was born, while Gartner estimates the amount of data created outside the cloud or enterprise data centers will grow from 10% today to 55% by 2022.
So how do you figure out what goes where?
Who Needs It? With IoT, just about everyone. Manufacturers and utilities are already tracking millions of data streams and generating terabytes a day. Machine data can also arrive at blazing speed (vibration systems can churn out over 100,000 signals a second) and in a dizzying number of formats.
But everyone wants different cuts of that data. Australian Gas Light (AGL), Australia’s largest utility, tracks 45,000 data streams every five minutes in its diagnostic center. Some streams are watched by people monitoring water levels at hydro dams; others feed demand analysis. Many of those users sit near the asset, so the data might as well stay there.
The best bet: look at the use case first. Chances are, every workload will require both cloud and edge technologies, but the size of the edge might be larger than anticipated.
How Urgently Do They Need It? We’ve all become accustomed to the Netflix wheel telling us a movie is only 17% loaded. But imagine coming home to lights stuck at 17% brightness. Utilities, manufacturers and other industrial companies operate in a real-time world. Any amount of network latency can constitute an urgent problem.
CAISO, California’s grid operator, receives updates on the state’s power status every four seconds, a level of immediacy that paves the way for more renewables. (Similarly, DTE Energy is on track to cut the outages its customers experience by 6.6 million minutes a year through new grid sensors.) Rule of thumb: if interruptions can’t be shrugged off, stay on the edge.
Is Anyone’s Life on the Line? When IT managers think about security, they think firewalls and viruses. When engineers on factory floors and other “OT” employees (who will be some of the biggest users of IoT) think about security, they think about fires, explosions and razor wire. The risk of a communications disruption on an offshore drilling rig far outweighs the cost of putting all of the necessary computing assets on the platform itself. Do a risk-reward assessment.
What Are the Costs? Let’s say the data isn’t urgent, it won’t impact safety, and more than a local group of engineers will need it. Do you send it to the cloud? It depends on the cost. Too many companies have approached the cloud like a teenager in 2003 given their first smartphone: everything seems fine until the bill comes.
Wikibon’s David Floyer found that the bandwidth costs for even basic reporting at a wind farm can be a burden after three years. The emergence of LTE and private networks, however, may begin to change the picture a bit.
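As a rough illustration of how those bills sneak up (the figures below are placeholder assumptions, not Wikibon’s numbers or any vendor’s pricing), a quick back-of-envelope script shows how even basic per-sensor reporting compounds over three years:

```python
# Back-of-envelope cloud cost for a wind farm's basic reporting.
# Every figure here is an illustrative assumption, not real pricing.

SENSORS = 2000                  # assumed sensors across the farm
SAMPLES_PER_MINUTE = 6          # assumed 10-second reporting interval
BYTES_PER_SAMPLE = 200          # assumed payload incl. protocol overhead
COST_PER_GB_MONTH = 0.50        # assumed blended ingest/storage/egress cost ($/GB)

def monthly_gb() -> float:
    """Gigabytes shipped to the cloud each month."""
    samples = SENSORS * SAMPLES_PER_MINUTE * 60 * 24 * 30
    return samples * BYTES_PER_SAMPLE / 1e9

def cumulative_cost(months: int) -> float:
    """Total spend, assuming retained data keeps accruing storage charges."""
    return sum(monthly_gb() * m * COST_PER_GB_MONTH for m in range(1, months + 1))

if __name__ == "__main__":
    print(f"Data shipped per month: {monthly_gb():.1f} GB")
    print(f"Cumulative cost after 3 years: ${cumulative_cost(36):,.0f}")
```

The point isn’t the exact dollar figure; it’s that the curve bends upward as retained data piles on, which is why a bill that looks trivial in month one can become a burden by year three.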
How Complex Is the Problem? This is the most important, and most challenging, factor. Are you examining a few data streams to solve an immediate problem, such as optimizing a conveyor belt, or are you comparing thousands of lines across multiple facilities?
Most predictive maintenance problems are actually solved on the edge: give people a few data streams and they can figure it out. No cloud required. White House Utility District, a municipal water district in Tennessee, was losing 32% of its water through leaks. Two employees came up with a way to pinpoint leaks with just a few data points. In three years, WHUD has recovered over $2.5 million in water that would otherwise have been lost and postponed a $15 million upgrade until 2028.
Syncrude, a tar sands mining company in Canada, saved $20 million a year after employees, again looking at local data, figured out what was causing truck engines to explode.
Who Do You Want to Share It With? Companies will increasingly share data: large power consumers will open sanitized portals into their data so utilities can plan power loads in real time (and earn themselves curtailment fees). Supply chains will become more fluid. Data sharing can start at the edge, but it will ultimately have to rely on intermingled clouds.
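To pull the six factors together, here is one purely illustrative way to encode the checklist in code; the flags and the order in which they win are my own simplification of the factors above, not a formal methodology:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """The six factors discussed above, reduced to yes/no flags (illustrative only)."""
    users_near_the_asset: bool        # Who needs it?
    latency_sensitive: bool           # How urgently do they need it?
    safety_critical: bool             # Is anyone's life on the line?
    bandwidth_cost_prohibitive: bool  # What are the costs?
    needs_fleet_wide_analysis: bool   # How complex is the problem?
    shared_with_partners: bool        # Who do you want to share it with?

def suggest_placement(w: Workload) -> str:
    # Safety and latency trump everything: keep the workload at the edge.
    if w.safety_critical or w.latency_sensitive:
        return "edge"
    # Fleet-wide analytics and external sharing pull toward the cloud.
    if w.needs_fleet_wide_analysis or w.shared_with_partners:
        return "cloud (with edge capture)"
    # Local users plus expensive bandwidth: no reason to leave the site.
    if w.users_near_the_asset or w.bandwidth_cost_prohibitive:
        return "edge"
    return "either; decide on cost"

# Example: an offshore rig's control data is safety-critical and latency-sensitive.
print(suggest_placement(Workload(True, True, True, True, False, False)))  # -> "edge"
```

In practice the answer is rarely all-or-nothing; most workloads end up split, with capture and fast decisions at the edge and aggregation, sharing and long-horizon analysis in the cloud.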
And as sharing becomes more compelling, problems will arise and the balance of power will inexorably shift again.
This article was written by Chris Nelson, VP of Software Development at OSIsoft.