Lately, I see a lot of folks on Twitter talking about the #Edge of #CloudComputing and arguing “That’s the Edge” and “That’s not the Edge!”…
My first thought was, “Wow, we sure love reusing words and then debating their meaning!”
And then I remembered our discussions at the first #OpenDev conference. The team behind the OSF put this mini-conference together to collectively answer the big question: “What is the Edge?”
The napkin drawings we did in the back of the room on the first day turned into an impromptu talk on the second day (it might have been recorded, but I’m not going to go look for it right now). Fueled by some strong Portland coffee this morning, I decided to write this post and rehash the discussion. It is still the best definition I’ve heard in any forum since then.
First, a few other definitions, just to make sure we’re on the same page:
- Dynamic Workload: an application, service, or functional program (whether run on physical, virtual, or containerized infrastructure) that is managed by an orchestration system which dynamically responds to external (user or device) driven demands.
- Cloud: an API-driven consumption model for abstract compute, storage, and networking resources.
- Connected Device: an internet-connected device whose function is to interact directly with a human, e.g., a cell phone, smart lightbulb, connected speaker, learning thermostat, or self-driving car.
I think we can all agree that application workloads have been moving away from traditional colos and managed hosting providers and into the cloud. I’m not here today to debate whether “cloud” means centralized (AWS, GCE, etc.) or on-prem / in a colo; the point is that application management has become more automated, workload-driven, and centralized.
However, there is now a growing pressure to move “towards the edge” — but what exactly is that?
> In a connected world, the Edge (of the Cloud) is that Compute resource which is closest to the data producer and data consumer, where a Dynamic Workload can be run in order to meet user, device, or application demand.

(restated from discussions at OpenDev 2017)
Latency and bandwidth are key to understanding the move towards the Edge.
The increased bandwidth consumption comes, in part, from sensor networks and IoT devices which inherently generate more data. The latency requirement comes from the situational use of “smart devices” which need to respond more quickly to their environment than a webpage ever did. In short, the Edge is also the result of the increasing prevalence of Augmented Intelligence (AI) & Machine Learning (ML) applications.
Today, companies need faster processing of data streams and applications that are more tolerant of network hiccups; we will soon need the ability to deliver AI-driven responses to environmental changes while being completely disconnected from a traditional data center. Incidentally, this same situation is driving the creation of, and race towards, 5G. (Hint: it’s all connected!)
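To make the latency argument concrete, here is a back-of-the-envelope sketch comparing a round trip to a distant cloud region against one to a nearby edge node. All the numbers (distances, processing time, fiber propagation speed) are illustrative assumptions, not measurements:

```python
# Rough latency comparison: processing a sensor reading in a distant
# cloud region vs. on a nearby edge node. Numbers are illustrative.

def round_trip_ms(distance_km: float, processing_ms: float) -> float:
    """Estimate round-trip time: propagation (both ways) plus processing.

    Assumes signals travel ~200,000 km/s in fiber (about 2/3 the speed
    of light) and ignores queuing, serialization, and routing overhead.
    """
    propagation_one_way_ms = distance_km / 200_000 * 1000
    return 2 * propagation_one_way_ms + processing_ms

cloud_rtt = round_trip_ms(distance_km=2000, processing_ms=20)  # distant region
edge_rtt = round_trip_ms(distance_km=10, processing_ms=20)     # local edge rack

print(f"cloud: {cloud_rtt:.1f} ms, edge: {edge_rtt:.1f} ms")
# cloud: 40.0 ms, edge: 20.1 ms
```

Even with identical processing time, physical distance alone can dominate the budget; moving the compute closer is the only way to shrink that term.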
Allow me to offer a few practical examples.
Imagine a self-driving car, with all its video cameras and sensor networks that require massive AI-driven processing to “see” and “react” to changing traffic conditions. Now imagine the uplink glitches for 100ms while transferring between cell towers. Whereas your Facebook-browsing or video-streaming session (assuming you were a passenger in the car) wouldn’t be affected by a 100ms latency spike, the autonomous vehicle could be unable to respond to a sudden change in road conditions (e.g., another vehicle swerving) and cause a crash! That’s obviously terrible and should be prevented! To address this, a lot of powerful computing resources need to be put into the car, or the car’s uplink needs to be both blazingly fast and come with a guaranteed 100% uptime. That car needs to become the edge.
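The car’s dilemma can be sketched as a simple deadline check: if the worst-case round trip to a remote service can exceed the reaction budget, the decision has to be made on board. The deadline and timings below are hypothetical, chosen only to illustrate the trade-off:

```python
# Sketch: a system that must react within a hard deadline. If a remote
# round trip can exceed the budget (e.g. a 100 ms glitch while switching
# cell towers), inference must run on board -- at the edge.

REACTION_DEADLINE_MS = 50  # assumed budget to respond to a swerve

def choose_compute_location(remote_worst_case_ms: float,
                            local_inference_ms: float) -> str:
    """Pick where to run inference so the worst case meets the deadline."""
    if local_inference_ms <= REACTION_DEADLINE_MS:
        if remote_worst_case_ms <= REACTION_DEADLINE_MS:
            return "remote"   # cloud meets the deadline even at worst case
        return "onboard"      # only local compute meets the deadline
    return "infeasible"       # neither option is fast enough

# A 100 ms tower-handoff glitch blows the budget; onboard compute does not.
print(choose_compute_location(remote_worst_case_ms=100, local_inference_ms=30))
# onboard
```

The point isn’t the code itself but the shape of the decision: worst-case network latency, not average latency, is what forces workloads to the edge.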
Here’s another example: imagine your excitement at having just installed 20 Hue lightbulbs. They’re all busy streaming sensor data into “the cloud”, and your house is beautifully lit. You turn on Netflix, but it starts having trouble delivering any HD content because of the increased network traffic from all those smart bulbs. Clearly, you don’t want that, and Philips knew this, so they designed Hue in such a way that you need to buy a Hub which your Hue bulbs connect to. The hub aggregates traffic between your devices and the Cloud. That hub is the edge.
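The hub’s aggregation role can be sketched in a few lines: instead of every bulb chattering upstream on its own, a local hub buffers readings and sends one batched upload. The `EdgeHub` class and message shapes here are hypothetical, not the actual Hue protocol:

```python
# Sketch of an edge hub that batches chatty per-device traffic into
# periodic uploads. Hypothetical design, not the real Hue protocol.

class EdgeHub:
    def __init__(self, batch_size: int):
        self.batch_size = batch_size
        self.buffer = []    # readings waiting to go upstream
        self.uploads = 0    # count of upstream messages sent

    def ingest(self, reading: dict) -> None:
        """Accept one sensor reading from a local device."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        """Send one batched message upstream instead of many small ones."""
        if self.buffer:
            self.uploads += 1  # stand-in for a single HTTPS POST
            self.buffer = []

hub = EdgeHub(batch_size=20)
for i in range(100):                   # 100 readings from 20 bulbs
    hub.ingest({"bulb": i % 20, "on": True})
print(hub.uploads)                     # 5 batched uploads instead of 100
# 5
```

One local hop plus a handful of batched uploads is what keeps the bulbs from competing with Netflix for your uplink.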
These scenarios aren’t oddities; this is the direction that so many tech companies are going. If you hear the buzzwords IoT, Edge, ML, AI … it’s all related to the drive to deploy applications closer to sensors and consumers. To address this, we need abstract compute workloads that can run as close to the data producer & consumer as possible, thus reducing latency and increasing available bandwidth.
Today, the Edge is often a rack of hardware installed in a manufacturing plant, connected back to the company’s central CoLo or up to their cloud services through a VPC. In some cases, it’s a device in your own home.
Cisco is already touting its AI-driven commercial routers, and it won’t be long before this technology reaches consumer devices (if it hasn’t already). This might look like a smart set-top box installed in your house which enables your cable provider to dynamically run microservices to optimize your viewing experience on their content, thus potentially circumventing Net Neutrality rules.
That doesn’t sound great, does it? Let’s get even darker… what if your ISP could run a microservice on your WiFi router to monitor your in-home device usage and better target advertising to you? … I think that’s a gross invasion of my privacy, but it’s not far off. We already have companies in our houses (Alexa, stop spying on me!), and it stands to reason that telcos want in on that revenue.
But at least you can still run open source WiFi firmware (e.g., OpenWrt) 🙂
While the promise of smarter, faster, more ubiquitous, always-connected computing brings a lot of value for companies and users who aren’t concerned about privacy, I’m going to keep working on Open Source projects that keep the playing field level and empower independent users, too.