Edge of the Clouds

xkcd comic #307: "that's no moon"
“That’s no Edge…”

Lately, I see a lot of folks on Twitter talking about the #Edge of #CloudComputing and arguing “That’s the Edge” and “That’s not the Edge!”…

My first thought was, “Wow, we sure love reusing words and then debating their meaning!”

And then I remembered our discussions at the first #OpenDev conference. The team behind the OSF put this mini-conference together to collectively answer the big question: “What is the Edge?”

The napkin drawings we did in the back of the room on the first day turned into an impromptu talk on the second day (it might have been recorded, but I’m not going to go look for it right now). Fueled by some strong Portland coffee this morning, I decided to write this post and rehash that discussion, because it produced the best definition of the Edge I’ve heard in any forum since.

First, a few other definitions, just to make sure we’re on the same page:

  • Dynamic Workload: an application, service, or functional program (whether run on physical, virtual, or containerized infrastructure) that is managed by an orchestration system which dynamically responds to external (user or device) driven demands.
  • Cloud: an API-driven consumption model for abstract compute, storage, and networking resources (a minimal sketch follows this list).
  • Connected Device: an internet-connected device whose function is to interact directly with a human, e.g.: cell phone, smart lightbulb, connected speaker, learning thermostat, self-driving car.
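
To ground the “API-driven” part of that Cloud definition, here is a minimal sketch using the openstacksdk Python library; any cloud SDK would make the same point, and the cloud, image, flavor, and network names below are hypothetical placeholders:

```python
# A minimal sketch of "API-driven consumption": abstract compute,
# storage, and networking resources requested through an API call
# instead of racked by hand. Assumes openstacksdk is installed and a
# cloud named "mycloud" exists in clouds.yaml (both placeholders).
import openstack

conn = openstack.connect(cloud="mycloud")

# Resolve abstract resource names (all hypothetical) to IDs.
image = conn.compute.find_image("ubuntu-18.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# One API call turns the abstract request into a running server.
server = conn.compute.create_server(
    name="edge-demo",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, "is", server.status)
```

The point isn’t the specific SDK; it’s that compute, storage, and networking are consumed through API calls rather than purchase orders.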

I think we can all agree that application workloads have been moving away from traditional colos and managed hosting providers – and into the cloud. I’m not here today to debate whether “cloud” means centralized (AWS, GCE, etc.) or on-prem / in a colo; the point is that application management has become more automated, workload-driven, and centralized.

However, there is now a growing pressure to move “towards the edge” — but what exactly is that?

In a connected world, the Edge (of the Cloud) is that Compute resource which is closest to the data producer and data consumer, where a Dynamic Workload can be run in order to meet user, device, or application demand.

restated from discussions at OpenDev 2017

Latency and bandwidth are key to understanding the move towards the Edge.

The increased bandwidth consumption comes, in part, from sensor networks and IoT devices which inherently generate more data. The latency requirement comes from the situational use of “smart devices” which need to respond more quickly to their environment than a webpage ever did. In short, the Edge is also the result of the increasing prevalence of Augmented Intelligence (AI) & Machine Learning (ML) applications.

Today, companies need faster processing of data streams and applications that are more tolerant of network hiccups; we will soon need the ability to deliver AI-driven responses to environmental changes while being completely disconnected from a traditional data center. Incidentally, this same situation is driving the creation of, and race towards, 5G. (Hint: it’s all connected!)

Allow me to offer a few practical examples.

Imagine a self-driving car, with all its video cameras and sensor networks that require massive AI-driven processing to “see” and “react” to changing traffic conditions. Now imagine the uplink glitches for 100ms while the car hands off between cell towers. Whereas your Facebook browsing or video streaming (assuming you were a passenger in the car) wouldn’t be affected by a 100ms latency spike, the autonomous vehicle could be unable to respond to a sudden change in road conditions (e.g., another vehicle swerving) and cause a crash! That’s obviously terrible and should be prevented! To address this, a lot of powerful computing resources need to be put into the car – or the car’s uplink needs to be both blazingly fast and come with a guaranteed 100% uptime. That car needs to become the edge.
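
For a sense of scale, here’s a quick back-of-the-envelope sketch; the speed and spike duration are illustrative assumptions, not measurements:

```python
# How far does a car travel during an uplink glitch?
# Both figures below are illustrative assumptions.
speed_kmh = 100          # highway speed
spike_ms = 100           # uplink glitch during a tower hand-off

speed_mps = speed_kmh * 1000 / 3600          # ~27.8 metres per second
blind_m = speed_mps * (spike_ms / 1000)      # distance travelled "blind"

print(f"At {speed_kmh} km/h, a {spike_ms} ms glitch = ~{blind_m:.1f} m blind.")
# At 100 km/h, a 100 ms glitch = ~2.8 m blind.
```

Nearly three metres travelled blind is plenty of room for a swerving vehicle to close the gap.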

Here’s another example: imagine your excitement at having just installed 20 Hue lightbulbs. They’re all busy streaming sensor data into “the cloud”, and your house is well beautified. You turn on Netflix, but it starts having trouble delivering HD content because of the increased network traffic from all those smart bulbs. Clearly, you don’t want that, and Philips knew this, so they designed Hue in such a way that you need to buy a Hub which your Hue bulbs connect to. The hub aggregates traffic between your devices and the Cloud. That hub is the edge.
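
I don’t know the internals of Philips’ actual design, but the general pattern is easy to sketch: poll the chatty devices locally, then send a single batched summary upstream. Everything below (the function names, the payload fields, the 20-bulb count) is hypothetical:

```python
# Toy sketch of edge aggregation: many chatty devices, one hub, a
# single batched update to the cloud. Entirely hypothetical; this is
# not Philips' actual protocol.
import json
import random
import time

def read_bulb(bulb_id):
    """Pretend to poll one bulb's sensors (fabricated values)."""
    return {"bulb": bulb_id, "on": True, "brightness": random.randint(0, 254)}

def hub_cycle(num_bulbs=20):
    """Poll every bulb locally, then emit ONE message upstream."""
    readings = [read_bulb(i) for i in range(num_bulbs)]
    summary = {
        "ts": time.time(),
        "bulbs_on": sum(r["on"] for r in readings),
        "avg_brightness": sum(r["brightness"] for r in readings) / num_bulbs,
    }
    return json.dumps(summary)  # 1 upstream message instead of 20

print(hub_cycle())
```

One upstream message per cycle instead of twenty; that’s the bandwidth argument for the edge in miniature.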

These scenarios aren’t oddities; this is the direction that so many tech companies are going. If you hear the buzzwords IoT, Edge, ML, AI … it’s all related to the drive to deploy applications closer to sensors and consumers. To address this, we need abstract compute workloads that can run as close to the data producer & consumer as possible, thus reducing latency and increasing available bandwidth.

(diagram showing images of a data center, small colo, cell tower, and wifi router, along an axis labelled "distance in milliseconds" which decreases from 100ms to 1ms, with a collection of icons on the right edge representing a user and consumer devices)

Today, the Edge is often a rack of hardware installed in a manufacturing plant, connected back to the company’s central colo or up to their cloud services through a VPC. In some cases, it’s a device in your own home.

Cisco is already touting its AI-driven commercial routers, and it won’t be long before this technology reaches consumer devices (if it hasn’t already). This might look like a smart set-top box installed in your house which enables your cable provider to dynamically run microservices to optimize your viewing experience on their content, thus potentially circumventing Net Neutrality rules.

That doesn’t sound great, does it? Let’s get even darker… what if your ISP could run a microservice on your WiFi router to monitor your in-home device usage and better target advertising to you? … I think that’s a gross invasion of my privacy, but it’s not far off. We already have these companies listening in our houses (Alexa, stop spying on me!), and it stands to reason that telcos want in on that revenue.

But at least you can still run Open Source wifi firmware (e.g., OpenWRT) 🙂

While the promise of smarter, faster, more ubiquitous, always-connected computing brings a lot of value for companies and users who aren’t concerned about privacy, I’m going to keep working on Open Source projects that keep the playing field level and empower independent users, too.

distributed roots

A lot of my conversations with friends these days have been about my choice to leave Facebook and other centralized platforms. The short answer is, those companies are not the Internet that I want to build.

My career in software development started at a now-forgotten company called Static Online, which straddled the space between games and video streaming. I joined in ’99, the company peaked in late 2000, and closed in 2001 – typical for a dot-com startup in those days. What we did there has shaped how I see the internet, and I’m grateful to the folks I got to work with. If I have any regrets, it’s that we didn’t publish our work, and that it took me another 6 years to connect with the F/OSS community.

You see, we were working on building a distributed, dynamically-routed, peer-to-peer network capable of live video streaming from any user (even someone on dial-up) to as many users as wanted to watch. This wasn’t possible with any other technology at the time, and while it resembles the architecture of several current projects, such as ZeroNet, the tech isn’t widely available even today.

With Static focused on video streaming in an era when most people were on dial-up or slow DSL connections, we created a protocol to:
– divide a large file (e.g., a video stream) into chunks, identified by unique hashes;
– represent the relationship between chunks;
– dynamically identify the nearest (fastest) server, relative to a given client;
– upload the chunks in parallel to one or more servers, which in turn stream to several of their neighbors, and so on;
– thereby exponentially amplifying the bandwidth potential of a single upload stream (a simplified sketch of the chunking step follows this list).
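
Our original code is long gone, but here is a heavily simplified, hypothetical sketch of the first two steps: split a stream into fixed-size chunks, hash each one, and keep an ordered manifest recording how the chunks relate. The 64 KiB chunk size and SHA-256 are arbitrary choices for illustration.

```python
# Simplified sketch: split a byte stream into fixed-size chunks,
# identify each chunk by its hash, and keep an ordered manifest that
# records the relationship between chunks. The 64 KiB chunk size and
# SHA-256 are arbitrary illustrative choices.
import hashlib
import io

CHUNK_SIZE = 64 * 1024  # 64 KiB

def make_manifest(stream):
    """Return the ordered list of chunk hashes for a byte stream."""
    manifest = []
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:
            break
        manifest.append(hashlib.sha256(chunk).hexdigest())
    return manifest

# Usage: any binary stream works; here, a fake "video" in memory.
data = io.BytesIO(b"pretend this is a large video stream " * 10_000)
manifest = make_manifest(data)
print(len(manifest), "chunks; first:", manifest[0][:12], "...")
```

Peers holding the same manifest can fetch chunks from whichever neighbor answers fastest, which is where the “nearest (fastest) server” step comes in.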

If this sounds familiar to you, it should. This approach is remarkably similar to the BitTorrent protocol that was published a few years later. We also implemented a file sharing tool within our client/browser, but this wasn’t our core business so that feature just collected dust.

I was basically a kid when we built all this, and when the funding at Static ran out, all I wanted to do was go study tai chi and meditate by a stream for a while. (No joke – I did that in the summer of 2001)

Fast forward to 2019 and we are seeing a massive resurgence of peer-to-peer innovation. At the same time, with all the privacy concerns around Facebook and the recent ban of adult content on Tumblr and related sites, more and more users are moving away from centralized platforms, where content moderation can be done by AIs, and whose ad-click buttons track our movements all over the web.

Instead, folks are opting for decentralized / peer-to-peer services once again — like Mastodon and PeerTube.

Right now, I feel a little like that kid again, dreaming of a globally decentralized network where users build communities by co-hosting the infrastructure they share, of an internet where we are all unburdened by online ads that spy on us.

It’s good to look back 20 years and feel like I’m still on the right track.

business cards

I often suffer #DecisionFatigue as the day goes on. Roughly speaking, every decision I make throughout the day gets a little harder to make, and it becomes easier to get distracted thinking through the ‘what-ifs’. This cognitive tax adds up over time and affects how happy I am, and how much social/personal bandwidth I have. Even social decisions (should I go to this party or that party?) can tax me – but these, oddly, tend to be easier decisions. I’ve started to wonder why that is…

Today’s first decision: business cards.

It’s been at the bottom of my #TODO list for a while. This morning, I looked at redesigning my personal cards and immediately became aware that even the little decisions – what weight of cardstock to use, whether to round the corners – were starting to cause decision fatigue. Maybe that was because I’m fighting off a cold and had less energy to start the day with.

In practicing #mindfulness, I observed my mind projecting from other perspectives – I trained it to do that a long time ago, and I value this ability, but in this case it was not serving me well.

I believed that these cards should look professional so they would be pleasing to whoever receives them. I found myself imagining how someone else might interact with each card design, how it would feel to them, whether it would fit into their pocket or their wallet, whether they’d keep it or not. None of these thoughts have simple answers, in part because every person I imagine has different cultural biases, and my brain tries to map those using the furthest reaches of my own exposure to other cultures and perspectives. I’ve lived in a lot of countries, and even when generalizing, this decision tree is painfully complex!

But wait, I thought, the shape of my business cards is trivial.

It doesn’t warrant this level of attention. More importantly, I don’t want to feel stressed about it, because there are other, more important, things that need my attention.

I decided to use a different approach: will it bring me joy?

I imagine receiving the card as I design it, and then observe each possibility to see if it feels good. I’ve never used this approach before because it’s entirely self-centered, and that’s supposed to be bad!

Let’s look at this another way: a card designed to bring me joy will reflect my unique joy to whoever receives it, and that is a more authentic (and memorable) window into who I am; it will help establish a better connection anyway.

Also, joyfully decluttering my mind feels good 🙂