
The Road to Open Edge Computing

By: Ildiko Vancsa

Humanity has depended on tools from its earliest days. As we’ve evolved, tools have turned into what we call technology, which has had its own trajectory of evolution. This is true for cloud computing, as edge computing is the next step forward, bringing the cloud closer to people’s everyday lives. This progress requires evolution in the assembly lines that support it.  

Computational power surrounds us. It’s in the fields that grow the vegetables we eat and in the factories that package our food and make the plates and silverware we eat it with. As our use of computers proliferates and expands, the evolution that enables new functionality never stops. Neither does progress toward connecting our ever-growing number of endpoints.

Cloud computing gave us the ability to utilize resources in large data centers in a flexible, agile way. This stands in stark contrast to workloads running on dedicated hardware, which provided an environment tailored to each application but was wasteful and rigid. Even industry segments like telecommunications chose to experiment with cloud computing, and many operators and vendors went even further: not just deploying a cloud solution, but choosing a platform built mainly from open source components.

While there are cases in which cloud software on the back end is a natural fit, it may come as a surprise that service providers (SPs) pick this direction, given requirements such as high availability, or “five nines.” This refers to the service level agreements (SLAs) on availability that SPs must meet for their services. Five nines translates to roughly five minutes of downtime per year, and while it is very annoying when your favorite webshop is unavailable, it won’t threaten your life the way being unable to make an emergency call because the network is down can.
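As a quick sanity check on the numbers above, the downtime budget implied by an “N nines” availability target can be computed directly. This is a standalone illustration of the arithmetic, not part of any SP tooling:

```python
# Yearly downtime budget implied by an "N nines" availability SLA.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(nines: int) -> float:
    """Return the minutes of downtime per year allowed by an N-nines SLA."""
    availability = 1 - 10 ** (-nines)  # e.g. 5 nines -> 0.99999
    return MINUTES_PER_YEAR * (1 - availability)

for n in range(2, 6):
    print(f"{n} nines: {downtime_minutes_per_year(n):,.2f} minutes/year")
```

Five nines works out to about 5.26 minutes per year, which is where the “five minutes of downtime” shorthand comes from.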

And yet telecom operators are now paving the road for the next generation of cloud computing that many people simply call edge. Let’s take a closer look.

The edge scene

Edge computing has become a familiar expression, but there isn’t consensus on exactly what the “edge” in it refers to. You’ll get as many answers as there are people you ask. The only commonality among the different edges is that they are all on the edge of something.

Edge computing builds on the paradigm of distributed systems and amplifies the scale to a level that wasn’t available before. The aim is to take both the computational power and the flexibility available in large cloud data centers closer to the users, be they humans, machines, or a combination of the two.

This leads us to the far edge, the device edge, the large edge site, the small edge data center, the aggregated edge, the access edge, and so forth. These are all valid terms and make sense once you put them into context.

If you aim to give a single definition of the edge itself, you would need to bake the context, the use case and some of its characteristics into it to ensure that those who use it share the same view and understanding. All these details make it impossible to provide one description that holds up in every circumstance, which makes the effort to create a single definition unnecessary. At the end of the day, it is your edge. See Figure 1.


Figure 1: Definitions of “the edge,” in context


One thing is common to all edges: they are all part of a distributed system whose scale is greater than ever. This places new requirements on your shoulders and highlights imperfections in existing solutions.
