Fireside Chat: What Are the Practical Opportunities for 5G and Edge Cloud?

D. Kehoe

Summary Bullets:

  • Edge redefines cloud by functionality, not location, as consumption economics shifts from a centralized to a distributed model.
  • Telco exchanges, central offices, and street-side cabinets are prime edge/cloud locations and will create a special role for operators.

GlobalData, together with Infosys, HPE, and Spark New Zealand, hosted a global webinar recently to discuss 5G and edge compute.  This session attracted a cross-section of delegates, especially telecom providers across different regions.  The webinar included a presentation, a fireside chat, and direct questions from the audience to the panelists. 

Edge compute is the use of network, compute, storage, and analytics in close physical proximity to where data is collected, driving dramatic improvements in application response times and enabling new use cases.  Multi-access edge compute (MEC), a developing standard, is deployed across multiple locations in mobile networks.  There are many ways to deploy this capability.  Miniaturizing and distributing the cloud in this way changes its consumption economics from a centralized to a distributed model.  The discussion during the webinar argued that such a paradigm shift redefines cloud by functionality, not location.  As a result, there are more possibilities to improve latency.  Even without a compelling new use case, businesses are interested in optimizing the network by reducing the ‘hairpin effect’ of transporting a workload to a central location and back.  Others like the idea of data staying on-premises for compliance reasons.  On the latter point, there was some discussion of how edge computing can be supported within offline environments (depending on the configuration) to deliver the efficiencies of a local, customized cloud without compromising data sovereignty.

We discussed how edge is not a one-size-fits-all proposition.  Deployment options include a hyperconverged appliance, a dedicated server, an IoT/edge gateway, or even a mobile device.  Hyperscale companies such as Microsoft Azure and AWS can also offer services from locations such as a micro data center: a 400kW connection to a single rack interconnecting with a 30MW hyperscale facility through a common platform.  Given the many deployment models, GlobalData conservatively predicts that sales of edge computing infrastructure and services will increase to $19.3 billion by 2024, with the manufacturing, retail banking, energy, and information technology sectors leading the way.

The Telecom Vertical

Telecom operators are uniquely positioned to drive this market.  Investment in 5G drives low latency, network slicing, and support for much denser IoT deployments, bringing 3GPP standards-backed technology to the fore.  As telecom operators partner with hyperscale companies on the buildout of new edge nodes, the use cases will multiply.  A practical starting point in media broadcast is using edge as an interconnection and content aggregation hub for better content distribution.  The same nodes could also help smart cities with video analytics, local caching, compression, and data storage.

Edge compute can also host virtual network functions (VNFs) within the service provider network, allowing carriers to serve customers better by turning virtual instances, such as a firewall or load balancer, on and off.  Given the partner ecosystem behind edge computing, this can extend to x86 and non-x86 environments, VMs, containers, or microservices.  Operators can be great channel partners.  Within a national market, they typically bring advantages in operational capabilities, scale, and access to partner ecosystems, all of which help drive market participation.  Professional services, whether operators’ own or through partners, are what will mold basic connectivity and platforms into vertical-specific solutions.

Location, Location, Location

However, telecom providers hold another card.  Many own legacy exchanges, central offices, colo huts, and street-side cabinets that are ideal locations for the new edge clouds.  If edge deployments are to deliver lower latency, with the speed of light a hard limit, then physical location becomes the major factor: performance improves by working within the laws of physics, not around them.  The more widely edge clouds can be distributed across the network, the better the ability to scale.
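The latency constraint above can be made concrete with some back-of-the-envelope arithmetic (an illustrative sketch, not a figure from the webinar): light in optical fiber propagates at roughly two-thirds of its vacuum speed, so distance alone sets a floor on round-trip time before any processing happens.

```python
# Illustrative lower bound on propagation-only round-trip time (RTT).
# Assumes signals in fiber travel at ~2/3 the vacuum speed of light;
# real RTTs are higher due to routing, queuing, and processing.
C_VACUUM_KM_PER_MS = 299.792  # speed of light, km per millisecond
FIBER_FACTOR = 2 / 3          # typical refractive-index slowdown

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, in milliseconds."""
    one_way_ms = distance_km / (C_VACUUM_KM_PER_MS * FIBER_FACTOR)
    return 2 * one_way_ms

# A workload 'hairpinned' 1,000 km to a central cloud and back pays
# at least ~10 ms before any work is done; an edge node 20 km away
# costs well under 1 ms.
print(f"{min_rtt_ms(1000):.1f} ms")  # ~10.0 ms
print(f"{min_rtt_ms(20):.2f} ms")    # ~0.20 ms
```

This is why no amount of server upgrades in a distant data center can match a nearby edge node for latency-sensitive workloads.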

Telco Data Centers Are a Challenge

This is not going to be an easy fix for carriers.  The telecom industry standardized data center power and other connected assets on DC, while the rest of the industry tends to power data centers with AC.  Another concern is the age of the facilities, which affects metrics such as power draw, energy efficiency, and resiliency.  But the tradeoff, especially for incumbents, is prime real estate, and that is worth consideration.  Retrofits will be required, but this also opens other models.  There are opportunities to partner, to sell these locations outright (with a lease-back clause), or for operators to upgrade on their own to pursue specific opportunities (e.g., smart cities, Industry 4.0).  The likely shakeout will be hybrid models.

With 5G maturing, operators can bring more than connectivity, licensing, and coverage.  A brick-and-mortar footprint will be essential for the buildout of transformative services, such as new use cases that leverage ultra-low-latency applications.  This could be the one bottleneck that makes edge cloud more compelling for the network operators, especially incumbents.  BT in the UK, for example, is extending its cloud to 100 metro locations.  AWS, Microsoft Azure, and Google Cloud Platform (GCP) are extending their edge clouds.  In some markets, there will be overlap.

The Ecosystem Play

Infrastructure needs to evolve as fast as software.  As networks continue to be disaggregated, it is important to build a strong architecture that embraces open platforms, APIs, multivendor solutions, and cloud-native principles.  The concept of an ecosystem, as pointed out by HPE, will be important for attracting developers, creating coverage, and delivering outcomes that benefit business and communities at large.  Software will be ever more important for masking the underlying complexity of a new technology.  Leading integrators, such as Infosys, will play an important role in mapping technology to client-facing problems, especially by working within the ecosystem.  An example was given of how edge computing is changing the renewable energy business.  Processing one set of data locally can reduce unplanned downtime through predictive maintenance in wind turbines, which have about twelve points of common failure.  Processing a similar dataset in real time can also improve energy output: adjusting the configuration and direction of the turbines according to current wind speed continuously improves yield.  Ultimately, a successful cloud with an edge deployment should deliver several business outcomes, make technology work through an ecosystem play, and adapt as suddenly and sharply as a change in the weather.
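The wind-turbine example above can be sketched as a small piece of edge-side logic.  This is a minimal illustration of the pattern, not anything presented in the webinar; the sensor name, threshold, and rolling-average rule are all hypothetical.

```python
# Minimal sketch of edge-side predictive maintenance for a wind
# turbine.  The vibration metric, threshold, and windowed-average
# rule are hypothetical illustrations, not a real turbine API.
from collections import deque

class VibrationMonitor:
    """Flags a turbine for inspection when the rolling average of
    vibration readings exceeds a threshold -- computed locally at
    the edge, so raw telemetry never leaves the site."""

    def __init__(self, threshold: float, window: int = 5):
        self.threshold = threshold
        self.readings = deque(maxlen=window)  # keeps only last N samples

    def ingest(self, vibration_mm_s: float) -> bool:
        """Add a reading; return True if maintenance is advised."""
        self.readings.append(vibration_mm_s)
        avg = sum(self.readings) / len(self.readings)
        return avg > self.threshold

monitor = VibrationMonitor(threshold=4.5)
samples = [3.9, 4.1, 4.4, 5.2, 5.8]  # mm/s, hypothetical readings
alerts = [monitor.ingest(s) for s in samples]
print(alerts)  # -> [False, False, False, False, True]
```

The point of the pattern is that only the alert, not the raw sensor stream, needs to traverse the network, which is exactly the hairpin-avoidance and data-locality benefit discussed earlier.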

What do you think?
