Speeding up the edge cloud core

John Baker, SVP Business Development, Mavenir on the three phases of core network virtualisation.

Operators face a choice in how to design a network architecture that can enable low system latencies for future applications. This guest post from Mavenir describes how its containerised, cloud native vEPC, with control and user plane separation, can enable the edge cloud core.

From gaming and video streaming to remote medical operations, driverless cars and the Internet of Things (IoT), recent years have seen a considerable increase in the predicted demand for low latency applications.

Such demand has put growing pressure on operators to reconsider the architecture of their mobile networks: to place functions of the core at the Edge and to envisage the network as a distributed compute network. These architectures offer the opportunity to address new market segments, such as the enterprise space which, with a minimal number of base stations and subscribers, is the perfect environment for self-contained LTE networks.

Multi-access Edge Computing (MEC), for example, is already being standardised, while products such as Amazon’s AWS Greengrass are targeting edge deployments that can be integrated into mobile wireless networks.

Deploying to the edge

At the centre of the ability to deploy network functions at the Edge is the Evolved Packet Core (EPC), comprising a number of nodal entities such as the MME (mobility management entity), SGW (serving gateway) and PGW (packet gateway).

Mavenir found during the virtualisation process that, by rationalising the basic 3GPP EPC structure to essentially just one input and one output, eliminating internal interfaces, it was possible to separate the control plane from the user plane and remove duplicate processes. Doing so increases the virtual EPC’s speed and makes it possible to run a number of applications on the product while maintaining a minimal footprint.
As a result, Mavenir’s vEPC is able to scale down to support a small number of subscribers, and scale up to millions if required. Not only does this allow operators to look at new vEPC deployments in very small networks, it can also be used to create a distributed virtual core.
Deploying vEPCs at the network edge, performing as one distributed EPC, immediately makes the network more reliable, with the scale and routing functionality required to manage the demands of lower latency applications and storage.
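To make the separation concrete, the control plane can be pictured as a signalling function that programs forwarding state into a simple one-input, one-output user plane. The sketch below illustrates that split in Python; the class and field names are illustrative assumptions, not Mavenir’s actual interfaces.

```python
# Illustrative sketch of control/user plane separation (CUPS-style).
# Names and structures are hypothetical, not Mavenir's actual design.
from dataclasses import dataclass, field

@dataclass
class ForwardingRule:
    """A downlink forwarding entry installed by the control plane."""
    subscriber_id: str
    teid: int       # tunnel endpoint identifier for the bearer
    next_hop: str   # where the user plane sends matching packets

@dataclass
class UserPlane:
    """Pure packet forwarding: one input, one output, no signalling logic."""
    rules: dict = field(default_factory=dict)

    def install(self, rule: ForwardingRule) -> None:
        self.rules[rule.teid] = rule

    def forward(self, teid: int, payload: bytes) -> str:
        rule = self.rules[teid]
        return f"{len(payload)} bytes for {rule.subscriber_id} -> {rule.next_hop}"

class ControlPlane:
    """Handles attach signalling, programs the user plane, then steps aside."""
    def __init__(self, user_plane: UserPlane):
        self.user_plane = user_plane
        self._next_teid = 1

    def attach(self, subscriber_id: str, next_hop: str) -> int:
        teid = self._next_teid
        self._next_teid += 1
        self.user_plane.install(ForwardingRule(subscriber_id, teid, next_hop))
        return teid

up = UserPlane()
cp = ControlPlane(up)
teid = cp.attach("imsi-001011234567890", "edge-cache.local")
print(up.forward(teid, b"\x00" * 1400))  # packets now bypass the control plane
```

Because the user plane holds only forwarding state, instances of it can sit at the edge, close to subscribers, while the signalling logic stays wherever it is most efficient.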

Three phases

Virtualisation is driving the growth of the distributed computing market.
The growing popularity of low-cost, low-complexity white box hardware, with a small number of x86 compute cores, means it’s now possible for a virtual EPC to run on such a platform.

However, virtualisation has had to go through three phases to reach this point.
Phase 1: The first virtualised elements essentially took the software that was running on custom hardware and put a wrapper around it. These elements were described as virtualised when, in reality, the operating software had just been updated and was still running on the same custom hardware.
The majority of OEMs tend to still be at this stage: although they might be moving toward the next step, their old business models require them to sell as much hardware as software.

Phase 2: The next iteration, running on COTS hardware platforms, enabled the virtual function to be scaled up and down, but without constraints. VMs would be spun up and could continue to grow until their appetite for resources became uncontrollable, taking up all available memory and disk space. These VMs could also take a long time to “spin up” and be brought into service.

Phase 3: With virtualisation in containers, each workload is constrained, with the required operating resources defined up front. This results in what is known as Cloud Native virtualisation: resources, performance and “spin up” time are controlled and well defined.
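In practical terms, defining resources up front means declaring requests and limits on the container when it is scheduled. A minimal sketch using the official Kubernetes Python client is shown below; the image name and resource figures are placeholder assumptions, not Mavenir’s actual packaging.

```python
# Sketch: declaring a containerised network function's resources up front,
# using the official Kubernetes Python client (pip install kubernetes).
# The image name and figures are placeholders, not Mavenir's packaging.
from kubernetes import client

user_plane = client.V1Container(
    name="vepc-user-plane",
    image="registry.example.com/vepc-up:1.0",  # hypothetical image
    resources=client.V1ResourceRequirements(
        # Guaranteed at scheduling time...
        requests={"cpu": "2", "memory": "4Gi"},
        # ...and a hard ceiling, so the workload cannot grow unbounded
        # the way an unconstrained Phase 2 VM could.
        limits={"cpu": "2", "memory": "4Gi"},
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="vepc-up-0", labels={"app": "vepc-up"}),
    spec=client.V1PodSpec(containers=[user_plane]),
)

print(pod.spec.containers[0].resources.limits)  # {'cpu': '2', 'memory': '4Gi'}
```

With requests equal to limits, the scheduler gives the function a guaranteed, predictable slice of the host, which is what keeps performance and “spin up” time well defined.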

Breaking constraints

All of this was built on the back of the standard EPC architecture, in which the control plane and the user plane were essentially locked together.
With Mavenir’s vEPC, virtualisation and the resulting architecture have taken a leap forward with the commercial availability of the next generation 5G core network architecture, in which the control plane and user plane are split (CUPS), allowing either function to be scaled up or down as required.

vEPC elements can be spun up anywhere in the network, at any time, and scaled according to use, allowing operators to run more efficiently and to respond to – and capitalise on – trends in subscriber traffic.

If a lot of users are streaming a local football match, for example, the video streaming application and associated edge storage can be scaled up in the user plane where they’re needed at that moment, then fall away when the match is over. Where once the architecture was very constrained, it’s now possible to put a vEPC wherever it’s needed: close to the edge, even inside the enterprise where local network connectivity is required.
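The scale-out arithmetic behind that example can be sketched in a few lines: the number of user-plane instances tracks offered load while the control plane count stays fixed. The per-instance capacity figure below is an arbitrary assumption for illustration only.

```python
# Sketch: demand-driven scaling of user-plane instances (with CUPS, the
# control plane count is untouched). The capacity figure is an assumption.
import math

INSTANCE_CAPACITY_GBPS = 10.0  # assumed throughput per user-plane instance
MIN_INSTANCES = 1              # always keep one instance warm

def instances_needed(offered_load_gbps: float) -> int:
    return max(MIN_INSTANCES, math.ceil(offered_load_gbps / INSTANCE_CAPACITY_GBPS))

# Illustrative offered load (Gbps) around a local football match.
for phase, load in [("pre-match", 4), ("kick-off", 38), ("half-time", 22),
                    ("full-time", 41), ("late evening", 6)]:
    print(f"{phase:>12}: {load:>3} Gbps -> {instances_needed(load)} user-plane instance(s)")
```

Run as-is, this shows the user plane growing to four or five instances at peak and collapsing back to one afterwards, with no change on the control plane side.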

There are many ways of building a MEC function, but deploying vEPCs in this way eliminates issues around central control, security and authentication, while delivering the scalability and low latency demanded by today’s users and applications.
Operators now have a choice in architecting their networks for the demands of the future, with native virtualisation of EPC functionality and distributed computing ultimately lowering the TCO and offering a smooth path to 5G upgrades.