Turkcell pushing on with vRAN as Red Hat federates edge management

Turkcell goes for vRAN as its first edge case, still wrestling with edge cloud management. Red Hat says we're here to help - with Advanced Cluster Management for Kubernetes, and maybe some Kaloom.

Turkcell will have the first iteration of its mobile edge computing strategy rolled out this year as it deploys virtual RAN technology in its mobile access network. The edge cloud platform deployed to support vRAN could then host other edge applications.

Speaking during an online event organised as part of Red Hat Summit 2020 Online Experience, Turkcell’s Aykut Demirkol, Broadband Services Manager, Telco Cloud, said vRAN would lead the investment case for edge deployments. 

Demirkol said Turkcell sees opportunity in three areas for edge computing: low latency use cases; providing mobile connectivity and near-premise data processing to industries; and network optimisations such as edge CDNs. “But I think the first iteration of the edge, and it’s happening this year, is actually for the virtualisation of our RAN, and even for virtualising our RAN we need a platform for cloud on the edge. For us I think this is the first step.

“This year we’ll be deploying lots of virtual RAN sites, we’ll be going live.”

Turkcell has previously announced a partnership with Mavenir for open vRAN tests and pilots. The partnership will start with containerised CU/DU and open fronthaul (Open FH) with Split 7.2 in trials and planned deployments, and Mavenir’s virtual RAN solution will be the first workload to go live on Turkcell Edge Cloud.

Asked if Turkcell would use the same platform for hosted edge apps as for vRAN, Demirkol said that is yet to be determined, but it is a clear goal.

“This is really a hard question, because currently we are not yet at the phase of putting customer workloads on the solution. But if we can, our top priority will be using a unified platform, because when you go closer to the edge you cannot have the luxury of deploying and maintaining multiple platforms. So you have to come up with a way that you can create a coexistence for different workloads; some of them are customer workloads, some of them are network workloads, but you should be creating a kind of unified platform.”
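Demirkol did not describe the mechanism, but on a Kubernetes-based edge cloud that kind of coexistence is commonly arranged with node labels, taints and tolerations: some nodes are reserved for network functions, and only the network-function pods carry the matching toleration. The sketch below, using the Kubernetes Python client, is illustrative only; the image names, label keys and taint values are hypothetical, not Turkcell's configuration.

```python
# Minimal sketch (not Turkcell's actual configuration): reserve edge nodes for
# RAN network functions with a taint, and give only those workloads the matching
# toleration plus a node selector. Customer workloads, with neither, stay on the
# shared nodes of the same cluster.
#
# Assumes the reserved nodes were prepared with:
#   a taint  workload-class=network-function:NoSchedule
#   a label  workload-class=network-function
from kubernetes import client

ran_toleration = client.V1Toleration(
    key="workload-class",
    operator="Equal",
    value="network-function",
    effect="NoSchedule",
)

# vRAN DU pod: tolerates the taint and is pinned to the reserved nodes.
vran_du_pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="vran-du", labels={"app": "vran-du"}),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="du", image="example.com/vran-du:latest")],
        tolerations=[ran_toleration],
        node_selector={"workload-class": "network-function"},
    ),
)

# Customer pod: no toleration and no node selector, so the scheduler keeps it
# off the nodes reserved for network functions.
customer_pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="customer-app"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="app", image="example.com/customer-app:latest")],
    ),
)
```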

Managing the edge

Red Hat used the Summit to explore that last point – how telcos can manage hybrid cloud and Kubernetes clusters at different edge sites in the most efficient manner.

Joe Fitzgerald, VP and GM, Management Business Unit, said that managing multiple edge sites centrally via federated management is a key focus for Red Hat.

“There are probably a half dozen different layers or tiers in edge and it creates a distributed computing problem… which is how do I make sure that the right software settings, all those security things, are in place, in a lot of different places that are now distributed. So some of the work we’ve been doing there around our advanced cluster management for Kubernetes allows us to federate those systems at the edge to be able to manage them centrally. That means you can set a policy and manage the state of those environments in a very easy way, because it’s all about speed of automation. 

“If you’ve got the super high speed edge networks and you’ve got this distributed computing and then you’re back into manual or error prone processes, you’re going to have a mess on your hands. So what we’re working on is basically high speed continuous policy-based management of distributed edge systems, built on OpenShift, and really taking this problem head on.”
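For context, Advanced Cluster Management expresses this kind of policy-based management as Kubernetes custom resources created on a hub cluster and propagated to the managed edge clusters. The sketch below is illustrative only, not a Red Hat-supplied example: it assumes RHACM's policy.open-cluster-management.io/v1 API and a hypothetical rhacm-policies namespace, and enforces the presence of a vran namespace on whichever clusters the policy is bound to.

```python
# Illustrative hub-side policy (hypothetical names): with remediationAction set
# to "enforce", RHACM keeps the managed edge clusters in the declared state
# rather than just reporting drift.
from kubernetes import client, config

policy = {
    "apiVersion": "policy.open-cluster-management.io/v1",
    "kind": "Policy",
    "metadata": {"name": "edge-baseline", "namespace": "rhacm-policies"},
    "spec": {
        "remediationAction": "enforce",
        "disabled": False,
        "policy-templates": [{
            "objectDefinition": {
                "apiVersion": "policy.open-cluster-management.io/v1",
                "kind": "ConfigurationPolicy",
                "metadata": {"name": "require-vran-namespace"},
                "spec": {
                    "severity": "high",
                    "object-templates": [{
                        "complianceType": "musthave",
                        "objectDefinition": {
                            "apiVersion": "v1",
                            "kind": "Namespace",
                            "metadata": {"name": "vran"},
                        },
                    }],
                },
            },
        }],
    },
}

config.load_kube_config()  # credentials for the ACM hub cluster
client.CustomObjectsApi().create_namespaced_custom_object(
    group="policy.open-cluster-management.io",
    version="v1",
    namespace="rhacm-policies",
    plural="policies",
    body=policy,
)
```

On its own the policy does nothing; a PlacementRule and PlacementBinding decide which managed clusters it applies to, which is the “what goes where” question Fitzgerald turns to next.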

Another wrangle with edge is that edge platforms might host different types of applications, perhaps in hybrid public-private cloud instances, including those of the hyperscalers. Fitzgerald said the “what goes where” question is also something Red Hat is currently dealing with.

“This is an area that we’re working on with our advanced cluster management. There’s a challenge about which applications need to be deployed to which edge components. You’re not going to deploy all applications everywhere. So as you continuously evolve your applications you continuously deploy them – which is the goal. But there’s a fundamental problem: where do these things need to get deployed?

“We think that there’s going to be a lot of different kinds of workloads running on these edge systems. Some of them are going to be migrated from other systems. Some of them are going to be written as cloud native applications. Some of them are going to be serverless workloads, responding to IoT devices that are adjacent to the edge compute. And then there’s a fundamental question of how do I know which applications go to which edge servers? So we’ve got capabilities to allow people to specify in a very powerful yet simple way which components need to be deployed to which edge servers, and that’s going to help keep sanity as you think about the complexity of ‘how many different applications or services are you deploying to how many different edge environments, across very widely distributed geographical areas’.

“So that is some of the work that we’ve been doing in advanced cluster management, specifically around how to target different kinds of applications, regardless of whether they’ve been migrated from another environment or built from scratch as rich container-native applications, or they’re serverless Functions-as-a-Service responding in real time to devices.”
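Fitzgerald did not walk through the mechanics, but in Advanced Cluster Management that targeting is typically declarative: managed edge clusters carry labels, and an application is bound to a placement that selects on those labels. A minimal sketch, assuming RHACM's apps.open-cluster-management.io/v1 PlacementRule API and hypothetical label keys:

```python
# Illustrative only: a placement rule that sends an application to the subset of
# managed edge clusters labelled as far-edge CDN sites. The label keys and values
# are hypothetical; an RHACM Subscription or PlacementBinding would reference this
# rule to say which application (or policy) follows it.
placement_rule = {
    "apiVersion": "apps.open-cluster-management.io/v1",
    "kind": "PlacementRule",
    "metadata": {"name": "edge-cdn-sites", "namespace": "edge-apps"},
    "spec": {
        # Only clusters carrying these labels receive the workload.
        "clusterSelector": {
            "matchLabels": {"site-tier": "far-edge", "hosts-cdn": "true"}
        }
    },
}
```

The rule could be created on the hub with the same CustomObjectsApi call shown in the earlier policy sketch.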

The telco use case

Red Hat’s Susan James, Senior Director, Telecommunications Strategy, said 5G and vRAN would make the early running on ROI because “it’s going to be much more efficient, from a cost of production perspective, for mobile broadband.”

“But what we also see is, there is a lot of interest in gaming type applications that are driving edge deployments. Another of the ecosystems that we’re doing quite a lot of work with right now is around AI and machine learning types of workloads, because they do take a lot of compute processing and you don’t necessarily want to move all of that data around. So we’ve been doing quite a lot of work with that ecosystem to onboard those applications onto the infrastructure.”

“I think it’s one of these things: when you get the capabilities there the workloads will come, but until you have those capabilities it’s going to be very difficult to work out from an economics perspective. But once you can start to prove that those cases work then the price point comes down. For sure I think there’s enough existing use cases to actually start to see the economic viability becoming more mainstream.”
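James’ data-gravity point about AI/ML is essentially that inference should run at the edge site where the data is produced, with only the small result sent upstream. A hedged sketch of that pattern, assuming a hypothetical ONNX model already shipped to the site (not a Red Hat or Turkcell example):

```python
# Illustrative only: score sensor/camera data locally at the edge and forward
# just the class id, rather than moving the raw data to a central cloud.
# The model file name is hypothetical.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("defect_detector.onnx")  # model deployed to the edge site

def classify_frame(frame: np.ndarray) -> int:
    """Return a class id for one frame; only this integer leaves the site."""
    input_name = session.get_inputs()[0].name
    outputs = session.run(None, {input_name: frame.astype(np.float32)[None, ...]})
    return int(np.argmax(outputs[0]))
```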

James said that a partnership announced this week with Kaloom would bring Kaloom’s cloud edge fabric, Lenovo’s cloud infrastructure and Red Hat OpenShift together.

“It’s really a unified solution for edge sites. And I mean Kaloom brings sophisticated service chaining capabilities to the edge, which again really gives flexibility to those edge sites, so there’s some really interesting, leading-edge technology that they’re bringing to the edge there.”

In late 2018 and at events last year, Red Hat was giving a big push to its virtual Central Office reference architecture – designed to give operators a deployable edge cloud architecture in their Central Offices. 

Asked if that approach had seen much traction with telcos, James said Red Hat had done a lot of work with distribution partners like WWT, enabling them to reduce the “overall effort to deploy something there quickly”. 

“It comes back to having a different platform that can be deployed very easily. When you need to bring new capacity online it’s an attractive proposition and vCO has been a successful vehicle for getting capabilities out to the network,” James said.