Are current 5G requirements and the cloud telco incompatible?

I thought it might be interesting to draw attention to a short discussion on Twitter this morning between a few experts in the mobile space. See the Storify below for the discussion.



So here is a bit more expanded thinking (and I hasten to add that this is my thinking rather than the GSMA’s at this point).

The 1ms delay requirement that a number of organisations quote for 5G requires some pretty fundamental changes. Two use cases are quoted as driving it:
– Vehicle (to network) to vehicle communications for accident avoidance.
– AR, VR and Immersive Internet delay budget for prevention of cyber sickness.

The first I think is a bit bogus – I can’t see how, or indeed why, the network should be included in V2V comms for preventing accidents. Although I do also have a small issue with ‘backwards compatibility’ in V2V scenarios if I happen to be driving an unconnected car and the cars on either side of me start autonomously braking.

VR and the Immersive Internet are not necessarily that ‘mobile’ in the way they will be delivered. VR is fully ‘virtual’ in that it has no reference to actual reality (else it is augmented reality, right?), and so it requires the user to actually be somewhere where they aren’t going to bump into things. The Immersive Internet is a displacement experience to another location (although that location, and the images conveyed from it, still have to meet and react to gestures within that 1ms delay).

AR is the real challenge, because the content to be served has to keep up with the user’s movements and is also contextualised by the local environment. That means the content has to be served dynamically and reactively from a source within 1ms of the consumer, which in turn means at the bottom of the mast, or at worst within a small number of metres (which I think is larger than the number of metres allowed for BBU hosting in C-RAN). That is why the Saguna/Akamai CDN-at-the-base-station work is interesting.
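To put an illustrative number on “within 1ms of the consumer”, here is a rough propagation-only sketch (my own back-of-envelope figures, not from the discussion): even if the entire 1ms round trip were spent on signal propagation in fibre, the serving node could be at most roughly 100km away – and once air-interface scheduling and processing consume most of the budget, the bottom of the mast is about all that’s left.

```python
# Rough upper bound on how far away content can live for a given latency budget.
# Assumes propagation in fibre at ~2/3 the speed of light and that the WHOLE
# budget is spent on propagation (no processing, no air interface) -- real
# deployments must be much closer still.
C = 299_792_458          # speed of light in a vacuum, m/s
FIBRE_FACTOR = 2 / 3     # typical velocity factor for optical fibre

def max_one_way_distance_km(round_trip_budget_s: float) -> float:
    """Best-case server distance if the whole round trip is propagation."""
    one_way_s = round_trip_budget_s / 2
    return (C * FIBRE_FACTOR * one_way_s) / 1000

# A 1 ms round-trip budget caps the server at roughly 100 km even in this
# idealised case.
print(f"{max_one_way_distance_km(0.001):.0f} km")
```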

The problem with content close to the user is handover. Not only does the RAN need to take account of the shift from one cell to another, but the content needs to move across too. This is not an issue right now because the bearer is uninterrupted, but if the bearer terminates at a cell, and the customer moves to another cell, then the bearer has to shift too, and take the content context with it.

It is worth noting that the 1ms delay does not apply to every service identified in association with 5G. So when it comes to things like content caching and Zipf’s law (I spotted Kit’s tweet), because the content delivered for AR is local in context, it’s not such an issue.
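For readers less familiar with the Zipf’s-law caching argument being alluded to, a small sketch may help (the catalogue size and exponent here are illustrative assumptions, not figures from the discussion): when request popularity is Zipf-distributed, caching a small fraction of the hottest items serves a large share of requests.

```python
# Back-of-envelope Zipf's-law cache hit rate. If the popularity of the
# rank-r item is proportional to 1/r**s, a cache holding the top items
# captures a disproportionate share of requests.
def zipf_hit_rate(catalogue: int, cache: int, s: float = 1.0) -> float:
    """Fraction of requests served from a cache of the `cache` most popular items."""
    weights = [1 / (rank ** s) for rank in range(1, catalogue + 1)]
    return sum(weights[:cache]) / sum(weights)

# Under these assumptions, caching 1% of a 100k-item catalogue already
# serves roughly 60% of requests. Locally-contextual AR content sidesteps
# the argument, since its "catalogue" is already per-location.
print(f"{zipf_hit_rate(100_000, 1_000):.2f}")
```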

So, aside from all the can-you/can’t-you technical aspects (handover etc.) – nobody’s going to invest in all that just to enable some AR apps, are they – unless there’s some serious dollar in return for the operator’s investment?

I mean, just how many hyper-local pizza restaurant adverts will it take to pay for local content caching at lampposts/rooftops across a CBD, plus all the rest of the supporting infrastructure?

You say, “It is worth noting that 1ms delay does not apply to every service identified in association with 5G”, which makes you wonder what the 1ms 5G thing is actually for.

On the car/vehicle thing – it’s almost as if someone has tried to think of a circumstance where 1ms across the network is important. “What is the most extremely time-sensitive IoT application I can think of? Perhaps two cars speeding towards each other.” As you indicate, maybe sensors like that, if they come to exist, will rely on some other direct, local or dedicated connection, rather than being handled by the full force of a mobile network’s core network.

It also strikes me that for most of these use-cases, having “content” at the base station is the least of your worries. It’s the application logic which is going to have to be somewhere else – you’re not going to host a full copy of SAP, sensor data-analysis algorithms, a VR game, or advert-insertion in a local Akamai node, presumably.

… as otherwise we would have already seen those functions replicated in every DSL or fibre broadband distribution point.

Thanks Dan, good analysis. From a regulatory perspective the AR case is interesting: if all the data sent to a device needs to be identifiable, and local applications are also run on the user data, then for any services needing ‘exotic’ data rates the ability to reconstruct the user data becomes more challenging – even if it is, hopefully, rarely needed.

A couple of points:

1) 5G seems to be an interesting collection of very different applications and mutually incompatible wish lists. Unless we can actually say “what problem are we trying to solve?” then quoting a requirement in isolation seems daft.

2) The LTE 8ms HARQ turnaround is pretty challenging and complicates or precludes a lot of deployment architectures – for example, cloud RAN might require dark fibre to meet this timing. So anything that makes it 8× harder had better have some really good rationale.

3) I can’t see the logic for 1ms – even from Dan’s example.
Two cars colliding with a closing speed of 200mph (which is quite extreme!) are approaching at 89m/s – so 89mm in one millisecond. Do we really need to specify a technology to achieve that precision? Let alone the point that it should be local, not cellular. And for AR or VR we would be even slower.
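For what it’s worth, the arithmetic in point 3 checks out; a one-liner reproducing it:

```python
# Reproducing the closing-speed arithmetic in point 3 above:
# 200 mph closing speed, distance covered in one millisecond.
MPH_TO_MS = 0.44704  # metres per second in one mile per hour

closing_speed_ms = 200 * MPH_TO_MS                    # ~89.4 m/s
distance_in_1ms_mm = closing_speed_ms * 0.001 * 1000  # metres -> millimetres

print(f"{closing_speed_ms:.1f} m/s -> {distance_in_1ms_mm:.1f} mm per ms")
```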

The AR/VR requirement relates to cyber-sickness. If you move your head with a headset on and what is displayed on the screen lags behind your head movement it is like an extreme form of motion sickness.

I guess there is a network vs. device functionality question too – the right place to keep the mapping between ‘reality’ and the ‘augmentation’ could be within whatever device the user is viewing the content on, but then you’ve just massively cranked up the processing needs of the device.

VR is easier because there is no need to map between what is real and the overlaid augmentation – with VR, everything being ‘virtual’ removes the mapping to the actual. And it is in a limited space.

The point about the 5G requirements not being taken as an entire list is a good one, but AR, I think, is one of the few examples where a service requires two of the tough-nut requirements: really low latency and really high coverage.

Coverage of course is a function of spectrum and deployment strategy, and deployment strategy is dependent upon economics. I am always interested to see 5G use cases with high bandwidth and/or low latency put forward with an associated implication that they could be available in areas where we don’t see 2G at the moment. There would have to be some serious revenues involved.

Keith, thanks for kicking off a fascinating discussion. I’m seldom tempted to join online debates but I’ll make an exception here. To Keith’s point that “nobody’s going to invest in all that just to enable some AR apps are they[?]” – no, clearly this would be an example of the double-headed killer-app fallacy, i.e. don’t expect to find one, and if you think you’ve found one you are almost certainly wrong.

The reason I agree with Dan about the Saguna/Akamai approach being shrewd is that it is a multi-faceted, multi-phased platform approach – by no means a one-trick pony for one specific use case. Phased rollout strategies involving a bridgehead based on the cost-saving dimension of transparent caching, with the side-benefit of QoE improvements for the most popular content, are a logical first step. Even within this first step, the devolved functions are multifarious. Take the simplest one of all: the enablement of caching for DNS lookups at the eNodeB. This alone eliminates many, many TCP round trips for a typical page download and bucks the received wisdom of those citing Zipf’s law. The fact is it works beautifully on reasonably modest served user populations.

A second phase involving managed content (aka Akamai and…) is a natural progression that can (if Akamai is considered to be representative) enable the elusive “host the OTT” revenue opportunity. The third phase is the ability to run any server-side application in the eNodeB. To Kit’s point about lawful interception and security, Saguna have defined an architecture and set of mechanisms, complementary to 3GPP, that solve the tricky LI and mobility-management issues without wholesale devolution of the entire EPC to the edge. Elsewhere in the ecosystem, Intel is doing some interesting things to ensure the secure “bubble”.

Meanwhile, to Dean’s skepticism about functional distribution to the edge of networks: the analogy drawn between mobile and fixed is debatable. What is the fixed equivalent of an AR app? What is the mobile equivalent of a set-top box? How do latencies compare in FBB vs MBB? How do unit cost structures compare? The differences outweigh the similarities. So how can you conclude that access-side hosting of latency-intolerant and/or repetitious viral content won’t happen in mobile just because it hasn’t happened yet in fixed?

Andy – I think you miss my point a bit, especially since fixed networks host most of the really delay-sensitive apps today (e.g. financial trading, industrial process control, acute medical equipment, etc.).

I met with Saguna last week and it’s certainly interesting. BUT there’s a big difference between that type of Akamai+ architecture and (essentially) hosting a distributed version of Amazon AWS and MS Azure at every cell site in the world.

I’d agree that we’re moving in that general direction, but I also suspect there are a lot of curves for “massively distributed cloud” vs. “advanced data-centre cloud” that may never intersect.

In a nutshell: telco NFV+SDN isn’t suddenly going to absorb the entirety of the IT cloud industry just because it’s pushed out to cell sites.

Dean, thanks for clarifying. In fact it seems we are pretty much aligned. I do concur that it is not a binary edge-vs-cloud debate – no way does every AWS service start to spin up in a base station! I see a definite need for a selective smorgasbord of content and web services that yield benefits (in the three dimensions of cost, quality and revenues) from edge-hosting. In other words, the smorgasbord menu consists of a subset of Akamai’s services, plus a subset of AWS, plus a subset of de facto and proprietary CDN and OTT clouds’ services/distribution layers.

Multi-tenancy (to extend the virtualised edge for multiple cloud partners) is key here, as is cooperation with hosted third parties to ensure that the orchestration logic exists to extend certain services/layers selectively to the edge. The latter will no doubt involve direct engagement with the big players (e.g. Akamai etc.), and this is where the current energy is being focused, but as things pan out, more of a “development kit” approach will emerge for the long tail.

Such cooperation is already bearing fruit: take the Saguna/Akamai case, for example. By extending Akamai right to the service-provider access edge, suddenly the growing volumes of HTTPS traffic managed by Akamai become cacheable on the user’s side of the key bottlenecks. This has become possible because of a new synthesis of CDN and access network. Disruptive, huh?

Looking at the last comment:
– Saguna had their own access caching system. That’s disappeared for some reason, and now suddenly they are a wrapper around Akamai.
– And the Akamai thing: I believe they have been trying to get into access networks for ages. So now they are going to run fiber to every cell site of every carrier? Really!
