Let’s start with a simple question: why do we need Edge Computing?
To answer this question, we can look at Cloud computing and its limitations. Cloud computing is computing as a utility: a model in which processing resources are located in a remote location, such as a data center, and accessed via the internet. Cloud computing enables innovation because far less capital is required to access massive computing and storage power.
The first major Cloud computing platforms were released between 2006 and 2010 and include Amazon, Google Apps, and Microsoft Azure. Major applications like Foursquare, Pinterest, and Instagram were launched between 2009 and 2010, in large part due to their ability to leverage the Cloud and take advantage of its scalable cost model. Existing major services like Netflix and Reddit also migrated to the Cloud in 2009. That shows how fast things can move once the Cloud enables innovation.
Still, the Cloud faces many challenges, some of which are directly related to the rise of Edge computing. Data transferability is one: when the amount of data to process is very large, the bottleneck is not processing or storage power but getting the input data to the data center. Another is that the Cloud is subject to access-network issues and delays: the end user reaches any application server in the Cloud through an access network, so any delay or connectivity issue inside that network degrades the overall experience.
How does this intersect with 5G? Well, if you look at the 1 to 5ms end-to-end latency required by 5G for a class of applications which we can call “Tactile Internet”, that kind of latency is simply not feasible with traditional Clouds. The only response to this challenge is to move cloud infrastructure closer to the end user.
One important point is that Edge Clouds are not here to replace data centers but to complement them. Edge clouds are useful in reducing latency and improving user experience, but the amount of processing power and storage is orders of magnitude below traditional clouds. So, for example, an application that needs to support very low end-to-end delay can have one component running in the Edge Cloud and other components running in the distant Cloud.
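As a concrete illustration of such a split, here is a minimal sketch, assuming a hypothetical application divided into an edge component (latency-critical loop) and a distant-cloud component (storage and analytics). The class names, the sensor/threshold logic, and the batching scheme are all illustrative assumptions, not part of any real MEC framework.

```python
# Hypothetical sketch: one application split across an Edge Cloud and a
# distant Cloud. EdgeComponent and CloudComponent are illustrative names.

class EdgeComponent:
    """Runs close to the user: handles the latency-critical loop."""

    def __init__(self):
        self.buffer = []

    def handle_sensor_reading(self, reading):
        # Latency-critical reaction computed locally (e.g. a few ms budget).
        command = "brake" if reading > 0.8 else "steady"
        self.buffer.append(reading)
        return command

    def flush_to_cloud(self, cloud):
        # Non-critical data is batched and shipped to the distant Cloud.
        cloud.store_batch(self.buffer)
        self.buffer = []


class CloudComponent:
    """Runs in a distant data center: bulk storage and analytics."""

    def __init__(self):
        self.history = []

    def store_batch(self, batch):
        self.history.extend(batch)

    def average(self):
        return sum(self.history) / len(self.history)


edge = EdgeComponent()
cloud = CloudComponent()
for r in [0.2, 0.9, 0.5]:
    edge.handle_sensor_reading(r)
edge.flush_to_cloud(cloud)
```

The design point is simply that the tight reaction loop never crosses the access network, while aggregate data flows to the distant Cloud at a relaxed pace.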
There are various types of Edge Computing, which we can classify using a taxonomy of cloud-based computing resources. The first is the distant Cloud itself: public and private Clouds located in data centers, where the end user connects to a server in the data center. The second type is proximate mobile Cloud computing or, more generally, proximate in-network Cloud computing: computing resources located in the access network or on the end user’s premises.
Distinguishing between high-end and low-end computing
We can distinguish between high-end and low-end proximate in-network Cloud computing. The high-end part of this class is a current focus of the ETSI Mobile-Edge Computing (MEC) standard: typically, a high-end standard server located in the access network at the base station. Low-end devices can be routers, access points, or home-based nodes. This is what we call MEC for Small Cells, and it’s a current focus of research at InterDigital.
A third type is computing on the end device itself. This can be a mobile device with computing capability, such as a smartphone or laptop, possibly cooperating with other devices to perform a task. It also includes fixed end devices: home appliances or PCs, but also the increasing number of connected devices that constitute the Internet of Things.
Finally, there are hybrid solutions that combine these main classes. Hybrid solutions have their own specific issues to solve: for example, a component may need to determine whether it is more suitable to run a task on an end device, in the access network, or in the distant Cloud.
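Such a placement decision can be sketched as a simple heuristic. The tier names, latency figures, and core counts below are illustrative assumptions, not measurements; a real orchestrator would use live metrics and richer constraints.

```python
# Hypothetical placement heuristic for a hybrid end-device / Edge / Cloud
# deployment. All tier parameters are made-up illustrative values.

TIERS = {
    # tier: (round-trip latency in ms, available CPU cores)
    "end_device":    (0,  2),
    "edge_cloud":    (5,  32),
    "distant_cloud": (50, 10000),
}


def place_task(latency_budget_ms, cores_needed):
    """Pick the first tier meeting both the latency budget and the
    compute requirement, preferring tiers closer to the user."""
    for tier, (rtt_ms, cores) in TIERS.items():
        if rtt_ms <= latency_budget_ms and cores >= cores_needed:
            return tier
    return None  # no single tier fits; the task would have to be split


print(place_task(latency_budget_ms=5, cores_needed=8))      # edge_cloud
print(place_task(latency_budget_ms=100, cores_needed=500))  # distant_cloud
print(place_task(latency_budget_ms=2, cores_needed=500))    # None
```

The `None` case corresponds to the hybrid situation described above: a task whose requirements no single tier can satisfy, which must therefore be split into edge and distant-Cloud components.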
Mobile Edge Computing challenges
The potential is tremendous from an architectural standpoint, but there are a number of challenges to MEC in the context of small cells, or any deployment of Wi-Fi access points, home nodes, or similar devices with small coverage.
- We can expect the platform to have limited computing and storage resources. Some current solutions may prove too heavy, so we may only be able to support applications with limited needs in terms of computation power and storage. We may want to investigate alternatives such as Platform-as-a-Service approaches.
- We can expect limited and possibly variable throughput across the backhaul, inter-AP links, and AP-to-end-user links. Applications relying on real-time connectivity may perform badly in this context and need to be adapted. Also, the system should not rely on real-time messages between APs.
- Highly heterogeneous hardware and software platforms result in more constraints on the system. We can also assume lower physical security, as well as limited control by the network operator over the applications. The impact on the MEC system can be significant; for example, the potential impact of a compromised server on the overall system should be minimized.
- There is the problem of orchestration. Orchestration algorithms can be expected to be more complex and dynamic, since the environment is more constrained for the application.
- Finally, in a small-cell network the range of each individual cell is limited, so mobility support becomes more important and solutions for fast process migration may become necessary. This adds to the challenges already identified in the high-end MEC space: transparency to existing networks, application portability, security, performance, resilience, and operations. These are all things that standards bodies like ETSI will be working to resolve.
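To make the migration challenge concrete, here is a minimal sketch of session-state handoff between two small-cell edge nodes as a user moves. The `EdgeNode` class and the JSON-blob state format are assumptions for illustration only, not part of the ETSI MEC specifications.

```python
# Minimal sketch of session-state migration between small-cell edge
# nodes; the class and state format are illustrative assumptions.

import json


class EdgeNode:
    def __init__(self, cell_id):
        self.cell_id = cell_id
        self.sessions = {}  # user_id -> application state

    def update(self, user_id, key, value):
        self.sessions.setdefault(user_id, {})[key] = value

    def export_session(self, user_id):
        # Serialize state so it can cross a (possibly slow) inter-AP link.
        return json.dumps(self.sessions.pop(user_id))

    def import_session(self, user_id, blob):
        self.sessions[user_id] = json.loads(blob)


src = EdgeNode("cell-A")
dst = EdgeNode("cell-B")
src.update("user1", "score", 42)

# User moves from cell-A to cell-B: migrate state, then resume there.
dst.import_session("user1", src.export_session("user1"))
```

Even this toy version shows why migration speed matters: with very small cells, handoffs are frequent, so the serialize-transfer-restore cycle must fit inside the time the user spends crossing a cell.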
But despite the challenges, there are tremendous advantages to MEC. Most notable are density and mobility, which enable cooperation between applications, support for application mobility, and the ability to use contextual information as input to the application. Other advantages include:
- Operators or others who already share a network can also share their Edge computing capacity. This could be a foundation for methods to ensure pervasive low latency.
- Innovative deployment scenarios become possible, for example deploying a network for a specific MEC application, whether in relation to existing applications or entirely new ones.
- Detailed radio and network information, typically not available to most applications today, is readily available in an MEC setting and can easily be provided to applications. That said, we may need to expand radio and network information APIs to do so.
MEC has the potential to offer an ultra-low latency, high bandwidth environment that provides real-time access to radio networks at the edge of the mobile network. As a natural development in the evolution of telecom, MEC can enable new vertical business segments and services for consumers and enterprise customers. While there is still work to be done to tackle the challenges presented above, it is clear that there is a need for the tremendous advantages MEC can provide as we move towards the next generation wireless standard.