What is Edge Computing?

Learn all about edge computing technology, how it works, and why it is important.

What is Edge Computing?

Edge computing is the process of retrieving, analyzing, processing, and storing data close to where it is generated and used, at the “edge” of the network. The term encompasses several distributed data center models that move applications and computing functions away from centralized data centers and closer to the data consumers.

With data physically closer to the processing elements, latency is significantly reduced. Ultra-reliable low-latency communication (URLLC) 5G use cases and the internet of things (IoT) rely on edge computing to meet lofty customer expectations for speed, bandwidth, and real-time feedback.
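
To put the distance argument in perspective, the sketch below estimates one-way propagation delay in optical fiber at a few representative distances. It is a rough, illustrative calculation, assuming light travels at roughly two-thirds of its vacuum speed in fiber (about 200 km per millisecond); real-world latency adds switching, queuing, and processing time on top of this floor.

```python
# Rough, illustrative estimate of one-way fiber propagation delay vs. distance.
# Assumes ~200 km per millisecond in optical fiber; actual latency also includes
# switching, queuing, and processing delays.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds for a given fiber distance."""
    return distance_km / SPEED_IN_FIBER_KM_PER_MS

# Hypothetical distances from a user to different tiers of the network.
for label, km in [("on-premises edge", 1), ("metro edge", 50),
                  ("regional cloud", 500), ("distant hyperscale", 2000)]:
    print(f"{label:>20}: {km:>5} km -> ~{propagation_delay_ms(km):.3f} ms one way")
```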

  • Multi-access edge computing (MEC) is a broad term used to describe an edge data center capable of augmenting service for any available access technology. This includes wireless and wireline connections.
  • Mobile edge computing is sometimes used interchangeably with multi-access edge computing, although the former focuses specifically on wireless technology.
  • Edge networking is the provisioning of network core elements to facilitate communication and application delivery at the edge. Edge network components perform protocol functions and handshakes to establish connections with external devices. 

How Does Edge Computing Work?

As cloud computing evolved, a trend towards concentrated hyperscale data centers gradually increased the distances between data and users. Edge computing architecture has been developed to strategically distribute intelligence and untether applications and decisions from these centralized data centers. 

  • The Internet of Things (IoT): The IoT is based on sensor technology within everyday objects, machines, and equipment continually streaming data to the cloud. Edge computing forms the linchpin of the three-tiered IoT architecture, acting as the gateway for incoming data. Split-second decisions for low-latency IoT applications such as advanced driver assistance systems (ADAS) and smart factories are also made at the edge (a minimal gateway sketch follows this list).
  • Virtualized 5G RAN: Software-defined networking (SDN) and network function virtualization (NFV) are central elements of 5G RAN architecture and are closely intertwined with edge computing. 5G baseband functionality is strategically split between a distributed unit (DU) at the edge and a centralized unit (CU) closer to the core.
  • Public and private edge: New architectural models address a variety of cloud edge use cases and performance requirements. Hybrid (semi-private) edge deployments filter data at the point of use so that the public provider does not have access to confidential customer data packets.
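
As a rough illustration of the three-tiered flow and point-of-use filtering described above, the sketch below shows a hypothetical edge gateway: an urgent reading triggers an immediate local action, while only an aggregated, identifier-free summary is forwarded to the central cloud tier. The field names, the temperature threshold, and the trigger_local_shutdown() helper are assumptions made for illustration, not a specific product's API.

```python
# Hypothetical edge gateway in a three-tiered IoT architecture: act locally on
# urgent readings, forward only a stripped-down aggregate to the central cloud.
from statistics import mean

URGENT_TEMP_C = 90.0  # assumed threshold for a split-second local decision

def trigger_local_shutdown(machine_id: str) -> None:
    print(f"edge action: shutting down {machine_id}")

def handle_batch(readings: list[dict]) -> dict:
    """Process a batch of sensor readings at the edge tier."""
    for r in readings:
        if r["temperature_c"] >= URGENT_TEMP_C:
            trigger_local_shutdown(r["machine_id"])  # decision stays at the edge
    # Only an aggregate, with machine identifiers removed, leaves the site.
    return {
        "window_avg_temp_c": round(mean(r["temperature_c"] for r in readings), 2),
        "reading_count": len(readings),
    }

summary = handle_batch([
    {"machine_id": "press-7", "temperature_c": 71.4},
    {"machine_id": "press-9", "temperature_c": 93.2},
])
print("forwarded to cloud:", summary)
```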

Benefits of Edge Computing

Improvements in mobile communication have heightened customer expectations for performance and reliability. Unlike previous generations, 5G allows no wiggle room for slow or disrupted service. Edge computing technology is essential for meeting the requirements of the most demanding use cases. 

  • Ultra-Low Latency: With processing completed closer to the data source, response times are trimmed significantly. Although some verticals can still tolerate latency in the 100 ms range, advanced 5G applications like connected cars rely on latency below 2 ms, which is impossible to achieve without artificial intelligence at the edge.
  • Privacy: Edge computing technology elevates enterprise and customer privacy by allowing sensitive data to remain onsite rather than being transferred to the cloud for processing. Since large-scale distributed denial of service (DDoS) attacks become more difficult when data is less concentrated, network security also improves.
  • Quality of Service (QoS): The improved bandwidth, speed, and latency of edge computing are just some of the overall QoS benefits. With edge analytics and applications managed closer to the user, providers can deliver a more differentiated level of service than was previously possible.
  • Near Real-time Optimization: The flexibility and local visibility of edge computing allow conditional changes to be implemented almost immediately. Feedback from the network and local devices can be incorporated to correct performance issues, increase available bandwidth, or adjust power consumption (see the control-loop sketch after this list).
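
The near real-time optimization point can be pictured as a simple control loop running at the edge site. The sketch below is a minimal, hypothetical version: the metric names, thresholds, and adjustment factors are assumptions, and a real controller would read them from live network telemetry rather than a static dictionary.

```python
# Minimal, hypothetical feedback loop for near real-time optimization at an edge
# site: react to local measurements without waiting on a central data center.
def adjust(site: dict) -> dict:
    if site["p99_latency_ms"] > site["latency_target_ms"]:
        site["allocated_bandwidth_mbps"] *= 1.2   # add capacity where latency slips
    elif site["utilization"] < 0.3:
        site["cpu_power_cap_w"] *= 0.9            # trim power draw at a quiet site
    return site

site = {"p99_latency_ms": 8.5, "latency_target_ms": 5.0, "utilization": 0.7,
        "allocated_bandwidth_mbps": 400.0, "cpu_power_cap_w": 250.0}
print(adjust(site))
```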

Challenges for Edge Computing Providers

As a new form of cloud architecture, edge computing resides at the leading edge of technology. The need for standardization is balanced by market demands for new services and innovation. As deployments multiply, the technical challenges become more evident.  

  • Scalability and flexibility are hallmarks of the edge data center that must be continually tested and verified. Latency and security must also be monitored closely. This visibility is critical to avoid accidents or safety hazards in sensitive use cases such as smart factories and advanced driver-assistance systems (ADAS). 
  • The proliferation of edge data centers drives a need to continually optimize space, energy, and IT resource consumption. This optimization is leading to more “lights-out” (unmanned) edge computing locations, which in turn require the adoption of advanced remote monitoring and self-healing capabilities.
  • Predictive and preventive analytics are necessary for applications to be managed reliably at the edge. Intelligence and machine learning capabilities proactively detect issues and automate responses without intervention from the hyperscale cloud data center, raising the bar for edge analytics and computing power (a minimal self-healing sketch follows this list).
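
The last two points can be illustrated with a small, self-contained check that a lights-out edge site might run locally: flag an anomalous metric and trigger an automated response without escalating to the hyperscale cloud data center. The three-sigma rule, the sample values, and the restart_service() helper are hypothetical.

```python
# Hypothetical self-healing check at an unmanned ("lights-out") edge site: detect
# an anomalous metric locally and respond without involving the central cloud.
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, k: float = 3.0) -> bool:
    """Flag a reading more than k standard deviations from the recent mean."""
    return abs(latest - mean(history)) > k * stdev(history)

def restart_service(name: str) -> None:
    print(f"self-healing: restarting {name} locally")

cpu_history = [41.0, 39.5, 43.2, 40.8, 42.1, 38.9]  # recent utilization samples (%)
latest_cpu = 97.3
if is_anomalous(cpu_history, latest_cpu):
    restart_service("edge-analytics")
```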

Why is Edge Computing Important?

Edge computing is a common enabler of cloud computing and telecommunications advancement as each enters a new era. The use cases made possible by the combination of 5G and edge computing open new revenue streams for operators and endless possibilities for developers.

  • Applications are created more efficiently without the constraints of network infrastructure and legacy protocols. Opportunities abound for innovators and third parties to quickly develop and deploy new applications that leverage the versatility of edge computing platforms. 
  • Autonomous systems are an important aspect of the ultra-low latency use cases projected for smart cities and factories. Edge artificial intelligence (AI) turns data collected from edge devices into actionable, real-time intelligence without leveraging the centralized cloud data center.
  • Reliability of the network is improved by spreading compute functions geographically. Essential services can still be provided at the edge even when core servers experience interruptions. This reliability boost also contributes to a higher level of disaster recovery readiness (see the failover sketch after this list).
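
As a toy illustration of the reliability point above, the sketch below keeps an essential function available at the edge when the core data center cannot be reached. The fetch_from_core() call, the local cache, and the simulated outage are hypothetical stand-ins.

```python
# Hypothetical failover path: serve an essential function from a local cache at
# the edge when the core data center is unreachable.
def fetch_from_core(key: str) -> str:
    raise TimeoutError("core data center unreachable")  # simulate an outage

LOCAL_CACHE = {"traffic-profile": "last-known-good"}

def get_config(key: str) -> str:
    try:
        return fetch_from_core(key)
    except TimeoutError:
        return LOCAL_CACHE.get(key, "default")  # service continues at the edge

print(get_config("traffic-profile"))
```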

Edge Computing vs Cloud Computing

Although edge networking is often associated with the cloud, the edge computing definition also applies to a lone server running applications close to the user. The introduction of cloud architecture has added a layer of flexibility to the edge while reducing the computing and storage burdens for hyperscale cloud data centers. 

  • What is Cloud Edge Computing?
    Rather than treating the edge data center as a separate, self-contained entity, cloud edge architecture forms an extension of the disaggregated cloud connected via a data center interconnect (DCI). The containerization software employed by the central cloud also allows applications at the edge to be scaled up or down through orchestration, freeing them from hardware constraints (see the scaling sketch at the end of this section).
  • Will Edge Computing Replace Cloud Computing?
    More artificial intelligence, machine learning, and storage capacity are finding their way to the edge, making fully autonomous operation an option. The more likely scenario, however, is ongoing integration between public and private cloud deployments and the edge. Massive data centers will continue to take on big data storage and intensive analytics while edge computing establishes its nimble, distributed foothold.
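
As one concrete (and assumed) example of the orchestration point above: if the edge cluster runs Kubernetes, which the text does not specify, a containerized edge application can be scaled up or down from the control plane without any change to the underlying hardware. The deployment name, namespace, and replica count below are hypothetical.

```python
# Minimal sketch, assuming the edge cluster runs Kubernetes (not specified above):
# scale a containerized edge workload through orchestration rather than hardware.
from kubernetes import client, config

config.load_kube_config()            # load credentials for the edge cluster
apps = client.AppsV1Api()
apps.patch_namespaced_deployment_scale(
    name="video-analytics",          # hypothetical edge application
    namespace="edge-site-12",        # hypothetical namespace for one edge site
    body={"spec": {"replicas": 3}},  # scale out (or back down) on demand
)
```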

Learn more about how VIAVI supports Edge Computing: