By Carla Petersen – Edge computing is not a new technology. It first gained traction during the ‘First Wave’ (2000-2010) of the internet, when distributing content closer to the user became a priority.
It was considered the ideal solution to solve the ‘last mile’ problem. Today, it has become such an integral part of computing that the global edge market is projected to reach $155.9 billion by 2030.
Spurred by the ‘Second Wave’ (2010-2020) of the internet, defined by the cloud and mobility, and then by the emergence of the current ‘Third Wave’ (2020-present), where the focus has shifted to devices and applications, edge computing has evolved significantly over the past two decades. F5 CTO Geng Lin covers this evolutionary path in more detail in his paper, “The Third Wave of the Internet.”
Growing threat landscape
One of the key insights from this paper is that, until recently, there has been no need for a platform at the edge. Application design and architecture readily adapted to the cloud. However, the growing digital economy attracted a growing number of threat actors, with volumetric attacks disrupting businesses worldwide. In recent years, malicious code and malware have become a path to profit.
Given its distributed nature, the edge could still protect organisations and applications by inserting services closer to the user. Bad actors were thus detected and neutralised before they could disrupt business or breach a company’s defences.
However, more advanced challenges emerge during this ‘Third Wave’ of the internet as new capabilities are injected into devices and applications. The number of devices and users constantly communicating over the network still poses a performance challenge despite the increasing bandwidth we can access. Attackers have grown even more malicious and are leveraging advanced technologies to exploit the pervasiveness of applications and devices.
Platform-centricity
The edge was not designed to effectively support the distribution of apps and data. This is why an edge application platform has become necessary, and such a platform cannot simply be thrown together.
Bolting on the ability to deploy computing on existing edge networks does not fully address the challenges posed by the ‘Third Wave’ of the internet. Furthermore, it does not fully take advantage of how devices and endpoints can become more active participants in solving these challenges.
After all, applications and devices are no longer passive receivers of information. They can initiate connections themselves and influence the decision-making process. A new approach therefore becomes critical, one that looks beyond the traditional view of the edge as something built around applications that passively receive information. This will also provide the impetus needed to realise the power of distributed computing as it has evolved over the past two decades.
More security-minded
That approach must deliver the security, scale, and speed that applications at the edge require. Critically, this must be done without negatively impacting the developer and operational experiences. It also requires attention to parallel trends in technology around observability and the use of artificial intelligence and machine learning for business, security, and operational automation.
While broad characteristics, such as described by F5 CTO Geng Lin in his Edge 2.0 manifesto, provide overarching guidance for an Edge 2.0 platform, design considerations at the architectural level are also needed.
Terms like ‘secure by default,’ ‘deliver autonomy,’ and ‘provide native observability’ are easily thrown about. But what do they mean in terms of the technologies and approaches that must be considered? More importantly, how should they be incorporated into an Edge 2.0 application platform?
The F5 paper ‘Edge 2.0 Core Principles’ examines this more closely. For the company, Edge 2.0 is defined by a focus on experience-centricity and a platform that is not tied to the location or even type of architecture. User simplicity and being more application- and operational-centric are crucial components of this new environment.
Zero Trust importance
One of the golden threads tying all the concepts of Edge 2.0 together is Zero Trust. A distributed cloud amplifies the security-at-scale problem, as the number of endpoints increases exponentially and the complexity of implementing a solution grows with it. The security posture should be that the application assumes the environment it is currently deployed in is already compromised.
Similarly, the infrastructure should not trust any application that runs on it; this is the essence of Zero Trust. For Edge 2.0 to be successful, it must incorporate five key principles. Firstly, identification is the foundation on which the edge is built: each entity must have its own globally unique identity. Secondly, the concept of least privilege should be incorporated into the ‘new’ platform as a matter of course.
Thirdly, there is continuous authentication to consider. Authentication should not only be explicit, through shared secrets or a chain of trust, but should also factor in implicit patterns of behaviour and contextual metadata. Building on this, the fourth principle is constant monitoring and assessment: the actions of all actors in the system must be monitored, reinforcing the key role of data collection and warehousing technologies in an Edge 2.0 architecture.
Finally, one must always assume a breach can and will occur. Therefore, a fully mature Edge 2.0 system must be able to make real-time risk/reward assessments based on continuous monitoring and assessment.
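To ground these five principles, here is a minimal Python sketch of what a per-request policy decision might look like on such a platform. All names here (Entity, RequestContext, authorise, the behaviour score and risk threshold) are illustrative assumptions for this article, not part of F5’s Edge 2.0 specification.

```python
# Hypothetical sketch of a per-request Zero Trust policy check.
# Identities, privilege scopes, and risk thresholds are illustrative
# assumptions, not part of any Edge 2.0 specification.
import uuid
from dataclasses import dataclass, field


@dataclass
class Entity:
    # Principle 1: every entity carries a globally unique identity.
    identity: str = field(default_factory=lambda: str(uuid.uuid4()))
    # Principle 2: least privilege, with explicit scopes that are empty by default.
    scopes: frozenset = frozenset()


@dataclass
class RequestContext:
    entity: Entity
    action: str
    # Principle 3: implicit signals (behaviour, contextual metadata)
    # feed continuous authentication alongside explicit credentials.
    behaviour_score: float  # 0.0 (typical) .. 1.0 (highly anomalous)


def audit(ctx: RequestContext, allowed: bool, reason: str) -> None:
    # Principle 4: constant monitoring; every decision is recorded,
    # feeding the data collection layer an Edge 2.0 architecture needs.
    print(f"{ctx.entity.identity} {ctx.action} allowed={allowed} ({reason})")


def authorise(ctx: RequestContext, risk_threshold: float = 0.7) -> bool:
    # Principle 5: assume breach, so deny is the default outcome.
    if ctx.action not in ctx.entity.scopes:
        audit(ctx, allowed=False, reason="outside least-privilege scope")
        return False
    # Principle 3: trust is re-evaluated on every request, not once at login.
    if ctx.behaviour_score > risk_threshold:
        audit(ctx, allowed=False, reason="anomalous behaviour")
        return False
    audit(ctx, allowed=True, reason="ok")
    return True


if __name__ == "__main__":
    sensor = Entity(scopes=frozenset({"telemetry:write"}))
    authorise(RequestContext(sensor, "telemetry:write", 0.1))  # allowed
    authorise(RequestContext(sensor, "config:read", 0.1))      # denied: no scope
    authorise(RequestContext(sensor, "telemetry:write", 0.9))  # denied: anomalous
```

The point of the sketch is the posture it encodes: every request is denied by default, re-assessed against behaviour on every call rather than trusted after a single login, and logged for continuous monitoring.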
The path to taking advantage of the evolution of the edge ecosystem for the ‘Third Wave’ of the internet is clear, and that path is through an Edge 2.0 application platform.
Carla Petersen is the channel manager at Westcon-Comstor