Edge computing has been around for decades, and as technology has developed and connectivity has improved by leaps and bounds, we have seen use cases and applications emerge in almost every aspect of human life.
Gartner's 2020 Hype Cycle places edge computing near the peak of inflated expectations, but I would argue that it is the next important step in the development of technology. With accelerated rollouts of 5G all around the world, we are going to see edge computing really come of age this year.
All transformational technologies arrive in waves, and what we are seeing this year is the convergence of several very big ones. Arguably, the biggest catalyst is the global pandemic. From a technology point of view, it has driven the biggest shift to digitalisation the world has ever seen. Whole industries have been disrupted as businesses race to implement the technology that will enable them to survive.
We will look back on 2020 as a milestone in humanity’s digital transformation. And distributed edge computing is one of the areas where we are seeing accelerated development.
Distributed edge computing, essentially a computing paradigm that brings computation and data storage closer to where they are needed in order to improve response times and save bandwidth, takes many guises, from the Internet of Things (IoT) to IT/OT convergence. In its purest form, though, it is about how technology can be automated to improve human life.
Sensors in the fields that help farmers keep track of the health of their crops, self-driving machines that work 24/7 on mining sites, automated factories that operate round the clock with minimal human supervision, little drones that surgeons can send into your body. The applications are limited only by our imagination.
Implications on IT infrastructure
What does this mean for IT infrastructure? What does the CIO or CTO need to think about in order to build enterprise IT that can take advantage of distributed edge computing? For one, it will create a deluge of data unseen in human history. Where will we store this data? How will we move it? How do we figure out what’s important?
IDC projects that IoT devices alone will generate almost 80ZB (yes, zettabytes) of data by 2025. I believe this is only the tip of the iceberg.
Edge computing will cause a serious rethink about how we architect our data centers. Data centers will need to be physically closer to users and support processing and decision support applications closer to where the data is generated. Furthermore, these data centers need to be designed to accommodate huge amounts of unstructured data at high speed, and will need to be built on cloud-native technologies such as containers and to support a much wider variety of application needs.
The data gravity challenges that edge applications create mean that data must be processed at the edge and across edge sites - it is simply too costly to move it all to a central location. The applications and infrastructure that support this must become more distributed in nature, heralding a shift from all application processing happening in a core cloud to a core cloud working hand-in-hand with a distributed cloud at the edge.
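To make the data gravity point concrete, here is a back-of-envelope calculation. All of the numbers are illustrative assumptions, not figures from IDC or Pure Storage: an edge site generating a few terabytes per day, a typical 1Gbps WAN uplink, and a notional per-gigabyte egress fee.

```python
# Back-of-envelope illustration of data gravity. Every constant below is
# an assumption chosen for the sake of the example.

EDGE_DATA_PER_DAY_TB = 2.0    # assumed raw data generated per site per day
UPLINK_GBPS = 1.0             # assumed WAN uplink from the edge site
EGRESS_COST_PER_GB = 0.05     # assumed per-GB transfer cost (USD)

data_gb = EDGE_DATA_PER_DAY_TB * 1000
# Convert GB to gigabits, divide by link speed, express in hours.
transfer_hours = (data_gb * 8) / (UPLINK_GBPS * 3600)
egress_cost = data_gb * EGRESS_COST_PER_GB

print(f"Backhauling {data_gb:.0f} GB takes ~{transfer_hours:.1f} h "
      f"and ~${egress_cost:.0f}/day in egress fees, per site")
```

Multiply that daily transfer time and cost by thousands of sites and the case for processing data where it is generated makes itself.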
This proliferation of smaller yet more agile data centers highlights the need for speed, flexibility and operational simplicity in each location. Two challenges present themselves, though.
First, these edge sites are small, and there are often thousands of them - so not all the data can exist at every site.
New architectures are being built in which edge applications generate, store, and interact with data at the edge, but the edge sites are tightly coupled with core data centers, to which they sync their data and offload less frequently-used data. This requires infrastructure that supports data access and movement, as well as building and running applications, in a much more distributed manner.
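The edge/core pattern described above can be sketched in a few lines. This is a toy model with hypothetical names, not a real storage API: the edge site keeps a bounded set of recently-used objects, every write syncs to the core, and cold objects are evicted from the edge while remaining available centrally.

```python
from collections import OrderedDict

class EdgeStore:
    """Toy model of an edge site backed by a core data center."""

    def __init__(self, core: dict, capacity: int = 3):
        self.core = core            # stands in for the core data center
        self.capacity = capacity    # bounded local footprint at the edge
        self.local = OrderedDict()  # LRU order: oldest entries first

    def put(self, key, value):
        self.core[key] = value      # every write syncs to the core
        self.local[key] = value
        self.local.move_to_end(key)
        while len(self.local) > self.capacity:
            self.local.popitem(last=False)  # evict cold data locally only

    def get(self, key):
        if key in self.local:       # hot data: served at the edge
            self.local.move_to_end(key)
            return self.local[key]
        value = self.core[key]      # cold data: fetched from the core
        self.put(key, value)        # and cached at the edge again
        return value

core = {}
edge = EdgeStore(core)
for i in range(5):
    edge.put(f"sensor-{i}", i)
# The edge holds only the hottest objects; the core holds everything.
```

Real systems add replication, conflict handling and asynchronous sync, but the shape is the same: a small, fast working set at the edge, with the core as the system of record.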
Second, edge applications are also being built differently - generally as microservice applications, so they can easily start, stop and scale as users come and go from the services delivered at the edge.
Because edge sites are small, every app can’t have its own dedicated servers or storage; they must all share infrastructure. Containerisation helps solve this problem by allowing applications and their storage to be spun up and down easily, lasting only as long as a user or device is accessing a particular cell tower or edge data center, and to run natively on heterogeneous infrastructure shared by many edge applications.
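That session-scoped lifecycle can be illustrated without any real container tooling. The sketch below (hypothetical service names, a set standing in for shared edge infrastructure) just shows the shape of it: resources are claimed when a user arrives and released the moment the session ends.

```python
from contextlib import contextmanager

RUNNING = set()  # services currently holding shared edge resources

@contextmanager
def edge_service(name: str):
    """Toy stand-in for a containerised, session-scoped edge app."""
    RUNNING.add(name)            # spin up when the user or device arrives
    try:
        yield name
    finally:
        RUNNING.discard(name)    # tear down when the session ends

with edge_service("video-transcode:user-42") as svc:
    assert svc in RUNNING        # resources held only during the session
# After the block exits, the shared infrastructure is free again.
```

In production this role is played by a container orchestrator scheduling pods onto shared nodes, but the economics are the same: no app monopolises scarce edge hardware for longer than it is actually serving someone.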
Where is all this leading?
We are at an incredibly exciting point in technological development, and distributed edge computing is taking technology to places previously unreachable, allowing us to change the way we interact with the world. Combined with adjacent technologies such as 5G, AI and augmented reality, we are moving towards a reality in which we can soon let go of our interfaces, keyboards and mice, and simply use voice or gestures to control our devices.
We are due for another quantum shift in the way we interact with technology and quite frankly, we should all be excited.
Matthew Oostveen is Chief Technology Officer & VP, APJ, Pure Storage