When the Pendulum Stops Swinging
Since pretty much the invention of modern computing, the industry has swung between the cost of communication and the cost of computation. On the communication side, one of the Internet’s design guidelines is the end-to-end principle; a good discussion of the topic is laid out here. Simply stated, the end-to-end principle advocates keeping the nodes on the network smart and the network itself dumb. In other words, the network is generic and carries all traffic, while the edges know what to do with that traffic. Compute was too expensive at the time the principle was formulated, so the thinking was that it was easier to provide CPU cycles at the edges.
Fast forward to today, where Apple just announced that it will make its own silicon for its laptops (it already makes its own chips for all its mobile and wearable devices) and where more and more specialized CPUs are coming to market. Pensando is one such example, with an interesting SmartNIC design, funding to the tune of $278m, and top Cisco veterans in its ranks. Ampere is another good example of a company looking to add value at the edges. This all plays to the narrative that the edge of the network is where the processing of data is happening.
But…with 5G (the real one, not that bad marketing joke from AT&T) and edge/cloud computing everywhere, both communication and compute costs are approaching zero. Why? 5G offers speeds up to 100 times faster than today’s LTE (it will take some time to get there), but the premium your carrier can charge you for that speed is not 100x; it is more like 10 Euro extra per month. It is already happening in Korea. Hence, the cost per bit is quickly approaching a fraction of a cent, which means it is effectively “free”.
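The cost-per-bit argument above is easy to check with back-of-the-envelope numbers. The sketch below uses illustrative figures that are assumptions, not data from any carrier: an LTE plan at 40 Euro/month delivering roughly 100 Mbit/s, and a 5G upgrade that adds 10 Euro for roughly 100x the speed.

```python
# Back-of-the-envelope check of the "cost per bit approaches zero" claim.
# All plan prices and speeds below are illustrative assumptions.

lte_price_eur = 40.0      # assumed monthly price of an LTE plan
lte_speed_mbps = 100.0    # assumed LTE speed
g5_price_eur = lte_price_eur + 10.0   # 5G premium: ~10 Euro extra per month
g5_speed_mbps = lte_speed_mbps * 100  # ~100x faster, per the 5G pitch

# Price per unit of deliverable speed (Euro per Mbit/s per month)
lte_cost = lte_price_eur / lte_speed_mbps
g5_cost = g5_price_eur / g5_speed_mbps

print(f"LTE: {lte_cost:.3f} EUR per Mbit/s")
print(f"5G:  {g5_cost:.3f} EUR per Mbit/s")
print(f"Cost per unit of bandwidth drops ~{lte_cost / g5_cost:.0f}x")
```

Under these assumptions, a 25% price bump buys a 100x speed increase, so the effective price per unit of bandwidth falls by roughly 80x. That is the pendulum argument in one division.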
The hyperscalers are looking to provide massive “cheap compute” capabilities in the cloud and in their edge offerings:
- AWS bought Annapurna Labs in 2015. Annapurna Labs gave rise to AWS Nitro at re:Invent 2017, which in essence delivered bare-metal performance together with virtualization. That is a lot of compute that has been available in the market for three years now. Since AWS owns Annapurna Labs and operates at scale, it can drive the cost of its in-house xPUs (DPU? CPU?) way down.
- Microsoft Azure placed its bet on FPGAs (some home-made, and probably some off-the-shelf parts such as Intel’s). It probably makes sense for a company that builds both hardware (Surface, Xbox) and software to bet on FPGAs, since it knows how to create the tooling that lets developers write RTL (register-transfer level) code and to manage the ownership of the code and the hardware.
- Google Anthos is an interesting question. If anyone knows, I’d love to find out.
So what does it all mean? Compute is getting more powerful and plentiful at the edge and in the rectangular pocket Internet device you carry. Communication cost is approaching zero. What can we expect from a world in which the pendulum has stopped swinging?