I work in deep learning research, and I can tell you something you might not expect to hear.
There is a decent possibility that today's backpropagation- and matrix-multiplication-based (linear algebra) algorithms will be replaced by a new paradigm of algorithms that do not benefit from the massively parallel MAC (multiply-accumulate) units of GPU-like hardware. When these algorithms first arrive, there won't be any specialized hardware that runs them efficiently, the way GPUs did for deep learning, so initially general-purpose CPU-based computers will be the only option. Over time, new hardware architectures may be developed to optimize for them, but that is too far out to predict with any confidence.
The key point is that the entire AI hardware industry today stands on the assumption that AI algorithms require a large number of efficient MAC units running in parallel. GPUs happen to be good at this for an entirely different reason (graphics rendering) and got massively lucky that the same capability became the building block of all AI hardware. Once this assumption breaks, the current AI hardware companies will suddenly find themselves on a fast track to obsolescence.
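To make that assumption concrete, here is a minimal sketch (plain NumPy, made-up layer sizes) of why a single dense layer's forward pass is nothing but a pile of independent multiply-accumulate chains, which is exactly the workload that GPU-style hardware with many parallel MAC units is built for:

    import numpy as np

    def dense_forward(x, W, b):
        """Compute y = W @ x + b with explicit MAC loops.

        Each output element is an independent chain of multiply-accumulates,
        so all of them can run in parallel on MAC-heavy hardware.
        """
        out_dim, in_dim = W.shape
        y = np.zeros(out_dim)
        for i in range(out_dim):        # each output neuron is independent
            acc = b[i]
            for j in range(in_dim):     # one multiply-accumulate per weight
                acc += W[i, j] * x[j]
            y[i] = acc
        return y

    # Hypothetical dimensions, purely for illustration
    x = np.random.randn(512)
    W = np.random.randn(256, 512)
    b = np.random.randn(256)
    y = dense_forward(x, W, b)          # 256 * 512 = 131,072 MACs for one small layer

A paradigm whose core operation is not reducible to this pattern would get little benefit from all those parallel MAC units.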
It might take a few years, but then again, these things are notoriously difficult to predict, so I can't say how soon this will happen. What I can tell you is that it might be sooner than most of the big players think, and when it happens it will send a massive shockwave not only through the GPU and accelerator industry but through the stock market in general.