Deep learning has become one of the most relevant trends in modern software technology. From a conceptual standpoint, deep learning is a discipline of machine learning that focuses on modeling data using connected graphs with multiple processing layers. In the last few years, deep learning has become a pivotal technology powering use cases such as image recognition and natural language processing, and even some of the capabilities of self-driving vehicles. The popularity of deep learning has expanded beyond software, and the industry is now starting to talk about the first generation of hardware with deep learning capabilities: the deep learning chip.
A few months ago, at its I/O conference, Google announced the design of an application-specific integrated circuit (ASIC) focused on deep learning capabilities and neural nets. Google called this chip the Tensor Processing Unit (TPU) because it underpins TensorFlow, Google’s open source deep learning framework. While Google’s TPU is not the first industry attempt to create a deep learning chip, it is certainly the most famous one. However, is a deep learning chip a good idea?
The answer is related to the current point in the evolution of deep learning technologies. While transferring deep learning capabilities onto hardware is certainly a great concept, there is some doubt about whether this is the right time in the evolution of deep learning technologies to pursue such an endeavor. Looking beyond the hype, we can identify solid arguments both for and against the creation of a deep learning chip at this moment in the industry.
Deep learning is possible because of hardware
The explosion of deep learning technologies has been possible in part because of the breakthroughs in GPU technologies of the last decade. From an execution standpoint, deep learning is an intrinsically parallel model in which algorithms are based on the parallel execution of concurrent tasks. Before GPUs, it was almost impossible to efficiently execute complex deep learning algorithms using mainstream hardware. GPUs made possible the execution of highly parallelizable tasks and opened the door to the evolution of deep learning.
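To see why deep learning maps so naturally onto parallel hardware, consider the forward pass of a single dense neural network layer. The sketch below (a toy illustration with arbitrary sizes, using NumPy) shows that every neuron's activation is an independent dot product, which is exactly the kind of workload GPUs, and by extension deep learning ASICs, are built to accelerate:

```python
import numpy as np

# Forward pass of one dense layer: y = activation(W @ x + b).
# Illustrative sketch only -- sizes and values are arbitrary.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))  # weights: 512 neurons, 256 inputs each
x = rng.standard_normal(256)         # input vector
b = np.zeros(512)                    # biases

# Each row of W @ x is an independent dot product, so all 512
# neuron activations can be computed concurrently -- the highly
# parallelizable pattern that GPU hardware exploits.
y = np.maximum(0, W @ x + b)         # ReLU activation
print(y.shape)                       # (512,)
```

On a CPU this loop over rows runs largely sequentially; on a GPU the same matrix multiply is dispatched across thousands of cores at once.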
3 arguments in favor of a deep learning chip
As mentioned before, nobody doubts that deep learning chips are going to be a trend in the future, but the question remains whether this is the right time in the development of deep learning technologies to make that transition. Some of the arguments in favor of a deep learning chip include:
- Everything works faster in silicon: A deep learning chip can dramatically speed up the execution of deep learning algorithms and tailor it to specific devices.
- Eventually we want deep learning capabilities in our smartphones: A deep learning chip should be a catalyst for executing deep learning algorithms directly on mobile phones, which opens the door to many interesting applications.
- Powering the next generation of deep learning hardware: We have to assume that in the future there will be entire hardware infrastructures focused on executing deep learning processes. If that’s the case, a deep learning chip can be a key component of those infrastructures.
3 arguments against a deep learning chip
There is a segment of the deep learning community that believes deep learning chips are a little ahead of their time. Some of the most common arguments against their immediate development include:
- Support for unsupervised machine learning: Most deep learning models still rely on supervised techniques that must be trained on labeled data. A deep learning chip would be better suited to unsupervised models that can learn independently, and those are not yet mainstream.
- Algorithms are changing too fast: The rapid pace of evolution for deep learning technologies poses a challenge for deep learning chips as the hardware might not be optimized for future algorithms.
- We don’t know the winning algorithms yet: Complementing the previous point, the deep learning industry is still in a relatively nascent state in which there are no clear winners that could benefit from hardware-level optimizations. From this perspective, creating a deep learning chip feels like optimizing for problems that we don’t yet know need optimizing.
Google, Intel and NVIDIA are leading the charge
Deep learning chips can and should still be considered very experimental. However, companies like Google and NVIDIA are driving a lot of innovation in the space. Google’s TPU is, without a doubt, the most well-known example of a deep learning chip and one that is already powering mission-critical applications at Google such as RankBrain, used to improve the relevancy of search results, and Street View, used to improve the accuracy and quality of maps and navigation. Arguably the most famous implementation of the TPU was its use in DeepMind’s AlphaGo engine, which defeated Go world champion Lee Sedol earlier this year.
One of the unique characteristics of Google’s TPU is that it is optimized for executing deep learning processes based on the TensorFlow stack. Functionally, TensorFlow is a general-purpose framework for executing mathematical computations using data flow graphs. From that perspective, most of the well-known deep learning algorithms can be modeled using TensorFlow and executed on TPU-powered hardware. In other words, the TPU is not constrained to a specific type of algorithm or data structure but can run any type of model built on TensorFlow.
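The data flow graph idea behind TensorFlow can be illustrated with a minimal sketch: operations are nodes, values flow along the edges, and nothing executes until a result is requested. Note this is not the real TensorFlow API, just a toy model of the concept:

```python
# Toy dataflow graph illustrating the idea behind TensorFlow's model.
# NOT the actual TensorFlow API -- a minimal conceptual sketch.
class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self):
        # Recursively evaluate input nodes, then apply this node's op.
        return self.op(*(i.run() for i in self.inputs))

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, a, b)

def mul(a, b):
    return Node(lambda x, y: x * y, a, b)

# Building the graph performs no computation...
graph = add(mul(constant(3), constant(4)), constant(5))

# ...execution happens only when the result is requested. A backend
# (CPU, GPU, or a TPU-style ASIC) is free to schedule the node
# evaluations however it likes.
print(graph.run())  # 17
```

Because the graph is a data structure rather than eagerly executed code, the same program can be handed to different backends, which is what makes a framework-level chip like the TPU possible.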
Google is not the only company truly invested in a deep learning chip. Earlier this year NVIDIA announced its Tesla P100 chip with strong support for deep neural networks. Intel is another company actively innovating in the deep learning chip space with the release of the Xeon Phi processor, codenamed Knights Landing (KNL).
Even though Intel and NVIDIA claim that their chips are optimized for deep learning, the truth is that they are optimized for the highly parallelizable tasks typically required to run deep learning algorithms. From this perspective, Google’s TPU might have an edge, as it leverages a specific framework designed for modeling and running deep learning algorithms. Despite the differences in strategy, NVIDIA’s, Intel’s, and Google’s deep learning chips can all be considered relevant contributions to a space that promises to become one of the most relevant trends in the next decade of hardware innovation.
This article is published as part of the IDG Contributor Network.