Since releasing its full-stack, all-scenario AI strategy at the end of 2018, Huawei has made strong breakthroughs in AI on the back of its computing advantages. In particular, on August 23 this year Huawei released the Ascend 910, billed as the world's most powerful AI processor, which earned Huawei a ticket into a field full of the world's top players. Giants in the industry quickly realized that Huawei is about more than 5G and mobile phones: the company is investing heavily in basic research, which is helping it seize the high ground for the future.

When we look back, however, the recent years of development have not ushered in an AI era with an established ecosystem, which raises worries about the road ahead for AI. Such worries have been further stoked by the shortage of computing power.

In this seemingly approaching AI winter, Huawei did not slow its pace. Within one year, it brought AI processors and computing frameworks to market in quick succession. One cannot help but wonder how Huawei gained its insights, why it is confident, and how it developed its technical knockout.

We may find answers at Huawei Connect 2019, where the latest AI and cloud products and solutions were released to "make computing power more inclusive and algorithms simpler".

An AI Winter Is Coming?

In 1956, John McCarthy, an assistant professor at Dartmouth College, organized a workshop where the definition of artificial intelligence was formally proposed for the first time. In the more than 60 years since, AI has experienced two periods of slow development, the so-called "winters", but its development has never stopped.

At a conference in 2018, Kai-Fu Lee, CEO of Innovation Works, said in his speech that the biggest breakthrough in machine learning had been made nine years earlier and that no major breakthrough had been made since.

Similar voices have been heard more and more often recently.
Over the years, deep learning has been at the forefront of the AI revolution, and many believe it will lead us into a new era. However, the tide seems to keep receding, and questions and uncertainty are emerging about the road ahead for AI.

New Battlefield for Deep Learning

Put simply, AI is implemented by processing reams of data with deep learning to form a model, which is then applied to a specific service scenario. In this regard, deep learning is an important driving force behind AI.

Of course, deep learning is just one way of implementing AI, and it is a subset of machine learning. Deep learning is not independent of other learning methods, since both supervised and unsupervised learning are used to train deep neural networks. It has nevertheless developed rapidly in recent years, and as dedicated methods (such as residual neural networks) have been proposed one after another, more and more people now regard deep learning as an independent method.

Deep learning was originally a learning process that uses deep neural networks to represent features. To improve the training of deep neural networks, neuron connection patterns and activation functions have been adjusted over the years. Many other ideas were put forward in the early years; however, due to insufficient training data and computing power, those ideas failed to be incubated.

With the increasing volume of annotated data and continuous algorithm improvement, deep learning can now be used to perform a wide variety of tasks, making machine-assisted scenarios, such as automated driving, possible.

The rapid evolution of deep learning is attributed to improvements in data, algorithms, and computing power.
The data that can be used for training, especially manually annotated data, is abundantly available, and people can learn more from it. Technological advances have made it possible to train ultra-large models, such as deep neural networks with thousands of layers, a size one could only imagine in the past.

The complexity of ultra-large models increases exponentially. For example, BERT, a popular network in the NLP field, has up to 340 million parameters. Compared with a simple network such as AlexNet, such ultra-large models require roughly 10,000 times more computing power. This is one of the important reasons why OpenAI and other organizations say that AI computing power demand increases by about 10 times every year.

Due to model complexity and supply shortages at some component vendors, computing resources have been insufficient for research institutes, colleges, and universities. People often queue up to submit training jobs and then wait several days for the results. This raises two fundamental questions: which research directions in deep learning do not place high demands on computing power, and how can algorithms' demand for computing power be reduced?

Huawei's AI Breakthrough

Surveys indicate that 20 ZB of new data is generated every year and that AI computing power requirements increase tenfold every year, a pace much faster than the performance doubling described by Moore's Law.
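To make that gap concrete, here is a rough arithmetic sketch (our illustration, not from the article): the tenfold-per-year figure is the demand growth cited above, and a doubling every two years stands in for Moore's Law.

```python
# Illustrative only: compound growth of AI compute demand (~10x/year,
# as cited above) versus Moore's-law-style supply (~2x every 2 years).
def compound_growth(factor: float, period_years: float, years: float) -> float:
    """Total growth after `years`, given `factor`x growth every `period_years`."""
    return factor ** (years / period_years)

demand = compound_growth(10, 1, 5)   # 10x per year, over 5 years
supply = compound_growth(2, 2, 5)    # 2x every 2 years, over 5 years

print(f"demand over 5 years: {demand:,.0f}x")   # 100,000x
print(f"supply over 5 years: {supply:.1f}x")    # ~5.7x
```

Over just five years, the claimed demand outgrows transistor-driven supply by more than four orders of magnitude, which is why the industry is looking beyond raw silicon scaling.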
The industry has explored several ways to tackle this problem:

Reduce the model size by means of pruning, weight sharing, and algorithm optimization to lower the demand for computing power, especially on mobile devices.
Learn from small samples, which also reduces the workload of data annotation.
Design acceleration hardware dedicated to deep learning, balancing chip area and efficiency relative to the CPU and GPU.

The fundamental solution is to improve the computing power supply through hardware and system design. For example, Huawei optimizes its deep learning capabilities with the Ascend AI processors, whose Da Vinci architecture AI cores include a matrix computing unit (Cube Unit), a vector computing unit (Vector Unit), and a scalar computing unit (Scalar Unit), combining the advantages of the GPU, TPU, and CPU. In particular, the efficiency of the matrix multiply-accumulate operations that are common in deep learning is improved severalfold. The Ascend 910 is designed for model training: a single chip can deliver 256 TeraFLOPS of computing power, twice the industry equivalent.

However, chips alone are not enough. They need to work with high-speed, low-latency networks to release their full capacity. System-level optimization of data and model processing together can make possible a computing pinnacle beyond the current level.

Huawei launched its new AI products in this field at Huawei Connect 2019. Let's see how it plays its ace to provide stronger computing power.
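As an illustration of the first remedy listed above, shrinking the model itself, magnitude pruning can be sketched in a few lines. This is a generic sketch under our own naming, not Huawei's or any specific library's implementation.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of `weights` with the smallest `sparsity` fraction
    of entries (by absolute value) zeroed out."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # The k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.random.randn(256, 256)          # a dense layer's weight matrix
pruned = prune_by_magnitude(w, 0.9)    # keep only the largest ~10% of weights
print(f"non-zero fraction: {np.count_nonzero(pruned) / pruned.size:.2f}")
```

A network pruned this way can be stored and executed sparsely; in practice the model is usually fine-tuned afterwards to recover the accuracy lost to pruning.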