How Does Huawei Rise to Core AI Challenges?

BrandPost By IDG Contributing Editor
Oct 07, 2019
IT Leadership

According to an analysis released by OpenAI, the demand for computing power increased by more than 300,000 times in the six years after 2012, growing by a factor of about 10 each year and far exceeding the pace set by Moore’s Law.

As a latecomer to artificial intelligence (AI), Huawei boldly proposed to provide the industry with computing power that is accessible, affordable, and easy to use, to meet the exponentially increasing demand for AI computing. Now, one year after the AI strategy was proposed, has Huawei found a way to address the computing power challenges?

In the late 17th century, the British mining industry, particularly coal mining, had grown to a considerable scale. Human and animal power alone could not pump the water out of the mines, but abundant, cheap coal was available on site as fuel. The pressing need for drainage spurred people to explore extracting water using thermal power. In 1769, James Watt, a Scottish engineer, patented his improved steam engine, which ushered in the First Industrial Revolution in the 18th century.

A hundred years later, Americans invented and popularized the use of electrical power, beginning the Second Industrial Revolution in the 19th century.

1946 witnessed the invention of the world’s first general-purpose electronic computer, which brought humanity into the Third Industrial Revolution in the 20th century. The development of information technology, especially the mobile Internet, brought great changes to people’s lives.

Now in the 21st century, people are embracing the Fourth Industrial Revolution represented by intelligent technologies. New technologies such as AI, the Internet of Things (IoT), 5G, and bioengineering are penetrating into every aspect of human society. AI technologies are driving global macro trends like sustainable social development, new engines of economic growth, smart cities, industry digital transformation, and consumer experience paradigms.

AI is a new general-purpose technology (GPT) spanning natural language processing, image recognition, and video analysis, and it is pushing informatization to a new level. Where information technology improves efficiency, AI reduces production costs. Incorporated into a multitude of industries, AI will impact every person, home, organization, occupation, and industry.


AI in the Fourth Industrial Revolution will lead human beings into a new era.

The changes in the 21st century are akin to the tremendous demand for non-biological power in the mining industry in the 17th century. AI requires exponentially increasing computing power. According to an analysis by OpenAI, the computing power used in the largest AI training runs increased by more than 300,000 times from 2012 to 2018, with a 3.5-month doubling time (by comparison, Moore’s Law had an 18-month doubling period). There is a huge gap between the sharp increase in AI computing requirements and the slow growth of traditional CPU performance (about 10% per year). Chip development has gained great momentum across the globe, aiming to slash computing costs and facilitate AI adoption.
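As a sanity check on these figures, a 3.5-month doubling time does compound to roughly a 300,000-fold increase over about six years. A quick back-of-the-envelope calculation:

```python
import math

# Number of doublings needed to reach a 300,000x increase in compute
doublings = math.log2(300_000)        # about 18.2 doublings

# Elapsed time at the AI pace (3.5-month doubling) vs. the article's
# quoted Moore's Law pace (18-month doubling)
ai_years = doublings * 3.5 / 12       # about 5.3 years
moore_years = doublings * 18 / 12     # about 27.3 years

print(f"{doublings:.1f} doublings: {ai_years:.1f} years at the AI pace, "
      f"{moore_years:.1f} years at the Moore's Law pace")
```

So the same growth that AI compute demand achieved in roughly five to six years would have taken well over two decades at the Moore’s Law rate the article cites.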

The three aspects of AI are computing power (industrial cloud computing and edge computing), data (industrial big data), and algorithms (industrial AI). China is a world leader in certain aspects of data and industry applications because of its huge population base and rapid economic development. However, when it comes to computing power, AI computing resources are scarce and expensive. Industry applications require increasingly powerful AI computing capabilities. Currently, AI development faces three major challenges: affordability, ease of use, and accessibility.

  • Affordability: Training AI models, such as those for facial recognition, comprehensive transportation management, and automated driving, is expensive.
  • Ease of use: There is no unified development framework that spans all application scenarios, from training to inference and from public cloud to private cloud, edge, and device. As a result, the workloads for development, optimization, and deployment are huge.
  • Accessibility: The GPUs widely used for AI computing have long supply lead times and are limited in quantity, making hardware resources difficult to obtain.

Major chip makers such as Nvidia, Google, and Huawei have each launched their own large-scale AI training chips. The Nvidia Tesla V100 GPU delivers up to 125 TFLOPS of deep learning performance with a maximum power consumption of 300 watts. At Google I/O 2018, Google rolled out TPU v3, the third iteration of its Tensor Processing Unit, which delivers up to 90 TFLOPS of deep learning performance. At HUAWEI CONNECT in October 2018, Huawei unveiled its Ascend 910 processor for AI training. According to Huawei, the Ascend 910 offers the greatest computing density available in a single AI chip, delivering 256 TFLOPS of computing power with a maximum power consumption of 310 watts.
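From the figures quoted above, performance per watt can be compared directly for the two chips whose power draw is given (the article does not state TPU v3’s power consumption, so it is omitted). A minimal sketch:

```python
# Rough energy-efficiency comparison from the quoted figures:
# (peak deep learning TFLOPS, maximum power draw in watts)
chips = {
    "Nvidia Tesla V100": (125, 300),
    "Huawei Ascend 910": (256, 310),
}

for name, (tflops, watts) in chips.items():
    print(f"{name}: {tflops / watts:.2f} TFLOPS per watt")
```

By these peak numbers, the Ascend 910 works out to roughly 0.83 TFLOPS per watt versus roughly 0.42 for the V100, though peak TFLOPS is only one dimension of real-world training performance.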


Performance comparison between mainstream AI training chips

The scarcity and high cost of computing power restrict the development of AI. Huawei believes that the keys to development of the AI industry are accessibility, affordability, and ease of use.

Huawei has been engaged in the development and deployment of ICT infrastructure for decades, and has an in-depth understanding of the application scenarios of operators and enterprise users. This enables Huawei to enter the AI field with a full-stack, all-scenario AI portfolio aimed at providing inclusive and powerful computing power.

Huawei Ascend AI processors adopt the Da Vinci 3D Cube architecture, designed from the ground up for tensor computing. This all-new, AI-oriented design injects powerful computing capability into the Ascend processors, which boast high computing power, energy efficiency, and scalability.

Based on the unified Da Vinci architecture, Huawei provides a variety of Ascend models, including Ascend-Nano, Ascend-Tiny, Ascend-Lite, Ascend-Mini, and Ascend-Max. From IP modules of dozens of milliwatts to chips of hundreds of watts, Huawei products cover all deployment scenarios across device, edge, and cloud. “The Da Vinci architecture is highly elastic. Its application ranges from Nano to Max and from wearables to cloud, and covers all scenarios. The MindSpore framework we launched will work with the Da Vinci architecture to meet the requirements of all scenarios. In other words, training and inference can be implemented across device, edge, and cloud, and collaboration is supported. This is impossible on any other computing framework,” said Eric Xu, Huawei’s Rotating Chairman, in a media interview.

The time required for AI training is closely related to model complexity, dataset size, and hardware resource configuration. Hardware resources are especially vital in large-scale training, such as for astronomical research, automated driving, weather forecasting, and oil exploration. The rapid development of AI can be attributed to improvements in hardware and cloud computing technologies and, more importantly, to the massive amounts of data generated by the digital transformation of various industries, which can be used for model training. A development platform must manage tens of millions of models, datasets, and service objects throughout the life cycle, from raw data labeling to data training, algorithms, models, and inference services.

With in-depth development of full-stack technologies, AI has become a basic service of the cloud for training and inference in the cloud. AI deployed in the cloud supports online and batch inference to handle large-scale, concurrent tasks. Cloud, AI, and IoT can work together to create a blue ocean market. In scenarios such as Smart Home, IoT, and Internet of Vehicles (IoV), a holistic cloud+AI+IoT solution can be formulated to tap into new AI markets.

Huawei’s AI strategy includes investment in research to foster basic machine learning capabilities that are data-efficient (requiring less data), energy-efficient (lower computing power and energy consumption), secure, trusted, automatic, and autonomous in fields such as vision computing, natural language processing, inference, and decision-making. The strategy also covers independent and integrated full-stack solutions for all scenarios (device, edge, and cloud) to offer abundant, cost-effective computing resources as well as an easy-to-use, efficient, end-to-end AI platform.

The Huawei Global Industry Vision (GIV) predicts that global data volume will increase from 32.5 ZB in 2018 to 180 ZB in 2025. Enterprise demand for AI computing power doubles every three months, and AI adoption will rise to 80% by 2025. This may be the best of times for Huawei, which has made significant breakthroughs in the computing power field.
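The quoted GIV figures imply a compound annual growth rate in global data volume that can be worked out directly. A small sketch (the resulting rate is an implication of the article’s numbers, not a figure GIV itself states):

```python
# Implied compound annual growth rate of global data volume, 2018-2025,
# from the GIV projection quoted above
start_zb, end_zb, years = 32.5, 180, 7

cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"{cagr:.1%} per year")   # about 27.7% per year
```

In other words, the projection amounts to global data volume growing by a bit under 30% every year for seven years straight.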