Several years after the artificial intelligence (AI) industry took off again, Huawei set its sights on AI computing. At HUAWEI CONNECT 2018, Huawei's annual flagship event, the company released its AI strategy and a full-stack, all-scenario AI solution. The solution is hierarchical, spanning terminals to clouds and AI chips to deep learning training.

By full-stack, Huawei means solutions covering chips, chip enablement, training and inference frameworks, and application enablement.

First, the Ascend AI processors. They form the core chip layer of the full stack. Huawei provides a rich series of Ascend AI processors and IP cores based on a unified, scalable architecture. The Ascend series includes Max, Mini, Lite, Tiny, and Nano models.

Second, the Compute Architecture for Neural Networks (CANN). CANN provides chip operator libraries and highly automated operator development tools to improve development efficiency.

Third, MindSpore. This all-scenario AI computing framework supports independent and collaborative training and inference across the device, edge, and cloud.

Fourth, ModelArts. It provides end-to-end services, layered APIs, and pre-integrated solutions for application enablement.

Figure 1 Huawei's full-stack, all-scenario AI solution

By all-scenario, Huawei means the different deployment scenarios for AI, including public clouds, private clouds, edge computing in all forms, industrial IoT devices, and consumer devices.

On October 10, 2018, Eric Xu, Huawei's Deputy Chairman of the Board and Rotating Chairman, announced the Ascend 310 AI processor at HUAWEI CONNECT 2018. This processor is an AI system on chip (SoC) with ultra-high energy efficiency. Its half-precision (FP16) computing power reaches 8 TeraFLOPS, and its integer-precision (INT8) computing power reaches 16 TeraOPS.
Ascend 310 supports 16-channel full HD video decoding (H.264/265), with a maximum power consumption of only 8 W.

Also at the event, Zheng Yelai, president of Huawei's Cloud BU, released the ModelArts AI development platform. Positioned for inclusive AI, ModelArts enables faster development than comparable platforms in the industry. It covers the entire AI development pipeline, including data acquisition, data labeling and preparation, and model training, tuning, and deployment, providing one-stop services for AI application development.

In real-world use, ModelArts processes over 4,000 parallel training jobs daily, totaling 32,000 hours. Of these, about 85% are vision jobs, 10% speech jobs, and 5% machine learning jobs. ModelArts has so far attracted over 30,000 developers.

On April 10, 2019, Michael Ma, president of Huawei's Intelligent Computing Business Department, announced the official launch of the Atlas AI computing platform, opening a new chapter in the commercialization of Atlas. Built on the Huawei Ascend series AI processors, the Atlas AI computing platform offers various product form factors, including modules, cards, edge stations, and appliances. The Atlas portfolio includes the Atlas 200 AI accelerator module, Atlas 200 DK AI developer kit, Atlas 300 AI accelerator card, Atlas 500 AI edge station, and Atlas 800 AI servers.

The Atlas family targets AI infrastructure in all scenarios, including the device, edge, and cloud, and can be widely used in the smart city, carrier, finance, Internet, and electric power industries. As an integral part of Huawei's full-stack, all-scenario AI solution, the Atlas AI computing platform unlocks supreme computing power to help customers embrace an AI-fueled future.

On August 23, 2019, Eric Xu released the industry's most powerful AI processor, the Ascend 910, together with the all-scenario AI computing framework MindSpore, putting the finishing touch on Huawei's full-stack, all-scenario AI solution.
This launch event signifies a new stage in Huawei's AI strategy.

The Ascend 910 AI processor boasts the industry's highest computing density per chip. It delivers half-precision (FP16) computing power of up to 256 TeraFLOPS and integer-precision (INT8) computing power of up to 512 TeraOPS, and supports 128-channel full HD video decoding (H.264/265), all within a maximum power consumption of 310 W.

MindSpore is an elastically scalable framework that adapts to different running environments and can be independently deployed in all scenarios. Instead of sharing data itself, MindSpore shares collaboratively processed gradient and model information that contains no private data, enabling cross-scenario collaboration while protecting user privacy.

Beyond privacy protection, MindSpore also builds model protection into the AI framework to ensure model security and reliability. MindSpore natively adapts to device, edge, and cloud scenarios and supports on-demand collaboration, expressing AI algorithms as code to enable friendly development with much shorter model development time. Take a typical natural language processing (NLP) network as an example: compared with other frameworks, MindSpore reduces the core code volume by 20%, greatly lowering the development threshold and improving overall efficiency by more than 50%.

Leveraging the MindSpore framework innovation and the Ascend AI processors, Huawei delivers runtime-efficient computing that helps industries tackle the most complex AI computing problems and address the challenges of diversified computing power. Besides the Ascend AI processors, MindSpore also supports other processors such as GPUs and CPUs.

Since the release of Huawei's AI strategy, the Atlas and Mobile Data Center (MDC) products based on the Ascend 310 AI processor have seen full commercialization.
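The privacy-preserving collaboration described above, in which participants exchange locally computed gradients rather than raw data, can be illustrated with a minimal sketch. This is a generic federated-averaging toy in plain NumPy, not MindSpore's actual API; the linear model, participant data, and learning rate are all hypothetical.

```python
import numpy as np

# Toy illustration of gradient sharing: each participant computes a gradient
# on its own private data; a coordinator averages the gradients and updates
# a shared model. Raw data never leaves a participant.

def local_gradient(weights, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ weights."""
    preds = X @ weights
    return 2 * X.T @ (preds - y) / len(y)

def federated_step(weights, participants, lr=0.1):
    """One collaborative update: average the locally computed gradients."""
    grads = [local_gradient(weights, X, y) for X, y in participants]
    return weights - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Two participants, each holding private data that is never pooled.
participants = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    participants.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = federated_step(w, participants)
# w now approximates true_w, learned without centralizing any data.
```

Real federated systems add further safeguards (secure aggregation, noise for differential privacy), but the data-stays-local principle is the same.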
On the MDC side, Huawei collaborates deeply with mainstream carmakers inside and outside China in scenarios such as campus buses, new energy vehicles, and autonomous driving. Huawei has also developed dozens of partners for the Atlas series cards and servers and implemented AI solutions in many industries, such as smart transportation and smart electricity.

HUAWEI CLOUD also provides services based on the Ascend 310. More than 50 APIs of the HUAWEI CLOUD image analysis, optical character recognition (OCR), and video analysis services run on the Ascend 310, averaging over 100 million calls per day and growing rapidly. Average daily calls are estimated to exceed 300 million by the end of this year.

On September 18, 2019, Ken Hu, Huawei's Deputy Chairman of the Board and Rotating Chairman, announced the Atlas 900 AI cluster, which delivers ultimate computing power for enterprise AI services. The Atlas 900 AI cluster consists of thousands of Ascend 910 AI processors and is currently the industry's fastest AI training cluster. Atlas 900 delivers 256–1024 PFLOPS at FP16, equivalent to the computing power of 500,000 PCs. It provides powerful computing capabilities for neural network training on large-scale datasets and can be widely used in scientific research and business innovation, enabling researchers to quickly train AI models on data such as images, videos, and speech. Atlas 900 enables faster astronomical research, weather forecasting, and oil exploration, as well as faster time-to-market for autonomous driving.

Figure 2 Huawei Atlas 900 AI cluster

Huawei aims to enable inclusive AI that is affordable, effective, and reliable, contributing to data-efficient, energy-efficient, secure, trusted, and autonomous machine learning capabilities for computer vision, NLP, decision making, and inference.
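The cluster-level range quoted above follows directly from the per-chip figure: each Ascend 910 delivers 256 TeraFLOPS at FP16. A back-of-envelope check, assuming decimal prefixes (1 PFLOPS = 1000 TFLOPS); the chip counts of 1,000 and 4,000 are illustrative, since the exact cluster configuration is not stated here:

```python
# Back-of-envelope check of the Atlas 900 FP16 figures quoted above.
TFLOPS_PER_CHIP = 256  # per-chip FP16 throughput of one Ascend 910

def cluster_pflops(num_chips):
    """Aggregate FP16 throughput in PFLOPS (1 PFLOPS = 1000 TFLOPS)."""
    return num_chips * TFLOPS_PER_CHIP / 1000

# Hypothetical chip counts spanning the quoted 256-1024 PFLOPS range:
low = cluster_pflops(1000)   # lower bound of the quoted range
high = cluster_pflops(4000)  # upper bound of the quoted range
```

So "thousands of processors" at 256 TFLOPS each is consistent with the quoted 256–1024 PFLOPS aggregate.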
Huawei has been stepping up its efforts to build full-stack, all-scenario AI solutions that can run independently or collaborate efficiently across cloud, edge, and device, providing abundant, economical computing resources on an easy-to-use, efficient, end-to-end (E2E) AI platform.

Technologically, Huawei has been at the cutting edge of AI computing and is well poised for the new AI era.