Huawei has taken a huge step toward realizing its vision of inclusive AI. The company has officially made MindSpore open source, giving the open source community an invaluable tool for deep learning. The all-scenario MindSpore offers cloud-edge-device collaboration, lower entry barriers for developers, and efficient distributed parallel computing. So how exactly will MindSpore change the AI world?

Huawei began with hardware development and later expanded into AI. One thing has remained constant: it has never been afraid of a challenge in pursuing innovation.

It first built chips. Over more than a decade, Huawei poured CNY400 billion into R&D and rolled out chips such as Tiangang, Kirin, and Ascend. Huawei is now eyeing AI frameworks. It aims to lower the entry barriers to AI development, help developers port code more easily, and foster on-demand collaboration across all scenarios. Huawei is driven by a single goal: to build a fully connected, intelligent world.

Huawei's Rotating Chairman Eric Xu debuted MindSpore at HUAWEI CONNECT 2018. It was Huawei's message to the industry: a major new player has entered the arena of AI frameworks. Now the lead scientist behind MindSpore, Dr. Chen Lei, has announced that China's first all-scenario AI computing framework is officially going open source. MindSpore will be open for alpha this April for real-world use by developers.

MindSpore: Huawei's compass in uncharted waters

"A new horizon in the computing industry worth $2 trillion is waiting," said Huawei Vice Chairman Ken Hu at HUAWEI CONNECT 2019. Huawei has already built the hardware foundation for exploring this new opportunity. The next step is building the software. However, it will not be easy.

The first obstacle involves international roadblocks to technical and trade development.
Huawei must therefore have an independent, all-scenario computing framework to clear these obstacles, which threaten to stop the project before it can even begin.

The second obstacle concerns the current state of deep learning frameworks. Existing open source deep learning frameworks can deter developers from actively contributing to or adopting them because of high entry barriers, high operating costs, and difficult deployment.

Despite these difficulties, MindSpore's original mission remains the same: to be an open source framework for deep learning training and inference in all scenarios.

MindSpore is customized for computer vision (CV), natural language processing (NLP), and similar fields of AI. It gives data scientists and algorithm engineers an accessible development experience with high run-time efficiency, and it provides native support for Ascend AI processors and software/hardware co-optimization.

MindSpore will build an open, global AI community and drive a thriving ecosystem for AI software/hardware co-optimization.

Powerful core features for practical development

MindSpore only requires developers to master the basics of tensors, operators, cells, models, and Python programming; there is no steep learning curve over the underlying complexities.

Dr. Chen Lei introduced the key features in MindSpore's roadmap and emphasized Huawei's commitment to continuously incorporating requirements from MindSpore community participants to create an ever-improving framework.

Let's look at the core features of MindSpore.

Automatic Differentiation

Mainstream deep learning frameworks currently use three automatic differentiation technologies. TensorFlow converts a model into static data flow graphs during compilation and performs automatic differentiation on the static graphs. PyTorch dynamically generates data flow graphs by overloading operators and performs automatic differentiation on the dynamic graphs.
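To make the operator-overloading approach concrete, here is a toy forward-mode autodiff in pure Python. This is an illustrative sketch only, not MindSpore or PyTorch code; the Dual class and all names are invented for this example. It shows why overloading-based autodiff copes naturally with Python control flow: the derivative is computed as the program runs.

```python
# Toy forward-mode autodiff via operator overloading (the general idea behind
# dynamic, PyTorch-style autodiff). Illustrative sketch only; the Dual class
# is invented for this example and is not any framework's real API.

class Dual:
    """A number carrying its value and its derivative w.r.t. the input."""
    def __init__(self, value, grad=0.0):
        self.value = value
        self.grad = grad

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.grad + other.grad)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.grad * other.value + self.value * other.grad)

def f(x):
    # Ordinary Python control flow: keep multiplying by x until y exceeds 10.
    y = x
    while y.value < 10:
        y = y * x
    return y

x = Dual(2.0, grad=1.0)   # seed dx/dx = 1
y = f(x)                  # 2 -> 4 -> 8 -> 16, i.e. y = x**4 for this input
print(y.value, y.grad)    # 16.0 and d(x**4)/dx = 4 * 2**3 = 32.0
```

Because the derivative is tracked while the code executes, the while loop is simply run step by step. A source-transformation approach, by contrast, differentiates the program's compiled representation ahead of execution, which is what allows it to keep whole-program optimization while still supporting such control flow.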
MindSpore performs automatic differentiation based on source code transformation. It applies automatic differential conversion to the intermediate representation (the form a program takes during compilation) using a just-in-time (JIT) compiler. MindSpore supports complex control flow structures, such as while/if/for, and flexible functional programming constructs, such as higher-order functions and closures.

Automatic Parallelism

Automatic parallelism lets serial algorithm code run as distributed parallel training while maintaining high performance. The paradigms of distributed parallel training include data parallelism, model parallelism, and hybrid parallelism. MindSpore uses a new type of distributed parallel training that integrates all three paradigms.

Streamlined Data Processing

MindSpore uses MindData for pipelined data processing during training, covering data loading, data augmentation, and importing data into training. It provides easy-to-use programming interfaces and rich data processing for all scenarios, including CV and NLP. MindData offers the c_transforms and py_transforms modules for data augmentation, and customized operators are also supported.

Efficient Engine for Graph Execution

The graph processing operations of MindSpore divide vertically into three layers: execution control, service functions, and data management. Horizontally, graph operations divide into six steps: preparation, splitting, optimization, compilation, loading, and execution. The MindSpore graph engine converts the graph from the front end so that it runs efficiently on Ascend hardware.

Deeply Optimized Model Zoo

MindSpore will provide more than 30 deeply optimized models in Model Zoo by Q4 2020 for direct use by developers.
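As a rough intuition for the data parallelism paradigm mentioned under Automatic Parallelism above, here is a pure-Python sketch, with invented names and no real framework API: each "worker" computes gradients on its own shard of the data, the gradients are averaged (an all-reduce in a real distributed setup), and the shared weight is updated once.

```python
# Minimal sketch of data parallelism: replicas compute gradients on their own
# data shards, then the gradients are averaged and applied to shared weights.
# Pure-Python illustration with invented names; not MindSpore's API.

def grad_mse(w, xs, ys):
    """Gradient of mean squared error for the model y = w * x on one shard."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def data_parallel_step(w, shards, lr=0.01):
    # Each "worker" handles one shard; in a real framework these run on
    # separate devices and the averaging is an all-reduce operation.
    grads = [grad_mse(w, xs, ys) for xs, ys in shards]
    avg_grad = sum(grads) / len(grads)
    return w - lr * avg_grad

# Two workers, each holding a shard of data generated by the rule y = 3 * x.
shards = [([1.0, 2.0], [3.0, 6.0]), ([3.0, 4.0], [9.0, 12.0])]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # converges toward the true weight 3.0
```

Model parallelism would instead split the weights themselves across devices, and hybrid parallelism mixes the two; the point of MindSpore's automatic parallelism is that the developer writes only the serial version and the framework chooses how to split the work.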
MindSpore also provides a visualization tool for single training runs and for tracing the lineage of a model across multiple training runs, so problems in the training process can be detected quickly.

Advanced design lowers entry barriers in all scenarios

MindSpore draws on Huawei's expertise and extensive research into industry pain points. Huawei has observed that there is a huge gap between AI research and application. MindSpore bridges this gap with three design concepts.

New Programming Paradigm

The new AI programming paradigm uses native mathematical expression to foster AI innovation and exploration. Developers can automatically search for parallel policies with a single line of code and implement parallel processing regardless of the underlying architecture.

New Execution Paradigm

MindSpore fully utilizes hardware computing power through whole-graph offloading to devices and deep graph optimization. MindSpore shortens ResNet-50 image classification training time by 23% and BERT Chinese pre-training time by 62%.

Collaboration On Demand in All Scenarios

MindSpore deploys one framework across device, edge, and cloud. This develop-once, deploy-everywhere strategy boosts development and deployment efficiency.

Developers, universities, and open source communities are vital to a thriving ecosystem. MindSpore will foster global collaboration with all three groups through customized support programs for each.

Developers:

Free online resources
At least 10 technical salons annually
MindSpore developer contest

Universities:

Financial incentives for special innovation
MindSpore teaching support

Open source community:

Top experts in technical committees
Committers for core projects
Inclusive community for enterprises and organizations

To learn more about MindSpore, please click here.