At Huawei Developer Conference 2021 (Cloud) in China, the company released MindSpore 1.2, the first Chinese-developed AI computing framework that supports training large models with hundreds of billions of parameters.
The latest version, 1.2, brings three major innovations to the AI framework field: automatic parallelism, full-scenario AI, and an interpretable recommendation model, making AI development more accessible to developers.
MindSpore enables rapid application, efficient operation, and effective collaboration of hardware across cloud, edge, and device scenarios. With this full-scenario AI capability, the wrist-raise recognition rate of the Huawei Watch GT increased by 80%, with latency under 5 ms and a model smaller than 1 KB, greatly improving the user experience.
- In the cloud: adaptive model partitioning and distributed parallel scheduling within the service support inference deployment of ultra-large models across multiple accelerator cards, improving inference performance by 30% over the current industry-leading serving mode;
- On the edge side: adaptive model compression shrinks CV (computer vision) models by two thirds and cuts inference time by 50%, with a measured accuracy loss on the user side below 1%, effectively easing the compute bottleneck at the edge;
- On the device side: the model is the code. The model is compiled into code to achieve a very small ROM (read-only memory) footprint, while operator data rearrangement raises the device-side cache hit rate, reducing inference latency and addressing the device-type and memory constraints of deploying on ultra-lightweight IoT devices.
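To make the "model is the code" idea on the device side concrete, here is a minimal Python sketch, not MindSpore Lite's actual code generator: a tiny linear model's weights are baked directly into generated source, so inference needs no graph interpreter or separate weight file. The `compile_linear_model` helper is hypothetical.

```python
# Conceptual sketch (not MindSpore Lite's actual codegen): weights and
# operations are emitted as plain source, so the deployed artifact is
# just code, with no runtime graph interpreter or weight file.

def compile_linear_model(weights, bias):
    """Emit source for a fixed linear model y = w.x + b as plain code."""
    terms = " + ".join(f"{w!r} * x[{i}]" for i, w in enumerate(weights))
    src = f"def predict(x):\n    return {terms} + {bias!r}\n"
    namespace = {}
    exec(src, namespace)  # a real toolchain would emit C and compile it
    return src, namespace["predict"]

src, predict = compile_linear_model([2.0, -1.0], 0.5)
print(predict([3.0, 1.0]))  # 2*3 - 1*1 + 0.5 = 5.5
```

The generated function contains only constants and arithmetic, which is what makes a tiny ROM footprint and cache-friendly execution plausible on constrained devices.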
Fully automatic parallelism
MindSpore is the industry’s first fully automatic parallel framework based on automatic awareness of network topology and cluster resources. Building on this capability, Huawei has developed the industry’s first Chinese pre-training model with 200 billion parameters.
In static graph mode, MindSpore integrates three parallel techniques: pipeline parallelism, model parallelism, and data parallelism. Developers only need to write single-machine algorithm code and add a small number of parallel annotations; the framework then partitions the training process automatically. Parallel-performance tuning time drops from months to hours, and training performance improves by 40% over industry benchmarks.
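The data-parallel piece of this can be sketched in plain Python (this is an illustration of the concept, not MindSpore's API): a single-machine gradient step is split across simulated devices, and the per-device gradients are averaged, mimicking an all-reduce.

```python
# Conceptual sketch of data parallelism: each "device" computes the
# gradient on its shard of the batch, then gradients are averaged
# (the all-reduce step) before the shared weight is updated.

def grad_mse(w, xs, ys):
    """d/dw of mean((w*x - y)^2) over one shard of samples."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def data_parallel_step(w, xs, ys, num_devices, lr=0.1):
    shard = len(xs) // num_devices
    grads = [grad_mse(w, xs[i * shard:(i + 1) * shard],
                      ys[i * shard:(i + 1) * shard])
             for i in range(num_devices)]      # one gradient per "device"
    g = sum(grads) / num_devices               # all-reduce: average
    return w - lr * g

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # data generated by y = 2x
w = 0.0
for _ in range(50):
    w = data_parallel_step(w, xs, ys, num_devices=2)
print(round(w, 3))  # converges to 2.0
```

In a real framework the shards live on separate accelerators and the averaging is a collective communication operation; the annotations MindSpore asks for tell it how to do this partitioning automatically.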
In dynamic graph mode, MindSpore’s functional differentiation design extends easily from first-order to higher-order differentiation, and whole-graph optimization greatly improves dynamic graph performance. Combined with innovative communication-operator fusion and a multi-stream parallel mechanism, MindSpore’s dynamic graph performance is 60% higher than that of other AI frameworks.
According to Huawei, in a typical neural network for natural language processing (NLP), MindSpore has 20% fewer lines of core code than leading frameworks on the market, helping developers raise their efficiency by at least 50%.
Additionally, MindSpore helps protect user privacy because it handles only gradients and model information that have already been processed, not the raw data itself, so private user data remains protected even in cross-scenario environments. MindSpore also has built-in model protection technology to ensure that models are secure and trustworthy.
The MindSpore AI framework is adaptable to a range of scenarios and provides on-demand collaboration between them. Its “AI Algorithm As Code” design concept allows developers to build advanced AI applications with ease and train their models more quickly.
MindSpore includes the industry’s first semantic-level interpretable recommendation model, TB-Net. Using its original bidirectional propagation technique over a knowledge graph, TB-Net accurately identifies, among the graph’s massive set of relation paths, the core features and key paths that influence user behavior, providing personalized recommendations with semantic-level explanations. Its interpretability evaluation score is 63% higher than that of comparable industry models.
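The path-based explanation idea can be illustrated with a toy sketch (this is not the TB-Net model itself, just the underlying intuition): a recommendation is explained by the relation paths in a knowledge graph that connect an item the user liked to the candidate item. The graph and `find_paths` helper below are hypothetical.

```python
# Hedged sketch of path-based explanation: why recommend MovieB to a
# user who liked MovieA? Because a knowledge-graph path connects them.

from collections import deque

# Toy knowledge graph as (head, relation, tail) triples.
TRIPLES = [
    ("MovieA", "directed_by", "DirectorX"),
    ("MovieB", "directed_by", "DirectorX"),
    ("MovieA", "stars", "ActorY"),
    ("MovieC", "stars", "ActorY"),
]

def find_paths(src, dst, max_hops=2):
    """Return relation paths from src to dst up to max_hops, via BFS."""
    adj = {}
    for h, r, t in TRIPLES:          # treat edges as undirected for search
        adj.setdefault(h, []).append((r, t))
        adj.setdefault(t, []).append((r, h))
    paths, queue = [], deque([(src, [])])
    while queue:
        node, path = queue.popleft()
        if node == dst and path:
            paths.append(path)
            continue
        if len(path) < max_hops:
            for r, nxt in adj.get(node, []):
                queue.append((nxt, path + [(r, nxt)]))
    return paths

print(find_paths("MovieA", "MovieB"))
# one path: MovieA -directed_by-> DirectorX -directed_by-> MovieB
```

TB-Net, as described, additionally learns which of these paths actually drive user behavior; the sketch only enumerates candidates, which is the raw material a semantic-level explanation is built from.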
Since it was open-sourced in March 2020, the MindSpore community has grown to more than 170,000 developers and over 240,000 software downloads, with deployments in more than 10 industries. On Gitee, MindSpore ranks first in code activity, influence, community activity, team building, and trending status, making it currently the fastest-growing AI open-source community.
(Source – Huawei)