Customers around the world rely on Microsoft Azure to drive innovations related to our environment, public health, energy sustainability, weather modeling, economic growth, and more. Finding solutions to these important challenges requires huge amounts of focused computing power. Customers increasingly find that the best way to access such high-performance computing (HPC) is through the agility, scale, security, and leading-edge performance of Azure’s purpose-built HPC and AI cloud services.
Azure’s market-leading vision for HPC and AI is based on a core of genuine and recognized HPC expertise, using proven HPC technology and design principles, enhanced with the best features of the cloud. The result is a capability that delivers performance, scale, and value unlike any other cloud. This means applications scaling 12 times higher than on other public clouds. It means higher application performance per node. It means powering AI workloads for one customer with a supercomputer fit to be among the top five in the world. It also means delivering massive compute power into the hands of medical researchers over a weekend to prove out life-saving innovations in the fight against COVID-19.
This year during NVIDIA GTC 21, we’re spotlighting some of the most transformational applications powered by NVIDIA accelerated computing that highlight our commitment to edge, on-premises, and cloud computing. Registration is free, so sign up to learn how Microsoft is powering transformation.
AI and supercomputing scale
The AI and machine learning space continues to be one of the most inspiring areas of technical evolution since the internet. The trend toward using massive AI models to power a large number of tasks is changing how AI is built. Training models at this scale requires large clusters of hundreds of machines with specialized AI accelerators interconnected by high-bandwidth networks inside and across the machines. We have been building such clusters in Azure to enable new natural language generation and understanding capabilities across Microsoft products.
The work we have done on large-scale compute clusters, on leading network design, and on the software stack that manages them, including Azure Machine Learning, ONNX Runtime, and other Azure AI services, is directly aligned with our AI at Scale strategy.
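To make the "large clusters of hundreds of machines" idea concrete, here is a minimal conceptual sketch of data-parallel training, the basic pattern such clusters use: each worker computes a gradient on its own shard of the data, and the gradients are combined (an all-reduce, simplified here to a mean) before every weight update. This is an illustrative sketch, not Azure or DeepSpeed code; the linear model and function names are assumptions for the example.

```python
# Conceptual sketch of data-parallel training (not Azure-specific):
# each worker computes a gradient on its own data shard, and the
# gradients are averaged (a simplified all-reduce) before the update.
import numpy as np

def worker_gradient(weights, x, y):
    # Gradient of mean squared error for a linear model y ~ x @ weights.
    pred = x @ weights
    return 2 * x.T @ (pred - y) / len(y)

def data_parallel_step(weights, shards, lr=0.1):
    # Each (x, y) pair in `shards` stands in for one worker's local data.
    grads = [worker_gradient(weights, x, y) for x, y in shards]
    return weights - lr * np.mean(grads, axis=0)
```

In a real cluster the `np.mean` over gradients becomes a network all-reduce across hundreds of accelerators, which is why the high-bandwidth interconnects mentioned above matter so much.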
Machine learning at the edge
Microsoft provides various solutions in the intelligent edge portfolio to ensure that machine learning happens not only in the cloud but also at the edge. These solutions include Azure Stack Hub, Azure Stack Edge, and IoT Edge.
Whether you are capturing sensor data and running inference at the edge, or training models in Azure and deploying the trained models to the edge for enhanced inferencing operations, Microsoft can support your needs however and wherever they arise.
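The train-in-the-cloud, infer-at-the-edge pattern described above can be sketched in a few lines. This is a generic illustration, not a specific Azure SDK call: the artifact format, file layout, and function names are assumptions for the example.

```python
# Conceptual sketch (not tied to a specific Azure SDK): train in the
# cloud, export the model as a portable artifact, score at the edge.
import json

def export_model(weights, path):
    # "Cloud" side: persist trained parameters as a portable artifact.
    with open(path, "w") as f:
        json.dump({"weights": weights}, f)

def edge_predict(path, features):
    # "Edge" side: load the artifact, then score locally with no
    # round-trip to the cloud.
    with open(path) as f:
        w = json.load(f)["weights"]
    return sum(wi * xi for wi, xi in zip(w, features))
```

In practice the artifact would be an ONNX or similar model file distributed through a service such as IoT Edge, but the shape of the workflow is the same: training happens centrally, inference happens wherever the data is.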
Visualization and GPU workstations
Azure enables a wide range of visualization workloads, which are critical for desktop virtualization as well as professional graphics such as computer-aided design, content creation, and interactive rendering. Visualization workloads on Azure are powered by NVIDIA’s world-class graphics processing units (GPUs) and RTX technology, the world’s preeminent visual computing platform.
Recapping 2021 moments with Azure and NVIDIA technologies
Wildlife Protection Services
From deforestation to wildfire management to protecting endangered animals, studying wildlife populations is essential to a sustainable future. Learn how Wildlife Protection Services works with Microsoft AI for Earth to provide the monitoring technology that conservation groups need to keep watch over wild places and protect wildlife, using an infrastructure of Azure High Performance Computing virtual machines with NVIDIA V100 GPUs.
Van Gogh Museum
With tens of thousands of Chinese visitors each year, the Van Gogh Museum wanted to create something unique for this audience. Enter a WeChat app that could transform portrait photos into digital paintings reminiscent of Van Gogh’s art. Users, able to see how the artist would have painted them, would ideally be drawn closer to his art through this unique, personal experience. Read about how the Van Gogh Museum accomplished this through the use of Azure High Performance Computing, Azure Machine Learning, and more.
FLSmidth
FLSmidth has an ambitious goal of zero emissions by 2030, but they were hampered by latency and performance limitations of their on-premises infrastructure. By moving to Microsoft Azure in collaboration with partner Ubercloud, FLSmidth found the perfect vehicle for optimizing the engineering simulation platforms that depend on high-performance computing. The switch has removed all latency, democratized their platform, and produced results 10 times faster than their previous infrastructure.
Previous 2021 Azure HPC and AI product launches
Azure announces general availability of scale-out NVIDIA A100 GPU clusters: the fastest public cloud supercomputer, the Azure ND A100 v4 virtual machine, powered by NVIDIA A100 Tensor Core GPUs, is designed to let our most demanding customers scale up and scale out without slowing down.
In the June 2021 TOP500 list, Microsoft Azure took public cloud services to a new level, demonstrating work on systems that took four consecutive spots, from No. 26 to No. 29, on the TOP500 list. They are part of a global AI supercomputer called the ND A100 v4 cluster, available on demand in four global regions today. These rankings were achieved on a fraction of our overall cluster size. Each of the systems delivered 16.59 petaflops on the HPL benchmark, also known as Linpack, a traditional measure of HPC performance on 64-bit floating point math that is the basis for the TOP500 rankings.
Azure announces the DeepSpeed- and Megatron-powered Megatron-Turing Natural Language Generation model (MT-NLG), the largest and most powerful monolithic transformer language model trained to date, with 530 billion parameters. It is the result of a research collaboration between Microsoft and NVIDIA to further parallelize and optimize the training of very large AI models.