Microsoft combines Azure with Nvidia chips to produce supercomputers

The company first started this supercomputing work with OpenAI.

Microsoft is highlighting its efforts to build supercomputers on its Azure cloud computing platform to support OpenAI and its ChatGPT chatbot. It also announced a new AI virtual machine that uses upgraded GPUs from NVIDIA.

Microsoft’s new ND H100 v5 VM uses NVIDIA’s H100 GPUs, an upgrade over the previous A100 GPUs. Companies that need to add AI features can access this virtual machine service, which has the following specifications (a quick way to look up the SKU programmatically is sketched after the list):

• 8x NVIDIA H100 Tensor Core GPUs connected using next-generation NVSwitch and NVLink 4.0.
• 400Gb/s NVIDIA Quantum-2 CX7 InfiniBand per GPU at 3.2Tb/s per VM in a non-blocking fat-tree network.
• NVSwitch and NVLink 4.0 with 3.6TB/s bisectional bandwidth between 8 local GPUs per VM.
• 4th generation Intel Xeon Scalable processors.
• PCIe Gen5 host-to-GPU interconnect with 64GB/s bandwidth per GPU.
• 16 channels of 4800 MHz DDR5 DIMMs.
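
For teams that want to check whether this VM family shows up in their region, the Azure Python SDK can enumerate available VM sizes. The following is a minimal sketch, not taken from Microsoft’s announcement: the packages (azure-identity, azure-mgmt-compute), the example region and the idea of matching on "H100" in the size name are assumptions, so verify them against the Azure documentation for your subscription.

# Minimal sketch: list Azure VM sizes in a region and look for H100-based SKUs.
# Assumes azure-identity and azure-mgmt-compute are installed and that
# DefaultAzureCredential can authenticate (e.g. after `az login`).
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
REGION = "eastus"  # example region; ND H100 v5 availability varies by region

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# virtual_machine_sizes.list() returns every VM size offered in the region;
# filtering on "H100" surfaces the ND H100 v5 family described above.
for size in client.virtual_machine_sizes.list(location=REGION):
    if "H100" in size.name:
        print(size.name, size.number_of_cores, "vCPUs,", size.memory_in_mb, "MB RAM")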

This is in addition to Microsoft’s previously announced ChatGPT in Azure OpenAI Service, which allows third parties to access the chatbot technology through Azure.

In a separate blog post, Microsoft describes how the company first started working with OpenAI to help build the supercomputers needed for ChatGPT’s large language model (and Microsoft’s own Bing Chat). This meant connecting thousands of GPUs in a completely new way. On the blog, Nidhi Chappell, product manager for high-performance computing and artificial intelligence at Microsoft Azure, explained the approach.

She explained that to train a large language model, the computational load is distributed across thousands of GPUs in a cluster. In certain phases of the calculation, during the so-called allreduce, the GPUs exchange information about the work they have done. The InfiniBand network speeds up this phase, which must finish before the GPUs can begin the next part of the calculation.
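
To make that allreduce step concrete, here is a minimal sketch using PyTorch’s torch.distributed API with the NCCL backend, which is the common way to run this collective over NVLink and InfiniBand. It is an illustration only: the post does not describe OpenAI’s actual training code, and the tensor size, script name and torchrun launch command are assumptions.

# Minimal allreduce sketch with torch.distributed and the NCCL backend.
# Illustrative only: the actual ChatGPT training stack is not described in the post.
# Launch (assumed): torchrun --nproc_per_node=8 allreduce_sketch.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")  # NCCL runs GPU collectives over NVLink/InfiniBand
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in for the gradients each GPU computes on its own slice of the work.
    grads = torch.randn(1024, device="cuda")

    # Allreduce: every GPU contributes its values and receives the global sum,
    # so all ranks hold identical aggregated gradients before the next step.
    dist.all_reduce(grads, op=dist.ReduceOp.SUM)
    grads /= dist.get_world_size()  # turn the sum into an average

    if dist.get_rank() == 0:
        print("allreduce done; averaged gradient norm:", grads.norm().item())

    dist.destroy_process_group()

if __name__ == "__main__":
    main()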

This hardware is paired with software that helps optimize the use of both the NVIDIA GPUs and the network that ties them together. According to Microsoft, it is constantly adding GPUs and expanding the network, while using cooling systems, backup generators and uninterruptible power supplies to keep the GPUs running around the clock, Neowin reported.

