Microsoft Launches Azure Maia 100 AI Accelerator and the Azure Cobalt 100 Processor

Last month, reports surfaced that Microsoft and OpenAI were developing their own AI chips, with a November release planned. The chips are designed to meet the requirements of large language models (LLMs) more efficiently and cost-effectively, in anticipation of future computational demands. Microsoft has previously invested billions of dollars in OpenAI and has relied on Nvidia’s latest H100 GPUs to handle the immense computational loads of running, training, and scaling AI.

At its recent Ignite conference, Microsoft announced the launch of two custom chips designed in its own silicon labs: the Azure Maia 100 AI Accelerator and the Azure Cobalt 100 Processor.

Microsoft stated that the chips are currently in the “first phase of deployment” and are expected to be operational in its data centers by early next year, initially powering services such as Copilot and Azure OpenAI. While Microsoft did not disclose how many chips will be available, it affirmed that it will continue to use chips from partners such as Nvidia and AMD to serve customers alongside its own silicon.
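
Because these services are consumed through the same Azure OpenAI endpoints regardless of the silicon underneath, the hardware change is invisible to callers. Below is a minimal sketch using the `openai` Python package’s `AzureOpenAI` client; the endpoint, key, and deployment name are illustrative placeholders, not real values.

```python
# A minimal sketch: calling the Azure OpenAI service, which behaves the same
# whether backed by Maia, Nvidia, or AMD hardware. Endpoint, API key, and
# deployment name below are hypothetical placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # hypothetical
    api_key="YOUR_API_KEY",                                      # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-35-turbo",  # your deployment name; illustrative
    messages=[{"role": "user", "content": "Summarize what an AI accelerator does."}],
)
print(response.choices[0].message.content)
```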

The chip at the heart of the Azure Maia 100 AI Accelerator, developed under the codename “Athena,” is manufactured on TSMC’s 5 nm process and packs 105 billion transistors; the first servers are currently being tested with GPT-3.5 Turbo. The Azure Cobalt 100 Processor features 128 cores, supports the 64-bit Armv9 instruction set, and is built on the Neoverse CSS platform that Arm customized for Microsoft, likely based on the Neoverse N2 core.
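
For a sense of what a 128-core Armv9 part looks like from software, here is a minimal sketch of inspecting a VM’s CPU from Python; the expected values in the comments are assumptions about a fully provisioned Cobalt 100 machine, not measured results.

```python
# A minimal sketch: what a guest OS might report on an Arm server VM of this
# class. The expected values in comments are assumptions, not Cobalt 100
# measurements.
import os
import platform

print(platform.machine())  # expected: "aarch64" on a 64-bit Armv9 server
print(os.cpu_count())      # expected: up to 128 on a full 128-core part
```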

Microsoft also claims that the Maia servers are built with a fully custom Ethernet-based network protocol, enabling greater scalability and better end-to-end workload performance.