Microsoft and OpenAI are developing AI chips

At the start of this year, ChatGPT captured the attention of users worldwide and briefly dominated online discussion. Developed by OpenAI, the conversational AI model is driven by natural language processing techniques that allow it to take in and understand human language. Beyond holding conversations that take earlier exchanges into account, it can draft emails, write video scripts and marketing copy, translate text, and even write code.
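For developers, the same model is reachable through an API rather than only the chat interface. Below is a minimal sketch, assuming the official openai Python package (v1.x) and an API key in the OPENAI_API_KEY environment variable; the model name here is only illustrative.

```python
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment.
client = OpenAI()

# A single request: the model drafts a short email on demand.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user", "content": "Draft a two-sentence email postponing tomorrow's meeting."},
    ],
)

print(response.choices[0].message.content)
```

In a multi-turn conversation, earlier messages are simply appended to the `messages` list, which is how the model keeps responses contextualized by preceding interactions.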


Microsoft has invested billions of dollars in OpenAI and relies on NVIDIA's H100 compute cards to meet the enormous computational demands of training and scaling AI models. According to industry sources, Microsoft and OpenAI are now developing their own AI chip, aiming to provide a more efficient and cost-effective way to run their large language models (LLMs) and to cover future computing needs. The chip is expected to be unveiled at Microsoft's annual developer conference next month. The move is also intended to reduce the companies' dependence on NVIDIA and avoid potential supply bottlenecks.

Reports from several months ago pointed to a custom Microsoft AI processor codenamed "Athena", built on TSMC's 5 nm process. It is designed to accelerate AI workloads and is intended to be deployed at scale, with hundreds or even thousands of chips working in tandem. The project is said to have started as early as 2019, and rumors suggest it could offer a clear cost advantage over NVIDIA's GPUs, potentially cutting expenses to roughly a third.

The H100 packs a large number of CUDA cores, and NVIDIA has spent more than a decade refining its development platform, giving it a software ecosystem far more mature than its competitors'. Microsoft therefore needs to improve not only the raw compute performance of its AI chip but also the accompanying development software, and that will take time. Tech giants with massive AI workloads, such as Amazon, Google, and Meta, began designing their own chips for AI training years ago; for Microsoft, custom chip development is still in its early stages.
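To illustrate why the software side matters, here is a minimal sketch, assuming PyTorch built with CUDA support: mainstream deep learning frameworks dispatch to NVIDIA's CUDA libraries by default, so a competing accelerator also needs a comparable software layer before existing models can run on it.

```python
import torch

# Pick the NVIDIA GPU if CUDA is available; otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A single large matrix multiply, the core operation in LLM training and inference.
x = torch.randn(4096, 4096, device=device)
w = torch.randn(4096, 4096, device=device)
y = x @ w  # on an NVIDIA GPU this call is routed to CUDA/cuBLAS kernels

print(f"ran on {device}, result shape {tuple(y.shape)}")
```

The one-line device switch hides years of accumulated kernels, compilers, and tooling; replicating that layer for a new chip is a large part of the work Microsoft would face.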