Ranking the World’s Most Powerful AI
Traditional computers cannot think. In other words, regular computers lack intelligence, but that may soon change.
Over the last couple of decades, there has been tremendous interest in imparting intelligence to computers. With such Artificial Intelligence (AI), a computer can differentiate between a cat and a dog, recognize you from a photo, converse intelligently with human beings, interpret text to extract the meaning, sentiment, and context of words, and perform a host of other tasks that regular computers cannot.
The enhanced capabilities of AI computers are built atop deep learning, which aims to simulate the neural networks of the human brain. Using Artificial Neural Networks (ANNs) with many layers, AI computers can process huge amounts of data with complex patterns to produce accurate classifications, sound analyses, and reliable predictions.
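As a rough illustration of what "many layers" means in practice, here is a minimal sketch of a multi-layer network in PyTorch; the layer sizes and the ten-way output are illustrative assumptions, not details of any system discussed in this article.

```python
# A minimal multi-layer Artificial Neural Network (ANN) in PyTorch.
# All sizes here are hypothetical, chosen only for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(              # several layers stacked in sequence
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),             # e.g. scores for ten output classes
)

x = torch.randn(32, 784)            # a batch of 32 flattened inputs
logits = model(x)                   # forward pass produces class scores
print(logits.shape)                 # torch.Size([32, 10])
```

Deep learning at supercomputer scale is this same pattern repeated with far more layers, parameters, and data.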
AI computing already outperforms High-Performance Computing (HPC, that is, computing on comparable hardware but without AI algorithms) at processing large amounts of unstructured data by a factor of four, and that gap is projected to widen to ten times over the next couple of years.
Before an AI computer can perform any of these exalted AI tasks, it has to be trained rigorously on huge quantities of data. That training creates massive resource requirements: memory, processors (CPUs and GPUs), storage, and network bandwidth.
Supercomputers, which host infrastructure resources at exactly this scale, are an ideal platform for AI applications. Supercomputers tuned with AI algorithms are termed 'AI supercomputers.' They are hyper-converged infrastructure (HCI) systems built from hardware clusters that offer very high density, scalability, and flexibility.
Why is AI Important?
AI is forecast to become widespread by 2024, used by three-quarters of all organizations globally. By then, 20% of all workloads and 15% of enterprise infrastructure are expected to be devoted to AI applications.
The explosion in AI has been driven by a variety of needs that traditional computing cannot meet:
- Computer vision
- Natural language processing with GPT-3
- Personalized content or product recommendations based on prior user activity
- Content analysis and filtering
- Pattern recognition and anomaly detection (see the sketch below)
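To make the last item concrete, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest; the synthetic data and model choice are illustrative assumptions, not a recommendation of any particular stack.

```python
# Unsupervised anomaly detection on synthetic 2-D data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 2))     # typical activity
outliers = rng.uniform(6, 8, size=(5, 2))    # a handful of anomalies
X = np.vstack([normal, outliers])

model = IsolationForest(random_state=0).fit(X)
labels = model.predict(X)                    # -1 = anomaly, 1 = normal
print(int((labels == -1).sum()), "points flagged as anomalous")
```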
In 2020, private investments in AI from the United States amounted to almost USD 24 billion. China ranked second with nearly USD 10 billion in funding. Britain, France, Japan, Germany, Israel, Canada, and South Korea are all billion-dollar ticket players in the AI sector.
The future holds even more fireworks. China, for instance, plans to invest USD 150 billion in AI by 2030, aiming to be the global leader in AI.
The World’s Most Powerful AI
Non-AI supercomputers measure their speed in floating-point operations per second (FLOPS) at 64-bit (FP64) precision. AI supercomputers count their flops differently.
AI flops are rated at 16-bit (FP16) precision and measure how fast a computer can perform deep neural network (DNN) operations. DNNs are the AI models that learn to recognize patterns in vast amounts of unstructured data.
This ability has given rise to the wonders of AI, such as speech recognition and computer vision. GPU accelerators are specially designed for performing such DNN operations at dazzling speeds.
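The precision trade-off behind those ratings is easy to see in code. Here is a short NumPy sketch; the A100 throughput figures in the comments are Nvidia's published peak numbers, quoted for context:

```python
import numpy as np

# FP64 (used to rate traditional HPC) keeps ~15-16 significant decimal
# digits; FP16 (used to rate AI flops) keeps only ~3, and tiny values
# underflow to zero entirely.
x = np.float64(1.0) + np.float64(1e-10)
y = np.float16(1.0) + np.float16(1e-10)
print(x)   # 1.0000000001  (the tiny increment survives at FP64)
print(y)   # 1.0           (it vanishes at FP16)

# DNN training tolerates this loss, which is why GPUs trade precision
# for speed: an Nvidia A100 peaks at ~9.7 TFLOPS in FP64 but ~312
# TFLOPS in FP16 on its Tensor Cores, roughly a 32x throughput gap.
```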
AI supercomputers are what make it possible to train GPT-3, the world's largest natural language processing (NLP) model. If your organization works on NLP applications, Spell's tutorial on operationalizing GPT-3 makes getting started easy enough.
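GPT-3's weights are not publicly downloadable, so a hands-on sketch has to substitute a smaller open model; below, GPT-2 via Hugging Face transformers stands in purely for illustration (this is not the Spell workflow itself).

```python
# Text generation with GPT-2 as an open stand-in for GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("AI supercomputers enable", max_length=30)
print(result[0]["generated_text"])
```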
Nvidia’s Perlmutter
In May 2021, Nvidia unveiled the world's most powerful AI supercomputer yet, a gigantic machine named 'Perlmutter' built for the US National Energy Research Scientific Computing Center (NERSC). Although still only half-complete, Perlmutter already races along at four exaflops of AI (FP16) performance. For comparison, Fugaku, the Japanese machine that ranks as the world's fastest supercomputer, performs AI operations at a 'mere' one exaflop.
The $146m Perlmutter is being built in two phases, the first of which is complete. Phase 1 delivered more than 1,500 compute nodes, each containing 256 GB of RAM, four Nvidia A100 Tensor Core GPUs, and one AMD Milan Epyc processor, for a total of more than 6,000 of Nvidia's latest A100 GPUs and over 1,500 AMD server chips.
When Phase 2 arrives later this year, Perlmutter will add another 3,072 CPU-only nodes, each with two 3rd Gen AMD EPYC processors and 512 GB of memory. Perlmutter's final speed for AI operations is expected to be in the range of 10 exaflops.
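As a quick sanity check on those figures, here is a back-of-the-envelope tally using the article's rounded node counts (not official NERSC specifications):

```python
# Phase 1: GPU-accelerated nodes.
phase1_nodes = 1500
phase1_gpus = phase1_nodes * 4              # four A100s per node -> 6,000
phase1_ram_tb = phase1_nodes * 256 / 1000   # 256 GB per node -> 384 TB

# Phase 2: CPU-only nodes.
phase2_nodes = 3072
phase2_ram_tb = phase2_nodes * 512 / 1000   # 512 GB per node -> ~1,573 TB

print(phase1_gpus)                          # matches "more than 6,000 GPUs"
print(round(phase1_ram_tb + phase2_ram_tb)) # ~1,957 TB of node RAM overall
```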
What The Future Holds
AI supercomputing is in a phase of revolution rather than evolution, and Perlmutter won't keep its top billing for long. In 2023, a Swiss AI supercomputer named Alps is set to debut, rocketing past today's AI performance benchmarks at over 20 exaflops. Alps is expected to be able to train GPT-3 in just two days.
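That two-day figure is plausible on the back of an envelope. GPT-3's published training cost is roughly 3.14e23 FLOPs (about 3,640 petaflop/s-days); the sustained-utilization fraction below is an assumption, not a published Alps number:

```python
# Rough check of the "train GPT-3 in two days" claim.
gpt3_flops = 3.14e23        # published estimate of GPT-3's training cost
alps_peak = 20e18           # 20 AI exaflops (FP16), per the article
utilization = 0.10          # assumed sustained fraction of peak
seconds = gpt3_flops / (alps_peak * utilization)
print(seconds / 86400)      # ~1.8 days
```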