Although AWS has not published detailed specifications, Graviton3 is rumored to use the Armv9 architecture and the Neoverse N2 core, and to support DDR5 memory. Graviton3 will power the new AWS EC2 C7g instances, which target compute-intensive workloads such as HPC, EDA, distributed analytics, and CPU-based machine learning inference. C7g instances provide 30 Gbps of network bandwidth, support the Elastic Fabric Adapter (EFA), and are currently in preview.
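Because Graviton-based instances such as C7g run on arm64 while instances such as M6a run on x86_64, portable workloads sometimes branch on the CPU architecture at runtime. A minimal sketch using Python's standard `platform` module (the function name `running_on_arm64` is illustrative, not an AWS API):

```python
import platform

def running_on_arm64() -> bool:
    # Graviton instances (e.g. C7g) report "aarch64" on Linux;
    # x86-based instances (e.g. M6a) report "x86_64".
    return platform.machine().lower() in ("aarch64", "arm64")

if __name__ == "__main__":
    arch = platform.machine()
    print(f"CPU architecture: {arch}, arm64: {running_on_arm64()}")
```

A build script could use such a check to select architecture-specific wheels or compiler flags.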
Trainium is AWS's self-developed cloud chip for training deep learning models, announced in 2020. This time AWS introduced the new Trainium-based Trn1 instance, which it says will deliver the best price performance for training workloads such as image recognition, natural language processing, and fraud detection.
In addition, the new AWS EC2 M6a instances use AMD EPYC 7003 series processors based on the Zen 3 architecture. Compared with the previous-generation EC2 M5a instances, they offer up to 35% better price performance, and they cost 10% less than comparable x86-based EC2 instances. AWS reportedly plans to introduce more instances built on AMD's third-generation EPYC processors in the future.