Google uses RISC-V cores in its custom AI chips
Founded in 2015, SiFive was the world's first chip design company built around the RISC-V architecture. Over the years it has grown into a pivotal player in the RISC-V ecosystem, establishing itself as a go-to supplier of small, cost-effective core designs. The company has hit turbulence over the past two years, however, with reports last year of a significant restructuring and substantial layoffs that primarily affected engineers and staff in product and sales.
According to media reports, SiFive expects 2024 revenue of between $240 million and $280 million, drawn from new contracts and licensing income. SiFive is also said to be close to securing another major contract with Google, under which it would supply core designs for Google's Tensor Processing Units (TPUs), giving revenue a further lift.
SiFive has high hopes for Google's second-generation AI server chips, and although the specifics of the deal remain undisclosed, it could become an important revenue stream for the company going forward. SiFive is expected to license Google its Intelligence X390 core, which is tailored for AI and machine-learning workloads.
Compared with its predecessor, the X280, the Intelligence X390 doubles the vector length and adds a second vector arithmetic logic unit (ALU) in a single core, quadrupling vector compute throughput while also quadrupling sustained data bandwidth. With SiFive's Vector Coprocessor Interface Extension (VCIX), customers can add their own vector instructions and/or attach custom acceleration hardware. Key specifications include a 1024-bit vector length (VLEN), a 512-bit datapath width (DLEN), single or dual vector ALUs, and VCIX with a 2048-bit output and 1024-bit input.
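To illustrate what a wider vector length buys in practice, the sketch below is a generic, vector-length-agnostic SAXPY loop written with the standard RISC-V Vector (RVV) C intrinsics. It follows the ratified RVV intrinsics naming, not any SiFive-specific API: the same source code runs on any RVV core, and a part with a longer VLEN, such as the X390's 1024-bit registers, simply processes more elements per loop iteration.

```c
/* Vector-length-agnostic SAXPY (y = a*x + y) using the standard RISC-V
 * Vector (RVV) C intrinsics. Generic illustration only, not SiFive code:
 * the loop asks the hardware how many elements it can handle per pass,
 * so wider vector registers automatically mean fewer iterations. */
#include <riscv_vector.h>
#include <stddef.h>

void saxpy(size_t n, float a, const float *x, float *y) {
    while (n > 0) {
        /* How many 32-bit elements fit in the vector registers this pass? */
        size_t vl = __riscv_vsetvl_e32m1(n);
        vfloat32m1_t vx = __riscv_vle32_v_f32m1(x, vl);  /* load x        */
        vfloat32m1_t vy = __riscv_vle32_v_f32m1(y, vl);  /* load y        */
        vy = __riscv_vfmacc_vf_f32m1(vy, a, vx, vl);     /* y += a * x    */
        __riscv_vse32_v_f32m1(y, vy, vl);                /* store y back  */
        x += vl;
        y += vl;
        n -= vl;
    }
}
```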
Google has previously used the Intelligence X280 as a coprocessor that orchestrates the device and feeds its Matrix Multiplication Units (MXUs) with the data they process. Google's decision to keep SiFive designs in its next-generation AI systems is also driven in large part by the need to preserve backward compatibility.
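Conceptually, the role described above is the familiar pattern of a general-purpose control core staging work for a fixed-function matrix engine. The sketch below is purely illustrative C: the MMIO register names, offsets, and polling protocol are invented for this example and do not describe Google's TPU or SiFive's VCIX interface (a VCIX-based design would instead pass operands to the accelerator through custom vector instructions).

```c
/* Illustrative only: a control core hands a data tile to a matrix engine
 * through hypothetical memory-mapped registers. All addresses and the
 * start/poll protocol are assumptions made for this sketch. */
#include <stdint.h>

#define MXU_BASE      0x40000000u           /* hypothetical MMIO base      */
#define MXU_SRC_ADDR  (MXU_BASE + 0x00u)    /* physical address of tile    */
#define MXU_LEN       (MXU_BASE + 0x08u)    /* tile size in bytes          */
#define MXU_CTRL      (MXU_BASE + 0x10u)    /* write 1 to start the MXU    */
#define MXU_STATUS    (MXU_BASE + 0x18u)    /* reads non-zero when done    */

static inline void mmio_write(uintptr_t addr, uint64_t v) {
    *(volatile uint64_t *)addr = v;
}
static inline uint64_t mmio_read(uintptr_t addr) {
    return *(volatile uint64_t *)addr;
}

/* Control core points the matrix engine at one tile of operands,
 * starts it, and waits for completion. */
void mxu_run_tile(const void *tile, uint64_t len) {
    mmio_write(MXU_SRC_ADDR, (uintptr_t)tile);
    mmio_write(MXU_LEN, len);
    mmio_write(MXU_CTRL, 1);                /* kick off the multiply       */
    while (mmio_read(MXU_STATUS) == 0) {
        /* a real design would typically use interrupts or fences here */
    }
}
```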