Meta’s Codec Avatars technology lets metaverse applications dramatically shorten distances

On a podcast hosted by Russian-American computer scientist Lex Fridman, Meta CEO Mark Zuckerberg took part in a remarkably sophisticated virtual reality interview. Using Meta’s flagship Quest Pro headset, launched last year, together with the personalized facial Codec Avatars technology first unveiled in 2018, the two conversed entirely within the metaverse despite being hundreds of miles apart.

Throughout the conversation, both Fridman and Zuckerberg wore Quest Pro headsets, with Codec Avatars reproducing their facial expressions in real time in the virtual setting. This made the entire metaverse-based conversation exceedingly lifelike.
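The “codec” in Codec Avatars refers to the idea of encoding each user’s expression into a compact latent code, sending only that code over the network, and decoding it into a photoreal face on the other end. The sketch below is purely illustrative of that concept; all names, shapes, and sizes are assumptions, not Meta’s actual system or API.

```python
# Illustrative codec-avatar pipeline sketch (hypothetical, not Meta's code):
# encode a frame of sensor data into a small latent code, transmit it,
# then decode it into parameters that drive a rendered avatar face.
from dataclasses import dataclass

@dataclass
class ExpressionCode:
    """Compact latent representation of one frame of facial expression."""
    values: list  # e.g. a few hundred floats instead of a full video frame

def encode_expression(sensor_readings):
    """Stand-in encoder: headset camera/sensor features -> small latent code."""
    # A real system would run a learned neural encoder on eye/face images.
    return ExpressionCode(values=[round(x, 3) for x in sensor_readings])

def decode_avatar(code):
    """Stand-in decoder: latent code -> parameters driving a photoreal avatar."""
    # A real system would run a neural decoder producing textured geometry.
    return {"blendshape_weights": code.values}

# Only the tiny latent code crosses the network, which is why two people
# hundreds of miles apart can share lifelike expressions in real time.
sensor_frame = [0.12, 0.87, 0.05]        # mock per-frame sensor features
code = encode_expression(sensor_frame)   # sender side
avatar_params = decode_avatar(code)      # receiver side
print(avatar_params["blendshape_weights"])
```

Because the transmitted code is tiny compared with streaming video, this design is what makes real-time, long-distance photoreal conversation practical.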

However, the current Codec Avatars technology can render only facial expressions in real time; a full-body representation would likely require additional wearable hardware.

The Codec Avatars technology once required the computational power of an NVIDIA Titan X-class graphics card. But recent iterations reportedly run on a custom RISC-V chip developed by Meta Reality Labs that occupies a mere 1.6 mm², allowing it to be integrated into standalone virtual reality headsets.

Compared with earlier renditions, when Meta’s Horizon Worlds showcased avatars composed of basic lines, the facial representation made possible by Codec Avatars is far more persuasive in convincing audiences that the metaverse is on its way.

Beyond Meta’s metaverse pursuits, firms including Google and Logitech are working to enable more authentic “face-to-face” interactive experiences using existing camera equipment. Yet such attempts face high setup costs and are impractical for spontaneous interaction. Meta’s virtual reality headset, combined with custom chips and the relevant software, therefore offers a more accessible way to engage in the metaverse, with hyper-realistic facial expressions mitigating the feeling of detachment.

Current VR headsets still need supplementary methods to portray full-body movement within the metaverse. Future innovations may bring simpler, more intuitive solutions, perhaps even advancing to tactile sensations, allowing users to interact within the metaverse in an even more natural manner.