For a long time, VR video has remained stuck at 180-degree or 360-degree formats. Resolutions keep improving, but progress on 6DoF video has been slow. That is hardly surprising: with 6DoF the viewer can move freely through the scene, which poses enormous challenges for capture and production as well as for storage and playback.
According to mixed-news, researchers from Carnegie Mellon University, Meta's Reality Labs Research, and the University of Maryland have developed a rendering method called HyperReel that enables high-resolution real-time rendering with a low memory footprint. Unlike current mainstream approaches such as NeRFs, which evaluate a network at hundreds of sample points along each ray, HyperReel trains a network to predict where each ray intersects a set of geometric primitives, so only a handful of queries are needed per ray.
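To make the difference concrete, the toy sketch below contrasts the two strategies in PyTorch. Everything in it (the ToyField network, the plane primitives, the sample counts) is an illustrative assumption, not HyperReel's actual architecture:

```python
import torch

class ToyField(torch.nn.Module):
    """Maps 3D points to (density, RGB) -- a stand-in for a radiance field."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.ReLU(),
                                       torch.nn.Linear(32, 4))

    def forward(self, pts):
        out = self.net(pts)
        return torch.relu(out[:, 0]), torch.sigmoid(out[:, 1:])  # density, color

def composite(density, color, delta):
    """Standard alpha compositing, shared by both strategies."""
    alpha = 1.0 - torch.exp(-density * delta)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
    return ((alpha * trans)[:, None] * color).sum(dim=0)

def nerf_style_render(field, ray_o, ray_d, n_samples=192):
    """NeRF-style: query the field at hundreds of depths along each ray."""
    t = torch.linspace(0.05, 5.0, n_samples)
    pts = ray_o + t[:, None] * ray_d          # (n_samples, 3) queries per ray
    density, color = field(pts)
    return composite(density, color, t[1] - t[0])

def intersection_style_render(intersect_net, field, ray_o, ray_d):
    """HyperReel-style idea: a small network predicts where the ray meets a
    few geometric primitives (toy z-planes here), so only those few points
    are queried instead of hundreds."""
    z = intersect_net(torch.cat([ray_o, ray_d]))  # predicted plane offsets
    t = (z - ray_o[2]) / ray_d[2]                 # ray/plane intersection depths
    pts = ray_o + t[:, None] * ray_d              # e.g. 8 queries per ray
    density, color = field(pts)
    return composite(density, color, torch.tensor(1.0))

field = ToyField()
intersect_net = torch.nn.Linear(6, 8)             # 8 primitives per ray
ray_o, ray_d = torch.zeros(3), torch.tensor([0.0, 0.0, 1.0])
print(nerf_style_render(field, ray_o, ray_d))
print(intersection_style_render(intersect_net, field, ray_o, ray_d))
```

The point of the second function is that the expensive appearance network runs on only a few predicted intersection points per ray, which is where the speed advantage over dense per-ray sampling comes from.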
In terms of performance, HyperReel reaches 6.5 to 29 frames per second on an RTX 3090, depending on the scene and model size. The top figure of 29 FPS, however, was achieved with the Tiny model at a very low resolution.
In terms of file size, NeRFPlayer requires about 17 megabytes per frame, Google's Immersive Light Field Video 8.87 megabytes, and HyperReel only 1.2 megabytes. HyperReel is still not suitable for VR video streaming, but the smaller footprint means the feature will be easier to deliver in the future.
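A back-of-the-envelope calculation shows why even 1.2 megabytes per frame is still too heavy for streaming; the playback frame rate below is an assumption for illustration:

```python
# Streaming bandwidth estimate; per-frame size from the article,
# the 30 FPS playback rate is an assumed figure for illustration.
mb_per_frame = 1.2                   # HyperReel's reported per-frame size
fps = 30                             # assumed playback frame rate
mbit_per_s = mb_per_frame * fps * 8  # convert megabytes/s to megabits/s
print(f"{mbit_per_s:.0f} Mbit/s")    # ~288 Mbit/s, above typical home bandwidth
```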
Overall, HyperReel is still a research project, and some distance remains before real-time VR applications such as live streaming become feasible: real-time VR video needs a rendering frame rate of around 72 FPS, delivered stereoscopically with a separate view for each eye. However, Meta also notes that HyperReel is implemented in vanilla PyTorch, so there is still considerable room for optimization.
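A rough budget calculation (the targets below are common VR figures, not numbers from the paper) shows how large that gap still is:

```python
# Frame-time budget for stereoscopic VR at 72 FPS (assumed targets,
# not figures from the HyperReel paper).
target_fps = 72                        # common standalone-headset refresh rate
views_per_frame = 2                    # one rendered image per eye
budget_ms = 1000 / (target_fps * views_per_frame)
print(f"{budget_ms:.1f} ms per view")  # ~6.9 ms
# HyperReel's reported 6.5-29 FPS works out to roughly 34-154 ms per view,
# which is why optimization beyond vanilla PyTorch is still needed.
```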