Intel engineers talk about XeSS technology

Although Intel’s XeSS, an AI-based resolution upscaling technology, has not yet launched, it is clear that Intel already has extensive plans for its future. Karthik Vaidyanathan, a principal engineer at Intel, revealed many details about XeSS and its subsequent development in an interview with Wccftech.

First, one of the questions on everyone’s mind is whether XeSS needs to be trained for each individual game, as NVIDIA’s DLSS 1.0 did. On this point, Karthik Vaidyanathan said that Intel’s goal has always been for XeSS to work as a general-purpose technology without per-game training, much like DLSS 2.0 and later. This should be good news for game developers, since it means the time cost of adding XeSS to a game can be greatly reduced.

One of the biggest limitations of NVIDIA’s DLSS is its requirement for dedicated hardware: GPUs without Tensor Cores cannot use the technology at all. Will Intel’s XeSS have the same restriction?

There is no need to worry on this point, because Karthik Vaidyanathan said that XeSS will ship with two code paths for different GPUs. On Intel’s own Arc GPUs, XeSS will use the XMX matrix engines, which also maximizes its efficiency. On other GPUs, as long as they support Microsoft’s Shader Model 6.4, XeSS can instead use dot-product acceleration (DP4a).

At the same time, because both code paths sit behind the same API, developers only need to integrate XeSS once for their game or software to support both acceleration schemes. The DP4a path is certainly not as efficient as XMX matrix acceleration, but it at least makes XeSS available where it otherwise would not be at all, which is naturally welcome news for players on non-Intel GPUs.
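To illustrate what DP4a acceleration actually computes, here is a minimal CPU-side sketch in Python of the operation that Shader Model 6.4 exposes as a single GPU instruction: a dot product of four signed 8-bit integers packed into 32-bit words, with a 32-bit accumulate. The function name and packing convention here are illustrative only, not Intel’s API.

```python
def _int8(x):
    """Interpret an 8-bit value as a signed int8."""
    return x - 256 if x > 127 else x

def dp4a(a, b, acc):
    """Sketch of the DP4a operation: dot product of four int8 lanes
    packed into the 32-bit words a and b, added to accumulator acc.
    On Shader Model 6.4 GPUs this is a single hardware instruction."""
    for i in range(4):
        ai = _int8((a >> (8 * i)) & 0xFF)
        bi = _int8((b >> (8 * i)) & 0xFF)
        acc += ai * bi
    return acc

# lanes (1, 2, 3, 4) dot (5, 6, 7, 8) = 5 + 12 + 21 + 32
print(dp4a(0x04030201, 0x08070605, 0))  # 70
```

Running a network’s inner loops on packed int8 dot products like this is far cheaper than full-precision math, which is why GPUs without dedicated matrix hardware can still run the XeSS network, albeit more slowly than with XMX.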

However, XeSS’s support for non-Intel GPUs is currently limited to NVIDIA’s Pascal architecture and newer, and on AMD’s side to the first-generation RDNA architecture or later. This is because XeSS will not provide an FP16/FP32 fallback path, at least not at launch, which means GPUs older than those listed above cannot use XeSS.

As an AI-based upscaling technique, XeSS is trained with 64 samples per pixel. Karthik Vaidyanathan believes that NVIDIA’s claim of training DLSS with “16K images” actually refers to the number of samples per pixel, and on that reading, XeSS’s sample count is four times that of NVIDIA’s DLSS.
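To make the comparison concrete, here is the arithmetic behind the claimed 4x figure; the 16-samples-per-pixel value for DLSS is an assumption inferred from the stated ratio, not a number confirmed by Intel or NVIDIA.

```python
# Sample-count comparison behind the claimed 4x figure.
# ASSUMPTION: dlss_samples_per_pixel = 16 is inferred from the
# stated ratio and is not an officially confirmed NVIDIA number.
xess_samples_per_pixel = 64
dlss_samples_per_pixel = 16

ratio = xess_samples_per_pixel / dlss_samples_per_pixel
print(ratio)  # 4.0
```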

Finally, Karthik Vaidyanathan also discussed open-sourcing XeSS. He said that there will be XeSS 2.0 and 3.0 versions in the future, and that these are likely to arrive after XeSS itself has been open-sourced. There is no need to worry on this front: Intel will eventually release XeSS as open source, but it will take some time.