Deep Learning Super Sampling

[6][7] In 2019, the video game Control shipped with real-time ray tracing and an improved version of DLSS, which did not use the Tensor Cores.

[15] When using DLSS, depending on the game, users have access to various quality presets in addition to the option to set the internally rendered, upscaled resolution manually.

The first iteration of DLSS is a predominantly spatial image upscaler with two stages, both relying on convolutional auto-encoder neural networks.

Using just a single frame for upscaling means the neural network itself must generate a large amount of new information to produce the high-resolution output; this can result in slight hallucinations, such as leaves that differ in style from the source content.
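
The limitation of purely spatial upscaling can be illustrated with a minimal sketch (the function name `upsample_linear` is illustrative, not from any DLSS API): linear interpolation can only blend samples that already exist, so any genuinely new detail in a single-frame upscaler has to be invented by the network.

```python
def upsample_linear(samples, factor):
    """Upsample a 1-D signal by linear interpolation.

    Purely spatial upscaling like this only blends existing samples;
    it cannot recover detail absent from the input, which is why a
    single-frame neural upscaler must hallucinate new information.
    """
    out = []
    n = len(samples)
    for i in range((n - 1) * factor + 1):
        t = i / factor          # position in input-sample coordinates
        j = int(t)              # index of the left neighbor
        frac = t - j            # blend weight toward the right neighbor
        if j + 1 < n:
            out.append(samples[j] * (1 - frac) + samples[j + 1] * frac)
        else:
            out.append(samples[j])
    return out
```

For example, `upsample_linear([0, 2], 2)` produces `[0.0, 1.0, 2]`: the new middle sample is just an average, not recovered detail.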

[14][24] This first iteration received a mixed response, with many criticizing the often soft appearance and artifacts in certain situations;[25][6][5] this was likely a side effect of the limited data available from a single-frame input, since the neural networks could not be trained to perform optimally in every scenario and edge case.

[citation needed] DLSS 2.0 is a temporal anti-aliasing upsampling (TAAU) implementation, using data from previous frames extensively through sub-pixel jittering to resolve fine detail and reduce aliasing.

The data DLSS 2.0 collects includes the raw low-resolution input, motion vectors, depth buffers, and exposure/brightness information.
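
The sub-pixel jittering mentioned above is commonly produced with a low-discrepancy sequence such as the Halton sequence, so that successive frames sample different sub-pixel positions. The sketch below shows this general TAAU technique; the function names and the choice of bases 2 and 3 are illustrative assumptions, not a documented DLSS implementation detail.

```python
def halton(index, base):
    """Radical-inverse (Halton) value in [0, 1) for a 1-based index."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter_offsets(n):
    """n camera jitter offsets in [-0.5, 0.5) pixels, using bases 2 and 3.

    Applying a different offset each frame makes the camera sample
    different sub-pixel positions, which a temporal upsampler can
    accumulate into a higher-resolution result.
    """
    return [(halton(i, 2) - 0.5, halton(i, 3) - 0.5) for i in range(1, n + 1)]
```

For example, `jitter_offsets(8)` yields eight evenly distributed sub-pixel offsets that a renderer would apply to the projection matrix, one per frame.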

[13] It can also be used as a simpler TAA implementation in which the image is rendered at 100% resolution rather than being upsampled by DLSS; Nvidia brands this as DLAA (Deep Learning Anti-Aliasing).

This helps to identify and fix many temporal artifacts, but deliberately removing fine details in this way is analogous to applying a blur filter, and thus the final image can appear blurry when using this method.
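
One widely used hand-written heuristic of this kind is neighborhood clamping: the history sample is clamped to the range of values in the current frame's local neighborhood. The sketch below (the helper name `clamp_history` is an assumption for illustration) shows why this suppresses ghosting but also discards legitimate fine detail, producing the blur described above.

```python
def clamp_history(history, neighborhood):
    """Clamp a reprojected history sample to the min/max of the current
    frame's local (e.g. 3x3) neighborhood values.

    A common hand-written TAA heuristic: history values outside the
    current neighborhood's range are assumed to be stale (ghosting) and
    are pulled back into range. The side effect is that genuine fine
    detail preserved in the history is also clipped away, which reads
    as blur in the final image.
    """
    lo, hi = min(neighborhood), max(neighborhood)
    return max(lo, min(history, hi))
```

For example, a bright history sample of `0.9` over a dark neighborhood `[0.1, 0.2, 0.3]` is clamped down to `0.3`, whether it was a ghosting artifact or a real highlight.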

[13] DLSS 2.0 uses a convolutional auto-encoder neural network[25] trained to identify and fix temporal artifacts, instead of manually programmed heuristics as mentioned above.

Because temporal artifacts occur in most art styles and environments in broadly the same way, the neural network that powers DLSS 2.0 does not need to be retrained when being used in different games.

In practice, this means low resolution textures in games will still appear low-resolution when using current TAAU techniques.

[19][18] The fourth generation of DLSS was unveiled alongside the GeForce 50 series.

[32] Nvidia claims that 75 games will integrate DLSS 4 Multi Frame Generation at launch, including Alan Wake 2, Cyberpunk 2077, Indiana Jones and the Great Circle, and Star Wars Outlaws.

[37] They perform fused multiply-add (FMA) operations, which are used extensively in neural-network calculations to apply a large series of multiplications to the weights, followed by the addition of a bias.
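
The multiply-accumulate pattern can be sketched as a single output neuron of a dense layer (a minimal illustration, not Tensor Core code; on the hardware, each accumulation step is one fused multiply-add):

```python
def dense_neuron(weights, inputs, bias):
    """Compute one dense-layer output: sum of weight*input products
    plus a bias. Each `acc + w * x` step is the multiply-add that
    tensor-core FMA hardware accelerates, many in parallel."""
    acc = 0.0
    for w, x in zip(weights, inputs):
        acc = acc + w * x   # one fused multiply-add per weight
    return acc + bias       # bias added after the accumulation
```

For example, `dense_neuron([1.0, 2.0], [3.0, 4.0], 5.0)` accumulates 3 + 8 and then adds the bias, giving 16.0.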

Andrew Edelsten, an Nvidia employee, therefore addressed the problem in a 2019 blog post, promising that the company was working on improving the technology and clarifying that the DLSS AI algorithm was trained mainly on 4K image material.

[43] The transformer-based AI upscaling model introduced with DLSS 4 received praise for its improved image quality, with increased stability, reduced ghosting, better anti-aliasing, and a higher level of detail, as well as for its backward compatibility and greater training scalability for future improvements.