Nvidia Wants to Use AI to Enhance Games, Improve Ray Tracing

By Joel Hruska
At SIGGRAPH this week, both AMD and Nvidia are announcing various hardware and software technologies. SIGGRAPH is an annual show that focuses on computer graphics and advances in rendering techniques. This year, Nvidia showcased ways AI could be used to improve gaming or to create extremely realistic images without the enormous computational horsepower that would otherwise be required to brute-force those visual standards.
This last bit is of more than incidental concern. The problem is simple: If you compare a top-shelf character animation from 2017 with the best that 2005 hardware could produce, you'll obviously notice the difference. At the same time, however, you're unlikely to be fooled into thinking that even the most amazing CG is actually live-action footage. Slowing silicon advances make it less and less likely that we'll ever be able to simply force the issue computationally. Perhaps more to the point, even if we could, brute-forcing a problem is rarely the best way to solve it.
To be clear, this is an ongoing research project, not a signal that Nvidia will be launching the new GTX 1100 AI Series in a few weeks. But some of the demos Nvidia has released are quite impressive in their own right, including a few that suggest there might be a way to integrate ray tracing into gaming and real-time 3D rendering much more smoothly than what we’ve seen in the past.
A new blog post from the company illustrates this point. Aaron Lefohn reports on how Nvidia worked with Remedy Entertainment to train GPUs to produce facial animations directly from actor videos. He writes:
Instead of having to perform labor-intensive data conversion and touch-up for hours of actor videos, NVIDIA’s solution requires only five minutes of training data. The trained network automatically generates all facial animation needed for an entire game from a simple video stream. NVIDIA’s AI solution produces animation that is more consistent and retains the same fidelity as existing methods.
Simply drawing animations isn't the only thing Nvidia thinks AI can do. One reason ray tracing has never been adopted as a primary method of drawing graphics in computer games is that it's incredibly computationally expensive. Ray tracing refers to the practice of creating scenes by tracing the path of light as it leaves a (simulated) light source and interacts with the objects around it.
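Nvidia hasn't shared the internals of its renderer, but the operation any ray tracer repeats over and over is an intersection test: fire a ray into the scene and find what it hits. The toy Python sketch below (a hypothetical illustration, not Nvidia's code) shows that test for a single ray and a single sphere.

    import math

    def ray_sphere_hit(origin, direction, center, radius):
        # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c  # direction is assumed to be normalized, so a = 1
        if disc < 0:
            return None         # the ray misses the sphere entirely
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0 else None

    # One ray from a camera at the origin toward a sphere two units down the z-axis.
    print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 2), 0.5))  # 1.5

Multiply that single test by millions of rays per frame, several bounces per ray, and thousands of objects per scene, and the cost problem becomes clear.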
A realistic ray-traced scene requires a very large number of rays. Performing those calculations to the degree required to make ray tracing preferable to the technique used today, known as rasterization, has generally been beyond modern GPU hardware. That's not to say ray tracing is never used, but it's typically deployed in limited ways or in hybrid approaches that blend some aspects of ray tracing and rasterization. The work Nvidia is showing at SIGGRAPH this week is an example of how AI can take a relatively crude, noisy image (as in the denoising image below) and predict its final form much more quickly than actually tracing enough rays to generate that result through brute force.
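The trained network behind Nvidia's demo isn't public, so the sketch below uses a plain box filter as a rough stand-in to show the shape of the problem: a handful of samples per pixel produces a grainy estimate, and a denoising pass pulls it back toward the converged image far more cheaply than adding samples would.

    import numpy as np

    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # stand-in for a fully converged render

    def render(samples_per_pixel):
        # Monte Carlo estimate: noise shrinks with the square root of the sample count.
        noise = rng.normal(0.0, 0.5 / np.sqrt(samples_per_pixel), clean.shape)
        return np.clip(clean + noise, 0.0, 1.0)

    def box_denoise(img, k=5):
        # Crude stand-in for the trained denoising network: a k-by-k box filter.
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.zeros_like(img)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    noisy = render(samples_per_pixel=4)  # cheap, grainy image
    print(f"mean error: {np.abs(noisy - clean).mean():.3f} -> "
          f"{np.abs(box_denoise(noisy) - clean).mean():.3f}")

In Nvidia's version, the dumb filter is replaced by a network trained on what converged images look like, which is presumably how it can preserve edges and detail that a simple blur would smear.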
[Image: Using AI to denoise an image.]
Ray tracing isn't the only field that could benefit from AI. As shown above, it's possible to use AI to remove noise from an image, something that could be incredibly useful in the future when, for example, watching lower-quality video or playing games at a low resolution due to panel limitations.
[Image: AI can apparently also be used for antialiasing purposes.]
In fact, AI can also be used to perform antialiasing more accurately. If you follow the topic of AA at all, you're likely aware that every method of performing antialiasing (which translates to "smoothing out the jagged pixels that drive you nuts") has drawbacks. Supersampled antialiasing (SSAA) provides the best overall image quality, but it can render an image blurry depending on the sample grid, and it imposes a huge performance penalty. Multisample antialiasing (MSAA) reduces the performance impact, but doesn't fully supersample the entire image. Post-process methods of approximating AA, like FXAA or SMAA, are much less computationally expensive but also don't offer the same level of visual improvement. If Nvidia is right about using AI to generate AA, it could solve a problem that's vexed GPU hardware designers and software engineers for decades.
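To make the trade-off concrete, here is a toy supersampling example (an illustration, not Nvidia's method): a diagonal edge sampled once per pixel comes out as a hard stair-step, while averaging a 4x4 grid of samples per pixel produces the fractional coverage values that read as a smooth edge, at sixteen times the rendering cost. That cost is exactly what an AI that predicts the smoothed result could avoid.

    import numpy as np

    def coverage(x, y):
        # Toy "scene": a diagonal edge, 1.0 above the line y = 0.37x and 0.0 below it.
        return 1.0 if y > 0.37 * x else 0.0

    def render(width, height, scale=1):
        # Sample the scene at pixel centers, optionally at scale-times the resolution.
        img = np.empty((height * scale, width * scale))
        for j in range(height * scale):
            for i in range(width * scale):
                img[j, i] = coverage((i + 0.5) / scale, (j + 0.5) / scale)
        return img

    aliased = render(8, 8)  # one sample per pixel: hard, jagged edge
    # 4x4 supersampling: render at 32x32, then average each 4x4 block down to one pixel.
    ssaa = render(8, 8, scale=4).reshape(8, 4, 8, 4).mean(axis=(1, 3))
    print(aliased[1])        # [1. 1. 1. 1. 0. 0. 0. 0.]
    print(ssaa[1].round(2))  # fractional values where the edge crosses the row

The reshape-and-mean step is just ordered-grid box downsampling; real SSAA implementations vary the sample pattern, which is where the blur-versus-sharpness trade-off mentioned above comes from.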
