Nvidia's DLSS 5 promises to bring you out the other side of the uncanny valley

The latest generation of Nvidia’s AI image enhancer brings characters to life

by · The Register

GTC Computer graphics have come a long way from chasing Donkey Kong around a 2D board and fragging 3D demons in Doom. However, even with the most powerful graphics cards, human faces in games still look surreal and lifeless, with dead eyes, saran-wrap-smooth faces, and beards that blend into their chins. With Nvidia’s upcoming DLSS 5, you can play with characters that look like they’ve stepped off a movie screen – and we’re not talking about a Pixar movie either.

Announced at Nvidia’s GTC 2026 keynote and due out this fall, DLSS 5 takes an application’s existing 3D content, colors, and motion and uses AI to add photorealistic lighting. The AI understands what typical human elements such as skin, hair, and clothing should look like. Then, even from a single frame, it’s able to adjust the lighting and colors to make them all the more real.

Nvidia CEO Jensen Huang said on stage that the process DLSS 5 uses is called “neural rendering,” because it “fuses” AI and 3D graphics. DLSS (Deep Learning Super Sampling) debuted in 2018 on RTX 20 series cards. It uses AI to squeeze more performance out of GeForce GPUs by upscaling frames rendered at a lower native resolution, or by inserting additional AI-generated frames, instead of relying entirely on traditional rendering.
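To make the upscaling idea concrete, here is a minimal sketch. This is emphatically not how DLSS works internally – DLSS uses a trained neural network plus motion vectors, not a naive filter – but the shape of the problem is the same: a low-resolution frame goes in, a higher-resolution frame comes out, and the GPU saves time by rendering fewer pixels natively.

```python
# Illustrative only: nearest-neighbour upscaling stands in for the
# learned upscaler. DLSS replaces this step with a neural network.

def upscale_nearest(frame, factor):
    """Upscale a 2D frame (rows of pixel values) by an integer factor."""
    out = []
    for row in frame:
        wide = [px for px in row for _ in range(factor)]  # stretch each row
        out.extend([wide] * factor)                       # repeat rows down
    return out

low_res = [[1, 2],
           [3, 4]]
high_res = upscale_nearest(low_res, 2)  # 2x2 frame -> 4x4 frame
```

The GPU renders the small frame, and the upscaler fills in the rest – which is why DLSS can deliver a higher output resolution than was ever natively drawn.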

“About 10 years ago, we thought that AI would revolutionize computer graphics,” Nvidia CEO Huang noted before he showed a demo video of DLSS 5. “Just as GeForce brought AI to the world, AI is now going to go back and revolutionize how computer graphics is done altogether.”

In a demo video, Nvidia showed a series of games with characters staring directly at the camera and a transition from DLSS 5 off to DLSS 5 on. In the first example, a blonde-haired female character stares at the viewer in Resident Evil: Requiem. With DLSS off, it’s obvious that she’s computer-generated, from her lifeless eyes to her flat, poreless skin and nearly colorless lips. Her face falls squarely into the uncanny valley, that realm of perception where computer-generated characters are semi-realistic, but just "off" enough to feel deeply disturbing.

With DLSS 5 turned on, the woman looks like she stepped from a photo – perhaps an AI-generated photo, but one of the better ones. Now, we’re on the other side of the uncanny valley. If you look carefully, you can tell that she’s computer animated, but her face won’t disgust and frighten you. It’s very close to looking real.

DLSS on and off - Click to enlarge

In another scene, this time from Starfield, a female character’s face looks completely lifeless and fake, her dead pupils staring into your soul and trying to extract it.

DLSS 5 off in Starfield - Click to enlarge

However, with DLSS 5 turned on, she comes to life, with real lines on her face, pupils that accurately reflect light, and even eyebrows that seem like they were plucked from a human. There are some fine lines and discolorations on her skin, along with bubbling in her down jacket and leather vest. Imperfections make it believable.

DLSS 5 on in Starfield - Click to enlarge

In another example, we see a soccer player in EA Sports FC. It’s easy to make out the pixels in his face. Blotchy skin on his left cheek looks like a uniform patch of pixels and it’s difficult to tell his beard from his chin skin.

DLSS 5 off in EA Sports FC - Click to enlarge

However, with DLSS 5 on, we can clearly see the imperfections that make someone human. There’s a dark patch on his left cheek, along with some prominent lines in his forehead. We can clearly make out the difference between his facial hair and shadows on his skin.

DLSS 5 on in EA Sports FC - Click to enlarge

If you’ve watched some of the best recent special effects in Hollywood movies or tried out one of the many text-to-video AI generators, you might wonder why games don’t already have photorealistic images. The advantage these other media have is time. According to Nvidia, a game frame might need to be ready in just 16 milliseconds, whereas movie VFX can take hours to render. Even AI generators like Sora 2 have some delay while you wait for them to process.
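The gap between those two budgets is worth spelling out. A quick back-of-the-envelope calculation from the figures above (the two-hour offline render time is an assumed example – Nvidia only said “hours”):

```python
# Frame-budget arithmetic from the figures quoted in the article.

GAME_FRAME_MS = 16                  # real-time budget per game frame
game_fps = 1000 / GAME_FRAME_MS     # frames per second that budget allows

VFX_FRAME_HOURS = 2                 # assumed example; the article says "hours"
vfx_frame_ms = VFX_FRAME_HOURS * 3600 * 1000

# How many times longer an offline VFX frame takes than the game budget.
speed_gap = vfx_frame_ms / GAME_FRAME_MS
```

A 16 ms budget works out to 62.5 frames per second, while a two-hour offline frame takes 450,000 times longer – which is why neural rendering has to do its relighting in real time rather than borrowing Hollywood’s pipeline.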

“We combined 3D graphics, structured data, with generative AI, probabilistic computing,” Huang said. “One of them is completely predictive, the other one probabilistic yet highly realistic. We combined these two ideas. Controlled through structured data, controlled perfectly, and yet generating at the same time and, as a result, the content is beautiful, amazing, as well as controllable.”

As with earlier versions of DLSS, game developers will have to integrate the technology into their titles before players can take advantage of it. In a press release, Nvidia said that it already has major publishers such as Bethesda, Capcom, NCSoft, Tencent, and Warner Brothers Games onboard. In the demo video, we saw clips from Resident Evil: Requiem, Hogwarts Legacy, Starfield, and EA Sports FC.

So what hardware is this going to take? Nvidia told us the early preview demo shown at GTC ran on two GeForce RTX 5090s. One was dedicated to rendering the game while the other was dedicated to running the DLSS 5 model, but DLSS 5 will run on a single GPU at release.

Unfortunately, for gamers that’s about all Huang wrote. GTC is no longer a conference for gamers or PC enthusiasts, but one about AI. ®