Photorealistic Graphics: The Future Looks Just Like Real Life

Thomas De Moor
Published in OneBonsai · Aug 24, 2018


“Let there be light,” God said: and there was light.
And God saw the light, and it was good.

In what’s perhaps a statement of the obvious: without light, we wouldn’t be able to see. Of our five senses, we rely most heavily on vision to construct our perception of reality. So it’s safe to say that light is important. It’s the raw data that passes through our pupils into our brains, where it’s translated into useful information that we use to navigate the complex world in front of us.

In our attempts to create something that imitates real life, lighting plays a vital role. Anyone who has ever picked up a pencil and spent an hour trying to realistically draw an apple will understand this.

As society transitioned from working with its hands in the real world to typing on keyboards in a digital one, the question of how to accurately light a 3D object on a 2D screen has received plenty of attention. That’s because photorealism is the Holy Grail of computer graphics, and lighting plays an enormous role in achieving it. Here’s a simple example:

There’s no lighting on the left. On the right, there’s a single light and a lightmap that stores the brightness of the surfaces. Just that single light makes the game look much more realistic. But while adding lights in a game is relatively easy, and something we’ve been doing for decades, getting light to bounce off, reflect, refract, and move around realistically is a much more difficult problem to solve.
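To make the lightmap idea concrete, here’s a minimal sketch in Python (the texture values and the shade function are illustrative assumptions, not code from any particular engine). The expensive part, working out how much light reaches each patch of a surface, happens once, offline; at draw time the renderer just looks the result up:

```python
# A minimal sketch of how a lightmap works (illustrative, not engine code).
# Brightness is precomputed once, stored in a texture, and simply looked up
# and multiplied with the surface colour at draw time.

lightmap = [
    [0.2, 0.5, 0.9],  # precomputed brightness per texel (0 = dark, 1 = fully lit)
    [0.1, 0.4, 0.8],
]

def shade(albedo, u, v):
    """Return the final colour of a surface point with texture coords (u, v)."""
    tex_y = int(v * (len(lightmap) - 1))
    tex_x = int(u * (len(lightmap[0]) - 1))
    brightness = lightmap[tex_y][tex_x]  # a cheap lookup, no light math at runtime
    return tuple(channel * brightness for channel in albedo)

print(shade((1.0, 0.5, 0.2), u=1.0, v=0.0))  # brightly lit texel -> (0.9, 0.45, 0.18)
```

The catch, of course, is that a lookup table can only store what was computed in advance, which is why baked lighting struggles with moving lights and moving objects.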

The forefront of photorealistic graphics today is the computer-generated imagery (CGI) used in movies. If you’ve seen the (fantastic) 2018 movie Avengers: Infinity War, you could reasonably argue that CGI is pretty much photorealistic already. The effects blend in seamlessly with the live-action footage around them, and that’s because big-budget movies use a lighting technique called ray tracing for their CGI.

Ray tracing calculates the path of a beam of light backwards from your eye (or viewpoint) to the objects that the light interacted with. It’s what makes reflections, shadows, and refractions look extremely realistic.
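Here’s a bare-bones sketch of that process in Python (the sphere-only scene, the names, and the numbers are illustrative assumptions, not production renderer code). A primary ray is cast from the viewpoint into the scene; from the nearest hit point, a secondary “shadow ray” is cast toward the light to decide whether the point is lit:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Distance along a (normalized) ray to a sphere, or None if it misses."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def trace(origin, direction, spheres, light_pos):
    """Follow one ray backwards from the eye into the scene."""
    hits = [(hit_sphere(origin, direction, c, r), c, r) for c, r in spheres]
    hits = [(t, c, r) for t, c, r in hits if t is not None]
    if not hits:
        return "background"
    t, center, radius = min(hits)  # the nearest intersection wins
    point = [o + t * d for o, d in zip(origin, direction)]
    # Shadow ray: does anything block the path from the hit point to the light?
    to_light = [l - p for l, p in zip(light_pos, point)]
    norm = math.sqrt(sum(x * x for x in to_light))
    to_light = [x / norm for x in to_light]
    offset = [p + 1e-4 * d for p, d in zip(point, to_light)]  # avoid self-hits
    for c, r in spheres:
        if hit_sphere(offset, to_light, c, r) is not None:
            return "in shadow"
    return "lit"

spheres = [((0, 0, -5), 1.0), ((0, -0.5, -3), 0.3)]  # (center, radius) pairs
print(trace((0, 0, 0), (0, 0, -1), spheres, light_pos=(5, 5, 0)))  # -> "lit"
```

A real renderer repeats this for every pixel and keeps going: reflected rays, refracted rays, and many bounces per ray, which is exactly where the enormous computational cost comes from.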

But here’s the catch: ray tracing is so computationally intensive that rendering (a fancy word for generating the final image) takes hours, days, or even weeks. And we’re not talking about a single computer here. These images are rendered by groups of high-performance networked computers endearingly called “render farms.”

Despite those computational resources, for movies it’s only the budget that dictates how long frames can take to generate. A movie is made first and shown later. It doesn’t really matter that an image takes seven hours to render when the movie won’t be released for another nine months. Nothing needs to render in real time.
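A quick back-of-envelope calculation shows the scale involved (the numbers below are illustrative assumptions, not actual production figures):

```python
# Back-of-envelope: why movie rendering doesn't need to be real-time.
# All numbers are illustrative assumptions, not real production figures.

frames = 2 * 60 * 60 * 24     # a 2-hour movie at 24 frames per second
hours_per_frame = 7           # the render time from the example above
machines = 1_000              # a hypothetical render farm

total_hours = frames * hours_per_frame
print(f"{frames:,} frames x {hours_per_frame} h = {total_hours:,} machine-hours")
print(f"On {machines:,} machines: about {total_hours / machines / 24:.0f} days")
```

Even spread across a thousand machines, that hypothetical movie takes weeks of wall-clock time to render, which is fine when the release is months away.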

Computer games and VR/AR applications are different: they have to react to the player instantly, so they don’t use ray tracing. Instead, they use a technique called rasterization.

Rasterization is a trick. It fools the eye. It turns lots of triangles, each carrying data points with information such as colour and depth, into a 2D image displayed as pixels. This requires far fewer computing resources than ray tracing does, and it’s fast enough to render in real time.
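Here’s a toy rasterizer in Python that shows how simple the core trick is (the edge-function test is one common way to do it; the triangle coordinates and “resolution” are made up). Each pixel centre is tested against the triangle’s three edges, and pixels that pass all three tests get filled in:

```python
# A toy rasterizer (illustrative): three "edge function" tests per pixel
# decide whether the pixel centre falls inside the triangle. Nothing about
# light transport is simulated; the triangle is simply filled in.

def edge(ax, ay, bx, by, px, py):
    """Signed area test: positive if point P lies to one side of edge A->B."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(triangle, width, height):
    (ax, ay), (bx, by), (cx, cy) = triangle
    for y in range(height):
        row = ""
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel centre
            inside = (edge(ax, ay, bx, by, px, py) >= 0 and
                      edge(bx, by, cx, cy, px, py) >= 0 and
                      edge(cx, cy, ax, ay, px, py) >= 0)
            row += "#" if inside else "."
        print(row)

rasterize([(1, 1), (18, 3), (8, 9)], width=20, height=10)  # prints an ASCII triangle
```

GPUs run millions of these per-pixel tests in parallel, which is why rasterization is fast enough for real time; what it doesn’t do is follow light around the scene.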

And we’ve become increasingly sophisticated at rasterizing images. In fact, techniques such as light baking (precomputing lighting ahead of time) have made rasterized graphics look pretty realistic. But you would still know it’s not real. You wouldn’t be able to put your finger on it, but something would be off. Here’s an example:

Although this image is a few years old and we’ve become better at rasterizing since, it nicely shows the difference between rasterization and ray tracing. In the rasterized image, the cup of tea and the teapot seem to float on the table. In the ray-traced image, they sit firmly on the table and seem to have real weight. Shadows are soft, light interacts realistically with objects, and area lighting is rendered properly.

So that’s the state of lighting technology today: movies use ray tracing for their CGI, while computer games and VR/AR applications use rasterization, because the technology for real-time ray tracing isn’t there yet.

Except that it is.

American technology company Nvidia has been working on new graphics cards powered by its new Turing architecture: the GeForce RTX 2080 Ti and the GeForce RTX 2080. Both cards have dedicated cores for ray tracing, and a single graphics card will now be powerful enough to render ray-traced images in real time.

RTX not enabled vs. RTX enabled, as shown during Nvidia’s keynote at Gamescom 2018.

The first image has hard shadows and lacks realistic transparency. Although it’s possible to improve this with various techniques, Nvidia’s new RTX technology means this kind of lighting can now be improved significantly. The second image shows notably softer and more realistic shadows, with a proper umbra, as well as more realistic refraction and (dynamic) global illumination.
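One reason ray tracing gets those softer shadows right: a real light source has area, so from a point near a shadow’s edge, some shadow rays reach the light and some don’t. Here’s a 2D toy sketch of that idea in Python (the geometry and the numbers are purely illustrative):

```python
import math
import random

def blocked(p, q, center, radius):
    """Does the segment p->q pass through a circular blocker?"""
    dx, dy = q[0] - p[0], q[1] - p[1]
    length_sq = dx * dx + dy * dy
    # Closest point on the segment to the blocker's centre.
    t = max(0.0, min(1.0, ((center[0] - p[0]) * dx + (center[1] - p[1]) * dy) / length_sq))
    cx, cy = p[0] + t * dx, p[1] + t * dy
    return math.hypot(cx - center[0], cy - center[1]) < radius

def shadow_factor(point, light_center, light_radius, blocker, samples=256):
    """Fire shadow rays at random points on an area light; return the lit fraction."""
    lit = 0
    for _ in range(samples):
        # Pick a random point along the (line-segment) area light.
        target = (light_center[0] + random.uniform(-light_radius, light_radius),
                  light_center[1])
        if not blocked(point, target, *blocker):
            lit += 1
    return lit / samples  # 0.0 = umbra, 1.0 = fully lit, in between = penumbra

light_center = (0.0, 10.0)   # an area light 10 units above the floor
blocker = ((0.0, 5.0), 1.0)  # a disc halfway between light and floor
for x in (0.0, 1.5, 2.5, 4.0):
    f = shadow_factor((x, 0.0), light_center, light_radius=1.0, blocker=blocker)
    print(f"floor point x={x}: {f:.2f} lit")  # roughly 0.00, 0.25, 0.75, 1.00
```

The fully dark core (the umbra) is where no shadow ray gets through; the gradient around it (the penumbra) is where only some do. Rasterization has no rays to count, so it has to fake this effect instead.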

This might seem like another boring technological innovation, but it’s a huge leap towards photorealistic images in computer games and anything else that requires an immersive experience. Nvidia released a video to show what that looks like. It’s impressive.

To be clear, the above isn’t a cinematic screenshot that took days to render on a render farm. No, this is what real-time computer graphics will look like on graphics cards built on Nvidia’s Turing architecture.

Nvidia’s CEO Jensen Huang said in his Gamescom keynote that the Turing architecture allows GPUs to perform six or seven times faster than GPUs based on the previous Pascal architecture. That’s a significant leap forward, and it’s owed in large part to the dedicated hardware in the new cards.

Of course, no technology is useful if it’s not used. Luckily, Nvidia’s new architecture has already received support from the developers of key graphics engines such as Unreal Engine and Unity, which millions of designers, artists, and scientists use to create their games and applications.

The implications of these developments for VR and AR are big. VR is the most immersive digital experience possible right now. But like anything else you want to immerse yourself in, it requires the suspension of disbelief. You inhabit a new world, and you know it’s not real, because it doesn’t look quite real, but you’re willing to immerse yourself in it anyway.

A VR experience with real-time ray tracing will immerse you like no medium has ever done before. It will create a photorealistic world that looks indistinguishable from real life. The need to suspend disbelief will no longer come from the graphics, but from slightly unrealistic animations, or simply from the fact that you’re still holding a controller or using a keyboard instead of actually holding that sword.

For businesses, this means VR applications will become even more effective. A VR flight simulator will more accurately reflect the light inside and outside the pilot’s cockpit. A VR travel experience will show the Northern Lights reflecting off a frozen lake. A VR disaster-management simulation will accurately portray fire and its effect on the environment.

My point is that ray-traced graphics will narrow the gap between what’s real and what’s not. They will allow for deeper immersion and better business applications of VR, because developers will be less constrained by hardware and can focus on other areas of development (such as gameplay). It might take a year or two before these graphics cards become affordable enough, and supported by enough applications and games, to go mainstream, but real-time ray tracing has finally arrived. For business applications, that already opens up a lot of exciting possibilities.

The future is here, and it looks just like real life.


Creator of Wormhole Stories. Writes interactive fiction at the intersection of storytelling, technology, and community.