Slide 5 of 52
emmurphy

The shadow is missing in the rasterized image.

diegom

I remember that when we talked about the Marbles at Night demo that NVIDIA just released, Prof. Keenan mentioned that it wasn't entirely ray-traced, but rather combined rasterization with ray tracing. How does that work? Do you rasterize the whole scene and only cast a select number of rays? If so, how do you decide where to cast these rays?

bepis

The image on the left is also missing reflections of nearby objects. It makes sense that the rasterizer can't determine the placement of shadows and reflections while the ray tracer can, since these effects can only be determined with ray intersection queries.
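For example, here's a minimal sketch (my own toy types, not course code) of the kind of visibility query a ray tracer can ask to decide whether a point is in shadow. A rasterizer only ever sees one primitive at a time, so it has no direct way to pose this question about the whole scene:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float len(Vec3 v)         { return std::sqrt(dot(v, v)); }
static Vec3  scale(Vec3 v, float s) { return {v.x*s, v.y*s, v.z*s}; }

struct Sphere { Vec3 center; float radius; };

// True if the ray o + t*d hits the sphere for some t in (t_min, t_max).
static bool hit_sphere(const Sphere& s, Vec3 o, Vec3 d, float t_min, float t_max) {
    Vec3 oc = sub(o, s.center);
    float a = dot(d, d), b = 2.f * dot(oc, d), c = dot(oc, oc) - s.radius * s.radius;
    float disc = b*b - 4.f*a*c;
    if (disc < 0.f) return false;
    float sq = std::sqrt(disc);
    float t0 = (-b - sq) / (2.f*a), t1 = (-b + sq) / (2.f*a);
    return (t0 > t_min && t0 < t_max) || (t1 > t_min && t1 < t_max);
}

// Shadow query: is ANY occluder between point p and the light? This is an
// "any hit" test -- we don't care which occluder is closest, only whether
// the segment from p to the light is blocked.
bool in_shadow(const std::vector<Sphere>& scene, Vec3 p, Vec3 light_pos) {
    Vec3 to_light = sub(light_pos, p);
    float dist = len(to_light);
    Vec3 dir = scale(to_light, 1.f / dist);            // normalize
    for (const Sphere& s : scene)
        if (hit_sphere(s, p, dir, 1e-3f /* bias */, dist)) return true;
    return false;
}
```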

Isaaz

I guess real-time ray tracing is only used for the effects rasterization can't easily get right: indirect lighting, reflections, refraction, etc.

emmaloool

@diegom I found an article on the new technique NVIDIA used for that demo, ReSTIR: https://blogs.nvidia.com/blog/2020/11/02/marbles-at-night/. It looks like it uses spatiotemporal reservoir resampling (https://research.nvidia.com/sites/default/files/pubs/2020-07_Spatiotemporal-reservoir-resampling/ReSTIR.pdf).
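From skimming the paper, the core idea seems to be streaming candidate light samples through a small per-pixel reservoir and then reusing reservoirs across frames and neighboring pixels (that's the "spatiotemporal" part). A rough sketch of just the reservoir update, with my own names loosely following the paper's pseudocode (definitely not NVIDIA's actual code):

```cpp
#include <random>

struct LightSample { /* e.g. a position on a light, its emitted radiance, ... */ int id = -1; };

struct Reservoir {
    LightSample y;      // the sample currently held
    float w_sum = 0.f;  // running sum of the weights seen so far
    int   M     = 0;    // number of candidates streamed through

    // Stream one candidate x with weight w; keep it with probability w / w_sum.
    void update(const LightSample& x, float w, std::mt19937& rng) {
        w_sum += w;
        M += 1;
        std::uniform_real_distribution<float> u(0.f, 1.f);
        if (w_sum > 0.f && u(rng) < w / w_sum) y = x;
    }
};
```

Combining two pixels' reservoirs is basically the same update applied to each reservoir's held sample, which is what makes the temporal and spatial reuse cheap.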

0x484884

Are video games pretty much completely rasterized? I've noticed that they try to avoid having glass, like in windows, but it's getting more popular. Since in a game you're probably only looking through a couple of windows at most, it seems like we could use some tricks with rasterization that wouldn't work for an arbitrary number of overlapping transparent objects.
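For what it's worth, I think the usual trick is to rasterize all the opaque geometry first, then sort the few transparent surfaces back to front and alpha-blend them over the framebuffer. A rough sketch of that per-pixel compositing with made-up types (just my guess at the general technique, not how any particular engine does it):

```cpp
#include <algorithm>
#include <vector>

struct RGBA { float r, g, b, a; };

// One transparent surface already shaded at a given pixel.
struct TransparentFragment { float depth; RGBA color; };

// Standard (non-premultiplied) "over" compositing: src over dst.
static RGBA over(RGBA src, RGBA dst) {
    float ia = 1.f - src.a;
    return { src.r * src.a + dst.r * ia,
             src.g * src.a + dst.g * ia,
             src.b * src.a + dst.b * ia,
             src.a + dst.a * ia };
}

// Blend the transparent fragments over the opaque background at one pixel,
// farthest first (back to front).
RGBA blend_transparent(std::vector<TransparentFragment> frags, RGBA background) {
    std::sort(frags.begin(), frags.end(),
              [](const TransparentFragment& a, const TransparentFragment& b) {
                  return a.depth > b.depth;   // larger depth = farther away
              });
    RGBA result = background;
    for (const TransparentFragment& f : frags) result = over(f.color, result);
    return result;
}
```

That works fine for a couple of windows, but with many mutually overlapping or intersecting transparent objects a single per-object sort no longer gives a correct order, which I'd guess is one reason games historically kept glass to a minimum.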

Arthas007

Reflections are also missing in the image on the left.

idontknow

At what point does rasterization turn into ray tracing? It seems like the little tricks used to create things such as shadows and reflections in rasterization are very similar to their ray tracing counterparts (shadow polygons vs. shadow rays, for example). And the slide mentions that rasterization can achieve effects similar to the one on the right.

pw123

Was what we did on the midterm a combination of both techniques?

keenan

@idontknow For very simple scenes (or crazy "GPGPU"-type tricks...) it may be possible to produce identical results with rasterization and ray tracing. Even then, rasterization and ray tracing will not be equivalent as algorithms: they will process primitives in a different order, resulting in very different performance characteristics. That being said, rasterization can be used, via a variety of tricks, to generate images that look very realistic/pleasing to the human eye. A big part of graphics is understanding where physical accuracy is needed, vs. where an approximation does as well (or better) at conveying the intended information.
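To make the "different order" concrete, here is a heavily simplified sketch (placeholder types and stand-in coverage/intersection tests, not real renderer code) of the two loop structures. Both answer "which primitive is visible at which pixel?", but rasterization streams primitives in the outer loop and resolves visibility with a depth buffer, while ray tracing streams pixels/rays in the outer loop and resolves visibility by finding the closest intersection:

```cpp
#include <limits>
#include <vector>

struct Prim { int id; };

// Stand-ins for the real coverage / intersection tests:
static bool  covers(const Prim&, int, int)       { return false; }
static float hit_distance(const Prim&, int, int) { return std::numeric_limits<float>::infinity(); } // inf = miss

void rasterize(const std::vector<Prim>& prims, int w, int h) {
    // Outer loop over PRIMITIVES; inner loop over pixels (in practice, only
    // those inside the primitive's screen bounding box). A z-buffer keeps the
    // closest surface seen so far at each pixel.
    for (const Prim& p : prims)
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                if (covers(p, x, y)) { /* depth-test and shade pixel (x, y) */ }
}

void ray_trace(const std::vector<Prim>& prims, int w, int h) {
    // Outer loop over PIXELS (rays); inner loop over primitives, keeping the
    // closest hit. A BVH replaces this linear scan in practice.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float closest = std::numeric_limits<float>::infinity();
            const Prim* hit = nullptr;
            for (const Prim& p : prims) {
                float t = hit_distance(p, x, y);
                if (t < closest) { closest = t; hit = &p; }
            }
            if (hit) { /* shade pixel (x, y) with the closest primitive */ }
        }
}
```

The opposite orders lead to very different memory access patterns and acceleration structures (z-buffer vs. BVH), which is a big part of why the performance characteristics differ even when the images match.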

keenan

@pw123 Yep! One of the questions on the midterm used ray tracing in a very basic way within a rasterizer to draw spheres. You could easily take this a step further and, say, bounce those rays off the spheres to render a perfect specular reflection of an environment map. But going much further than that starts to get tricky: suppose you want spheres reflected in other spheres (say). Then at rasterization time, you need to somehow know all the other spheres in the scene, so that you can detect the closest one hit by a reflection. Now you need data structures that start to look a lot more like a full-blown ray tracer, e.g., a BVH (or at least a flat list!) of all the other spheres.
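Here's a rough sketch (toy types, not the midterm code) of what that "flat list" query looks like: reflect the view direction at a point on one sphere, then scan all the other spheres and keep the closest hit.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  scale(Vec3 v, float s) { return {v.x*s, v.y*s, v.z*s}; }
static Vec3  reflect(Vec3 d, Vec3 n) {           // mirror d about the unit normal n
    return sub(d, scale(n, 2.f * dot(d, n)));
}

struct Sphere { Vec3 center; float radius; };

// Nearest positive intersection distance of ray o + t*d with sphere s, or -1 on a miss.
static float intersect(const Sphere& s, Vec3 o, Vec3 d) {
    Vec3 oc = sub(o, s.center);
    float a = dot(d, d), b = 2.f * dot(oc, d), c = dot(oc, oc) - s.radius * s.radius;
    float disc = b*b - 4.f*a*c;
    if (disc < 0.f) return -1.f;
    float sq = std::sqrt(disc);
    float t = (-b - sq) / (2.f * a);
    if (t > 1e-3f) return t;
    t = (-b + sq) / (2.f * a);
    return (t > 1e-3f) ? t : -1.f;
}

// At a point p on sphere `self` with unit normal n, find which OTHER sphere the
// reflected view ray hits first (-1 means it escapes to the environment map).
int closest_reflected_sphere(const std::vector<Sphere>& spheres, int self,
                             Vec3 p, Vec3 n, Vec3 view_dir) {
    Vec3 r = reflect(view_dir, n);
    int closest = -1;
    float t_closest = std::numeric_limits<float>::infinity();
    for (int i = 0; i < (int)spheres.size(); ++i) {
        if (i == self) continue;                 // skip the sphere we're shading
        float t = intersect(spheres[i], p, r);
        if (t > 0.f && t < t_closest) { t_closest = t; closest = i; }
    }
    return closest;
}
```

The linear scan is exactly what a BVH would replace once the sphere count gets large; the query itself (closest hit along a ray) is the same either way.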