L100magikarp

I'm not sure I understand how we can represent the precomputed lighting and shadows. What exactly is the environment map, and how does it get used to light the scene?

Shadows seem like an interesting challenge, since you would need to model the ray that the light travels along. And if the light ray intersects a non-Lambertian surface like the glossy tabletop, you also need to model the reflections.

keenan

@L100magikarp For the ambient occlusion map, you precompute, at each surface point, the fraction of directions in the hemisphere above that point which are unoccluded, and store these values in a texture map. Visibility is determined (often) using ray tracing, which is slow, but since this is a precomputation it's not a big deal. When you go to rasterize the model, you sample a value from the ambient occlusion map and multiply it by whatever other values you're using for color/shading (e.g., texture, directional lighting, etc.). So, at rasterization time, no visibility checks are needed. Of course, this doesn't properly account for deforming geometry, or for other objects in the scene that may cast shadows.
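To make the baking step concrete, here's a rough, self-contained sketch (not from the lecture): it estimates the unoccluded fraction of the hemisphere by shooting random rays, with spheres as the only occluders so the visibility query stays tiny. A real baker would trace rays against the full mesh and typically use cosine-weighted samples; all names and the scene here are illustrative.

```python
import math
import random

def ray_hits_sphere(o, d, center, radius):
    """True if the ray o + t*d (d unit length, t > eps) hits the sphere."""
    oc = tuple(o[i] - center[i] for i in range(3))
    b = sum(oc[i] * d[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - c
    return disc >= 0 and (-b - math.sqrt(disc)) > 1e-4

def sample_hemisphere(n):
    """Uniform random direction over the hemisphere around unit normal n
    (rejection sampling from the unit ball)."""
    while True:
        d = tuple(random.uniform(-1, 1) for _ in range(3))
        l2 = sum(x * x for x in d)
        if 0 < l2 <= 1:
            inv = 1 / math.sqrt(l2)
            d = tuple(x * inv for x in d)
            # flip into the hemisphere on the normal's side
            return d if sum(d[i] * n[i] for i in range(3)) > 0 else tuple(-x for x in d)

def bake_ao(p, n, occluders, num_samples=1024):
    """Fraction of unoccluded directions above p -- the value you'd store
    in the ambient-occlusion texture for this surface point."""
    visible = 0
    for _ in range(num_samples):
        d = sample_hemisphere(n)
        if not any(ray_hits_sphere(p, d, c, r) for c, r in occluders):
            visible += 1
    return visible / num_samples

# A sphere hovering above a ground point blocks ~13% of its hemisphere:
print(bake_ao(p=(0, 0, 0), n=(0, 0, 1), occluders=[((0, 0, 1), 0.5)]))  # ~0.87
```

At run time the shader then does no ray casting at all, just something like `color = albedo * lighting * ao_map.sample(u, v)`.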

The simplest application of an environment map is to add a perfectly shiny "specular" reflection. In short, for each screen sample you know the direction $E$ from the sample to the eye and the normal direction $N$, from which you can compute the mirror direction $R = 2(N \cdot E)\,N - E$, i.e., the eye direction reflected about the normal. You then use $R$ to look up into the environment map, but not in the usual way: since $R$ is a unit vector, you can think of it as a point on the sphere, and map the three components $(x,y,z)$ to some point $(u,v)$ on a 2D projection of the sphere (as seen in the "Grace Cathedral" image). Adding some fraction of this color value to your shader gives the appearance of reflecting the lighting in the environment. A nice alternative is to use a cube map, which does a better job of distributing resolution over all directions and is perhaps somewhat easier to look up into.
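In code, the lookup boils down to one reflection and one change of coordinates. Below is a small sketch assuming a latitude-longitude ("Grace Cathedral"-style) parameterization; the exact $(u,v)$ convention varies, and `env_map.sample` stands in for whatever bilinear texture fetch you have.

```python
import math

def reflect(e, n):
    """Mirror the unit eye direction e about the unit normal n:
    R = 2 (N . E) N - E."""
    k = 2 * sum(e[i] * n[i] for i in range(3))
    return tuple(k * n[i] - e[i] for i in range(3))

def direction_to_latlong_uv(r):
    """Map a unit direction (x, y, z) to (u, v) in [0,1]^2."""
    x, y, z = r
    u = 0.5 + math.atan2(y, x) / (2 * math.pi)       # longitude
    v = math.acos(max(-1.0, min(1.0, z))) / math.pi  # colatitude
    return u, v

# Per screen sample:
#   R = reflect(E, N)
#   u, v = direction_to_latlong_uv(R)
#   color += reflectivity * env_map.sample(u, v)
```

With a cube map you'd instead pick the face from the largest-magnitude component of $R$ and divide the other two components by it, which avoids the heavy distortion near the poles of the lat-long map.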

From there, all sorts of techniques cache illumination and quickly apply it at run time; a now-classic one is precomputed radiance transfer (PRT). More recently, people have been directly training neural networks to provide illumination information, e.g., with neural radiance fields (NeRFs).
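To give a flavor of why PRT is cheap at run time: for diffuse transfer, the precomputed per-vertex transfer function (visibility times cosine) and the environment lighting are both projected onto a spherical harmonics basis, and shading collapses to an inner product of the two coefficient vectors. A minimal sketch, assuming the coefficients were already baked offline:

```python
def shade_prt_diffuse(transfer_coeffs, light_coeffs):
    """Run-time PRT shading for one vertex and one color channel:
    just a dot product of precomputed SH coefficient vectors."""
    return sum(t * l for t, l in zip(transfer_coeffs, light_coeffs))

# With order-2 spherical harmonics each vector has 9 entries, so shading a
# vertex costs 9 multiply-adds, independent of how complicated the scene is.
```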