brandino

I understand that we have one ray for every pixel on screen, but how exactly are these rays positioned? Do they all have their origins at the "camera" and each pass through their respective pixel? Or do they each originate at the pixels themselves?

Heisenberg

My guess is that it shoots out rays to the center of every pixel on the near plane.

I have a question here. For diffuse reflection, does it mean that it shoots out several rays at the hit point?

keenan

@brandino It's exactly our earlier model of a pinhole camera: you can imagine your image is on the back of a cardboard box with a hole poked through the opposite side. Which ray of light hits each point of the image? It can only be the ray through the hole.

In graphics, for simplicity, you can flip things around and imagine the camera sits at the origin, and that the image plane is a distance 1 away (say) along the -z axis. If you imagine putting a regular WxH grid on this image plane, you can easily calculate the location of the center of each pixel in space. The rays you trace are indeed then just rays from the origin through the pixel centers.
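
To make that concrete, here's a minimal sketch of generating a ray through each pixel center, assuming (as above) a camera at the origin looking down the -z axis with the image plane at distance 1. The names here (Vec3, Ray, the vfov parameter) are just illustrative, not taken from any particular renderer:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

// Generate the ray through the center of pixel (px, py) in a W x H image.
Ray generate_ray(int px, int py, int W, int H, float vfov_radians) {
    // Half-height of the image plane at distance 1, from the vertical field of view.
    float half_h = std::tan(vfov_radians * 0.5f);
    float half_w = half_h * (float)W / (float)H;   // preserve aspect ratio

    // Offset by 0.5 so we shoot through the middle of each pixel.
    float u = ((px + 0.5f) / W) * 2.0f - 1.0f;     // in [-1, 1], left to right
    float v = 1.0f - ((py + 0.5f) / H) * 2.0f;     // flip so +v points up

    Vec3 dir = { u * half_w, v * half_h, -1.0f };  // point on the image plane
    // (Normalizing dir is omitted here; many renderers normalize at this point.)
    return Ray{ Vec3{0.0f, 0.0f, 0.0f}, dir };
}
```

Looping this over all (px, py) gives you exactly one primary ray per pixel, all sharing the camera origin.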

...and, as with everything in life, you can make things more complicated. For instance, you could use a more realistic camera model (involving mirrors, lenses, etc.). But the basic pinhole model is the starting point for most renderers.

tarangs

The pinhole camera model seems like the perfect way to render scenes, especially when it's a single screen we are targeting, since we want the result to look like something captured by a camera (movies, games played on a screen). In other applications, like VR or AR, how does the ray generation process change?

Just thinking aloud here: In VR it seems like we could do it with two pinhole cameras (placed like human eyes), but I don't know if that allows for rendering things in the peripheral vision of human perception. And for AR it seems like the rays would probably depend on the AR user's perspective in some ways (like the pose of the screen or eyeglasses they use as the AR interface). Could this be how ray tracing works in VR/AR?
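
A rough sketch of that two-camera idea (purely illustrative, not how any actual VR SDK generates rays): render one image per eye, offsetting each eye's pinhole origin by half the interpupillary distance. The IPD value and names below are assumptions for the sketch:

```cpp
struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

// eye = -1 for the left eye, +1 for the right eye.
// pixel_dir would come from a per-eye pinhole model like the one above.
Ray generate_eye_ray(int eye, Vec3 pixel_dir) {
    const float ipd = 0.064f;                        // ~64 mm, a typical adult IPD (illustrative)
    Vec3 origin = { eye * ipd * 0.5f, 0.0f, 0.0f };  // shift the pinhole along the eye axis
    return Ray{ origin, pixel_dir };
}
```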

rgrao

@tarangs, for VR they do have some newer technologies like foveated rendering, which tracks where each eye is focusing and tries to do something similar to what we do with spatial data structures: allocate more compute/rendering to whatever objects in the scene we are focusing on, and reduce the compute spent on objects in the periphery. I'm not entirely sure which headsets have this implemented, but it was definitely cutting-edge research about 3-4 years ago, and probably still is today.

Edit: Facebook's Oculus says they provide foveated rendering APIs for developers as of 2019. Their Chief Scientist also said they had used it and published work based on it. https://en.wikipedia.org/wiki/Foveated_rendering
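
As a very rough illustration of the sample-allocation idea behind foveated rendering (the radii and sample counts here are made up, not values from any headset's API): spend more rays per pixel near the tracked gaze point and fewer in the periphery.

```cpp
#include <cmath>

// Decide how many primary rays to trace for pixel (px, py),
// given the gaze point reported by eye tracking (in pixel coordinates).
int samples_for_pixel(int px, int py, int gaze_x, int gaze_y) {
    float dx = float(px - gaze_x);
    float dy = float(py - gaze_y);
    float dist = std::sqrt(dx * dx + dy * dy);   // distance from the gaze point, in pixels

    if (dist < 200.0f) return 16;   // foveal region: full quality
    if (dist < 600.0f) return 4;    // mid-periphery: reduced sampling
    return 1;                       // far periphery: minimal sampling
}
```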