Slide 44 of 53
lucida

shouldn't we do the depth test before going through the trouble of computing color values at covered pixels?

kayvonf

@lucida. Good idea! That's a common optimization in many GPUs, often called "early-Z" or early depth testing. However, it should be noted that the fragment program that computes the fragment's color is also allowed to programmatically change the fragment's depth, so the optimization you propose is valid only when the fragment program does not modify depth.
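A toy CPU sketch of the ordering issue (Python, with hypothetical names like process_fragment — this is an illustration, not any real GPU API): early depth testing rejects occluded fragments before the expensive shading step, but that reordering is only safe when the shader leaves depth untouched.

```python
# Toy model of per-fragment processing in a rasterizer.
# All names here (process_fragment, frag dict keys) are hypothetical;
# real GPUs implement this in fixed-function hardware.

def process_fragment(frag, depth_buffer, color_buffer, shader, shader_writes_depth):
    x, y = frag["pos"]
    if not shader_writes_depth:
        # Early-Z: reject an occluded fragment *before* running the
        # (potentially expensive) fragment shader.
        if frag["z"] >= depth_buffer[y][x]:
            return
        color = shader(frag)
        depth_buffer[y][x] = frag["z"]
        color_buffer[y][x] = color
    else:
        # The shader may modify depth, so we must shade first and only
        # then perform the depth test with the shader-produced depth.
        color, z = shader(frag)
        if z < depth_buffer[y][x]:
            depth_buffer[y][x] = z
            color_buffer[y][x] = color
```

In the first branch an occluded fragment costs only a depth-buffer read; in the second branch every fragment pays for shading, which is exactly why drivers disable early-Z when a shader writes depth.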

lucida

@kayvonf Could you provide an example of when we may want to update the depth of a fragment after computing its color?

kayvonf

Actually, the most common example is for the shader to "kill" the fragment, which means the depth and color buffers shouldn't be updated as a result of that fragment (so in a sense the shader is modifying the fragment's depth by setting it to infinity).

This is common when the shader computes a color for the surface whose alpha turns out to be zero. A typical case is a shader using an alpha texture -- e.g., rendering each leaf of a tree as a simple rectangle and then using the alpha component of the texture to kill the fragments outside the leaf's silhouette. Slide 26 shows an example, and this Stack Overflow discussion covers the technique.
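Continuing the toy Python sketch from above (hypothetical names; in GLSL this role is played by the `discard` statement): a killed fragment produces no write to either the depth or the color buffer.

```python
# Toy alpha-kill fragment shader and buffer-update step.
# shade_leaf and resolve are hypothetical illustrative names.

def shade_leaf(frag, alpha_tex):
    """Shade a fragment of an alpha-textured leaf rectangle.
    Returns None to 'kill' the fragment when the sampled alpha is zero
    (analogous to GLSL's `discard`)."""
    u, v = frag["uv"]
    if alpha_tex[v][u] == 0.0:
        return None
    return frag["color"]

def resolve(frag, color, depth_buf, color_buf):
    x, y = frag["pos"]
    if color is None:
        return  # killed fragment: neither depth nor color is updated
    if frag["z"] < depth_buf[y][x]:
        depth_buf[y][x] = frag["z"]
        color_buf[y][x] = color
```

Note that because the kill decision depends on the shader's texture sample, the hardware cannot know ahead of time whether the fragment will survive -- the same reason early-Z is conservative.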