acpatel

In this example, if triangle 4 is closer to the camera than all the other triangles, the entire pixel should be colored the same as triangle 4, regardless of how much of the pixel each triangle covers. So sampling to find how much of the pixel each triangle covers is not sufficient to color it; we also need to know the triangles' depths. But if instead triangle 4 is behind all the other triangles, then we would have to blend the colors of triangles 2, 3, and 4 in proportion to how much each one covers the pixel.

How can point-in-triangle tests resolve this issue? Is it that for each sample within the pixel we take only the color of the triangle closest to the camera, so that when we average all the samples we get the correct proportional color? How does the hardware-defined point-in-triangle computation resolve the discrepancy in depths between triangles for a single sample?
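
A minimal sketch of the per-sample idea described in the question above, with hypothetical `Vec2`/`Color`/`Triangle` types and the simplifying assumption of a single constant depth per triangle (real depth handling, covered in a later lecture, interpolates depth across the triangle): each sample keeps the color of the closest triangle that covers it, and the pixel averages its samples.

```cpp
#include <array>
#include <limits>
#include <vector>

// Hypothetical minimal types for illustration; not the course's actual API.
struct Vec2  { float x, y; };
struct Color { float r, g, b; };

struct Triangle {
    std::array<Vec2, 3> p;  // vertices, assumed counter-clockwise
    float depth;            // assumed constant over the triangle, for simplicity
    Color color;

    // Point-in-triangle test: (x, y) is inside if it lies on the
    // non-negative side of all three edges (edge-function test).
    bool covers(float x, float y) const {
        for (int i = 0; i < 3; i++) {
            const Vec2& a = p[i];
            const Vec2& b = p[(i + 1) % 3];
            if ((b.x - a.x) * (y - a.y) - (b.y - a.y) * (x - a.x) < 0)
                return false;
        }
        return true;
    }
};

// For each sample location inside the pixel, keep only the color of the
// closest triangle covering it, then average the per-sample colors.
Color shadePixel(const std::vector<Vec2>& samples,
                 const std::vector<Triangle>& tris,
                 const Color& background)
{
    Color sum{0, 0, 0};
    for (const Vec2& s : samples) {
        float nearest = std::numeric_limits<float>::infinity();
        Color c = background;  // used if no triangle covers this sample
        for (const Triangle& t : tris) {
            if (t.covers(s.x, s.y) && t.depth < nearest) {
                nearest = t.depth;
                c = t.color;
            }
        }
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    float n = float(samples.size());
    return { sum.r / n, sum.g / n, sum.b / n };
}
```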

sbhilare

Irrespective of the rule we define or the sampling strategy we deploy, how can we ensure the rule is scale invariant (e.g., in the case where a triangle is much, much smaller than the pixel itself)? If the small triangle numerically satisfies the rule we decide on, do we assign the pixel to the triangle, or do we apply a size filter to ignore such triangles? But I reckon the latter would create gaps in the rendering.

keenan

@acpatel Right---we haven't talked at all about how to handle occlusion/depth yet. This is coming in a later lecture. :-)

keenan

@sbhilare You're right: if we adopt the sampling (or even supersampling) strategy outlined later in these slides, then even a super tiny triangle might cause the pixel to "light up." This is an inherent challenge with any sampling-based technique: if you know only the values at a set of points, all you can do is make an intelligent guess about what the true signal looks like. In this particular situation (one tiny triangle) it's not so bad, since that triangle will coincide with at most one of your supersamples. So if you have, say, 16x supersampling, then at most you'll have about 1/16 or ~6.25% brightness; and there's only a very small chance this will happen. Seems to me like a pretty reasonable way to represent a tiny triangle in a big pixel. Other aliasing problems of course can still occur...
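
A small standalone sketch of the 16x case described above, using an assumed unit pixel [0,1] x [0,1], a 4x4 grid of sample points, and a hypothetical `inside` edge-function helper: the tiny triangle is placed so it happens to cover exactly one of the 16 samples, so the estimated coverage (and hence brightness contribution) comes out to 1/16, i.e. ~6.25%.

```cpp
#include <cstdio>

struct Vec2 { float x, y; };

// Edge-function point-in-triangle test (vertices assumed counter-clockwise).
static bool inside(Vec2 s, const Vec2 tri[3]) {
    for (int i = 0; i < 3; i++) {
        Vec2 a = tri[i];
        Vec2 b = tri[(i + 1) % 3];
        if ((b.x - a.x) * (s.y - a.y) - (b.y - a.y) * (s.x - a.x) < 0)
            return false;
    }
    return true;
}

int main() {
    // A tiny triangle, much smaller than the unit pixel [0,1] x [0,1].
    Vec2 tiny[3] = { {0.36f, 0.36f}, {0.41f, 0.36f}, {0.36f, 0.41f} };

    // 4x4 grid of sample points at sub-pixel cell centers (16x supersampling).
    int covered = 0, total = 0;
    for (int j = 0; j < 4; j++) {
        for (int i = 0; i < 4; i++) {
            Vec2 s = { (i + 0.5f) / 4.0f, (j + 0.5f) / 4.0f };
            if (inside(s, tiny)) covered++;
            total++;
        }
    }

    // This triangle covers only the sample at (0.375, 0.375), so the
    // estimated coverage works out to 1/16 = 0.0625.
    std::printf("coverage estimate: %d/%d = %.4f\n",
                covered, total, float(covered) / float(total));
    return 0;
}
```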