When handling occlusion with sampling, can we apply the same "early-in"/"early-out" breakdown logic used in the example later in the course? I imagine that without this logic, checking the relative order of the triangles at each sample location is quite costly. However, there is one more obstacle to using a coarser granularity for sampling: we cannot take an average of the percentage of samples that are yellow versus green. It has to be one or the other for the occlusion problem, right?
birb
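
A minimal sketch of the per-sample picture being asked about, assuming a simple per-sample z-buffer (the struct and function names here are made up): occlusion is decided by a binary depth comparison at each sample, and averaging only happens afterward, over the winning colors.

```cpp
#include <vector>
#include <cfloat>

// One sample of a pixel: depth of the closest fragment seen so far, plus its color.
struct Sample { float depth = FLT_MAX; float r = 0, g = 0, b = 0; };

// Depth-test one fragment against one sample: the closer fragment wins outright.
void shadeSample(Sample& s, float fragDepth, float r, float g, float b) {
    if (fragDepth < s.depth) {          // binary "either-or" occlusion decision
        s.depth = fragDepth;
        s.r = r; s.g = g; s.b = b;
    }
}

// Only after all triangles have been processed do we average the samples into a pixel color.
void resolvePixel(const std::vector<Sample>& samples, float& r, float& g, float& b) {
    r = g = b = 0;
    for (const Sample& s : samples) { r += s.r; g += s.g; b += s.b; }
    r /= samples.size(); g /= samples.size(); b /= samples.size();
}
```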
In the intro lecture, you mentioned the same problem of choosing which pixels to shade when drawing a triangle, and one solution was checking whether the line goes through a diamond in the middle of the pixel, suggesting that area is more "important" than the rest of the pixel. Is there a similar way of weighting samples to differentiate between the more and less important parts of a pixel?
Coyote
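
For the diamond idea, a rough sketch of just the containment part of that test, assuming the diamond is the set of points within L1 (Manhattan) distance 0.5 of the pixel center (the helper name is hypothetical, and the full rule also considers where the line exits the diamond):

```cpp
#include <cmath>

// Hypothetical helper: is the point (x, y) inside the diamond centered on
// pixel (px, py)?  The diamond is taken to be the region within L1 distance
// 0.5 of the pixel center, i.e. its vertices sit at the pixel edge midpoints.
bool insideDiamond(float x, float y, int px, int py) {
    float cx = px + 0.5f, cy = py + 0.5f;               // pixel center
    return std::fabs(x - cx) + std::fabs(y - cy) < 0.5f;
}
```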
So wait, from the description in the slides, it sounds like we're testing multiple sample points per pixel? That seems kind of inefficient.
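
A minimal sketch of the multi-sample setup the slides describe, assuming a regular n×n grid of sample points per pixel and a standard edge-function point-in-triangle test (all names here are illustrative):

```cpp
struct Vec2 { float x, y; };

// Signed area test: positive when c lies to the left of the edge a -> b.
static float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Point-in-triangle test for a counter-clockwise triangle (a, b, c).
static bool insideTriangle(const Vec2& p, const Vec2& a, const Vec2& b, const Vec2& c) {
    return edge(a, b, p) >= 0 && edge(b, c, p) >= 0 && edge(c, a, p) >= 0;
}

// Fraction of pixel (px, py) covered by the triangle, estimated with n x n samples.
float pixelCoverage(int px, int py, int n, const Vec2& a, const Vec2& b, const Vec2& c) {
    int hits = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            Vec2 s { px + (i + 0.5f) / n, py + (j + 0.5f) / n };  // sample position
            if (insideTriangle(s, a, b, c)) ++hits;
        }
    return float(hits) / (n * n);
}
```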