whc

A question here: typically how big is a triangle (of an object mesh) compared to the size of a pixel?

My vague sense is that a 256 x 256 image can display an intricate geometric object, which must be built on some fine mesh. In that case, each triangle would be much smaller than a single pixel. After reading the later slides, I guess supersampling is the answer here? That is, even though a single pixel is huge, we are actually sampling on a much finer grid whose individual unit size is tiny.
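
To make the "sampling on a much finer grid" idea concrete, here is a minimal sketch of supersampled coverage for a single pixel. The 4x4 sub-grid, the helper names, and the counterclockwise-winding assumption are all illustrative, not anything taken from the slides:

```cpp
// Sketch: estimate how much of one pixel a triangle covers by testing an
// N x N grid of sub-pixel sample points (4x4 by default, purely illustrative).
#include <array>

struct Vec2 { float x, y; };

// Signed area test: positive when p lies to the left of edge a->b
// (assumes counterclockwise triangle winding).
static float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

static bool inside(const std::array<Vec2, 3>& tri, const Vec2& p) {
    return edge(tri[0], tri[1], p) >= 0 &&
           edge(tri[1], tri[2], p) >= 0 &&
           edge(tri[2], tri[0], p) >= 0;
}

// Fraction of the pixel at integer coordinates (px, py) covered by the
// triangle, estimated from N*N samples placed at sub-pixel centers.
float coverage(const std::array<Vec2, 3>& tri, int px, int py, int N = 4) {
    int hits = 0;
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            Vec2 p { px + (i + 0.5f) / N, py + (j + 0.5f) / N };
            if (inside(tri, p)) ++hits;
        }
    return float(hits) / float(N * N);
}
```

The point of the sketch is just that the sampling grid, not the pixel grid, sets the effective resolution: a triangle far smaller than a pixel can still contribute a fractional coverage value instead of being lost entirely.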

keenan

@whc There's no rule in general. Different applications will produce primitives that cover different numbers of pixels, and someone designing a rasterization pipeline will have to plan for that, perhaps with some intuition about the important common cases.

It is true, however, that when triangles start getting smaller than pixels you may want to think differently about how to rasterize. In fact, there is a whole architecture called Reyes built around the idea of "micropolygon rendering," which is the basis for Pixar's RenderMan. There was also a brief burst of activity in point-based graphics, where one of the motivating factors was densely sampled point clouds: if you have more points than pixels, why even bother using triangles as primitives? (There are good reasons to still use triangles, but it's a worthwhile question to ask.) Such techniques and questions start to become relevant again for things like differentiable rendering, though the jury is still out on exactly how they might affect graphics pipelines.
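
A toy sketch of the "more points than pixels" idea taken literally: project each point, then write it into the one pixel it lands in with a depth test, with no triangle setup at all. This is not Reyes or RenderMan; the types and names below are made up for illustration, and real point-based renderers also handle splat footprints, blending, and hole filling.

```cpp
// Sketch: splatting a dense point cloud directly into a framebuffer,
// keeping the nearest point per pixel. Purely illustrative structures.
#include <vector>
#include <limits>

struct Point { float x, y, z; float r, g, b; };  // screen-space position + color

struct Framebuffer {
    int w, h;
    std::vector<float> color;  // 3 floats per pixel
    std::vector<float> depth;  // one depth value per pixel
    Framebuffer(int w_, int h_)
        : w(w_), h(h_), color(3 * w_ * h_, 0.f),
          depth(w_ * h_, std::numeric_limits<float>::infinity()) {}
};

void splat_points(const std::vector<Point>& pts, Framebuffer& fb) {
    for (const Point& p : pts) {
        int px = int(p.x), py = int(p.y);
        if (px < 0 || px >= fb.w || py < 0 || py >= fb.h) continue;
        int idx = py * fb.w + px;
        if (p.z < fb.depth[idx]) {          // depth test: keep nearest point
            fb.depth[idx] = p.z;
            fb.color[3 * idx + 0] = p.r;
            fb.color[3 * idx + 1] = p.g;
            fb.color[3 * idx + 2] = p.b;
        }
    }
}
```

When the points are denser than the pixels, this loop already produces a plausible image; the trade-off is that you give up the connectivity and interpolation that triangles provide, which is one of the reasons triangles remain the default primitive.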