How do graphics cards, which I would associate more with massive SIMD capability than with sequential speed, have the ability to sort the surfaces in back-to-front order? Does every transparent sample get added to a 3D buffer with a small z-depth, and then they're sorted for each separate pixel?
keenan
@clam The answer is, on modern GPUs: they don't! Instead, people have developed algorithms for order-independent transparency (like depth peeling) that perform multiple passes. A totally weirdo exception was the Sega Dreamcast, which took a somewhat different approach to the rasterization pipeline we've outlined in class. It used something called tiled rendering, which is pretty common among GPU architectures, but did something that basically no other tile-based GPU did before or has done since: it sorted triangles within each tile, in hardware, so that they could be drawn in back-to-front order. Another nice consequence is that you can then do deferred shading in hardware: rather than evaluate the color of every sample as you draw it, wait to see which triangle is in front and only shade that sample. Since each tile (usually 16x16 or 32x32 pixels) sees only a fairly small number of triangles in a typical scene, doing this sorting in hardware becomes somewhat reasonable. ...But apparently not reasonable enough to make its way into subsequent GPU architectures!
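To make the depth-peeling idea concrete, here's a minimal CPU-side sketch of what happens at a single pixel. This is not how a GPU implements it (a real implementation uses multiple render passes and a second depth buffer); the fragment data and the grayscale colors are made up for illustration. Each "pass" peels off the nearest fragment strictly behind the previously peeled layer, so unsorted fragments get composited front-to-back without ever sorting the list:

```python
def depth_peel(fragments):
    """Composite transparent fragments front-to-back without pre-sorting.

    `fragments` is a list of (depth, color, alpha) tuples in arbitrary order.
    Each iteration mimics one depth-peeling pass: the depth test selects the
    nearest fragment behind the previous pass's peel depth.
    """
    color, transmittance = 0.0, 1.0   # grayscale color, accumulated (1 - alpha)
    last_depth = float("-inf")
    for _ in range(len(fragments)):   # at most one peel per fragment
        # "Render pass": discard fragments at or in front of the last peeled
        # layer; the depth test keeps the nearest survivor.
        layer = min((f for f in fragments if f[0] > last_depth),
                    key=lambda f: f[0], default=None)
        if layer is None:
            break                      # nothing left to peel
        depth, c, a = layer
        # Front-to-back "over" compositing.
        color += transmittance * a * c
        transmittance *= (1.0 - a)
        last_depth = depth
    return color

# Three unsorted 50%-transparent layers: mid gray, white (nearest), black (farthest).
print(depth_peel([(2.0, 0.5, 0.5), (1.0, 1.0, 0.5), (3.0, 0.0, 0.5)]))  # → 0.625
```

Note that the result is the same no matter how the input list is ordered, which is exactly the "order-independent" property; the price is one full pass over the fragments per layer peeled.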
bepis
So what happens if the semi-transparent triangles intersect each other or the opaque triangles?
eryn
I have the same question as @bepis. What if semi-transparent triangles interpenetrate each other? In that case, sorting whole triangles is impossible. But I guess one way out is to do the sorting at every single pixel. Wouldn't that be too expensive, though? It gets even more expensive if we are doing supersampling.