When do we set the clip values? Do we set them on a frame-by-frame basis, or are they set before the program executes?
Just a note, the example on the right doesn't show up on this slide.
If the issue is around floating-point precision, why do the pixels that are closer to 0 show up fine (e.g., on the edges of the cubes), while the pixels that are further from 0 (the ones near the intersection of the two cubes) look messed up?
Why does this phenomenon happen when the far value is larger and the near value is smaller? Is there an intuition for it?
I'm a little confused as to why increasing the clipping region reduces the precision; aren't the points located at the same coordinates? Or do the points get scaled onto the clipping region?
How does floating-point precision affect these cubes if they are in the same position? Doesn't z-clipping leave the z values of objects within the clip region unchanged?
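Not from the slides, just an illustrative sketch of why near/far matter even though the objects themselves don't move: the perspective projection remaps eye-space distance d in [n, f] nonlinearly onto [-1, 1] *before* the result is quantized into the depth buffer (modeled here as rounding to a 32-bit float), so the stored depths depend directly on the choice of n and f.

```python
import struct

def ndc_depth(d, n, f):
    # Standard perspective mapping of eye-space distance d in [n, f]
    # to normalized device depth in [-1, 1] (d = n -> -1, d = f -> +1).
    return (f + n) / (f - n) - 2.0 * f * n / (d * (f - n))

def stored(d, n, f):
    # Quantize to a 32-bit float, as a stand-in for a real depth buffer.
    return struct.unpack('f', struct.pack('f', ndc_depth(d, n, f)))[0]

# Two surfaces 1/1000 of a unit apart, 100 units from the camera.
d1, d2 = 100.0, 100.001

# near = 1: the two surfaces quantize to distinct stored depths.
print(stored(d1, 1.0, 1000.0) != stored(d2, 1.0, 1000.0))      # True

# near = 0.0001: both collapse to the same stored depth (z-fighting).
print(stored(d1, 0.0001, 1000.0) == stored(d2, 0.0001, 1000.0))  # True
```

Pulling the near plane from 1 toward 0 crowds almost all representable depth values right next to the camera, so the two distant surfaces land on the same quantized depth, which is exactly the artifact at the cube intersection.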
Is this a property of the float type in any programming language?
How would varying opacities come into play here?
How do we draw objects at the horizon or in the landscape?
Similar to the above: if the issue with floating-point precision for denormalized numbers is that the gap is too narrow, why not use a custom floating-point format with a larger denormalized range, so we get higher resolution further down the band?
Instead of setting far and near, can we use a higher-precision number type to overcome this problem?
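A small sketch on the higher-precision idea (my own illustration, not from the slides): a wider float type does shrink the quantization gap, as the spacing of representable values near 1.0 shows, but depth buffers are a fixed hardware format (commonly 24-bit fixed point or 32-bit float), so in practice you adjust near/far rather than swap in a wider type.

```python
import math
import struct

def f32_ulp(x):
    # Distance from x (rounded to 32-bit float) to the next representable
    # 32-bit float above it, computed by bumping the bit pattern by one.
    x32 = struct.unpack('f', struct.pack('f', x))[0]
    bits = struct.unpack('I', struct.pack('f', x32))[0]
    return struct.unpack('f', struct.pack('I', bits + 1))[0] - x32

# Perspective projection crowds distant depths toward 1.0, so the value
# spacing just below 1.0 bounds how finely far geometry can be resolved.
print(f32_ulp(0.999))    # ~6e-8 for a 32-bit float
print(math.ulp(0.999))   # ~1e-16 for a 64-bit float
```

The 64-bit spacing is about nine orders of magnitude finer, which is why double precision would help in principle, even though GPUs don't expose it for the depth buffer.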
Would the buffer be set as a continuous function of the depth?