The order shouldn't matter: max is associative.
The order does not matter since, in the end, only one sample (the nearest one) gets rendered when there is occlusion.
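A minimal sketch of why the order is irrelevant (illustrative code, not any particular graphics API): the depth test is just a running min over the fragments that hit a sample, and min is associative and commutative, so every processing order gives the same survivor.

```python
import itertools

# Per-sample depth test: each "fragment" is a (depth, color) pair
# hitting the same sample. The closest fragment survives regardless
# of the order in which fragments are processed.
def resolve(fragments):
    best_depth, best_color = float("inf"), None
    for d, c in fragments:
        if d < best_depth:      # closer fragment wins the depth test
            best_depth, best_color = d, c
    return best_depth, best_color

fragments = [(0.8, "red"), (0.3, "green"), (0.5, "blue")]
# Every permutation of the fragments yields the same result:
results = {resolve(p) for p in itertools.permutations(fragments)}
assert results == {(0.3, "green")}
```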
Since order doesn't matter, does this offer an opportunity for parallelism when computing the depth buffer?
I also think the order should not matter: we just overwrite the value in the color buffer whenever we find a closer object.
I'm not sure this makes parallel processing easy, though. Every thread would need to share the same z-buffer and would be constantly reading from and writing to it.
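One way to see that sharing the z-buffer is less of a bottleneck than it sounds (a hedged sketch, with made-up names like `NUM_SAMPLES` and `shade`): the only operation that must be synchronized is an atomic depth-test-and-write per sample, and fragments landing on *different* samples never conflict. Here a per-sample lock stands in for the hardware atomic compare-and-update a real GPU would use.

```python
import threading

# Tiny shared "framebuffer": one depth and one color value per sample.
NUM_SAMPLES = 2
depth = [float("inf")] * NUM_SAMPLES
color = [None] * NUM_SAMPLES
locks = [threading.Lock() for _ in range(NUM_SAMPLES)]  # one per sample

def shade(sample, d, c):
    with locks[sample]:             # atomic read-compare-write
        if d < depth[sample]:
            depth[sample] = d
            color[sample] = c

# Four fragments from different triangles, two per sample,
# processed by concurrent threads in arbitrary order.
fragments = [(0, 0.9, "red"), (0, 0.3, "green"),
             (1, 0.7, "blue"), (1, 0.2, "cyan")]
threads = [threading.Thread(target=shade, args=f) for f in fragments]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The nearest fragment wins at each sample, whatever the interleaving.
assert (depth, color) == ([0.3, 0.2], ["green", "cyan"])
```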
How does this method scale with the coverage/number of triangles hit at these samples? Is this a reason to cap our "far" value for the view frustum, so that we don't have to do depth calculations for an excessive amount of scene geometry?
Does supersampling also apply to this?