If we are sampling so many more times per pixel, how significant is the performance hit when displaying an image?
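To put a rough number on the cost being asked about here: with an n x n supersampling grid, the rasterizer evaluates n^2 samples per pixel instead of one (plus a cheap averaging step), so the per-pixel work scales quadratically in n. A minimal back-of-the-envelope sketch; the function name and the resolution are just illustrative, not from the lecture:

```python
def shaded_samples(width, height, n):
    """Samples shaded per frame with n x n supersampling (hypothetical cost model)."""
    return width * height * n * n

# At 1920x1080, a 4x4 grid does 16x the shading work of single sampling.
base = shaded_samples(1920, 1080, 1)   # 1 sample per pixel
ss4x4 = shaded_samples(1920, 1080, 4)  # 16 samples per pixel
print(ss4x4 / base)  # -> 16.0
```

This ignores memory traffic and the resolve/average pass, but it shows why even modest grids get expensive quickly.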
It looks like when we use supersampling rather than single sampling, the edges look more blurred, which doesn't look as good on the nearby tiles. Is there a way around this issue besides adding more pixels?
When is it generally not worth supersampling with a larger grid of samples per pixel, because the performance cost outweighs the gain in quality?
Agree with WhaleVomit: the result on the right really does look like a blur convolution. Is that generally the case, or just my perception here?
In modern graphics applications, what sampling rate do we typically encounter (say, for a document we are viewing or a game we are playing)?
How can we measure/quantify the effectiveness of an anti-aliasing method?
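One common way to quantify this is to compare the rendered pixels against a known ground truth, e.g. the analytic coverage of an edge, and report an error metric such as mean squared error. A minimal 1-D sketch under that assumption; the names and the step-edge "scene" are hypothetical, chosen only to make the comparison concrete:

```python
EDGE = 0.373  # position of a step edge along a row of 100 unit-interval pixels

def true_coverage(x0, x1):
    """Analytic fraction of the pixel [x0, x1) left of the edge (ground truth)."""
    return max(0.0, min(1.0, (EDGE - x0) / (x1 - x0)))

def pixel_value(x0, x1, n):
    """Estimate coverage with n evenly spaced point samples inside the pixel."""
    w = (x1 - x0) / n
    hits = sum(1 for i in range(n) if x0 + (i + 0.5) * w < EDGE)
    return hits / n

def mse(n, pixels=100):
    """Mean squared error of n-sample rasterization vs. analytic coverage."""
    err = 0.0
    for p in range(pixels):
        x0, x1 = p / pixels, (p + 1) / pixels
        e = pixel_value(x0, x1, n) - true_coverage(x0, x1)
        err += e * e
    return err / pixels

# The error should shrink as the sample count per pixel grows.
print(mse(1), mse(4))
```

In practice one would do this in 2-D against a very high-sample reference render instead of an analytic edge, and perceptual metrics (e.g. SSIM) are often preferred over raw MSE.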