Did we ever figure out why jaggies look the way they do? My only guess is "aliasing," but that seems too vague.
I think it comes down to sampling: the ideal line is continuous, but we can only light discrete pixels, so with too few samples per unit length the rendered line comes out as a staircase. Thinking back to Bresenham's algorithm: if, over a few iterations, $x$ is incremented while $y$ stays the same, then when $y$ is finally incremented the line jumps up a whole pixel at once, and those abrupt jumps are the jaggies. If we used supersampling, the hard steps between levels of $y$ would instead become pixels of intermediate intensity, trading the jaggedness for a little blur.
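To make that concrete, here's a small sketch (the function names `bresenham` and `coverage` are mine, not from any library): the first part is the standard integer Bresenham loop for slopes $0 \le m \le 1$, which shows the horizontal runs and abrupt $+1$ jumps in $y$; the second part crudely supersamples one pixel against the ideal line to get a fractional intensity instead of an all-or-nothing pixel.

```python
def bresenham(x0, y0, x1, y1):
    """All-integer Bresenham line for slopes 0 <= m <= 1:
    at each x, pick the pixel row closest to the ideal line."""
    pts = []
    dx, dy = x1 - x0, y1 - y0
    err, y = 2 * dy - dx, y0
    for x in range(x0, x1 + 1):
        pts.append((x, y))
        if err > 0:        # ideal line has drifted past the pixel midpoint
            y += 1
            err -= 2 * dx
        err += 2 * dy
    return pts

# Shallow line: x advances every step but y only every few steps,
# so the lit pixels form horizontal runs with abrupt +1 jumps -- jaggies.
print(bresenham(0, 0, 8, 3))
# -> [(0, 0), (1, 0), (2, 1), (3, 1), (4, 1), (5, 2), (6, 2), (7, 3), (8, 3)]

def coverage(px, py, m, n=4):
    """Supersample pixel (px, py) with an n*n subgrid against the line
    y = m*x, counting subsamples within half a pixel of the line.
    Returns a fraction in [0, 1] usable as a gray level."""
    hits = 0
    for i in range(n):
        for j in range(n):
            sx = px + (i + 0.5) / n   # subsample position inside the pixel
            sy = py + (j + 0.5) / n
            if abs(sy - m * sx) < 0.5:
                hits += 1
    return hits / (n * n)
```

With coverage-based shading, a pixel the line only grazes gets a partial intensity (e.g. `coverage(2, 0, 3/8)` lands strictly between 0 and 1), so the staircase steps fade into each other instead of snapping, which is exactly the "blurriness between levels of $y$" trade-off.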