If triangles are what graphics hardware/software is optimized for, why do many 3D modeling/animation pipelines focus on quadrilaterals rather than triangles?
I don't quite understand defining a point using two triangles. Where is the exact spot of the point? Is it at a vertex of a triangle, or at the center of the triangle?
Since the triangle is the fundamental shape in rasterization, why are a monitor's pixels arranged in a rectangular grid? Would it be easier to display images if we used triangular pixels?
Is it because we are constructing a 3D image that makes the triangle the best choice? If we want to construct a 4D image (in the future), maybe a square would be better?
Building off saphirasnow's question: are the quadrilaterals in graphics hardware/software implemented using triangles?
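A small sketch of the idea behind this question: a quad is commonly split along one diagonal into two triangles before rasterization. The function name, vertex ordering, and choice of diagonal below are illustrative assumptions, not a fixed API.

```python
def triangulate_quad(quad):
    """Split a quad (4 vertices in order a, b, c, d) into two triangles.

    Assumption for illustration: we split along the a-c diagonal, which
    is one common convention; real pipelines may choose either diagonal.
    """
    a, b, c, d = quad
    return [(a, b, c), (a, c, d)]

# A unit square becomes two right triangles sharing the (0,0)-(1,1) diagonal.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
tris = triangulate_quad(square)
```

So even if the modeler works in quads, the rasterizer can still see only triangles.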
When making models in 3D software like Maya, it is usually advised that the mesh consist only of quads, and that triangles be avoided as much as possible. How do these two practices fit together?
I understand that triangles are the most basic shape, built from the fewest vertices/edges, but it still seems unintuitive to me that triangles became the default shape in graphics. What if you had a circular/spherical object you wanted to rasterize? How are triangles a better fit there than, say, squares?
Are lines always treated as rectangles with thickness, so that each line can be expressed as two triangles?
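A minimal sketch of the construction this question describes: offsetting a 2D segment by half the line width along its perpendicular gives a rectangle, which can then be emitted as two triangles. The function and variable names here are illustrative, not any particular API.

```python
import math

def line_to_triangles(p0, p1, width):
    """Expand the 2D segment p0-p1 into a rectangle of the given width,
    returned as two triangles (a hypothetical sketch of the usual trick)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    # Unit normal perpendicular to the segment, scaled to half the width.
    nx = -dy / length * width / 2
    ny = dx / length * width / 2
    a = (p0[0] + nx, p0[1] + ny)
    b = (p0[0] - nx, p0[1] - ny)
    c = (p1[0] - nx, p1[1] - ny)
    d = (p1[0] + nx, p1[1] + ny)
    return [(a, b, c), (a, c, d)]

# For a horizontal segment the offsets are purely vertical:
tris = line_to_triangles((0, 0), (4, 0), 2)
```

This is only the flat-ended case; real renderers also have to decide how to cap line ends and join consecutive segments.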