I don't really understand what this slide is trying to explain.
I think I get the relationship between the flat pic and the correct pic, which is that we want to project the original chessboard picture onto a seemingly tilted surface. I don't understand how the "affine" one works and why we can split the quad.
motoole2
In both the "affine" and "correct" cases, the quad is split into two triangles in the exact same way; the geometry is identical here. And it is important to split up the quad into two triangles, since are graphics pipeline only renders triangles anyways.
The difference is in how we interpolate the UV coordinates used for texture mapping each triangle.
Consider the two vertices on the edge shared by our two triangles. One vertex might be assigned the UV coordinate (0,1) and the other vertex would then be assigned the UV coordinate (1,0). Interpolating these UV coordinates using barycentric coordinates computed from the vertices' 2D screen positions gives any point on this edge the UV coordinate (a,1-a) for some value a. This produces the affine image shown in the slide.
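To make this concrete, here is a minimal sketch of that affine interpolation along the shared edge (the struct and function names are my own, not from the slides). Note that the result depends only on the screen-space weight a, with no dependence on depth:

```cpp
#include <cstdio>

struct Vec2 { float x, y; };

// Linear (affine) interpolation between two UV coordinates,
// using the screen-space barycentric weight a directly.
Vec2 lerp_uv(Vec2 uv0, Vec2 uv1, float a) {
    return { (1.0f - a) * uv0.x + a * uv1.x,
             (1.0f - a) * uv0.y + a * uv1.y };
}

int main() {
    Vec2 uv0 = {0.0f, 1.0f};  // UV at one endpoint of the shared edge
    Vec2 uv1 = {1.0f, 0.0f};  // UV at the other endpoint
    // Sweeping a along the edge always yields (a, 1 - a),
    // regardless of how far away each vertex is.
    for (float a = 0.0f; a <= 1.0f; a += 0.25f) {
        Vec2 uv = lerp_uv(uv0, uv1, a);
        printf("a = %.2f -> uv = (%.2f, %.2f)\n", a, uv.x, uv.y);
    }
}
```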
The missing piece is that this computation ignores the perspective division operation used when mapping 3D vertices to 2D coordinates. We need to take into account this perspective division when computing our UV coordinates (or any attribute); this is described in more detail in the following slide.
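For comparison, here is a sketch of the perspective-correct version previewed on the next slide, following the standard scheme of interpolating u/w, v/w, and 1/w in screen space and dividing at the end (again, the names and the specific w values are illustrative, not from the slides):

```cpp
#include <cstdio>

struct Vertex {
    float u, v;  // UV texture coordinates
    float w;     // clip-space w (the depth divided out during perspective division)
};

// Perspective-correct UV at screen-space barycentric weight a along an edge.
void perspective_uv(Vertex p0, Vertex p1, float a, float* u, float* v) {
    // Interpolate u/w, v/w, and 1/w linearly in screen space...
    float u_over_w   = (1.0f - a) * (p0.u / p0.w) + a * (p1.u / p1.w);
    float v_over_w   = (1.0f - a) * (p0.v / p0.w) + a * (p1.v / p1.w);
    float one_over_w = (1.0f - a) * (1.0f / p0.w) + a * (1.0f / p1.w);
    // ...then divide by the interpolated 1/w to recover the true UV.
    *u = u_over_w / one_over_w;
    *v = v_over_w / one_over_w;
}

int main() {
    // Near endpoint (w = 1) and far endpoint (w = 4) of the shared edge.
    Vertex p0 = {0.0f, 1.0f, 1.0f};
    Vertex p1 = {1.0f, 0.0f, 4.0f};
    for (float a = 0.0f; a <= 1.0f; a += 0.25f) {
        float u, v;
        perspective_uv(p0, p1, a, &u, &v);
        // Unlike the affine result (a, 1 - a), the UVs are now biased
        // toward the nearer vertex, e.g. a = 0.5 gives (0.2, 0.8) rather
        // than (0.5, 0.5) -- which is what makes the checkerboard
        // foreshorten correctly.
        printf("a = %.2f -> uv = (%.3f, %.3f)\n", a, u, v);
    }
}
```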