What does this "point" here mean? Do you mean that, to render a square point, we render two triangles instead of directly filling the square with pixels?

siliangl

The previous post suggested that the 3D input primitives include all the triangles and possibly colors. I understand that colors mean a red apple or green trees; however, I have a question about the colors in this detailed picture. I can see that each triangle has a slightly different shade of color. So, are those shades counted as different color inputs, or will they be calculated later based on some physics and math? I imagine that if I moved this object around the light source, each triangle would end up with a different shade, so I'd guess those shades are calculated afterward.
To sum up, I have two questions:
1) Are those shades counted as colors?
2) When are those shades calculated in the pipeline?

adam

@siliangl To partially answer your first question: while I don't know what's being done for this specific rendering, one option in general is to use a color space that includes a channel encoding lighting, such as the CIELAB color space. This is a 3-channel system where the first channel encodes "lightness" and the other two split the work of representing the actual color. You can read more about it here: https://en.wikipedia.org/wiki/CIELAB_color_space

keenan

@siliangl We'll talk about color representations later on in class; right now you can just imagine that there are one or more scalar values at each vertex that specify data like colors or other useful information. These values then get interpolated across the triangle, almost always by using simple "linear" interpolation. Really, this is not linear but affine. In particular, suppose that $p_1,p_2,p_3 \in \mathbb{R}^2$ are the 2D coordinates of the vertices, and $\phi_1,\phi_2,\phi_3 \in \mathbb{R}$ are scalar values at vertices. Then the interpolant is an affine function $f(p)$ such that

$$ f(p_i) = \phi_i $$

for $i=1,2,3$. Since every affine function has the form

$$ f(p) = \langle a, p \rangle + b $$

for some constant vector $a \in \mathbb{R}^2$ and scalar $b \in \mathbb{R}$, the three conditions uniquely determine this function. In other words: if you give me three values at three points, there is a unique affine function that will pass through those values at the specified points.

This scheme is usually called "piecewise linear interpolation" since the function is a linear polynomial over each triangle. It is closely related to barycentric coordinates, which we'll talk about plenty in class.
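To make this concrete, here is a small Python sketch (the function name is mine, not from any graphics API) that evaluates the affine interpolant using signed triangle areas — which is exactly the barycentric-coordinate view of the same formula:

```python
def affine_interpolate(p1, p2, p3, phi, p):
    """Evaluate the unique affine function f with f(p_i) = phi[i-1]
    at a 2D point p, via barycentric coordinates (ratios of signed areas)."""
    def signed_area(a, b, c):
        # Half the 2D cross product of (b - a) and (c - a)
        return 0.5 * ((b[0] - a[0]) * (c[1] - a[1])
                      - (b[1] - a[1]) * (c[0] - a[0]))

    area = signed_area(p1, p2, p3)
    # Barycentric coordinates: each is the area of the sub-triangle
    # formed by replacing one vertex with p, over the total area.
    l1 = signed_area(p, p2, p3) / area
    l2 = signed_area(p1, p, p3) / area
    l3 = signed_area(p1, p2, p) / area
    return l1 * phi[0] + l2 * phi[1] + l3 * phi[2]

# At a vertex, the interpolant reproduces that vertex's value exactly;
# at the centroid, it gives the average of the three values.
tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
vals = (0.0, 1.0, 2.0)
print(affine_interpolate(*tri, vals, (0.0, 0.0)))      # value at p1
print(affine_interpolate(*tri, vals, (1/3, 1/3)))      # centroid: mean of vals
```

Since the three barycentric coordinates sum to 1 and each is an affine function of $p$, the result is affine in $p$ and matches $\phi_i$ at each $p_i$ — the same unique function $f(p) = \langle a, p \rangle + b$ described above.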

keenan

@harveybia Yes. The most basic kinds of points (in some graphics APIs) are single pixels, which get sent down the graphics pipeline as a pair of triangles. Other APIs define fancier, bigger points, which may get rendered as "round" points, e.g., polygons with many sides. But there is (weirdly enough) no way to tell the standard rasterization pipeline to just turn on a single pixel, apart from directly uploading data to the framebuffer.
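As an illustration of the "point becomes a pair of triangles" idea, here is a hypothetical helper (not any real API's function) that expands a square point into the two triangles a rasterization pipeline could consume:

```python
def point_to_triangles(x, y, size=1.0):
    """Expand a square point anchored at (x, y) with the given side length
    into two triangles that exactly cover it, split along one diagonal.
    This mimics how a pipeline without a native 'pixel' primitive
    might submit a point for rasterization."""
    x0, y0 = x, y
    x1, y1 = x + size, y + size
    # Both triangles share the diagonal from (x0, y0) to (x1, y1),
    # listed with counterclockwise winding.
    return [((x0, y0), (x1, y0), (x1, y1)),
            ((x0, y0), (x1, y1), (x0, y1))]

# A unit "point" at pixel (2, 3) becomes two triangles covering that square.
for tri in point_to_triangles(2.0, 3.0):
    print(tri)
```

The same diagonal split is how quads are generally turned into triangles throughout the pipeline; a square point is just the smallest case of it.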
