What's Going On
keenan commented on slide_008 of Accelerating Geometric Queries ()

@Cake Right. Also if t_max is negative, for instance.


keenan commented on slide_026 of Accelerating Geometric Queries ()

@strikeskids: Yes, but only because we're determining the partition by the centroid of the primitive.


keenan commented on slide_006 of Accelerating Geometric Queries ()

@fengyupeng Yes, great question. In this case, the min and max methods are defined componentwise, i.e., they would look something like this:

Vec3D min( const Vec3D& a, const Vec3D& b )
{
   Vec3D c;

   // take the smaller value in each coordinate
   // (std::min comes from <algorithm>)
   c.x = std::min( a.x, b.x );
   c.y = std::min( a.y, b.y );
   c.z = std::min( a.z, b.z );

   return c;
}

(and similarly for max).


keenan commented on slide_025 of The Rendering Equation ()

@sickgraph Really not about sphere vs. hemisphere; as you say, if the material absorbs any light at all, the integral will be less than 1. This is true whether we're just reflecting (hemisphere) or reflecting and transmitting (sphere).
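
One standard way to write the condition being discussed (for reflection over the hemisphere $H^2$): for every outgoing direction $\omega_o$,

$$\int_{H^2} f_r(\omega_i \to \omega_o) \cos\theta_i \, d\omega_i \leq 1,$$

with equality only for a material that absorbs nothing.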


keenan commented on slide_021 of The Rendering Equation ()

@sickgraph: Exactly. The surface will absorb some light, and this absorption could be unequal in different wavelengths. So there will be some coloration of the outgoing radiance, and in general the total outgoing radiance should be no greater than the total incident radiance.


sickgraph commented on slide_025 of The Rendering Equation ()

Since we are integrating over the hemisphere, we get the <= 1 condition. This could be because some of the light gets absorbed (or undergoes subsurface scattering? Are they the same?). If we were to integrate over a sphere, might that integrate to 1?


sickgraph commented on slide_021 of The Rendering Equation ()

Is the exitant radiance the normalized integration of the incident radiance?

Edit: I see how that could be wrong since it doesn't account for the color of the surface itself.


fengyupeng commented on slide_006 of Accelerating Geometric Queries ()

How would std::min() behave on two 3D vectors? For example, isn't it hard to tell which one is bigger: v1 = Vector3D(1,3,5) or v2 = Vector3D(2,3,4)? What will min(v1, v2) return? Will it return (1,3,4), i.e., the smallest of each field? Or just crash?


strikeskids commented on slide_026 of Accelerating Geometric Queries ()

Shouldn't this really be N-2 split points in a single axis? If we have primitives a, b, c in order, choosing to split on the right side of a would give us the same construction as choosing to split on the left side of b.


Cake commented on slide_008 of Accelerating Geometric Queries ()

t_max < t_min

Also if the t-interval calculated here is disjoint from the ray's own t-interval


keenan commented on slide_050 of The Rendering Equation ()

@intrepidowl Because you're sampling in proportion to p. So if, for instance, you're sampling twice as often from a region A than a region B, each individual sample obtained from region A should contribute half as much as those from region B. Otherwise, region A will appear brighter than it really is (since you're picking twice as many samples from there).
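
To make this concrete, here's a minimal sketch of the estimator; Spectrum, sampleDirection, f, Li, and cosTheta are hypothetical placeholder names, not the actual assignment code:

Spectrum estimateRadiance( int nSamples )
{
   Spectrum sum( 0. );

   for( int k = 0; k < nSamples; k++ )
   {
      double pdf;
      Vec3D wi = sampleDirection( &pdf ); // draw wi with density pdf

      // dividing by pdf down-weights directions we sample often,
      // keeping the estimate unbiased
      sum += f( wi ) * Li( wi ) * cosTheta( wi ) / pdf;
   }

   return sum / nSamples;
}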


keenan commented on slide_050 of The Rendering Equation ()

@zhengbol Yeah, exactly: spectra are multiplied component-wise. This is a fairly realistic model for most materials (e.g., incoming light of some frequency gets scattered to light of the same frequency). Some more exotic materials will fluoresce and produce a different color.
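
In code, the componentwise product might look something like this (just a sketch, assuming an RGB-style Spectrum with r, g, b channels; not the actual spectrum.h):

Spectrum mul( Spectrum a, Spectrum b )
{
   Spectrum c;

   // each color channel scatters independently
   c.r = a.r * b.r;
   c.g = a.g * b.g;
   c.b = a.b * b.b;

   return c;
}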


keenan commented on slide_022 of The Rendering Equation ()

Yeah, exactly. (And assuming that "z" is up.)


intrepidowl commented on slide_050 of The Rendering Equation ()

Why do we divide by pdf? Wouldn't we want to multiply by pdf to weight the amount of light received from this direction by the probability of getting a ray from this direction?


intrepidowl commented on slide_050 of The Rendering Equation ()

Component-wise (in spectrum.h at least).


omegaiota commented on slide_022 of The Rendering Equation ()

So the brightness of the center of the circle is the radiance in (0,0,1) direction (assuming unit normal)?


zhengbol commented on slide_050 of The Rendering Equation ()

How should we understand the multiplication of two spectra? (Say, f * Li in the code above)?


keenan commented on slide_027 of Accelerating Geometric Queries ()

Yes and yes.

This is also the reason we (grossly) approximate the cost of subtrees rather than doing a full recursive evaluation. You may be able to use dynamic programming to get a more accurate cost estimate, but empirically it’s not clear such an estimate is a big win in terms of the evaluation cost for the final tree.
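
For reference, here's a rough sketch of the bucketed SAH sweep being discussed; BBox, bucket, unionOf, and the cost constants are hypothetical names, not the actual assignment code:

int bestPlane = -1;
double bestCost = INFINITY;

for( int i = 1; i < B; i++ ) // plane between bucket i-1 and bucket i
{
   // union the buckets on each side of the plane
   BBox boxA, boxB;
   int nA = 0, nB = 0;
   for( int j = 0; j < i; j++ ) { boxA = unionOf( boxA, bucket[j].box ); nA += bucket[j].count; }
   for( int j = i; j < B; j++ ) { boxB = unionOf( boxB, bucket[j].box ); nB += bucket[j].count; }

   // SAH: weight each child's intersection cost by the probability
   // that a ray hitting the parent box (area S_N) also hits the child
   double cost = C_trav + ( boxA.area()/S_N )*nA*C_isect
                        + ( boxB.area()/S_N )*nB*C_isect;

   if( cost < bestCost ) { bestCost = cost; bestPlane = i; }
}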


cma commented on slide_027 of Accelerating Geometric Queries ()

A few clarifying questions about the listed algorithm:

1) On the step that says "For each of the B-1 possible partitioning planes evaluate SAH", if I'm understanding this correctly, it means to take the union of the buckets on each side of the planes, and then evaluate the SAH for each of the two sides?

2) On the step that says "Recurse on lowest cost partition found (or make node a leaf)", does "lowest cost" mean the combined costs/SAH of each side (i.e. adding the cost of the left and right partitions)?


keenan commented on slide_044 of Color ()

This picture stinks!

Green is not nearly that bright in CMYK.


keenan commented on slide_057 of Color ()

Printers have gamut, just like displays. A large fraction of printers use CMYK and their gamut does indeed have to do with the quality of the inks—not to mention the paper! (You definitely get a bigger gamut by printing on white paper than black paper, for instance. ;-)). Some printers use more than just CMYK, e.g., hexachrome printers use orange and green in addition to CMYK.

There are also factors that contribute to print quality beyond color gamut; a big one for instance is the question of whether prints are "archival" quality, i.e., will they hold up over time, or will the colors fade? This question has to do with things like the basic constitution of the inks: do you use "dyes" where color is dissolved in liquid (akin to a solution like salt water) or "pigments" where color is suspended in a liquid (akin to a colloid like milk). Here there may be a trade-off between gamut and archival quality, i.e., archival inks may be duller in the beginning but retain a larger gamut over time.

Here's a nice chromaticity diagram showing roughly where printer CMYK (in red) and Hexachrome (in yellow) fall relative to the sRGB color space that is fairly common for displays:


keenan commented on slide_060 of Color ()

The picture's not great; the key idea is that if you set the value of the red channel to .8, you don't necessarily get something twice as bright in red as .4.
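
For instance, under a simple power-law (gamma) model with exponent around 2.2 (an assumption; the exact response varies by display), emitted light scales like the encoded value raised to that power, so

$$\frac{0.8^{2.2}}{0.4^{2.2}} = 2^{2.2} \approx 4.6,$$

i.e., doubling the encoded value more than quadruples the physical brightness.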


keenan commented on slide_025 of Accelerating Geometric Queries ()
  1. Yes, because the BVH needs to be view-independent. I.e., rays could come in from any direction, not just the direction of the camera (consider light bouncing off the wall and then hitting an object).

  2. Yes. The surface area could be approximated a number of ways; a simple way would be that SN is the surface area of the current (interior) node's bounding box, and SA,SB are the bounding box areas of the two children. But you are of course free (and encouraged) to try different things and see what works best.
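
For instance, the bounding box surface area might be computed like this (a sketch, assuming an axis-aligned BBox type storing min/max corners):

double surfaceArea( BBox b )
{
   Vec3D d = b.max - b.min; // box extents along x, y, z

   // an axis-aligned box has two faces of each of the three sizes
   return 2. * ( d.x*d.y + d.y*d.z + d.z*d.x );
}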


intrepidowl commented on slide_025 of Accelerating Geometric Queries ()

Two questions:

  1. We just compute surface areas based on actual area, and not considering things like the area of their projection onto the view plane, right? For example, a very large triangle viewed edge-on (almost perpendicular to the view plane) might create fewer ray-triangle intersections than a smaller triangle viewed parallel to the view plane. This is just a case where the approximation doesn't do so well, and we wouldn't account for this (particularly in the BVH task of HW 3), right?

  2. To summarize/check my understanding, are the following true?

C = total cost

C_trav = cost of traversing an interior node (e.g. load data, bbox check)

C_isect = cost of computing any ray-primitive intersection (where we assume for simplicity that the cost is the same for all primitives)

S_N = total surface area of all primitives in both child A and B

S_A = total surface area of primitives in child node A

S_B = total surface area of primitives in child node B

N_A = number of primitives in child node A

N_B = number of primitives in child node B


cma commented on slide_060 of Color ()

Could someone clarify this idea of the nonlinear relationship and the placement of the line on the color graph?


cma commented on slide_057 of Color ()

What differentiates a high quality printer from a low quality printer, given that low quality printers print in CMYK, which causes the colors to look worse? Do they use a different color scheme for their inks?


@lwan Good insight. Yes, this is a nice way to argue. Since you know quaternion multiplication can be used to encode 3D rotations, and 3D rotations don't commute, there's no way multiplication could commute (in general). Much easier than writing out the full product both ways. :-)


I think there are two reasons why the product doesn't commute:

  1. qp and pq are linear operations, and we know that rotation + translation composition is nonlinear.
  2. We also know that 3D rotations aren't commutative.

So $$ pq = qp $$ iff the translation is 0 and the rotation is the identity.


Ah, right. Yes, @joel is right - I didn't notice the expression for the color itself.


keenan commented on slide_026 of Accelerating Geometric Queries ()

Whoops, looks like @joel and I responded at the same time. ;-)


keenan commented on slide_015 of Vector Calculus ()

Excellent. Yes, you're right that this argument applies only in the special case of $u + v + w = 0$, even though the Jacobi identity applies for any triple of vectors. So, still some mystery there! :-)
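
For reference, the Jacobi identity for the cross product reads

$$u \times (v \times w) + v \times (w \times u) + w \times (u \times v) = 0.$$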


keenan commented on slide_058 of Introduction to Geometry ()
  1. For each cubic Bézier curve the degrees of freedom are the four control points $p_0, p_1, p_2, p_3$. You can either think of these as four vector degrees of freedom, or eight scalar degrees of freedom (since each point has two coordinates: $x$ and $y$). I generally prefer to think in terms of vector degrees of freedom, since counting is easier (unless at some point you have to deal with a scalar-valued constraint).

  2. To get a curve with no "gaps" or "kinks", both the positions and the tangents must agree at endpoints. In particular, for each cubic Bézier curve, you have to make sure that:

  • the left endpoint is equal to the right endpoint of the previous segment
  • the right endpoint is equal to the left endpoint of the next segment
  • the tangent at the left endpoint is equal to the tangent at the right endpoint of the previous segment
  • the tangent at the right endpoint is equal to the tangent at the left endpoint of the next segment

In total, four (vector-valued) constraints. Note in particular when we say that the "tangents must agree," that means that both the direction and the magnitude of the derivative need to be the same. So this is really a constraint on both components of the 2-vector, not just the direction of the corresponding unit vector.

Since there are four degrees of freedom and four constraints, a cubic Bézier segment can always match the positions and tangents of its neighbors.
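
Concretely, a cubic Bézier curve $B(t)$ with control points $p_0, p_1, p_2, p_3$ has endpoint tangents

$$B'(0) = 3(p_1 - p_0), \qquad B'(1) = 3(p_3 - p_2),$$

so each position or tangent constraint pins down one control point in terms of its neighbors, using up one of the four vector degrees of freedom.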

  3. Right: the only way to get tangent continuity with a linear Bézier curve is to have a perfectly straight line. For a quadratic Bézier you still have problems, but it's less obvious. The quickest way to see it is simply that a quadratic Bézier has only three (vector) degrees of freedom, but there are four constraints. So we can get, for instance, the endpoints to always match, but not both tangents. We can get one of two tangents to match, for instance, or we could find some middle ground where the two tangents match the two neighboring tangents "as well as possible."

This analysis is ultimately why cubic Bézier is a common choice in applications/file formats/etc. E.g., Adobe Illustrator, Inkscape, etc., allow users to draw with cubic Béziers; likewise (I believe) SVG encodes curves via cubic Bézier.

Here are some images from an excellent article discussing the tradeoffs between quadratic and cubic curves, especially in the context of font design:


keenan commented on slide_026 of Accelerating Geometric Queries ()

@Cake Because you can put the split plane at either extreme point of each primitive along the given axis. The minus-two comes from the fact that putting the split plane to the "right" or "left" of all the primitives isn't actually a splitting. ;-)

This count is basically indicated in the figure: notice that each vertical line touches the leftmost or rightmost point of some triangle. But the very leftmost and very rightmost point in the entire collection is omitted (no vertical line).


keenan commented on slide_015 of Accelerating Geometric Queries ()

I agree!


keenan commented on slide_024 of Digital Geometry Processing ()

@a4anna Good question. More broadly, aliasing happens anytime you have a signal processing strategy that causes two different signals to end up looking the same, i.e., it makes one signal look like another signal that it is not. In this case, it's our interpolation strategy rather than our sampling strategy that fails: by using Catmull-Clark subdivision to interpolate the positions and normals of a triangle mesh, our nice smooth signal ends up looking as though it actually came from a highly bumpy signal.


blue = large value (1), black = small value (0)

The barycentric coordinate for vertex c should get bigger as we approach c, so that when we're at c the triple of barycentric coords is equal to (0,0,1).
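
One standard way to make this precise is the area-ratio definition (the slide may use an equivalent form): for a triangle with vertices a, b, c,

$$\phi_c(x) = \frac{\mathrm{area}(a,b,x)}{\mathrm{area}(a,b,c)},$$

which is 0 everywhere on edge ab and 1 at c.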


keenan commented on slide_044 of Accelerating Geometric Queries ()

@rasterize Right: if you just use a single set of equally-sized tiles, then you get something like a uniform grid: a grid of larger boxes, each of which contains several primitives (in this case, individual fragments). You could of course take it up a notch by adding an additional level of even bigger boxes on top of this one, which starts to look more like a quadtree. Modern GPUs, AFAIK, use only a single "level", especially for things like "early Z-cull", i.e., doing a coarse-level depth check before going on to rasterize fragments at a fine level.


keenan commented on slide_034 of Accelerating Geometric Queries ()

@rasterize Plugging in a big number is an ok strategy, but doesn't always work! For instance, which function grows faster: $x^2 \sin(x)$, or $x^2 \cos(x)$? You can of course pick enormous numbers for which either one is bigger! To be absolutely sure your strategy of comparing values succeeds, you would need to determine, for instance, the largest value $x_{\mathrm{max}}$ for which the two functions are equal, and then check which one is bigger for a value $x$ larger than $x_{\mathrm{max}}$. Another strategy would be to take the ratio of the two functions and consider what happens as $x$ goes to $\infty$: does the ratio approach a value above 1 or below 1? Etc.


keenan commented on slide_008 of Accelerating Geometric Queries ()

@rasterize That's certainly one case where there's a "miss." Can you think of some others?


keenan commented on slide_020 of Introduction to Geometry ()

@rasterize That's actually a tough question! It depends very much on how much you can say about your point sampling. If you have a really "nice" sampling, things get easier; if you have a totally random and unpredictable sampling, it's pretty hard. A "nice" sampling might mean something like: the biggest distance between any point and its closest neighbor is no more than some fixed constant $\alpha$, AND the smallest distance between any point and its closest neighbor is no less than some fixed constant $\beta$. This way you know that samples are evenly spread out, and can more easily downsample/upsample according to the current screen resolution (e.g., how big is a projected pixel relative to $\alpha$ and $\beta$?).

You can make this sampling even nicer by asking for a so-called "blue noise" property, which says, roughly speaking, that if you do a Fourier transform of the points (for now pushing under the rug exactly what this means...) you have very close to a uniform distribution of power in the frequency domain. This is the kind of point sampling that is also nice for, say, stippling images.


joel commented on slide_026 of Accelerating Geometric Queries ()

In 2D, we can split the space on either the left or right of each triangle's centroid, as shown in the diagram above. This gives us 2N possible splits for N triangles in total, assuming that the triangles have distinct centroids. Then, we observe that the split to the left of the leftmost triangle and the split to the right of the rightmost triangle don't make any difference and are kind of useless. So we have 2N-2 splits we might actually want to make.


keenan commented on slide_009 of Vector Calculus ()

Actually, you don't even need to adjust the norm: to rotate $u$ by an angle $\theta$ you can just write

$v = \cos(\theta) u + \sin(\theta)(n \times u).$

This vector will have the same norm as $u$, since

$|v|^2 = |\cos(\theta)u|^2 + 2\langle \cos(\theta) u, \sin(\theta)(n \times u) \rangle + |\sin(\theta)(n \times u)|^2 = (\cos(\theta)^2 + \sin(\theta)^2)|u|^2 = |u|^2.$

The reason the cross-term disappears is that $u$ and $n \times u$ are orthogonal. The reason we can combine the $\cos(\theta)$ and $\sin(\theta)$ term is that $u$ and $n \times u$ have the same norm, even though they're not the same vector.
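
In code, the whole rotation might look like this (a sketch; assumes n is a unit vector orthogonal to u, that the vector type supports scalar multiplication and addition, and a cross() routine):

Vec3D rotate( Vec3D u, Vec3D n, double theta )
{
   // rotate u by angle theta in the plane spanned by u and n x u;
   // assumes |n| = 1 and n orthogonal to u
   return cos(theta)*u + sin(theta)*cross( n, u );
}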


keenan commented on slide_007 of Geometric Queries ()

Shoot - the LaTeX renderer isn't working great here. In ASCII:

[ x1 ]^T [ y1 ]                  [ y1 ]
[ .  ]   [ .  ]                  [ .  ]
[ .  ]   [ .  ]  = [ x1 ... xn ] [ .  ] = x1 y1 + ... + xn yn
[ .  ]   [ .  ]                  [ .  ]
[ xn ]   [ yn ]                  [ yn ]