It's also unbiased because the expected value is the correct value, i.e., if we compute this estimate many times and average the results, we will get the correct integral (because we're effectively just applying standard Monte Carlo).
I believe it's inconsistent because I_n does not approach I as n → ∞ (since we always take only a single random sample).
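A small numerical sketch of the point above, with an assumed integrand (f(x) = 3x² on [0,1], whose exact integral is 1 — not from the slides): any one single-sample estimate is far from the answer, yet averaging many independent single-sample estimates recovers the true integral, which is exactly what "unbiased but inconsistent" means here.

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Assumed integrand for illustration: f(x) = 3x^2 on [0,1], exact integral = 1.
double f(double x) { return 3.0 * x * x; }

// Single-sample Monte Carlo estimator: one uniform sample on [0,1], where the
// pdf p(x) = 1, so the estimate is just f(X). It never converges on its own
// (inconsistent) because it always uses exactly one sample.
double singleSampleEstimate(std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return f(u(rng));
}

// But its expected value is the true integral (unbiased): averaging many
// independent single-sample estimates approaches 1.
double averageOfEstimates(int trials, unsigned seed) {
    std::mt19937 rng(seed);
    double sum = 0.0;
    for (int i = 0; i < trials; i++) sum += singleSampleEstimate(rng);
    return sum / trials;
}
```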
Would this be inconsistent and not unbiased? Because for one point, the estimator would only be unbiased and consistent if the image was just one color.
I think it's to illustrate Lambert's Law (described a few slides over). Bigger relative angle between normal and sun rays means less irradiance.
I believe this representation of the Euler-Lagrange equation might be confusing. I don't think that q and q' are independent just because we are taking partial derivatives. An alternative expression would be:
Let L(x, y, z) be such that L(q, q', t) = K - U. Then the Euler-Lagrange equation gives:
(d/dt) ((dL/dy) (q, q', t)) = (dL / dx) (q, q', t).
On the left: (1) Take the partial with respect to the second coordinate where L is a function of x, y, z. (2) Substitute in (q, q', t). (3) Differentiate with respect to t.
On the right: (1) Take the partial with respect to the first coordinate. (2) Substitute in (q, q', t).
I believe this is why q and q' are "unrelated" in the Euler-Lagrange Equation.
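In LaTeX notation, the same statement (keeping the slot names x, y, z from above, and writing q' as \dot q) reads:

```latex
\frac{d}{dt}\left[\frac{\partial L}{\partial y}\big(q(t),\, \dot q(t),\, t\big)\right]
  = \frac{\partial L}{\partial x}\big(q(t),\, \dot q(t),\, t\big)
```

The partials are taken with respect to the slots of L *before* substituting (q, q', t), which is the precise sense in which q and q' can be treated as independent.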
Should probably be e1, e2 and not e1, e1.
What is the comment about dropping the n^2?
What is the phi for? Is it to account for rays in 3D?
What exactly is albedo?
Think of this point as the center of a hemisphere. The bottom of the car is projected onto the surface of the hemisphere, which is shown here in that square image with black background.
What is the rationale behind the lamp (on the foreground ceiling) and glassware (on the tables) getting darker with one-bounce global illumination as opposed to direct illumination? And then getting lighter again from 4 bounces onwards?
Why does the BSSRDF image look not only brighter, but also blurrier? What causes the "blurry" effect?
For spherical lighting, why does the bottom of the hippo appear dark? Wouldn't we play the same game of putting a hemisphere at each point of the mesh and projecting the light source onto it to see how much light each point receives, and the points on the bottom would "see" the same kinds of light that the top points "see," right? Or is the hippo actually rendered with a hemispherical light source?
For ambient occlusion, is this robust to other types of lighting or lighting changes? For example, in a stormy night type of scene where lighting is generally low except for flashes of lightning, would ambient occlusion be a good choice?
Upward (see the small white dot in the first three images)
One method might be anti-aliasing after the TV (or playback device, for example) has received the signal, but only on the color channel. Would this help? From what I remember, the reason we downsample is to reduce the amount transmitted over the air or stored, but we might still be able to do some tricks "at the edge" to improve the final image presented to the user.
Another method might be to do some kind of adaptive blocking of the color channel according to places in the image where there is more information, but the overhead involved in this method might be more than it's worth.
It affects the tangents of the segment closest to the point that was perturbed, and by the second set of equations, the changes propagate down to the rest of the spline.
How does moving one point affect the whole curve? Won't it just affect at most two piecewise functions?
I think it's about n splines rather than points.
But we reference f_0, …, f_n in the first set of equations.
Should the t in f_(recon)(t) be an x? If not, could you explain why not?
I am having difficulty considering the view of the hemisphere from the given point. Any help please!
What happens when theta = 90 degrees?
Which way is the light moving?
I'm not sure what the purpose of the normal to the globe is in this slide.
There are n total points.
In the previous slide it seems that we can only define the discrepancy of X based on "some family of regions S". Where is that condition in Koksma's theorem? Is it included in V(f)?
What does perceptually significant mean? Aren't all colors equally significant?
Are there n or n+1 total points?
Is radiance constant along a ray because both irradiance and solid angle decrease quadratically with respect to distance?
How can we reduce this artifact? By reducing the downsampling rate? Are there any other alternatives?
Yes: as n → ∞ we approach the actual image, so it's consistent; but for any fixed n, the expected value of the estimator is not the actual image (there is no randomness involved, so the estimator equals its expected value!), so it's biased.
So frequency is determined by temperature, and color is determined by frequency!
It is because the particles get heated up and start oscillating at the frequency that corresponds to the red color.
Is it consistent & not unbiased?
@lwan Yeah, it's definitely interesting to think about how this choice affects energy behavior. You're also totally right that you can evaluate it at the midpoint. Actually, even for nonlinear functions f something like this is possible. You can either use an update
(q_{k+1} - q_k)/tau = f( (q_k + q_{k+1})/2 )
and solve a nonlinear equation for q_k+1, or
(q_{k+1} - q_k)/tau = ( f(q_k) + f(q_{k+1}) ) / 2
and solve a different nonlinear equation for q_{k+1}. The first rule is called the "midpoint rule" and the second one is called the "trapezoid rule." These will have different behavior (stability, accuracy, etc.) depending on exactly what system you're integrating. For instance, for certain systems the midpoint rule will exactly preserve the total energy of the system (offering an alternative to symplectic Euler).
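A minimal sketch of the two update rules above for q' = f(q), solving the nonlinear equation for q_{k+1} by simple fixed-point iteration (the choice of iteration scheme, and the 100-iteration cap, are assumptions for illustration — in practice you'd typically use Newton's method with a convergence test):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Midpoint rule: (q_{k+1} - q_k)/tau = f( (q_k + q_{k+1})/2 )
// Solved for q_{k+1} by fixed-point iteration, starting from the guess q_k.
double midpointStep(std::function<double(double)> f, double qk, double tau) {
    double q = qk;
    for (int i = 0; i < 100; i++)
        q = qk + tau * f(0.5 * (qk + q));
    return q;
}

// Trapezoid rule: (q_{k+1} - q_k)/tau = ( f(q_k) + f(q_{k+1}) ) / 2
// Same idea, but averaging f at the two endpoints instead of evaluating at
// the midpoint; note this gives a *different* nonlinear equation.
double trapezoidStep(std::function<double(double)> f, double qk, double tau) {
    double q = qk;
    for (int i = 0; i < 100; i++)
        q = qk + 0.5 * tau * (f(qk) + f(q));
    return q;
}
```

For small enough tau the fixed-point map is a contraction and both iterations converge; the two rules then give (slightly) different q_{k+1}, with the different stability/energy behavior described above.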
Wait.. where do we evaluate the velocity function?
I guess this would depend on whether you want your approximation to be an upper or lower bound on the true velocity? Alternatively, we could evaluate the velocity at the midpoint of the configurations, if they're linearly interpolable.
@fengyupeng Good question, and there is some question about what "tangent" means. To be clear, when I say tangent in this context I simply mean the first derivative of the curve with respect to time. So the full set of constraints on a cubic Bézier curve c(t) is something like
c(0) = f0
c(1) = f1
c'(0) = u0
c'(1) = u1
where f0,f1 are the endpoints, u0,u1 are vectors (not necessarily unit length!), and c' is the derivative with respect to time. In other words, these last two constraints say not only what direction the tangent should point, but also how big the derivative is, i.e., "how fast we're moving along the curve" at t=0 and t=1.
This way, the curve is uniquely determined: we have four vector degrees of freedom (the four control points) and four vector constraints (the two endpoints, and the two velocity vectors).
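A minimal sketch of how these four constraints pin down the curve, in 1D for simplicity (the conversion uses the standard cubic Bernstein relations c'(0) = 3(p1 − p0) and c'(1) = 3(p3 − p2); the struct and function names are just for illustration):

```cpp
#include <cassert>
#include <cmath>

// Control points of a 1D cubic Bézier curve.
struct Cubic { double p0, p1, p2, p3; };

// Build the unique cubic satisfying c(0)=f0, c(1)=f1, c'(0)=u0, c'(1)=u1,
// using c'(0) = 3(p1 - p0) and c'(1) = 3(p3 - p2).
Cubic fromConstraints(double f0, double f1, double u0, double u1) {
    return { f0, f0 + u0 / 3.0, f1 - u1 / 3.0, f1 };
}

// Evaluate c(t) via the cubic Bernstein basis.
double evaluate(const Cubic& c, double t) {
    double s = 1.0 - t;
    return s*s*s*c.p0 + 3.0*s*s*t*c.p1 + 3.0*s*t*t*c.p2 + t*t*t*c.p3;
}

// Evaluate the derivative c'(t).
double derivative(const Cubic& c, double t) {
    double s = 1.0 - t;
    return 3.0*s*s*(c.p1 - c.p0) + 6.0*s*t*(c.p2 - c.p1) + 3.0*t*t*(c.p3 - c.p2);
}
```

Note how the magnitudes of u0, u1 (not just their directions) show up in p1 and p2 — scaling a tangent vector slides the interior control points, which is exactly the "how fast we're moving" part of the constraint.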
@Cake. Yes—good catch!
That seems to just correspond to moving the white points further along the blue+dashed line segments. The "alignment" is just a consequence of the position of the white control point.
When creating the curve, moving one point of the endpoint/tangent pair influences the previous and next sections of the curve (modulo boundary cases) but no other sections. So this is also a nice way to visualize that each section of the curve is determined by 4 parameters: the endpoint/tangent pairs at each of its endpoints.
In class you said that for each section of the Bézier curve there are 4 restrictions: 2 endpoints and 2 tangents. However, in programs like Illustrator, you can not only control the orientation of the control bar at each point, but also how "long" that bar is; the "longer" the bar, the more the curve "aligns" with the tangent. Wouldn't that create two more restrictions, making it 6 in total?
Should the coefficients be c_j instead?
@zhengbol Yes, and it will reduce variance (or rather, ensure that variance does not increase) for the same reason that it works for a uniform distribution: the integrand is partitioned into regions, each of which has variance no greater than that of the whole integrand.
However, it may indeed be true that uniform bins are no longer optimal in the sense that they do not provide the greatest possible reduction in variance.
If you know, ahead of time, that you are going to "warp" your uniform samples (as in the case of, say, cosine-weighted importance sampling), how might you choose bins differently in order to reduce variance? What's the guiding principle?
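As a concrete illustration of the variance claim above, here's a small sketch (the integrand f(x) = x and all constants are assumptions, not from the thread) comparing a plain n-sample estimator of ∫₀¹ f(x) dx against a stratified one that places one uniform sample in each of n equal-width bins:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// One n-sample estimate of the integral of f(x) = x over [0,1] (exact: 1/2).
// Plain: all samples uniform on [0,1]. Stratified: sample i is uniform on
// the i-th of n equal bins, i.e., x = (i + u)/n.
double estimate(bool stratified, int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double x = stratified ? (i + u(rng)) / n : u(rng);
        sum += x; // f(x) = x
    }
    return sum / n;
}

// Empirical variance of the estimator over many independent runs.
double empiricalVariance(bool stratified, int n, int trials, unsigned seed) {
    std::mt19937 rng(seed);
    double sum = 0.0, sumSq = 0.0;
    for (int t = 0; t < trials; t++) {
        double e = estimate(stratified, n, rng);
        sum += e;
        sumSq += e * e;
    }
    double mean = sum / trials;
    return sumSq / trials - mean * mean;
}
```

For this integrand the stratified estimator's variance is dramatically smaller, since each bin's conditional variance shrinks with the bin width — which is also why warping the samples afterward changes which binning is optimal.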
If the target distribution is not uniform, can we use stratified sampling?
@Cake Right. Also if t_max is negative, for instance.
@strikekids: Yes, but only because we're determining the partition by the centroid of the primitive.
@fengyupeng Yes, great question. In this case, the min and max methods are defined componentwise, i.e., they would look something like this:
Vec3D min( Vec3D a, Vec3D b )
{
   Vec3D c;
   c.x = min( a.x, b.x );
   c.y = min( a.y, b.y );
   c.z = min( a.z, b.z );
   return c;
}
(and similarly for max).
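A self-contained sketch of the same idea, with a minimal stand-in Vec3D (an assumption — the course codebase's Vec3D will differ) and the typical use case: growing an axis-aligned bounding box to enclose a point. The functions are named vmin/vmax here only to avoid shadowing std::min/std::max.

```cpp
#include <algorithm>
#include <cassert>

// Minimal stand-in vector type (illustrative only).
struct Vec3D { double x, y, z; };

// Componentwise min/max, exactly as described above.
Vec3D vmin(Vec3D a, Vec3D b) {
    return { std::min(a.x, b.x), std::min(a.y, b.y), std::min(a.z, b.z) };
}
Vec3D vmax(Vec3D a, Vec3D b) {
    return { std::max(a.x, b.x), std::max(a.y, b.y), std::max(a.z, b.z) };
}

// Typical use: expand an axis-aligned bounding box to contain a point.
struct BBox { Vec3D lo, hi; };
BBox expand(BBox b, Vec3D p) { return { vmin(b.lo, p), vmax(b.hi, p) }; }
```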
@sickgraph Really not about sphere vs. hemisphere; as you say, if the material absorbs any light at all, the integral will be less than 1. This is true whether we're just reflecting (hemisphere) or reflecting and transmitting (sphere).
@sickgraph: Exactly. The surface will absorb some light, and this absorption could be unequal in different wavelengths. So there will be some coloration of the outgoing radiance, and in general the total outgoing radiance should be no greater than the total incident radiance.