@pavelkang: Nominally, it's O(n^3) in 3D and O(n^2) in 2D. :-)
@pavelkang because you also need to store the value of the function at that x,y location
edit:
Oops! Read that wrong; storing the value of the function still makes it O(n^2) in 2D.
Huh, so perfect hashing is kind of like an asymptotic certification of hashing: ordinary hashing only comes with theoretical probabilistic guarantees, but we can also construct a scheme that is guaranteed in practice to match the expected efficiency.
This also seems kind of like the halting problem, in that we must ask ourselves: given more time, will the value of this pixel change, or will it stay the same no matter how long we run our Monte Carlo estimator? Are we "done"?
Amazing that you humans can't see further into infrared.
@ak-47 Correct---that's the difference between using reduced coordinates and using constraints, as outlined a few slides later.
Another way of thinking about it is: assume you're starting out with a homogeneous fluid that has not been compressed. At this initial moment in time, $\rho$ is certainly constant. Now what has to be true about the velocity field that takes us forward to the "next moment in time" in order to remain incompressible? This condition is expressed in terms of the current mass density, which is constant. Again, no circular reasoning there.
Assuming conservation of mass, we get the first equation. Assuming we're working with a liquid that has constant mass density (e.g., water rather than oil & vinegar) we get the second equation. We didn't start with any assumption on the velocity field $u$, so the argument isn't circular---we're just re-expressing our assumption about incompressibility in terms of the velocity.
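To spell out the (standard) chain of reasoning: conservation of mass gives the continuity equation $\partial \rho / \partial t + \nabla \cdot (\rho u) = 0$; if $\rho$ is a nonzero constant, the first term vanishes and $\rho$ factors out of the second, leaving the incompressibility condition

$$ \nabla \cdot u = 0. $$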
@ak-47 If you replace "day" with "yoctosecond," then yes. This is the instantaneous change in the temperature due both to the fact that the fluid may be heating up/cooling down, and the fact that it may be moving.
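For reference, this is the standard material derivative of the temperature $T$ in a velocity field $u$:

$$ \frac{DT}{Dt} = \frac{\partial T}{\partial t} + u \cdot \nabla T, $$

where the first term is the local heating/cooling at a fixed point and the second accounts for the fluid carrying temperature past that point.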
Understanding check: Images at different virtual depths in VR require different vergences (because parallax is preserved in VR), but the accommodation stays constant (because the screen is still the same distance from your eye)?
Amazing that Skynet would settle for monochromatic vision on the Terminator.
It seems like we have two options to solve this: add a constraint enforced by a Lagrange multiplier, OR just use $\theta$ as our variable and treat the length as a constant when doing our calculus.
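For concreteness, here is roughly what the two options look like for a pendulum of length $L$ (a generic sketch, not tied to any particular slide). Using the reduced coordinate $\theta$:

$$ \ddot{\theta} = -\frac{g}{L} \sin\theta, $$

versus using the full position $x \in \mathbb{R}^2$ with the constraint $C(x) = |x| - L = 0$ enforced by a Lagrange multiplier $\lambda$:

$$ m\ddot{x} = f_{\text{ext}} + \lambda\, \nabla C(x), \qquad C(x) = 0. $$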
"Since density is constant, we get our incompressibility constraint"...that sounds like circular logic? What is the empirical fact we start with here and what do we conclude from it?
Understanding check: This is the difference you'd get between consecutive readings by sticking your thermometer in a single spot of a moving fluid.
E.g. you put a thermometer at The Point, let the rivers flow for a day, then put the thermometer at The Point again, and the difference is the material derivative.
@BryceSummers: Those phenomena are certainly needed to get a "correct" image in the sense that it matches real-world experiments, but the idea on this slide is a bit different. Here we're saying: even if you allow a relatively simple model of scattering, etc., can you guarantee that the program produces a correct image within some given tolerance $\epsilon$? In other words, can you come up with a sampling strategy that guarantees the numerical estimate of an integral agrees with the true integral (up to a fixed tolerance)? This problem actually doesn't have anything to do with rendering: consider numerically integrating a function $f: \mathbb{R} \to \mathbb{R}$. If this function looks like a tiny spike (not quite a Dirac delta, but almost), then we could throw billions of samples at the problem without ever getting a nonzero value. One might then (incorrectly) conclude that the integral is zero, even though the magnitude of the spike (and hence the integral) can be arbitrarily large.
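To make the "tiny spike" example concrete (a toy sketch; the width and height below are made up): naive uniform sampling can report an estimate of zero even though the true integral is large.

```python
import random

def spiky(x, width=1e-9, height=1e12):
    # A very narrow, very tall bump near x = 0.5; its integral over [0,1]
    # is width * height = 1000, but it is zero almost everywhere.
    return height if abs(x - 0.5) < width / 2 else 0.0

def mc_estimate(f, n_samples):
    # Plain Monte Carlo estimate of the integral of f over [0,1].
    return sum(f(random.random()) for _ in range(n_samples)) / n_samples

print(mc_estimate(spiky, 1_000_000))  # almost certainly prints 0.0
```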
@dvernet: The most important thing to keep in mind is that there is always some reasonable chance that you'll transition to a darker sample; we're not always going from dark to light. This basically has to be true in order for anything like Metropolis-Hastings to make sense: if we just accepted all transitions with, say, probability 1/2 (independent of the sample value), then our final collection of samples would look nothing like the integrand. That's why we have to go "uphill more" and "downhill less." (Of course, one must always prove that all of this converges... but that's the intuition.)
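Here is a minimal sketch of that intuition (illustrative only; the target function and step size are made up): uphill proposals are always accepted, downhill ones only sometimes, so the samples end up distributed like the integrand.

```python
import random, math

def f(x):
    # Nonnegative function we want samples distributed proportional to
    # (an unnormalized Gaussian bump; the normalization is never needed).
    return math.exp(-100.0 * (x - 0.3) ** 2)

def metropolis(n_samples, step=0.1):
    x = 0.3  # arbitrary starting point
    samples = []
    for _ in range(n_samples):
        # Symmetric random-walk proposal.
        x_new = x + random.uniform(-step, step)
        # "Uphill" moves (f increases) are always accepted, since the ratio
        # exceeds 1; "downhill" moves are accepted with probability f(x_new)/f(x).
        if random.random() < f(x_new) / f(x):
            x = x_new
        samples.append(x)
    return samples
```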
@dvernet: Because in that example I said that the function I'm integrating is half red and half blue. ;-) In other words, the importance density isn't picking the sample location based on color; it's picking it based on the location in the domain. So if half of the domain is red, and half of the domain is blue, but we have an importance sampling strategy that yields a very different distribution of samples, then we are doing something wrong!
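To tie this back to the estimator itself (a generic sketch, nothing specific to the slides; the function names are made up): the samples come from a density p over the domain, and each is weighted by f/p, so the distribution of sample locations should reflect p, not the color of the integrand.

```python
import random

def importance_estimate(f, sample_p, pdf_p, n):
    # Unbiased Monte Carlo estimate of the integral of f:
    # draw X ~ p and average f(X) / p(X).
    total = 0.0
    for _ in range(n):
        x = sample_p()
        total += f(x) / pdf_p(x)
    return total / n
```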
Completely understanding sources of bias in Monte Carlo rendering can be a rather tricky issue. For instance, suppose you use interpolated vertex normals for shading---already your result will be biased, because the illumination does not correspond to the actual geometry. So, you have to be careful. If you want to understand all the issues in great detail, a great place to look is the thesis by Veach.
@whdawn: Not sure I understand your question, but yes, in an intuitive sense the L2 inner product captures something about "how much two functions overlap," just like the Euclidean inner product captures something about "how much two vectors line up."
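For reference, the L2 inner product in question is

$$ \langle f, g \rangle = \int f(x)\, g(x)\, dx, $$

which is large when $f$ and $g$ are simultaneously large with the same sign, in direct analogy with $u \cdot v = \sum_i u_i v_i$ for vectors.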
@whdawn: The short answer is that they use frequency information to choose their samples more intelligently; the long answer is that you should read their paper and find out! :-)
@kapalani: Yep. And the even more amazing fact is that Eigen is not particularly fast, as far as linear solvers go! For sparse linear algebra, a lot of "heavy duty" numerical linear algebra is done using a package called SuiteSparse (for instance, this is what gets used inside MATLAB and Mathematica). In some sense, it shouldn't be too surprising that sparse linear solvers scale well, since (and this is only an extremely rough piece of intuition) the number of nonzeros is roughly linear rather than quadratic in the matrix dimension. So at least something like matrix-vector multiplication shouldn't be expensive. On the other hand, numerical linear algebra is one of the great successes of 20th century scientific computing... as indicated by this slide! In short: you should be impressed! :-)
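As a tiny illustration of the "nonzeros grow linearly, not quadratically" intuition (using SciPy purely for illustration; nothing here is specific to Eigen or SuiteSparse):

```python
import numpy as np
import scipy.sparse as sp

n = 1_000_000
# A 1D Laplacian-style matrix: about 3 nonzeros per row, so ~3n total,
# versus n^2 = 10^12 entries for a dense matrix of the same size.
A = sp.diags([np.full(n - 1, -1.0), np.full(n, 2.0), np.full(n - 1, -1.0)],
             offsets=[-1, 0, 1], format="csr")
x = np.ones(n)
y = A @ x     # sparse matrix-vector multiply: cost proportional to nnz, not n^2
print(A.nnz)  # roughly 3 million nonzeros
```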
Well, my matrices didn't format correctly... but hopefully you get the idea. :-)
@pavelkang: I think those are all good thoughts. Another way to do it is to say: in n dimensions, minus the identity matrix $-I$ can be expressed as the product of n elementary reflections about each of the axes. E.g.,
$$ \left[ \begin{array}{rrr} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{array} \right] = \left[ \begin{array}{rrr} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right]\left[ \begin{array}{rrr} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right]\left[ \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{array} \right]. $$
Since each reflection reverses the orientation (and reversing the orientation twice preserves it), $-I$ is orientation-reversing in an odd number of dimensions and orientation-preserving in an even number of dimensions.
@WALL-E: Correct. For instance, there is not even an additive identity! (I.e., no number "0" such that 0+x=x+0=x.)
@BryceSummers: That's the right idea---the simplex method is a method specifically for solving linear programs (LPs), i.e., minimize a linear objective subject to linear inequality constraints. Actually, the simplex method predates descent-based methods for LPs by quite some time. The main problem, though, was that the simplex method has (worst-case) exponential complexity. Later on, people discovered that you can solve LPs in polynomial time using interior point methods, which are (very roughly speaking) descent-based approaches that account for constraints. Still, the simplex method remains a common method of choice due to (often) good real-world performance on certain problems.
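For reference, here is what a tiny linear program looks like in code (a sketch using SciPy's linprog purely as an example solver; the numbers are made up):

```python
from scipy.optimize import linprog

# Minimize c^T x subject to A_ub @ x <= b_ub and x >= 0:
#   minimize  -x0 - 2*x1
#   subject to x0 + x1 <= 4,  x0 <= 3,  x0, x1 >= 0
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0], [1.0, 0.0]]
b_ub = [4.0, 3.0]
result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x, result.fun)  # optimal point and objective value
```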
Since we're not solving a PDE here, it's pretty easy to prescribe arbitrary values and derivatives at the boundary points and find an interpolating function---in fact, that's exactly what we did with cubic spline interpolation in Assignment 4. However, once we actually need to satisfy some additional criteria on the interior, we need to be more careful (as the next few slides reveal).
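For a reminder of what prescribing endpoint values and derivatives looks like concretely, here is a generic cubic Hermite sketch (not the actual Assignment 4 code; the function name is made up):

```python
def cubic_hermite(p0, p1, m0, m1, t):
    # Cubic on [0,1] with value p0 and derivative m0 at t = 0,
    # and value p1 and derivative m1 at t = 1 (standard Hermite basis).
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1
```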
@dsaksena: Ok! Let me know if you want to chat more.
Indeed. Here is the video I showed in class.
@kapalani: I suspect that most "real-world" IK problems don't have analytical solutions due to the complexity of real systems, but you're right that having closed-form solutions for simple systems (or even simple components of larger systems) can be useful. One way to see that general IK problems are difficult (and perhaps even impossible) to solve exactly is that many of them boil down to hard problems in algebraic geometry. In general, there are a lot of connections between algebraic geometry and kinematics, path planning, etc., and it is definitely a subject worth studying.
Right. By the way, inverse kinematics doesn't always have to be about putting an end effector at a particular position---one could also describe more general (and interesting!) goals, like "the center of mass of the robot/figure/creature should be at a certain location," etc. And, since you've already been through the exercise of starting with an IK goal defined in terms of an energy (in Assignment 4) and going all the way through to a numerical algorithm, you should now be able to design other interesting/fun/useful IK tools. :-)
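As a minimal sketch of that recipe for a planar two-link arm (illustrative only, not anyone's reference implementation; link lengths, step size, and iteration count are made up and may need tuning):

```python
import math

L1, L2 = 1.0, 1.0  # assumed link lengths

def end_effector(t1, t2):
    # Forward kinematics of a planar two-link arm.
    x = L1*math.cos(t1) + L2*math.cos(t1 + t2)
    y = L1*math.sin(t1) + L2*math.sin(t1 + t2)
    return x, y

def ik_gradient_descent(target, t1=0.1, t2=0.1, step=0.1, iters=1000):
    # Minimize E(theta) = 0.5 * |p(theta) - target|^2 by gradient descent;
    # the gradient is J^T (p - target), where J is the Jacobian of p.
    tx, ty = target
    for _ in range(iters):
        x, y = end_effector(t1, t2)
        ex, ey = x - tx, y - ty
        # Jacobian entries dp/dtheta1 and dp/dtheta2.
        dx1 = -L1*math.sin(t1) - L2*math.sin(t1 + t2)
        dy1 =  L1*math.cos(t1) + L2*math.cos(t1 + t2)
        dx2 = -L2*math.sin(t1 + t2)
        dy2 =  L2*math.cos(t1 + t2)
        t1 -= step * (dx1*ex + dy1*ey)
        t2 -= step * (dx2*ex + dy2*ey)
    return t1, t2
```

For example, `ik_gradient_descent((1.0, 1.0))` pulls the end effector toward the point (1, 1); swapping in a different energy (say, on the center of mass) only changes the gradient computation.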
@BryceSummers: Right, it's kinematic because one can use, say, a spline curve to approximate any measured data points without having any notion of where this data came from or what it means.
Good question. In Schoenberg's case (if I remember correctly), he approximates a curve by its height over a given tangent line, because he only wants to consider small displacements. What's the difference between the second derivative of a height function and the curvature of its graph? Quite a bit, actually; take a look at the Wikipedia article on curvature for some explicit calculations.
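For reference, for a graph $y = f(x)$ the curvature is

$$ \kappa(x) = \frac{|f''(x)|}{\left(1 + f'(x)^2\right)^{3/2}}, $$

which agrees with the second derivative $f''$ only when the slope $f'$ is small, i.e., for small displacements from the tangent line.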
Depends on the budget. Take a look at some Saturday morning cartoons; nowhere near as smooth as feature film. Plus, from now on, you'll never be able to not notice how jerky the motion is. :-)
Right---one typically sticks with low-order polynomials in order to avoid oscillations, but there are polynomials specifically designed to avoid Runge's phenomenon, like the Chebyshev polynomials.
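A quick way to see the difference (a toy sketch, not from the slides): interpolate Runge's function $1/(1+25x^2)$ at equispaced versus Chebyshev nodes and compare the worst-case error.

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

n = 10  # polynomial degree
x_equi = np.linspace(-1, 1, n + 1)
x_cheb = np.cos((2*np.arange(n + 1) + 1) * np.pi / (2*(n + 1)))  # Chebyshev nodes

x_test = np.linspace(-1, 1, 2001)
for name, nodes in [("equispaced", x_equi), ("Chebyshev", x_cheb)]:
    coeffs = np.polyfit(nodes, runge(nodes), n)
    err = np.max(np.abs(np.polyval(coeffs, x_test) - runge(x_test)))
    print(name, err)  # equispaced error is much larger near the endpoints
```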
@whdawn: You have to integrate them all against the S, M, and L curves, and from there determine what linear combination of the (S, M, and L coefficients of the) first three gives you the coefficients of the fourth laser.
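In other words, writing $S_i, M_i, L_i$ for the responses of the three cone types to laser $i$ with spectrum $f_i$ (this notation is just for illustration), you solve a small linear system for the weights $w_1, w_2, w_3$:

$$ \left[ \begin{array}{ccc} S_1 & S_2 & S_3 \\ M_1 & M_2 & M_3 \\ L_1 & L_2 & L_3 \end{array} \right] \left[ \begin{array}{c} w_1 \\ w_2 \\ w_3 \end{array} \right] = \left[ \begin{array}{c} S_4 \\ M_4 \\ L_4 \end{array} \right], \qquad S_i = \int S(\lambda)\, f_i(\lambda)\, d\lambda, $$

and similarly for $M_i$ and $L_i$.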
@whdawn: I would try not to draw close analogies between the brain and a computer: this is a classic mis-comparison of technologies that has been repeated throughout the ages. For instance, in the 19th century people made all sorts of analogies between the brain and a steam engine! Probably 100 years from now it will be something else entirely. The brain is not a steam engine, nor a computer; it is its own fascinating and complicated mechanism---that deserves a little reverence! :-)
Oh, I see. Thanks Keenan!
Yes, it seems that few people use anything higher than cubic.
Thinking out loud: Note that if your goal is to capture a lot of photons, making your lens smaller is a bad idea. We can see this in ray space: the green band's area tells us how many rays we capture.
So in the bottom little square we fix s and t, and we are making the query "How does the camera at (u, v) see me?", where "me" is the (s, t) pixel.
I think of it as all the information generated by a 2D array of eyes, each producing a 2D image.
So a modern camera's software automatically tries to compensate for bad rows?
Why does the micro lens prefilter the signal?
To solve this distortion, why not display the image on the inner surface of a sphere, so that the distance from the user to each pixel of the image is the same?
What does this IR camera do?
I am confused about the images in slides 54-59. Is each one coming from a different part of the sensor, and do they add up to the whole image? Is that correct?
The derivative of the CDF P(x) is the pdf p(x).
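In symbols:

$$ p(x) = \frac{d}{dx} P(x), \qquad P(x) = \int_{-\infty}^{x} p(t)\, dt. $$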