How exactly is this better than forward/backward Euler? They seem similar enough that I find it hard to believe that one will produce significantly better results than the other.

Is this similar to aliasing, where if the value you choose is too small or too different, the results would be incorrect?

If these special weights are expensive to compute (for example, if you need to compute square roots and divisions), might it be more computationally efficient to use a larger number of inexact sampling points that are easier to compute?

Is there a better way to generalize rejection sampling to higher-dimensional spheres? In 2D, pi * r^2 / (2r)^2 = pi/4, which is about 78.5%, but in 3D, 4pi/3 * r^3 / (2r)^3 = pi/6, which is about 52.4%, and I imagine it drops quite rapidly by the time you get to a 10D sphere.
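The ratios in the question can be checked (and extended to 10D) with the closed-form volume of the unit d-ball, pi^(d/2) / Gamma(d/2 + 1), divided by the volume (2r)^d of the enclosing cube. A minimal sketch; the function name is mine:

```python
import math

def ball_in_cube_acceptance(d: int) -> float:
    """Probability that a uniform sample in [-1, 1]^d lands inside the unit d-ball.

    Unit d-ball volume: pi^(d/2) / Gamma(d/2 + 1); the enclosing cube has volume 2^d.
    """
    return math.pi ** (d / 2) / (math.gamma(d / 2 + 1) * 2 ** d)

for d in (2, 3, 10):
    print(f"{d}D acceptance rate: {ball_in_cube_acceptance(d):.4%}")
```

This reproduces pi/4 for 2D and pi/6 for 3D; by 10D the acceptance rate is already below 0.3%, confirming the rapid drop the question anticipates.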

How different is it to solve the heat equation in a Lagrangian vs. an Eulerian framework? I imagine in an Eulerian framework you could use a grid solver like you said in the slide, but in a Lagrangian framework the neighbors are at irregular positions, which would require a spatial data structure like a BVH or quadtree or something similar.
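Besides BVHs and quadtrees, a common structure for the Lagrangian neighbor lookups this question describes is a uniform spatial hash: bin particles into cells of the interaction radius, then only check the 3x3 block of cells around a particle. A minimal 2D sketch under my own naming, not anything from the slides:

```python
from collections import defaultdict

def build_grid(points, h):
    # Hash each particle into a cell of size h (the interaction radius).
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x // h), int(y // h))].append(i)
    return grid

def neighbors(points, grid, i, h):
    # Only the 3x3 neighborhood of cells can contain points within radius h.
    x, y = points[i]
    cx, cy = int(x // h), int(y // h)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                if j != i and (points[j][0] - x) ** 2 + (points[j][1] - y) ** 2 <= h * h:
                    out.append(j)
    return out
```

With roughly uniform particle density this gives expected O(1) work per query, which is why particle fluid solvers often prefer it over tree structures.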

Is there any geometric reason for why these equations are also called "elliptic", "parabolic", and "hyperbolic"? Like what does the wave equation have to do with a hyperbola?

Is there a mathematical explanation of the decreased variance?

This sounds biased. How do we determine the probability density of the chosen path?

After we switch our sample distribution, the bias has not changed, right? What about consistency?

So we can combine as many distributions as we want into one distribution to do importance sampling?

Does this scene look darker because of the loss of energy?

The BRDF term looks quite abstract. Given any pair of directions, it produces a scalar. How would you store such a function?

Do radiometry-related calculations always come with ray tracing? Is there any practice of incorporating them into rasterization?

Does "screen space" mean this AO is not pre-baked but computed in real time from screen-space pixel information?

Is the disadvantage compared to a k-d tree caused by the inflexible choice of division points?

Would a bounding box not aligned to the axes perform better because of a tighter volume? How much is the benefit compared to the performance cost?

Should the equivalent statement in 1D be that 'the two derivatives are the same'?

How do we add boundary conditions to the linear system on the previous page?

Is it also necessary to optimize the positions of the joints, in addition to the IK we did in A4?

Can we view the calculation of p2 as a sequence of linear transformations, where (u0 + theta1 u1) is rotated as a whole by theta0?

Is backward Euler evaluating the new velocity using the next configuration?

What does u stand for in this equation?

This assignment is crazy. I am wondering why this picture's quality is much better than mine. How many samples were used for this result?

How does the error play out in this method of approximating? Since we are approximating twice, would the errors compound?

I understand how the Eulerian method would be calculated by just checking the flux at fixed locations, but for the Lagrangian method, how would we keep track of all the moving particles, and would that ever be as efficient as the fixed locations?

What is the difference between strong and weak convexity and how does that affect how we can solve the problem?

Are most of the problems we deal with in computer graphics discrete or continuous? Is it better to leave a discrete problem as is, or is converting it into a continuous problem better?

How would we account for the error and collisions in such a large system of equations?

Is the equation for animation actually this simple? How do other forces and changes in acceleration play a part in animations?

When animating for something big like a movie, do the animators do this individually for each character, or do common animation sequences like walking have a sort of template to work off of?

What are some methods for generating inbetweens with a computer? How can we tell exactly how some parts are moving, for example the hook, in the keyframes to create the inbetweens?

How is this the same as the Laplace equation?

When is the numerical solution preferable?

How do we look for local/global minima and maxima in this situation?

Will FK and IK be in conflict with each other/form a cycle for one joint?

Other than particle based fluids, are there other algorithms/models for fluid simulation?

I heard that curly hair is simulated with a spring system. What would be the best physical system to simulate straight hair?

Does this use a similar algorithm to the one for today's mesh?

Are there different principles for interpolations between keyframes?

What could be an inner product function for comparing images, like in the last slide?

Are there substantial advantages in complexity when solving these problems on sparse matrices? My intuition is that some of the operations on sparse matrices (like finding eigenvalues) might not be easier than on dense matrices, but things like solving linear equations, for example, might be.

This numerical solution (averaging with neighbors) seems to exhibit great parallelism; could certain hardware leverage this to make it more practical?
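The parallelism the question notices can be made concrete with a Jacobi iteration: every interior cell is replaced by the average of its four neighbors, and each update reads only the previous iterate, so all cells can update independently (which is exactly what GPUs exploit). A small sketch under my own setup, a hot left edge on a 32x32 grid:

```python
import numpy as np

def jacobi_step(u: np.ndarray) -> np.ndarray:
    # Each interior cell becomes the average of its 4 neighbors from the
    # PREVIOUS iterate, so every cell update is independent of the others.
    v = u.copy()  # boundary values stay fixed
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

# Toy boundary conditions: left edge held at 1, all other edges at 0.
u = np.zeros((32, 32))
u[:, 0] = 1.0
for _ in range(2000):
    u = jacobi_step(u)
```

After enough iterations the interior approaches the harmonic (Laplace) solution, warm near the hot edge and cooling toward the others. The vectorized slice update is itself a form of the data parallelism in question: no cell depends on another cell from the same step.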

How practical is it to compute gradient descent every time someone is reaching for something for example? Are there quicker estimation algorithms?

Is there a similar way to approach the problem when we need to find the global min/max instead of a local one? Is it practical to run this algorithm a bunch of times and take the best of all the local minima/maxima found?
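The "run it a bunch of times and keep the best" idea is exactly random-restart gradient descent. A minimal sketch on a toy multimodal 1D function of my own choosing (not from the lecture):

```python
import random

def f(x):
    return x**4 - 3 * x**2 + x   # two local minima; the global one is near x = -1.3

def df(x):
    return 4 * x**3 - 6 * x + 1  # derivative of f

def gradient_descent(x, lr=0.01, steps=500):
    # Plain fixed-step gradient descent from a single starting point.
    for _ in range(steps):
        x -= lr * df(x)
    return x

random.seed(0)
starts = [random.uniform(-3, 3) for _ in range(20)]
# Random restarts: descend from each start, keep the best local minimum found.
best = min((gradient_descent(x0) for x0 in starts), key=f)
```

This has no guarantee of finding the global minimum, but with enough restarts the chance that every start lands in the wrong basin shrinks geometrically, which is why multi-start is a common practical heuristic.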

It seems that this would be very costly in practice. Is there another way to store the sparse matrix information that is more efficient in terms of storage?
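One standard answer to the storage question is compressed sparse row (CSR): keep only the nonzeros, their column indices, and per-row offsets, instead of all n*n entries. A minimal sketch with my own function names (real libraries such as scipy.sparse use the same three-array layout):

```python
def to_csr(dense):
    # values: the nonzero entries in row-major order
    # col_idx: column index of each stored value
    # row_ptr: row i's values live in values[row_ptr[i]:row_ptr[i+1]]
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, x in enumerate(row):
            if x != 0:
                values.append(x)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    # y_i = sum of stored nonzeros in row i times matching entries of x;
    # cost is O(number of nonzeros), not O(n^2).
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y
```

Since iterative solvers mostly need matrix-vector products, CSR makes both storage and solve time scale with the nonzero count rather than the full matrix size.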

By mixing the two methods, does that mean you would use the Lagrangian in some scenarios and Eulerian in others?

Does the Eulerian approach take much less computing power in a scene with lots of particles, since the stored variables do not depend on the number of particles in the space?

How do we decide what p_rr is?