This assignment is crazy. I'm wondering why this picture's quality is so much better than mine. How many samples were used for this result?
How does the error play out in this method of approximation? Since we are approximating twice, would the errors compound?
I understand how the Eulerian method would be calculated by just checking the flux at fixed locations, but for the Lagrangian method, how would we keep track of all the moving particles, and would that ever be efficient as opposed to the fixed locations?
What is the difference between strong and weak convexity and how does that affect how we can solve the problem?
Are most of the problems we deal with in computer graphics discrete or continuous? Is it better to leave a discrete problem as is, or is converting it into a continuous problem better?
How would we account for the error and collisions in such a large system of equations?
Is the equation for animation actually this simple? How do other forces and changes in acceleration play a part in the animations?
When animating for something big like a movie, do the animators do this individually for each character, or do common animation sequences like walking have a sort of template to work off of?
What are some methods for generating inbetweens with a computer? How can we tell exactly how some parts are moving, for example the hook, in the keyframes to create the inbetweens?
How is this the same as the Laplace equation?
When is the numerical solution more preferable?
How would we look for local/global minima and maxima in this situation?
Will FK and IK be in conflict with each other/form a cycle for one joint?
Other than particle based fluids, are there other algorithms/models for fluid simulation?
I heard that for curly hair, hair is simulated by the spring system. What would be the best physical system to simulate straight hair?
Does this use an algorithm similar to the ones used for today's meshes?
Are there different principles for interpolations between keyframes?
What could be an inner product function for comparing images, like in the last slide?
Are there substantial advantages in complexity when solving these problems on sparse matrices? My intuition is that some of the operations on sparse matrices (like finding eigenvalues) might not be easier than on dense matrices, but things like solving linear equations, for example, might be.
This numerical solution (averaging with neighbors) seems to exhibit great parallelism; is there a usage where certain hardware could leverage this to make it more feasible?
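The averaging scheme mentioned here can be sketched as follows (my own minimal illustration, not from the slides): one Jacobi sweep on a small 2-D grid with fixed (Dirichlet) boundary values. Each interior cell reads only its four neighbors from the old grid, so every cell could in principle be updated in parallel, e.g. on a GPU.

```python
def jacobi_step(u):
    """One Jacobi sweep: replace each interior value with the average of
    its four neighbors. Boundary cells are held fixed (Dirichlet)."""
    rows, cols = len(u), len(u[0])
    new = [row[:] for row in u]  # all reads come from the old grid u
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
    return new

# Toy example (hypothetical data): 4x4 grid, hot top edge, cold elsewhere.
u = [[1.0] * 4] + [[0.0] * 4 for _ in range(3)]
for _ in range(200):
    u = jacobi_step(u)
```

Because each sweep writes into a fresh grid, the updates are independent of one another; this is exactly the structure that maps well onto parallel hardware.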
How practical is it to compute gradient descent every time someone reaches for something, for example? Are there quicker estimation algorithms?
Is there a similar way to approach the problem when we need to find the global min/max instead of a local one? Is it practical to run this algorithm a bunch of times and take the min or max of all the local minima/maxima found?
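The multi-start idea asked about here can be sketched directly (a minimal illustration under my own assumptions: a smooth 1-D objective with two local minima, plain gradient descent, and random restarts — keep the best local minimum found):

```python
import random

def grad_descent(f, df, x0, lr=0.1, steps=200):
    """Plain gradient descent from a single starting point."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

# Hypothetical test objective: f(x) = x^4 - 3x^2 + x has two local minima;
# the global one is near x ≈ -1.30.
f  = lambda x: x**4 - 3*x**2 + x
df = lambda x: 4*x**3 - 6*x + 1

random.seed(0)
starts = [random.uniform(-2, 2) for _ in range(10)]
candidates = [grad_descent(f, df, x0) for x0 in starts]
best = min(candidates, key=f)  # keep the lowest local minimum found
```

This gives no guarantee of finding the global minimum, but with enough well-spread restarts it often does in practice; it is one common answer to the "run it a bunch of times" question.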
It seems that this would be very costly in practice. Is there another alternative to store the sparse matrix information that is more efficient in terms of storage?
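One standard answer to this question is compressed sparse row (CSR) storage, which keeps only the nonzeros plus two index arrays. A minimal sketch (illustrative, not from the lecture):

```python
def dense_to_csr(A):
    """Convert a dense 2-D list to CSR form: (values, col_indices, row_ptr).

    Storage is O(nnz + rows) instead of O(rows * cols), which is the usual
    win for sparse Laplacian-style matrices."""
    values, col_indices, row_ptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_indices.append(j)
        row_ptr.append(len(values))  # where the next row's entries begin
    return values, col_indices, row_ptr

def csr_matvec(values, col_indices, row_ptr, x):
    """Multiply a CSR matrix by a vector, touching only nonzeros: O(nnz)."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_indices[k]]
        y.append(s)
    return y
```

For example, `[[2,0,0],[0,0,3],[1,0,4]]` becomes values `[2,3,1,4]`, column indices `[0,2,0,2]`, and row pointers `[0,1,2,4]`; matrix-vector products then skip all the zeros, which is why iterative solvers on sparse systems are cheap.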
By mixing the two methods, does that mean you would use the Lagrangian in some scenarios and Eulerian in others?
Does the Eulerian method take much less computing power in a scene with lots of particles, since the variables that are stored do not depend on the number of particles in the space?
What are some edge cases where the approximations would produce huge error?
How much is the latency difference between the two methods?
How much error is in this approximation?
I'm a bit confused: how can we get the right side from the left side? I see the connection, but I'm not quite sure how they correspond to each other.
Does the geometry of the object affect how the heat distribution spreads (e.g. the valley after a large hill may not get as much heat as a valley before a hill), or is it solely dependent on distance?
How do we get those quantities/what happens if we don't know/cannot invert it? Doesn't inverting it mean knowing which value of the random variable the CDF's value corresponds to?
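Yes — inverting the CDF means exactly finding which value of the random variable a given CDF value corresponds to. A minimal sketch for a case where the inverse exists in closed form (exponential distribution; my own example, not from the slides):

```python
import math
import random

def sample_exponential(lam, u=None):
    """Inverse-CDF sampling for an exponential distribution.

    CDF: F(x) = 1 - exp(-lam * x). Setting F(x) = u and solving for x
    gives x = -ln(1 - u) / lam, i.e. x = F^{-1}(u)."""
    if u is None:
        u = random.random()  # uniform in [0, 1)
    return -math.log(1.0 - u) / lam

random.seed(0)
samples = [sample_exponential(2.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)  # should be close to 1/lam = 0.5
```

When the CDF cannot be inverted in closed form, a common fallback is to tabulate it and invert numerically (e.g. by binary search over the table).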
What are other examples of fundamental model equations?
For Eulerian, do we require less space because we're using grid positions instead of tracking all particles?
What are the choices on discretizing the Laplacian?
Why doesn't it work well on other types of grids?
Is this something we can use as long as we assume the data follows some polynomial curve that we can fit to (if we wanted a good approximate integral of a wonky curve)?
Are there other ways to discretize space or do other methods tend to build on these two?
With the Neumann condition, would we somehow have to save data from across other boundaries to get this specific difference, or am I misunderstanding?
What's the difference between an incompressible fluid and a compressible fluid? The properties listed seem like they could apply to all fluids.
What do the convolutions on the triangle mesh look like?
I am also confused about how the heat equation after a long time is the same as the Laplacian. Is this heat different from the physical heat we think about?
Since this method involves approximation, how do we avoid the compounding of errors?
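A small numerical illustration of the compounding question (my own example, assuming forward Euler on u' = -u): each step makes an O(dt²) local error, and over t/dt steps these compound into an O(dt) global error — so the error does accumulate, but shrinking the step size controls it, roughly halving the error when dt is halved for a first-order method.

```python
import math

def forward_euler(u0, rate, dt, t_end):
    """Integrate u' = -rate * u with forward Euler.

    Exact answer is u0 * exp(-rate * t_end). Each step adds O(dt^2)
    local error; over t_end/dt steps these compound into O(dt) global
    error."""
    u, t = u0, 0.0
    while t < t_end - 1e-12:
        u += dt * (-rate * u)
        t += dt
    return u

exact = math.exp(-1.0)  # u(1) for u0 = 1, rate = 1
err_coarse = abs(forward_euler(1.0, 1.0, 0.1, 1.0) - exact)
err_fine = abs(forward_euler(1.0, 1.0, 0.05, 1.0) - exact)
# First-order method: err_fine should be roughly half of err_coarse.
```

Higher-order schemes (e.g. Runge-Kutta) shrink the error faster with dt; that, plus choosing a stable step size, is the usual answer to "how do we keep the compounding under control."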
Can this be thought of intuitively as 'how fast the gradient changes as we follow it'?
What is the complexity of this?
I don't think there is an easy way of just combining the two conceptually. Is it similar to our ray tracer, where we flip a coin to decide whether or not we want to sample?
Is Lagrangian similar in concept to the idea of Brownian motion and particles suspended in space?
How do we deal with PDEs that are unsolvable/have specific rules given their type/format?
Intuitively it seems like uniform heat should be around the source, but it seems like the heat is flowing more inwards. Is this also explained by Laplace?
Does the loop over the grid ever become too inefficient when the grid is too large, even if it only remains in 2D? Which of the boundary conditions is usually used?
Are PDEs the only way to simulate this effect? Is there any approximation that works well?