Slide 17 of 31

I was thinking about when the algorithm should stop, and I see two possibilities: checking whether the function value drops below a certain threshold, or multiplying the gradient by a scalar between 0 and 1 until the point converges. In practice, which of these methods is used? Which is better?
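To make the first criterion concrete, here is a minimal sketch (my own toy example, not from the slides): take small fixed steps along the gradient direction and stop once |f| falls below a tolerance. The unit-circle SDF `sdf_circle`, the step length, and the tolerance are all assumptions for illustration.

```python
import math

def sdf_circle(x, y):
    # assumed toy example: signed distance to the unit circle
    return math.hypot(x, y) - 1.0

def step_toward_surface(x, y, step=0.01, h=1e-6):
    # one small fixed-length step along the gradient, toward the zero set
    gx = (sdf_circle(x + h, y) - sdf_circle(x - h, y)) / (2 * h)
    gy = (sdf_circle(x, y + h) - sdf_circle(x, y - h)) / (2 * h)
    s = -step if sdf_circle(x, y) > 0 else step
    return x + s * gx, y + s * gy

# stopping criterion: |f| below a threshold
x, y = 2.0, 0.0
while abs(sdf_circle(x, y)) > 0.02:
    x, y = step_toward_surface(x, y)
```

The threshold test is cheap because the distance value is already computed every step; the shrinking-step idea is usually folded in as a step-size schedule rather than used as the stopping test itself.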


How significant is the memory tradeoff of storing the closest point in practice?


How would we know what the bounds should be for the grid cells that we are storing information in?


Which direction should we go when doing the gradient move?


Is there a way to vary the step size when we are finding a closest point, since we want to step faster when we are far away from the zero-distance surface and slower when we are closer?


Is this a kind of greedy algorithm?


In 2D, we can just store the closest point in each pixel since we already have that nice grid. But how would that work in 3D? Would you just store a 3D grid of x, y, z positions? This doesn't seem to work as nicely as 2D (among other issues, how would you decide how deep to go in z when building the grid?)
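In 3D the "nice grid" just becomes a voxel grid over a bounding box you choose around the object, so the z extent is bounded the same way as x and y. A minimal sketch, where the sphere and its analytic closest point are my own toy stand-ins for whatever surface query you actually run at precompute time:

```python
import math

def closest_on_sphere(p, r=1.0):
    # assumed toy example: analytic closest point on a sphere of radius r
    n = math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) or 1.0
    return (r * p[0] / n, r * p[1] / n, r * p[2] / n)

# precompute closest points on a voxel grid over an assumed bounding box
n, lo, hi = 4, -2.0, 2.0
h = (hi - lo) / (n - 1)
grid = {}
for i in range(n):
    for j in range(n):
        for k in range(n):
            p = (lo + i * h, lo + j * h, lo + k * h)
            grid[(i, j, k)] = closest_on_sphere(p)
# a query then snaps (or interpolates) to cell indices and reads the answer
```

The catch is the memory: storage grows as n^3, which is exactly the performance/memory tradeoff other comments here mention.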


Another question: why do we need to take small steps? Why can't you just move a distance equal to the value of the distance function in the direction of the negative gradient, in one big step?
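If f is an exact signed distance function and you use its exact (unit-length) gradient, one big step of length f(p) really does land on the surface; the small steps are the safe fallback when f is only approximate or the gradient is estimated numerically. A sketch on an assumed unit-circle SDF:

```python
import math

def sdf_circle(x, y):
    # assumed toy example: exact signed distance to the unit circle
    return math.hypot(x, y) - 1.0

def one_big_step(x, y):
    # move the full distance value along the negative gradient in one step;
    # exact when f is a true distance function with an exact unit gradient
    d = sdf_circle(x, y)
    n = math.hypot(x, y) or 1.0
    gx, gy = x / n, y / n      # analytic gradient of this SDF
    return x - d * gx, y - d * gy

px, py = one_big_step(3.0, 4.0)
# lands on the circle up to floating-point error
```

With an approximate distance or a finite-difference gradient, that same full-length step can overshoot and oscillate, which is why conservative small steps are taught.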


How do we know what to make the step size? In some cases, it might be really inefficient to have a small step size, but in other cases, we might miss the boundary if we have a large step size.


Wouldn't this be less accurate than the explicit representation?


Is there an intelligent way to update the closest point of each grid point if we move/animate the object?
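For rigid motion there is a cheap trick (a sketch of the general idea, with assumed toy functions, not something from the slide): keep the precomputed closest-point query in the object's local frame and transform the query point instead of rebuilding the cache every frame.

```python
import math

def closest_on_unit_circle(x, y):
    # assumed toy stand-in for the precomputed local-frame query
    n = math.hypot(x, y) or 1.0
    return x / n, y / n

def closest_on_moved_circle(x, y, angle, tx, ty):
    c, s = math.cos(angle), math.sin(angle)
    # inverse rigid transform: world -> object's local frame
    lx, ly = x - tx, y - ty
    lx, ly = c * lx + s * ly, -s * lx + c * ly
    # query in the local frame, then map the answer back to world space
    qx, qy = closest_on_unit_circle(lx, ly)
    return c * qx - s * qy + tx, s * qx + c * qy + ty
```

This only works for rigid (or otherwise invertible) motions; for general deformation the cached answers genuinely change and need recomputation.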


This is much faster than using the explicit representation, and you can trade off performance against memory.


I think I forgot what we discussed before, but what is an implicit surface? From my search, I found that it is a surface whose (x, y, z) points cannot be computed directly. Why is it hard to compute the (x, y, z) points for it?


Do we ever want to apply gradient descent to an explicit representation or would this be impractical to do compared to the ray tracing approach?


This looks like gradient descent. Would Newton's method give a better result here if the distance function is differentiable?
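For what it's worth, a Newton-style update for the level set f = 0 is p <- p - f(p) * grad f / |grad f|^2, which converges quadratically near the surface when f is smooth; for a true SDF |grad f| = 1, so it reduces to stepping the full distance value. A sketch on an assumed toy implicit (a circle, but deliberately not a signed distance function):

```python
def f(x, y):
    # assumed toy implicit: a circle, but not a signed *distance* function
    return x * x + y * y - 1.0

def newton_project(x, y, eps=1e-10, max_iters=50):
    # Newton-style update for the level set f = 0:
    #   p <- p - f(p) * grad f / |grad f|^2
    for _ in range(max_iters):
        v = f(x, y)
        if abs(v) < eps:
            break
        gx, gy = 2.0 * x, 2.0 * y   # analytic gradient of this f
        g2 = gx * gx + gy * gy or 1.0
        x, y = x - v * gx / g2, y - v * gy / g2
    return x, y
```

The usual caveat applies: Newton steps can misbehave where the gradient is tiny or f is non-smooth (e.g. on the medial axis of a union of shapes), so damped or small steps are the robust default.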


Is this the standard way for computing these points in the real world or is there some other similar-type algorithm that is performed on graphics hardware?


Which direction should the step we take be in?


What are the tradeoffs between implicit and explicit closest point queries? Are there cases where we would convert from one representation to the other just for doing queries?


Are there any guarantees that we would find the closest point on the surface? Intuitively, it seems that the greedy approach might give us approximately the closest point most of the time, but it might go wrong sometimes.


How accurate would the closest point have to be? The explicit representation would be more accurate, right?


Usually, does specialized hardware aim to improve the speed or the memory use of these algorithms?


If we try this on larger grids, would there be situations where it takes up so much memory that the cache slows things down rather than speeding it up?


What is the tradeoff between accuracy and computational complexity?


How difficult is it to store a signed distance function for a surface instead of the surface itself?
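Storage-wise it's usually not difficult at all: you just sample f on a regular grid, one scalar per cell, at the cost of resolution-limited accuracy. A minimal sketch with an assumed sphere SDF and an assumed bounding box:

```python
import math

def sdf_sphere(x, y, z, r=1.0):
    # assumed toy example: signed distance to a sphere of radius r
    return math.sqrt(x * x + y * y + z * z) - r

# sample the SDF on a regular grid over an assumed bounding box [-2, 2]^3
n, lo, hi = 9, -2.0, 2.0
step = (hi - lo) / (n - 1)
sdf_grid = [[[sdf_sphere(lo + i * step, lo + j * step, lo + k * step)
              for k in range(n)] for j in range(n)] for i in range(n)]
# cost: n^3 floats; the surface is recovered as the zero level set
```

The hard parts are choosing the resolution (thin features disappear below the cell size) and the n^3 memory growth, not the sampling itself.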