Slide 16 of 31

How do people in computer graphics balance the efficiency improvement that spatial data structures provide against their overhead? For example, in one of the early lectures, it was mentioned that a hierarchical structure could be used for rasterizing a triangle, yet in practice, such structures generally were not used due to their overhead. Does something similar occur for geometric queries?


What would happen if we wanted to reconstruct a mesh that was not a triangle mesh? Are there algorithms to do this? Or are there algorithms to break non-triangular meshes into triangular meshes?


Would it be a large waste to compute on all triangles in parallel?


Could this be done much more efficiently (but still using the same representation) by utilizing a coarse-to-fine approach?


How would we select or define a subset of triangles to loop through?


Would there be any situation in which this 'simple' way is preferred over more complex but faster alternative solutions?
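For reference, the 'simple' way is just a linear scan: compute the exact closest point on each triangle and keep the best. A minimal sketch in Python/NumPy (the function names here are mine; the per-triangle routine follows the standard Voronoi-region method described in Ericson's Real-Time Collision Detection):

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Exact closest point to p on triangle (a, b, c), via Voronoi regions."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                   # vertex region A
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                   # vertex region B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab           # edge region AB
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                   # vertex region C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac           # edge region AC
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        t = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return b + t * (c - b)                     # edge region BC
    denom = va + vb + vc                           # interior: barycentric mix
    return a + (vb / denom) * ab + (vc / denom) * ac

def closest_point_on_mesh(p, vertices, triangles):
    """Brute force: test every triangle, keep the nearest point found."""
    best, best_d2 = None, np.inf
    for i, j, k in triangles:
        q = closest_point_on_triangle(p, vertices[i], vertices[j], vertices[k])
        d2 = (p - q) @ (p - q)
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best
```

One situation where this scan is genuinely preferred: a handful of queries on a modest mesh, where the cost of building a BVH or grid would exceed the cost of simply testing every triangle.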


Can we project the point onto the mesh in some way, and find the closest triangle?


Are there special ways to downsample a mesh which try to maintain closest distance query accuracy?


Are there algorithms that find this without checking every single triangle by, say, starting at a triangle and moving closer and closer to the point? Or is that inefficient/incorrect?
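The 'walk' idea in this question can be prototyped on the vertex graph rather than on triangles. This toy sketch (all names mine) greedily steps to whichever neighboring vertex is nearer to the query point. Its cost is proportional to the length of the walk rather than the triangle count, but it only returns a vertex, and on a non-convex mesh it can stop at a local minimum, so it is a heuristic rather than an exact query:

```python
import numpy as np

def greedy_vertex_walk(p, vertices, triangles, start=0):
    """Walk across mesh edges toward p; return the vertex where we get stuck.

    Correct only up to local minima: on a non-convex mesh the walk can halt
    at a vertex that is locally, but not globally, closest to p.
    """
    # Build vertex adjacency from the triangle list.
    nbrs = {i: set() for i in range(len(vertices))}
    for a, b, c in triangles:
        nbrs[a] |= {b, c}
        nbrs[b] |= {a, c}
        nbrs[c] |= {a, b}

    dist = lambda v: np.linalg.norm(p - vertices[v])
    cur = start
    while True:
        nearer = min(nbrs[cur], key=dist, default=cur)
        if dist(nearer) < dist(cur):
            cur = nearer      # a neighbor is strictly closer: step downhill
        else:
            return cur        # no neighbor is closer: stop here
```

On a convex mesh the walk does terminate at the globally nearest vertex, which is why variants of this idea appear in point-location schemes; in general, though, one falls back to a spatial structure for a guaranteed answer.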