I think in one of the previous lectures, when testing triangle coverage for rasterization, there was a decomposition idea about subdividing the problem. Can we apply the same technique here: test against an "undersampled" (coarser) mesh to narrow down the set of candidate triangles before testing against the more fine-grained mesh?
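The coarse-first idea above can be sketched as a two-level search. This is a hypothetical illustration, not the lecture's implementation: triangles are grouped into clusters, each with a bounding sphere, and any cluster whose sphere provably cannot beat the current best distance is skipped. The exact point-triangle projection is stubbed out with a nearest-vertex test to keep the sketch short.

```python
import math

def closest_vertex_on_triangle(p, tri):
    # Stand-in for an exact point-triangle projection:
    # just return the nearest of the three vertices.
    return min(tri, key=lambda v: math.dist(p, v))

def closest_point_two_level(p, clusters):
    # clusters: list of (center, radius, triangles) bounding-sphere groups
    best, best_d = None, float("inf")
    # Visit clusters by optimistic lower-bound distance:
    # dist(p, center) - radius can never overestimate the true distance.
    for center, radius, tris in sorted(
            clusters, key=lambda c: math.dist(p, c[0]) - c[1]):
        if math.dist(p, center) - radius >= best_d:
            break  # this cluster and all later ones are provably too far
        for tri in tris:
            q = closest_vertex_on_triangle(p, tri)
            d = math.dist(p, q)
            if d < best_d:
                best, best_d = q, d
    return best, best_d
```

The same pruning idea, applied recursively instead of at just two levels, is what a bounding volume hierarchy does.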
Implicit representation probably makes equation solving easier.
Why would the half-edge data structure, which does not encode any distance information, help with this problem?
Maybe I missed when you said it, but what is the point of these closest point queries? What use cases are there for such a query?
For the half-edge data structure, we've made so many assumptions about how each half-edge's vertex and face are stored. Would this also be a problem here?
Maybe an implicit representation will make things easier? We can plug the coordinates of the point into the function, and the value we get back can represent the distance from the point to the surface.
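One caveat on the idea above: a general implicit function f(p) = 0 only tells you whether p is on (or inside/outside) the surface; the value of f is an actual distance only when f is a signed distance function (SDF). For an SDF the closest-point query really is almost free, as this small sketch with a sphere SDF shows:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    # Signed distance to a sphere: negative inside, positive outside.
    return math.dist(p, center) - radius

def closest_point_on_sphere(p, center=(0.0, 0.0, 0.0), radius=1.0):
    # The SDF's gradient is the unit vector from the center toward p,
    # so stepping from the center by `radius` along it hits the surface.
    d = math.dist(p, center)
    return tuple(c + radius * (x - c) / d for x, c in zip(p, center))
```

For example, at p = (2, 0, 0) the SDF evaluates to 1.0, and the closest surface point is (1, 0, 0).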
I somehow find it hard to believe that our half-edge data structure can help determine points of intersection. Unless there is a distance parameter I'm missing, I think you need other information to compute closest points and so on.
Maybe we can use half-edges to find the vertices connected to a given one, so that we narrow down the search scope?
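That neighborhood lookup is exactly what the half-edge structure is good at. Here is a minimal sketch of walking a vertex's one-ring via twin/next pointers; the field names (`twin`, `next`, `vertex`, `halfedge`) are assumptions, not necessarily the lecture's exact interface, and boundary vertices would need extra care:

```python
class Halfedge:
    def __init__(self):
        self.twin = None    # oppositely oriented half-edge
        self.next = None    # next half-edge around the same face
        self.vertex = None  # origin vertex of this half-edge

class Vertex:
    def __init__(self, pos):
        self.pos = pos
        self.halfedge = None  # one outgoing half-edge

def one_ring(v):
    """Collect the vertices connected to v by an edge."""
    neighbors = []
    h = v.halfedge
    while True:
        neighbors.append(h.twin.vertex)  # head of the outgoing edge
        h = h.twin.next                  # rotate to the next outgoing edge
        if h is v.halfedge:
            break
    return neighbors
```

Note this only enables a local (greedy) search: starting from some vertex and repeatedly hopping to a closer neighbor can get stuck in a local minimum, so on its own it cannot answer a global closest-point query.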
Is there some way to utilize vertex normals to find the minimum distance?
Is it OK if we find a close-enough point rather than the strictly closest point?
I think for the first task, an implicit approach is easier: just express the distance from p as a sphere function, and then find the points lying within the smallest possible radius.
What scenarios would we want to find the closest point?
Is it possible to use a mix of geometry representations to get better closest point queries?