ericchan

What information exactly does the orientation of the ray give in the applications given on this slide? I can see that, for collision detection, it might be important to know from what direction a collision is happening, but why does the direction matter for things like the inside-outside test and visibility?

Sleepyhead08

For any point, I can see that in practice the ray would know in which direction to aim in order to locate the object (just get the mesh's central coordinates), but how would we know when to stop traversing the ray? Since this is supposed to work in general, and we pick a random direction, how do we know we don't just have a really large mesh and we're on the inside of it?

keenan

@ericchan The basic idea of the inside-outside test really requires that you shoot a ray from a certain point (the point where you're "standing") rather than just intersecting with a line. Otherwise, what exactly are you testing? :-) It doesn't matter which direction you shoot the ray, though; the even/odd parity will be invariant with respect to the ray direction and orientation (unless the ray grazes the surface tangentially). Likewise, visibility is directional: what I can see in front of me is very different from what I see behind me! (Perhaps I'm also just not understanding your question.)
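To make the even/odd parity idea concrete, here's a minimal 2D sketch (my own illustration, not from the slides): cast a ray from the query point in the $+x$ direction and count how many polygon edges it crosses. Odd parity means the point is inside, and you'd get the same answer casting in any other non-degenerate direction.

```python
# Even/odd parity inside-outside test for a simple 2D polygon.
# Hypothetical helper names; polygon is a list of (x, y) vertices.
def inside(point, polygon):
    px, py = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line y = py?
        if (y1 > py) != (y2 > py):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:  # crossing lies to the right of the point
                crossings += 1
    return crossings % 2 == 1  # odd parity => inside

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(inside((1, 1), square))  # True
print(inside((3, 1), square))  # False
```

Note the `(y1 > py) != (y2 > py)` check is exactly the "grazing" caveat in disguise: edges lying along the ray, or vertices exactly on it, are the tangential cases where parity counting needs care.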

keenan

@Sleepyhead08 It's really important here to realize that, algorithmically, we are not "stepping" along the ray. We are doing a single, closed-form test that determines whether an intersection occurs anywhere along the infinite ray. Imagining that we are moving along the ray is perhaps useful conceptually, but it's really not how the algorithm works. For instance, if we intersect a ray $r(t) = o + td$ with a plane $\langle N, x \rangle = c$, then we can just use some simple algebra to directly solve for the $t$ value at which the intersection occurs, namely

$$ t = \frac{c - \langle N, o \rangle}{\langle N, d \rangle}. $$

The size of the mesh makes absolutely no difference here.
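In code, that closed-form solve is a couple of dot products (a sketch with names of my own choosing): substitute $r(t)$ into the plane equation, $\langle N, o + td\rangle = c$, and solve for $t$.

```python
# Closed-form ray-plane intersection: ray r(t) = o + t*d, plane <N, x> = c.
# Solving <N, o + t*d> = c gives t = (c - <N, o>) / <N, d>.
def ray_plane_t(o, d, N, c, eps=1e-12):
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    denom = dot(N, d)
    if abs(denom) < eps:
        return None  # ray (nearly) parallel to the plane: no unique hit
    t = (c - dot(N, o)) / denom
    return t if t >= 0.0 else None  # negative t lies behind the ray origin

# Ray from the origin along +z, toward the plane z = 5
print(ray_plane_t((0, 0, 0), (0, 0, 1), (0, 0, 1), 5.0))  # 5.0
```

No marching, no step size, no dependence on how big the scene is: one division tells you where (and whether) the hit occurs along the entire infinite ray.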
