Do we accomplish this by tracing partial paths from both the light and the eye and then connecting them at their endpoints?
Isn't this method prone to rounding errors? In particular, what if we sample from the light source and the camera, and instead of meeting at some expected point, the two subpaths meet ever so slightly off from that point?
Would this work in all situations? It seems that if many surfaces are mirrors, then when we connect a path from the light to a path from the eye, the connecting segment will almost never lie along the mirror's exact reflection direction, so that path will contribute nothing to the integral.
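The mirror concern can be seen numerically in a toy sketch (the 2D scene, vertex layout, and all names here are hypothetical illustrations, not any renderer's actual API): a bidirectional connection multiplies the BRDF values at both endpoints by a geometry term, so an endpoint with a delta (perfect-mirror) BRDF makes every explicit connection evaluate to zero.

```python
import math

# Toy 2D sketch of one bidirectional connection (hypothetical scene, no visibility test).
# A vertex stores a position, a normal, and a scalar BRDF value. A perfect mirror's
# BRDF is a delta distribution, which evaluates to 0 for any explicitly chosen direction.

def connect(eye_v, light_v):
    """Contribution from deterministically joining an eye vertex to a light vertex."""
    dx = light_v["pos"][0] - eye_v["pos"][0]
    dy = light_v["pos"][1] - eye_v["pos"][1]
    dist2 = dx * dx + dy * dy
    d = math.sqrt(dist2)
    wx, wy = dx / d, dy / d  # unit direction from eye vertex toward light vertex
    # Geometry term: cosines at both endpoints over squared distance.
    cos_e = max(0.0, eye_v["n"][0] * wx + eye_v["n"][1] * wy)
    cos_l = max(0.0, -(light_v["n"][0] * wx + light_v["n"][1] * wy))
    G = cos_e * cos_l / dist2
    # The connection weight is the product of the endpoint BRDFs and G.
    return eye_v["brdf"] * light_v["brdf"] * G

diffuse = {"pos": (0.0, 0.0), "n": (0.0, 1.0), "brdf": 1.0 / math.pi}
light   = {"pos": (0.0, 2.0), "n": (0.0, -1.0), "brdf": 1.0}
mirror  = {"pos": (1.0, 0.0), "n": (0.0, 1.0), "brdf": 0.0}  # delta BRDF off its reflection ray

print(connect(diffuse, light))  # positive: a diffuse vertex accepts any connection
print(connect(mirror, light))   # 0.0: a mirror vertex kills every explicit connection
```

This is why bidirectional path tracers only form explicit connections at non-delta vertices and instead continue tracing through specular surfaces by sampling the mirror's single reflection direction.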
Will this algorithm cost much memory to store the ray paths in both directions?
How do we incorporate importance sampling in this strategy if the path direction is predetermined by the bounce pattern?