Why is Russian roulette used? Is this so that potentially far-away features might still get captured? Why wouldn't you just use a cutoff when the contribution is low enough?
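For what it's worth, the usual argument is that a hard cutoff silently drops whatever light the discarded paths would have carried, so the image is systematically a little too dark, while Russian roulette terminates paths randomly and boosts the survivors by one over the survival probability, so the estimate stays correct on average. A minimal sketch of that idea on a toy geometric-series "path" (the attenuation and survival probability are made-up numbers, not from any real renderer):

```python
import random

def rr_estimate(attenuation=0.5, rng=random.random):
    """Unbiased estimate of sum_{k>=0} attenuation**k via Russian roulette.

    Each 'bounce' contributes attenuation**k.  A fixed depth cutoff would
    always miss the tail of this sum; Russian roulette instead continues
    with probability p and weights survivors by 1/p, so the tail is
    accounted for *on average* even though any single path is finite.
    """
    total, weight = 0.0, 1.0
    while True:
        total += weight
        p = 0.7  # survival probability (any value in (0, 1] keeps this unbiased)
        if rng() > p:
            return total           # terminated: a finite path...
        weight *= attenuation / p  # ...but survivors carry the 1/p boost
```

Averaging many runs converges to the true sum `1 / (1 - attenuation) = 2.0`, which is exactly the property a hard cutoff cannot give you.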
If indirect illumination gets bounced many times, doesn't it still end up as a ray to the eye? I'm a little confused about what makes the difference between direct and indirect illumination.
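One way to picture the distinction: yes, both ultimately arrive along the same final ray to the eye; the difference is how many surface bounces the light took before that. Direct illumination is light reaching the surface straight from the source (one reflection toward the eye); indirect illumination bounced off additional surfaces first. A toy 1D model with made-up numbers (`radiance_split`, its albedo and light values are all hypothetical, just to show the split):

```python
def radiance_split(bounces_left, light=1.0, albedo=0.5):
    """Toy illustration of the direct/indirect split.

    Hypothetical scene: each surface sees the light source directly
    (direct term) and also receives light reflected from other
    surfaces (indirect term).  The indirect term is just 'the same
    computation, one bounce deeper' -- which is why the recursion
    appears in path tracers.
    """
    if bounces_left == 0:
        return 0.0
    direct = albedo * light                                     # straight from the source
    indirect = albedo * radiance_split(bounces_left - 1, light, albedo)  # via other surfaces
    return direct + indirect
```

With one bounce allowed you get only the direct term (0.5); allowing more bounces adds progressively smaller indirect contributions on top of it.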
It seems like this entire process is fairly involved and may differ greatly from scene to scene, depending on the creator's goal. Is this ever implemented in specialized hardware, or is it generally done in software that might use specialized hardware?
Is direct/indirect illumination the same as local/global illumination?
What would be some common optimizations for this process when a light source's contribution reflects off many surfaces along the way?
What are some ways that we could partition and parallelize this algorithm?
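One common answer, sketched under assumptions: every pixel's samples are independent of every other pixel's, so the work is embarrassingly parallel and the usual partition is over image tiles, with each tile handed to a separate thread, process, or GPU work group. The helpers below are hypothetical (rendered serially here; a real renderer would map the tiles over workers, e.g. with a thread pool):

```python
def make_tiles(width, height, tile):
    """Partition the image plane into independent rectangular tiles.

    No tile needs data from any other, so tiles can be rendered
    concurrently with no communication until the image is assembled.
    """
    tiles = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            tiles.append((x, y, min(tile, width - x), min(tile, height - y)))
    return tiles

def render(width, height, tile, shade_pixel):
    """Render all tiles (serially here; map over workers in practice)."""
    image = [[0.0] * width for _ in range(height)]
    for x0, y0, w, h in make_tiles(width, height, tile):
        for y in range(y0, y0 + h):
            for x in range(x0, x0 + w):
                image[y][x] = shade_pixel(x, y)
    return image
```

Tiling (rather than scanlines) also tends to help cache behavior, since nearby rays traverse similar parts of the scene's acceleration structure.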
How does this work in practice? I feel like it would be very slow.