Would some of this noise be reduced by slight blurring (i.e., a denoising pass), and is a technique like that used in practice?
In the real world, does an entire scene use a single standard Russian roulette probability (p_rr), or is there an algorithm that computes an optimal termination probability to improve run time?
Are there any heuristics or intrinsic qualities of a scene that inform us what percentage of low-contribution samples we can safely terminate?
Are there post-processing techniques that, without any information about the primitives or light sources in the original scene, can recover the roulette-free rendering?
Is there a standard way to figure out the right balance between efficiency and accuracy?
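On the question of choosing the termination probability: one common heuristic (used in PBRT-style renderers) is to tie the survival probability to the path's current throughput, so that paths carrying little energy are terminated more aggressively, and then divide the surviving path's throughput by the survival probability to keep the estimator unbiased. A minimal sketch under those assumptions (function and parameter names are illustrative, not from any specific renderer):

```python
import random

def russian_roulette(throughput, min_prob=0.05):
    """Throughput-based Russian roulette sketch.

    throughput: RGB list of the path's accumulated attenuation.
    Returns the reweighted throughput if the path survives, or
    None if the path is terminated.
    """
    # Survive with probability proportional to the brightest channel,
    # clamped so very dim paths still have a small chance to continue.
    p_survive = max(min_prob, min(1.0, max(throughput)))
    if random.random() >= p_survive:
        return None  # terminate the path
    # Dividing by p_survive compensates for the terminated paths,
    # keeping the expected contribution unchanged (unbiased).
    return [c / p_survive for c in throughput]
```

The key property is that E[reweighted throughput] equals the original throughput: termination trades extra variance (noise) for less work per sample, which is exactly the efficiency/accuracy balance asked about above.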