Why is this systematic error important? If the discarded light contributes very little anyway, won't the estimator still converge to a reasonable approximation?
How can the tentative contribution be calculated without knowing where the direct light is?
In the previous case we experimented to find the number of bounces needed for a good rendering. Don't we still need to experiment to determine the proper value for the new probability distribution in this case?
Given two images, one rendered with the "systematic error" and the other with "randomly discarded samples," is there a good way to compare which one is better? With randomly discarded samples, is the idea to hopefully make errors small enough that they are not noticeable?
When randomly discarding low-contribution samples, should we relate the probability of discarding to the contribution? If the contribution is lower, could we increase the probability of discarding it?
Is there any way to avoid systematic error?
How do we do "random" discarding while keeping the result unbiased?
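Several of these questions concern how random discarding can avoid bias. The standard answer is Russian roulette: keep a sample with probability p and reweight survivors by 1/p, so the expected value is unchanged. A minimal sketch (the function name and the fixed p = 0.3 are illustrative choices, not from the slides):

```python
import random

def russian_roulette(sample_value, p_survive):
    """Keep the sample with probability p_survive; reweight survivors
    by 1 / p_survive so the expected value is unchanged (unbiased)."""
    if random.random() < p_survive:
        return sample_value / p_survive
    return 0.0  # discarded sample contributes nothing

random.seed(0)
values = [random.random() for _ in range(200_000)]  # stand-in radiance samples
p = 0.3  # illustrative fixed survival probability

plain_mean = sum(values) / len(values)
rr_mean = sum(russian_roulette(v, p) for v in values) / len(values)
# rr_mean matches plain_mean in expectation; the price paid is
# higher variance, not bias.
```

Note that the 1/p reweighting is exactly what distinguishes unbiased Russian roulette from simply dropping samples, which would introduce systematic error.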
I think in the previous slides you mentioned something about cos(theta), but I can't remember exactly what. Is this the magic that allows Russian roulette to ignore low-contribution samples?
How significant is the systematic error? Can we assume it does not have enough of an impact on the final image to matter?
What does ignoring low-contribution samples bias the result towards?
How can we make sure that the way we are discarding samples indeed leaves the estimator unbiased?
Does this mean we are biased towards higher-contributing samples? Why might this be bad, even though it still converges (just to a different value)?
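To make the bias direction concrete, the following sketch compares naive discarding with Russian roulette. The contribution-proportional survival heuristic and all constants here are assumptions for illustration, not the course's prescribed values: zeroing out discarded low-contribution samples without reweighting loses energy and converges to a lower (darker) value, while 1/p reweighting converges to the true mean.

```python
import random

random.seed(1)
# Stand-in radiance samples, skewed so most contributions are low.
samples = [random.random() ** 2 for _ in range(200_000)]

def p_survive(x):
    # Hypothetical heuristic: survival probability proportional to the
    # sample's contribution, clamped so no sample is kept with p > 1
    # or killed with certainty.
    return max(0.05, min(1.0, x / 0.5))

naive = []       # discard low-contribution samples, no reweighting -> biased
reweighted = []  # Russian roulette: survivors weighted by 1/p -> unbiased
for x in samples:
    p = p_survive(x)
    if random.random() < p:
        naive.append(x)
        reweighted.append(x / p)
    else:
        naive.append(0.0)
        reweighted.append(0.0)

true_mean = sum(samples) / len(samples)
biased_mean = sum(naive) / len(naive)        # systematically low (image too dark)
rr_mean = sum(reweighted) / len(reweighted)  # agrees with true_mean
```

The systematic error here is toward darkness: discarded paths carried real energy, and without the 1/p compensation that energy never reaches the final image, no matter how many samples are taken.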