BryceSummers

Not to mention that we would eventually need to handle contributions due to light diffraction... and other more obscure light behavior.

kmcrane

@BryceSummers: Those phenomena are certainly needed to get a "correct" image in the sense that it matches real-world experiments, but the idea on this slide is a bit different. Here we're saying: even if you allow a relatively simple model of scattering, etc., can you guarantee that the program produces a correct image within some given tolerance $\epsilon$? In other words, can you come up with a sampling strategy that guarantees the numerical estimate of an integral agrees with the true integral (up to a fixed tolerance)? This problem actually doesn't have anything to do with rendering: consider numerically integrating a function $f: \mathbb{R} \to \mathbb{R}$. If this function looks like a tiny spike (not quite a Dirac delta, but almost), then we could throw billions of samples at the problem without ever getting a nonzero value. One might then (incorrectly) conclude that the integral is zero, even though the magnitude of the spike (and hence the integral) can be arbitrarily large.
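
A minimal sketch of this failure mode (the spike width, height, and sample counts are made-up numbers purely for illustration): the true integral of the spike over $[0,1]$ is exactly 1, but a plain Monte Carlo estimator almost never lands a sample inside the spike, so it confidently reports 0.

```python
import random

def spike(x, center=0.5, width=1e-9, height=1e9):
    # A narrow "almost-Dirac" bump: nonzero only on an interval of size `width`,
    # but its true integral over [0, 1] is width * height = 1.
    return height if abs(x - center) < width / 2 else 0.0

def monte_carlo_integral(f, n_samples):
    # Plain Monte Carlo estimate of the integral of f over [0, 1]:
    # average f at uniformly random points.
    return sum(f(random.random()) for _ in range(n_samples)) / n_samples

if __name__ == "__main__":
    for n in (10_000, 1_000_000):
        # With overwhelming probability every sample misses the spike,
        # so the printed estimate is 0.0 even though the true integral is 1.
        print(n, monte_carlo_integral(spike, n))
```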

BryceSummers

Huh, so this is kind of like perfect hashing as a certification for hashing: ordinary hashing algorithms come with theoretical probabilistic guarantees, but we can also construct a system that is guaranteed in practice to meet the expected efficiency.

This also seems kind of like the halting problem, in that we must ask ourselves: given more time, will the value of this pixel change, or will it stay the same no matter how long we run our Monte Carlo estimator? Are we "done"?