Slide 14 of 63
jifengy

Practically speaking, how would we evaluate consistency in most cases? Checking it seems to rely on knowing the value of the true integral, which may not be easy to compute (after all, that's the reason we're using an estimator in the first place).

0x484884

This definition of consistency seems a little restrictive to me. An estimator could still converge, at least in an intuitive sense, while always having a positive probability of nonzero error. If the probability that the error exceeds any epsilon > 0 goes to zero as n goes to infinity, the estimator converges (in probability), but I don't think this implies the definition above.

In fact, I'd expect it to be pretty common that the probability of the estimate being exactly equal to the true integral is 0 for every n. Then the probability in the definition is 1 for every n, so the limit is 1, even though the estimator converges in the intuitive sense.
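As a quick numerical sketch of this point (the integrand x^2, the threshold eps, and the trial counts are just illustrative choices, not anything from the slide):

```python
import random

def mc_estimate(n, seed):
    """Monte Carlo estimate of the integral of x^2 over [0, 1] (true value 1/3)."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 for _ in range(n)) / n

true_value = 1.0 / 3.0
eps = 0.01

for n in [10, 100, 10000]:
    trials = [mc_estimate(n, seed) for seed in range(200)]
    # Fraction of trials whose error exceeds eps: shrinks toward 0 as n grows,
    # which is exactly convergence in probability.
    frac_bad = sum(abs(t - true_value) > eps for t in trials) / len(trials)
    # Fraction of trials that hit the true value *exactly*: essentially always 0,
    # since the estimate is a continuous random variable.
    frac_exact = sum(t == true_value for t in trials) / len(trials)
    print(n, frac_bad, frac_exact)
```

So the error is (almost) never exactly zero at any finite n, yet the probability of a "large" error still vanishes as n grows.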

keenan

@jifengy Two things:

  1. You would prove that your algorithm is consistent before you even implement it. :-)

  2. You would check that your implementation is correct by running it on a simple test scene where you can figure out the true solution (or where you can compare against a known working reference).

(3. I suppose you could also try to formally verify your code... AFAIK nobody does that for something as complex as a renderer!)
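Point 2 can be sketched in miniature: swap the full renderer for a 1-D integrand whose true value is known analytically, and check the estimate against it within a statistical tolerance. (The integrand sin(pi x), the sample count, and the 4-standard-error tolerance are all illustrative choices here.)

```python
import math
import random

def estimate_with_stderr(f, n, seed):
    """Monte Carlo mean and standard error for the integral of f over [0, 1]."""
    rng = random.Random(seed)
    samples = [f(rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    # Sample variance of the samples, then standard error of the mean.
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var / n)

# "Test scene": integral of sin(pi x) over [0, 1] has the known value 2/pi.
mean, stderr = estimate_with_stderr(lambda x: math.sin(math.pi * x), 100_000, seed=1)

# A correct implementation should land within a few standard errors of the
# reference value; a biased or buggy one will fail this check as n grows.
assert abs(mean - 2 / math.pi) < 4 * stderr, "estimator disagrees with reference"
print(mean, stderr)
```

The same idea scales up to real renderers: compare against an analytic scene (e.g., a furnace-style test) or against a trusted reference implementation.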

keenan

@0x484884 You can read a bit more about these definitions and their justification in Section 2.4.3 of Eric Veach's Academy Award-winning PhD thesis.