lucida

With n = 1 you get an unbiased estimator. The estimator is unbiased because if you take the expectation of the estimator over all possible sample points in the image, you get the true value of the integral. It's important to note that even though the expected value of the estimator is the true value of the integral, a single evaluation of the estimator will almost never equal that value, just as the expected value of a fair coin flip is 0.5 even though any individual flip always turns out to be either 0 or 1.
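To see this concretely, here's a minimal sketch in Python (my own toy setup, not from the lecture): it replaces the image with a 1D integrand f(x) = x² on [0, 1], whose true integral is 1/3. No single one-sample estimate equals 1/3, but the average over many independent one-sample estimates approaches it, which is exactly what unbiasedness promises.

```python
import random

def f(x):
    # Toy integrand standing in for the image; its true
    # integral over [0, 1] is 1/3.
    return x * x

def one_sample_estimate():
    # n = 1 estimator: evaluate f at one uniform point and
    # scale by the size of the domain (here, length 1).
    x = random.uniform(0.0, 1.0)
    return f(x)

# Any single estimate is almost never exactly 1/3 (just as one
# coin flip never lands on 0.5), but the average of many
# independent one-sample estimates approaches 1/3 -- that is
# the statement that the estimator is unbiased.
trials = [one_sample_estimate() for _ in range(100_000)]
print(sum(trials) / len(trials))  # ~0.333
```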

Now for the case where we let n go to infinity: here I assume the estimator multiplies each sample by the area of the image, sums them up, and then divides the sum by the number of samples. I also assume that we are sampling uniformly over the pixels.

Then in the limit as n goes to infinity, each pixel receives the same fraction of the samples, since we are drawing from a uniform distribution. Thus in the limit each pixel's value is "counted" in the final sum with the same weight, and when we divide the sum by the number of samples we recover the true value of the integral, in which each pixel is counted exactly once.

So as n goes to infinity, the estimator is consistent.
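A companion sketch of consistency, under the same toy setup as above: here a single run of the n-sample estimator itself gets closer to the true integral as n grows.

```python
import random

def f(x):
    return x * x  # true integral over [0, 1] is 1/3

def mc_estimate(n):
    # n-sample estimator: average f at n uniform points and
    # scale by the size of the domain (length 1 here).
    return sum(f(random.uniform(0.0, 1.0)) for _ in range(n)) / n

# Consistency: a single run of the estimator gets closer to
# 1/3 as the number of samples grows.
for n in (1, 10, 1_000, 100_000):
    print(n, mc_estimate(n))
```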

kmcrane

Right. So in general: consistent does not imply unbiased, and unbiased does not imply consistent. But in practice, if you already have an unbiased estimator, it is usually not too difficult to construct a consistent one!
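As a toy illustration of the first point (again my own example, not from the lecture): take the unbiased n-sample mean and add a deliberate 1/n offset. The result is biased at every finite n, yet the offset vanishes as n goes to infinity, so the estimator is consistent without being unbiased.

```python
import random

def f(x):
    return x * x  # true integral over [0, 1] is 1/3

def biased_but_consistent(n):
    # Unbiased n-sample mean plus a deliberate 1/n offset:
    # its expectation is 1/3 + 1/n, so it is biased for every
    # finite n, but the offset vanishes as n grows, so the
    # estimator still converges to 1/3 (consistent).
    mean = sum(f(random.random()) for _ in range(n)) / n
    return mean + 1.0 / n

for n in (10, 1_000, 100_000):
    print(n, biased_but_consistent(n))
```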