rbunny

This was a really cool example of how things can be unbiased but not consistent.

Sherwin

I think the estimator is a function of the samples. We can only talk about whether an estimator is consistent when we can let the number of samples go to infinity.

If we fix the number of random samples per image at n=1, then which sample count goes to infinity (since that's what we need in order to talk about convergence)?

keenan

@Sherwin You can set up whatever game you want, and play it. For instance, you could simply say that the 1-sample estimator takes an additional parameter, $m$, which is unused. Formally, then, this estimator is "not consistent as $m$ goes to infinity." In other words, as you say, it's only meaningful to talk about consistency with respect to some particular parameter.
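
To make that concrete, here's a minimal sketch (my own example, not from the slides): an estimator of the mean of a uniform distribution that accepts a sample-count parameter $m$ but only ever looks at the first sample. Its expectation equals the true mean for every $m$ (unbiased), yet its error never shrinks as $m$ grows (not consistent).

```python
# Hypothetical example: unbiased but not consistent as m -> infinity.
import numpy as np

rng = np.random.default_rng(0)
true_mean = 0.5  # mean of Uniform(0, 1)

def one_sample_estimator(m):
    samples = rng.uniform(0.0, 1.0, size=m)
    return samples[0]        # ignores the other m - 1 samples

def sample_mean_estimator(m):
    samples = rng.uniform(0.0, 1.0, size=m)
    return samples.mean()    # the usual consistent estimator, for contrast

for m in (1, 10, 1000):
    one = [one_sample_estimator(m) for _ in range(10_000)]
    avg = [sample_mean_estimator(m) for _ in range(10_000)]
    # Both averages hover near 0.5 (unbiased), but only the sample mean's
    # spread shrinks as m grows (consistency).
    print(f"m={m:5d}  one-sample: mean={np.mean(one):.3f} std={np.std(one):.3f}"
          f"   sample mean: mean={np.mean(avg):.3f} std={np.std(avg):.3f}")
```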

Another, more interesting example is consistency of geometric algorithms. Suppose for instance that I have a scheme for estimating the area of a smooth surface: triangulate the surface via a mesh with vertices on the smooth surface, and sum up the triangle areas to get an approximation of surface area. Is this estimate consistent as the number of mesh vertices increases? Well, it's not as easy as just talking about the number of vertices: you have to talk about where they go, i.e., you have to put some conditions on the sequence of meshes. We hinted at this point on this slide: you can have finer and finer sequences of meshes, but if the normals don't converge, then many other quantities (like area) may not converge either (see this paper).
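
As a simple numerical illustration of that point (my own sketch, using the classic Schwarz lantern rather than the construction from the paper): every vertex below lies exactly on the unit cylinder, but if the number of axial slabs grows much faster than the angular resolution, the triangles flatten out, their normals never approach the cylinder's normals, and the summed triangle area runs away from the true lateral area $2\pi$.

```python
# Hypothetical sketch: summed triangle areas of a Schwarz lantern inscribed
# in the unit cylinder of height 1 (true lateral area = 2*pi ~ 6.2832).
import numpy as np

def lantern_area(n, m):
    """Sum of triangle areas for n angular and m axial subdivisions."""
    def pt(angle, z):
        return np.array([np.cos(angle), np.sin(angle), z])
    step = 2 * np.pi / n
    total = 0.0
    for k in range(m):                                   # axial slab k
        z0, z1 = k / m, (k + 1) / m
        off0 = (k % 2) * step / 2                        # alternate ring offsets
        off1 = ((k + 1) % 2) * step / 2
        for i in range(n):
            # "Up" triangle: base on ring k, apex on ring k+1, halfway between.
            a, b = pt(i * step + off0, z0), pt((i + 1) * step + off0, z0)
            c = pt(i * step + off0 + step / 2, z1)
            total += 0.5 * np.linalg.norm(np.cross(b - a, c - a))
            # "Down" triangle: base on ring k+1, apex on ring k.
            a, b = pt(i * step + off1, z1), pt((i + 1) * step + off1, z1)
            c = pt(i * step + off1 + step / 2, z0)
            total += 0.5 * np.linalg.norm(np.cross(b - a, c - a))
    return total

for n in (4, 8, 16):
    # m = n refines nicely; m = n**3 refines "badly" and the area blows up.
    print(f"n={n:2d}  area(m=n)={lantern_area(n, n):8.3f}"
          f"  area(m=n^3)={lantern_area(n, n**3):10.3f}")
```

With $m = n$ the area approaches $2\pi$; with $m = n^3$ it grows without bound, even though every vertex sits on the smooth surface.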

In general, consistency and convergence are subtle topics, and it's good to respect this fact!