Is there a natural way to obtain a consistent estimator from an unbiased one, and vice versa?
Are there any other properties to consider for estimators, like computational cost? I feel like cost wouldn't really be an issue, though, since the whole point of using estimators in the first place is to save on cost.
Are there cases where unbiased implies consistent?
If we had to pick one between the two, which one would we want? I see the consistent one as better: just because the expected value of the difference is zero doesn't mean the difference will actually be zero very often in the real world.
I'm having a hard time wrapping my head around this definition of consistency. If the true integral has a continuous value, then isn't the probability of matching it exactly generally equal to zero?
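(A quick numerical sketch of what convergence in probability means here — the integrand and sampler are my own example, not from the notes. The probability of matching the true value exactly is indeed zero; consistency instead says that for any tolerance ε, the probability of landing within ε of the truth goes to 1 as n grows.)

```python
import random

random.seed(0)
TRUE_I = 1.0 / 3.0  # integral of x^2 over [0, 1]
EPS = 0.01

def mc_estimate(n):
    # Plain Monte Carlo mean: (1/n) * sum of f(x_i), x_i ~ Uniform(0, 1)
    return sum(random.random() ** 2 for _ in range(n)) / n

def hit_rate(n, runs=1000):
    # Fraction of independent runs whose estimate lands within EPS of the truth.
    hits = sum(abs(mc_estimate(n) - TRUE_I) < EPS for _ in range(runs))
    return hits / runs

# P(|I_n - I| < EPS) climbs toward 1 as n grows, even though
# P(I_n == I) stays 0 for every finite n.
for n in (10, 100, 1000):
    print(n, hit_rate(n))
```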
Are there cases where using an almost unbiased estimator is useful? (e.g. |E[I - I_n]| <= 0.001)
Does the definition of unbiased have to hold for all n?
How could bias relate to variance reduction? Since bias only concerns the expected value, even a biased estimator could have a small variance. Are we only considering reducing the variance of biased estimators?
Is it right to say that Monte Carlo estimators are both biased and consistent by definition?
Why is it necessary to include the unbiased requirement given that we already have consistency?
Convergence in probability
Does unbiasedness imply consistency? Why do we need consistency if every sample is expected to be the true value?
I can see why convergence is a property, but I'm not sure why it is needed. With many samples, both consistency and unbiasedness point toward getting closer to the correct value; is the idea that at small sample counts the estimate could be extremely off, and unbiasedness at least ensures it is somewhat close on average?
Isn't consistency all we need? I think we just want the answer to be correct? What are the cases that the estimate is consistent but biased?
Since we will never have infinite samples, should we care whether an estimator is almost consistent (converges to something close to the correct answer) or actually consistent, since neither will yield the true integral with a specified, finite number of samples?
If this is the definition, isn't Monte Carlo both consistent and unbiased, or am I looking at this wrong? Monte Carlo gives the correct answer on average, and with enough samples we eventually converge to the correct answer.
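(This can be checked numerically for the plain Monte Carlo mean — the integrand here is my own toy example. Unbiasedness means the estimator averages out to the true value for *every* n, even n = 2; consistency is the separate claim that a single run concentrates near the truth as n grows.)

```python
import random

random.seed(2)
TRUE_I = 0.5  # integral of x over [0, 1]

def mc_estimate(n):
    # Plain Monte Carlo estimator: average of f(x_i) = x_i, x_i ~ Uniform(0, 1)
    return sum(random.random() for _ in range(n)) / n

# Unbiasedness at tiny n: averaging the n = 2 estimator over many
# independent runs recovers the true value, even though any single
# 2-sample run is usually far off.
avg_small_n = sum(mc_estimate(2) for _ in range(100000)) / 100000
print(avg_small_n)  # close to 0.5
```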
How important are bias and consistency, and what role do they play in the variance?
Is there an example of an estimator being consistent but biased?
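(One standard example, not specific to these notes: estimating the upper bound θ of a Uniform(0, θ) distribution with the sample maximum. E[max] = θ·n/(n+1), so the estimator is biased low for every finite n, yet the bias shrinks like 1/n and the estimator is consistent.)

```python
import random

random.seed(1)
THETA = 2.0  # true upper bound of Uniform(0, THETA)

def sample_max(n):
    # Biased-but-consistent estimator of THETA: the largest of n samples.
    return max(random.uniform(0, THETA) for _ in range(n))

def mean_estimate(n, runs=2000):
    # Average the estimator over many runs to expose its bias at this n.
    return sum(sample_max(n) for _ in range(runs)) / runs

# E[max] = THETA * n / (n + 1): always below THETA, but the gap
# vanishes as n grows, so the estimator is consistent.
for n in (5, 50, 500):
    print(n, mean_estimate(n))
```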
Is there a relationship between bias and consistency?
So are bias and consistency evaluated completely separately, or is there some relationship between them?
Wouldn't a biased estimator just be one that converges to the right answer more quickly?
I feel like most of the time we introduce bias because of certain assumptions or properties we know to be true -- even if the "answer" is biased, isn't this what we would expect, or at least no less desirable?