Question
Performance of estimators is assessed in the context of a theoretical model for the sampling distribution of the observations. Given a criterion for optimality, an optimal estimator is one that performs better than any other estimator with respect to that criterion. A robust estimator, on the other hand, is an estimator that is not sensitive to misspecification of the theoretical model. Hence, a robust estimator may be somewhat inferior to an optimal estimator in the context of the assumed model. However, if the assumed model is in fact not a good description of reality, then the robust estimator will tend to perform better than the estimator deemed optimal.
Some say that optimal estimators should be preferred while others advocate the use of more robust estimators. What is your opinion?
When you formulate your answer to this question it may be useful to come up with an example from your own field of interest. Think of an estimation problem and possible estimators that can be used in the context of this problem. Try to identify a model that is natural to this problem, and ask yourself in what ways this model may err in its attempt to describe the real situation in the estimation problem.
As an example, consider estimation of the expectation of a Uniform measurement. We demonstrated that the mid-range estimator is better than the sample average if the measurements indeed emerge from the Uniform distribution. However, if the modeling assumption is wrong then this may no longer be the case. If the distribution of the measurement is in actuality not symmetric, or if it is more concentrated in the center than in the tails, then the performance of the mid-range estimator may deteriorate. The sample average, on the other hand, is not sensitive to the distribution not being symmetric.
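The trade-off above can be checked with a small Monte Carlo experiment. This is an illustrative sketch (the sample size, number of replications, and the choice of an Exponential distribution as the "wrong model" are my own, not from the text): under the correct Uniform model the mid-range beats the sample average in mean squared error, while under an asymmetric, heavy-tailed alternative the ordering reverses.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(estimates, truth):
    """Mean squared error of a vector of estimates against the true value."""
    return np.mean((estimates - truth) ** 2)

n, reps = 100, 20_000

# Case 1: the model is correct -- Uniform(0, 1), expectation 0.5.
x = rng.uniform(0.0, 1.0, size=(reps, n))
mid = (x.min(axis=1) + x.max(axis=1)) / 2   # mid-range estimator
avg = x.mean(axis=1)                        # sample average
print("Uniform:     mid-range MSE =", mse(mid, 0.5), " mean MSE =", mse(avg, 0.5))

# Case 2: the model is wrong -- Exponential(1), expectation 1,
# asymmetric with a heavy right tail.
y = rng.exponential(1.0, size=(reps, n))
mid2 = (y.min(axis=1) + y.max(axis=1)) / 2
avg2 = y.mean(axis=1)
print("Exponential: mid-range MSE =", mse(mid2, 1.0), " mean MSE =", mse(avg2, 1.0))
```

In the Exponential case the sample maximum grows like log n, so the mid-range is badly biased; the sample average is unaffected by the asymmetry.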
Explanation / Answer
There are various methods for estimating unknown parameters of a distribution, such as unbiased estimation, the method of moments, and maximum likelihood estimation.
Suppose we restrict attention to the class of unbiased estimators (our criterion of optimality). Then we may be able to identify an optimal estimator using the Cramér–Rao lower bound, which gives a lower bound on the variance of any unbiased estimator (for unbiased estimators the mean squared error equals the variance).
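As a quick numerical illustration of the Cramér–Rao bound (a sketch with parameter values of my own choosing): for X ~ N(μ, σ²) with σ known, the Fisher information per observation is 1/σ², so the bound for any unbiased estimator of μ from n observations is σ²/n, and the sample mean attains it.

```python
import numpy as np

rng = np.random.default_rng(1)

# For X ~ N(mu, sigma^2) with sigma known, the Fisher information per
# observation is 1/sigma^2, so the Cramér–Rao lower bound for any
# unbiased estimator of mu based on n observations is sigma^2 / n.
mu, sigma, n, reps = 2.0, 3.0, 50, 50_000
crlb = sigma**2 / n

x = rng.normal(mu, sigma, size=(reps, n))
var_mean = np.var(x.mean(axis=1))  # Monte Carlo variance of the sample mean

print(f"CRLB = {crlb:.4f}, variance of sample mean = {var_mean:.4f}")
```

The Monte Carlo variance of the sample mean should match σ²/n = 0.18 up to simulation noise, confirming that the sample mean is efficient here.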
In the example, the sample average is a robust estimator of the expectation of a Uniform measurement, but the mid-range estimator, i.e. the average of the sample minimum and maximum, is the UMVUE (uniformly minimum-variance unbiased estimator) for the Uniform distribution, and hence a better estimator than the sample average under that model.
But now the question is: "Is the UMVUE always good?" The answer is no.
One can consider the example of estimating the variance of a normal distribution when the mean is unknown. In this case the MLE (maximum likelihood estimator), which divides the sum of squared deviations by n, has a smaller mean squared error than the UMVUE, which divides by n − 1. So when the optimality criterion is minimization of mean squared error, the MLE is more 'optimal' than the UMVUE.
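A simulation makes this concrete (a sketch; the parameter values are my own choices): with normal data and unknown mean, compare variance estimators that differ only in the divisor applied to the sum of squared deviations. Dividing by n − 1 gives the unbiased UMVUE, dividing by n gives the MLE, and dividing by n + 1 is known to minimize the mean squared error in this family.

```python
import numpy as np

rng = np.random.default_rng(2)

# Normal data with unknown mean: compare variance estimators that differ
# only in the divisor. n-1 gives the unbiased UMVUE, n gives the MLE,
# and n+1 minimizes the mean squared error.
mu, sigma2, n, reps = 0.0, 4.0, 10, 200_000
x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

for divisor, label in [(n - 1, "UMVUE (n-1)"), (n, "MLE (n)"), (n + 1, "n+1")]:
    est = ss / divisor
    print(f"{label:12s} MSE = {np.mean((est - sigma2) ** 2):.4f}")
```

The MSE ordering (n + 1 < n < n − 1) shows that unbiasedness, the very property that makes the UMVUE "optimal" in its class, costs mean squared error here: a small bias toward zero reduces the variance by more than the squared bias it introduces.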