Question
Point estimates often need to be nested in layers of analysis, and it is the Invariance Principle that provides a pathway for doing so. An example would be estimating Mu after having to estimate Alpha and Beta for some distributions. In simpler statistics class exercises (like those we've seen up until now), this is typically avoided by providing the lower-level parameters within the exercises or problems (e.g., asking you for Mu by giving you the Alpha and Beta). The only real option we've had prior to this unit for estimating lower-level parameters has been trial-and-error: collecting enough data to form a curve and then checking it with probability plots against chosen parameter values until we find a combination of parameters that "fits" the data we've collected. That approach works in the simplest cases, but fails as our problem grows larger and more complex. Even for a single distribution (e.g., Weibull) there are an infinite number of possible Alpha-Beta combinations. We can't manually test them all. Point estimation gets us around all of that by providing the rules needed to actually calculate lower-level parameters from data. We sometimes need to collect a lot more data to use this approach, but it's worth it. We'll be able to calculate more than one possible value for many parameters, so it's important that we have rules for selecting from among a list of candidates. Discuss what some of those rules are, and how they get applied in your analysis. If an engineering challenge includes "more than one reasonable estimator" (Devore, p. 249), how do engineers know which to pick, and what issues arise statistically and in engineering management when making those choices?
Explanation / Answer
If an engineering challenge includes more than one reasonable estimator, engineers decide which to pick by applying the following qualities, or rules, to each candidate. An estimator that satisfies these rules is the one to choose.
A "good" estimator is one that produces estimates with the following qualities:
Unbiasedness: An estimator is said to be unbiased for a given parameter when its expected value equals the parameter being estimated. For example, the sample mean is an unbiased estimator of the mean of the population from which the sample was drawn. Unbiasedness is a desirable quality because a weighted average of several unbiased estimates is itself unbiased and generally better than any one of those estimates alone, so unbiasedness lets us upgrade our estimates by pooling. For example, if your estimates of the population mean µ are 10 and 11.2 from two independent samples of sizes 20 and 30 respectively, then a better estimate of µ based on both samples is [20(10) + 30(11.2)] / (20 + 30) = 10.72.
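A minimal numerical sketch of this (assuming NumPy is available; the sample sizes and means are the made-up values from the example above, and the normal population used for the simulation check is illustrative):

```python
import numpy as np

# Pooled (sample-size-weighted) estimate built from the two sample means above.
n1, xbar1 = 20, 10.0
n2, xbar2 = 30, 11.2
pooled = (n1 * xbar1 + n2 * xbar2) / (n1 + n2)
print(pooled)  # 10.72

# Simulation check of unbiasedness: the average of many sample means sits at mu.
rng = np.random.default_rng(0)
mu, sigma = 10.0, 2.0  # illustrative population values, not from the problem
sample_means = [rng.normal(mu, sigma, size=20).mean() for _ in range(10_000)]
print(np.mean(sample_means))  # close to mu = 10.0, as unbiasedness predicts
```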
Consistency: The standard deviation of an estimator is called its standard error; the larger the standard error, the more error we expect in the estimate. The standard error is the commonly used index of the error entailed in estimating a population parameter from the information in a random sample of size n drawn from the population.
An estimator is said to be "consistent" if increasing the sample size produces estimates with smaller standard error that close in on the true parameter value. In practical terms, spending more money to obtain a larger sample produces a better estimate, as the sketch below illustrates.
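A small sketch of that payoff (again assuming NumPy, with an illustrative normal population): the standard error of the sample mean shrinks like sigma/sqrt(n) as the sample size grows.

```python
import numpy as np

# Sketch: the standard error of the sample mean is sigma / sqrt(n), so it shrinks
# as the sample size grows -- the sense in which the sample mean is consistent.
rng = np.random.default_rng(1)
mu, sigma = 10.0, 2.0  # illustrative population values
for n in (10, 100, 1000, 10_000):
    means = [rng.normal(mu, sigma, size=n).mean() for _ in range(2_000)]
    print(n, round(float(np.std(means)), 4), round(sigma / np.sqrt(n), 4))
```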
Efficiency: An efficient estimator is one that has the smallest standard error (equivalently, the smallest variance) among all unbiased estimators of the parameter; a comparison of two unbiased estimators is sketched below.
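A short sketch of what efficiency means in practice (assumed normal data and illustrative values): the sample mean and the sample median are both unbiased for µ here, but the mean's standard error is smaller, so it is the more efficient of the two.

```python
import numpy as np

# Sketch: for normal data both the sample mean and the sample median are unbiased
# for mu, but the mean has the smaller standard error, i.e. it is more efficient.
rng = np.random.default_rng(2)
mu, sigma, n = 10.0, 2.0, 50  # illustrative values
samples = rng.normal(mu, sigma, size=(5_000, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)
print("SE of sample mean:  ", round(float(means.std()), 4))    # about sigma/sqrt(n) = 0.28
print("SE of sample median:", round(float(medians.std()), 4))  # larger, roughly 1.25x that
```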
The "best" estimator is the one which is the closest to the population parameter being estimated.