Question

Statistically speaking, we are generally agnostic to which is a bigger problem, Type I (false positive) errors or Type II (false negative) errors. However, in certain circumstances it may be important to try to put more emphasis on avoiding one or the other. Can you think of an example where you may want to try harder to avoid one type or the other? Can you think of a policy (political, economic, social, or otherwise) that pushes people toward avoiding one type or the other? What are the repercussions of such policies?

Explanation / Answer

Sometimes, by chance alone, a sample is not representative of the population. The results in the sample then fail to reflect reality in the population, and this random error leads to an erroneous inference. Just like a judge's conclusion, an investigator's conclusion may be wrong. A Type I error (false positive) occurs if an investigator rejects a null hypothesis that is actually true in the population; a Type II error (false negative) occurs if the investigator fails to reject a null hypothesis that is actually false in the population. Although Type I and Type II errors can never be avoided entirely, the investigator can reduce their likelihood by increasing the sample size: the larger the sample, the less likely it is to differ substantially from the population.

False-positive and false-negative results can also occur because of bias (observer, instrument, recall, etc.). Errors due to bias, however, are not referred to as Type I and Type II errors. Such errors are troublesome because they may be difficult to detect and usually cannot be quantified.
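
To make the sample-size point concrete, here is a minimal simulation sketch (not part of the original answer); the effect size, significance level, and sample sizes are hypothetical choices for illustration. It repeatedly runs a one-sample t-test under a true null and under a false null, and shows that the Type I error rate stays near alpha while the Type II error rate shrinks as the sample grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05          # significance level: the Type I error rate we accept
true_effect = 0.4     # hypothetical true mean shift under the alternative
n_trials = 5000

for n in (20, 50, 200):
    type1 = 0  # null is true, but we reject it (false positive)
    type2 = 0  # null is false, but we fail to reject it (false negative)
    for _ in range(n_trials):
        # Case 1: null hypothesis is true (population mean = 0)
        sample_null = rng.normal(0.0, 1.0, n)
        if stats.ttest_1samp(sample_null, 0.0).pvalue < alpha:
            type1 += 1
        # Case 2: null hypothesis is false (population mean = true_effect)
        sample_alt = rng.normal(true_effect, 1.0, n)
        if stats.ttest_1samp(sample_alt, 0.0).pvalue >= alpha:
            type2 += 1
    print(f"n={n:4d}  Type I rate ~ {type1 / n_trials:.3f}  "
          f"Type II rate ~ {type2 / n_trials:.3f}")
```

The Type I rate hovers around 0.05 no matter the sample size, because that is what the significance level fixes; only the Type II rate improves as n increases.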

Let's return to the question of which error, Type I or Type II, is worse. Many textbooks and instructors will say that a Type I error (false positive) is worse than a Type II error (false negative). The go-to example to help people think about this is a defendant accused of a crime that carries an extremely harsh sentence. The null hypothesis is that the defendant is innocent. Of course you wouldn't want to let a guilty person off the hook, but most people would say that sentencing an innocent person to such punishment is the worse consequence. The rationale boils down to the idea that if you stick with the status quo or default assumption, at least you're not making things worse. In many cases that's true. But, like so much in statistics, in application it's not really so black and white. The defendant analogy is great for teaching the concept, but when we try to turn it into a rule of thumb for which type of error is worse in practice, it falls apart.

Now let's look at it from a different socio-economic perspective. Suppose you are designing a medical screening test for a disease. A false positive (a Type I error) may cause a patient some anxiety, but it will lead to further testing that ultimately reveals the initial result was incorrect. In contrast, a false negative (a Type II error) gives a patient the incorrect assurance that he or she does not have the disease when he or she in fact does. As a result of this incorrect information, the disease would go untreated. If doctors had to choose between these two outcomes, a false positive is more desirable than a false negative.
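
A hedged sketch of that screening trade-off is below. The score distributions (healthy scores around 0, diseased scores around 2, both with unit spread) and the two decision thresholds are invented for illustration and are not from the original answer; the point is only that lowering the cutoff trades extra false positives for fewer false negatives.

```python
from scipy import stats

healthy = stats.norm(loc=0.0, scale=1.0)   # test scores of healthy patients
diseased = stats.norm(loc=2.0, scale=1.0)  # test scores of diseased patients

for threshold in (1.5, 0.5):
    false_positive = healthy.sf(threshold)    # healthy flagged as diseased (Type I)
    false_negative = diseased.cdf(threshold)  # diseased cleared as healthy (Type II)
    print(f"threshold={threshold:.1f}  "
          f"false-positive rate={false_positive:.2f}  "
          f"false-negative rate={false_negative:.2f}")
```

With the strict cutoff of 1.5, few healthy patients are flagged but many diseased patients are missed; dropping the cutoff to 0.5 reverses that. A screening program that considers a missed disease the worse outcome would deliberately choose the lower threshold.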

But in most fields of science, Type II errors are seen as less serious than Type I errors. With a Type II error, a chance to reject the null hypothesis was lost, and no conclusion is drawn from a non-rejected null. A Type I error is more serious, because you have wrongly rejected the null hypothesis and ultimately made a claim that is not true. In science, finding a phenomenon where there is none is more egregious than failing to find a phenomenon where there is one. Therefore, in most research designs, effort is made to err on the side of a false negative.
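
This preference has a quantifiable cost, sketched below under assumed conditions (a one-sided one-sample z-test, a hypothetical standardized effect of 0.4, and n = 50). Tightening the significance level from 0.05 to 0.01 makes Type I errors rarer, but it also lowers power, which is exactly a higher Type II error rate.

```python
from scipy import stats

effect_size = 0.4   # hypothetical standardized effect under the alternative
n = 50

for alpha in (0.05, 0.01):
    z_crit = stats.norm.ppf(1 - alpha)                     # one-sided rejection cutoff
    power = stats.norm.sf(z_crit - effect_size * n**0.5)   # P(reject | effect is real)
    print(f"alpha={alpha:.2f}  power={power:.2f}  Type II rate={1 - power:.2f}")
```

Under these assumptions, moving from alpha = 0.05 to alpha = 0.01 roughly trims the Type I risk by four percentage points while the Type II risk rises from about 0.12 to about 0.31, which is the "err on the side of a false negative" trade-off in numbers.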