False positives and false negatives are important issues in a variety of fields, from athlete drug testing to loan approvals, and they highlight the importance of setting decision criteria carefully to maintain reliability and fairness.
When athletes are drug tested after competition, those whose urine or blood samples are found to contain any level of a banned substance are judged to have taken it and are disqualified. This process is important for protecting the fairness of the sport and the health of the athletes. However, these tests can err, and the consequences of such errors are serious.
Depending on the criteria used, an athlete may be found to have taken drugs when he or she did not, or vice versa. In statistics, the former is called an acceptance error (a false positive) and the latter a rejection error (a false negative). How often each occurs depends on the sensitivity and specificity of the test method and on the cut-off value that is set.
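As a minimal sketch of how a cut-off produces these two errors, the snippet below classifies invented concentration values for two groups of athletes whose true status is assumed known; all numbers are hypothetical.

```python
# Hypothetical illustration: how a cut-off value produces false positives
# and false negatives. All concentration values (ng/mL) are invented.

clean_athletes = [0.2, 0.5, 1.1, 1.4, 2.3]   # did not take the substance
doped_athletes = [1.8, 2.6, 3.0, 4.2, 5.1]   # did take the substance

cutoff = 2.0  # any sample at or above this level is declared positive

# Acceptance error (false positive): a clean athlete flagged as doped.
false_positives = [c for c in clean_athletes if c >= cutoff]
# Rejection error (false negative): a doped athlete who passes the test.
false_negatives = [c for c in doped_athletes if c < cutoff]

# Sensitivity: fraction of doped athletes correctly caught.
sensitivity = 1 - len(false_negatives) / len(doped_athletes)
# Specificity: fraction of clean athletes correctly cleared.
specificity = 1 - len(false_positives) / len(clean_athletes)

print(false_positives)           # [2.3]
print(false_negatives)           # [1.8]
print(sensitivity, specificity)  # 0.8 0.8
```

With this cut-off, one clean athlete is wrongly flagged and one doped athlete slips through, which is exactly the pair of errors described above.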
In the former case, the tester can pay a serious price, such as a defamation lawsuit from the athlete and a loss of credibility for the testing organization. In the latter case, there is almost no visible price to pay, because an athlete who escapes detection will rarely confess. This asymmetry means the standards for drug testing must be set very carefully, since errors expose testers and organizations to liability and credibility problems.
Examples of how acceptance errors and rejection errors trade off in different ways can be found in many areas beyond drug testing. In airport security screening, for example, innocent passengers can be inconvenienced or wrongly suspected when strict screening criteria are applied to catch terrorists. This shows how important it is to strike a balance between sensitivity and specificity in screening. In each of these areas, the goal is to minimize the potential for error and increase reliability.
On the other hand, in some fields the cost of a rejection error is less obvious, while the cost of an acceptance error is more obvious. When banks decide whether to approve a loan, they judge whether the customer is likely to default. Depending on the approval criteria, two kinds of error can occur: an acceptance error, where the bank judges that a customer will repay and approves the loan, but the customer defaults; and a rejection error, where the bank judges that a customer will not repay and denies the loan, even though the customer would have repaid.
In this case, the operating profit the bank forgoes by not lending is the less obvious cost, while the loss the bank suffers when a borrower fails to repay after lending is the more obvious one. For this reason, banks need to be especially careful when setting loan approval criteria, and it is important to balance acceptance errors against rejection errors.
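The asymmetry of these two costs can be sketched as a toy expected-value calculation for a single loan decision; the loan amount, interest income, and default probability below are all invented for illustration.

```python
# Toy expected-cost comparison for one loan decision. All figures
# (loan amount, interest, default probability) are invented.

loan_amount = 10_000.0
interest_income = 800.0        # profit if the loan is repaid in full
p_default = 0.06               # bank's estimated default probability
loss_given_default = 10_000.0  # principal lost if the customer defaults

# Expected profit of approving: interest if repaid, loss if the
# customer defaults (an acceptance error, with a very visible cost).
expected_if_approved = ((1 - p_default) * interest_income
                        - p_default * loss_given_default)

# Denying the loan risks nothing on the books: the cost of a rejection
# error is only the forgone interest, which is why it is less visible.
expected_if_denied = 0.0

print(expected_if_approved)  # 152.0
```

Under these assumed numbers the loan is still worth approving, but a modest rise in the estimated default probability flips the decision, which is why the approval criterion matters so much.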
When setting criteria, both drug testers and banks will want to reduce the cost of the error that is more visible to them. However, for the same decision, these two errors move like a seesaw: shifting the criterion to reduce acceptance errors increases rejection errors, and shifting it to reduce rejection errors increases acceptance errors, so the likelihood of both cannot be reduced at once. Statisticians note that any detection system merely redistributes probability between acceptance errors and rejection errors, so it is important not to focus on the cost of one error while overlooking that reducing it will increase the other.
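The seesaw effect can be made concrete by sweeping the cut-off over the same kind of hypothetical data as before: as the threshold rises, false positives fall while false negatives rise, and no threshold drives both to zero when the two groups overlap. All values are invented.

```python
# Sketch of the "seesaw": sweeping a cut-off over two hypothetical,
# overlapping groups. Raising the threshold trades false positives
# for false negatives; it never reduces both at once.

clean = [0.2, 0.5, 1.1, 1.4, 2.3]  # did not take the substance
doped = [1.8, 2.6, 3.0, 4.2, 5.1]  # did take the substance

results = {}
for cutoff in [1.0, 2.0, 3.0, 4.0]:
    fp = sum(1 for c in clean if c >= cutoff)  # clean but flagged
    fn = sum(1 for d in doped if d < cutoff)   # doped but cleared
    results[cutoff] = (fp, fn)
    print(f"cutoff={cutoff}: {fp} false positives, {fn} false negatives")
```

Printed in order, the counts go from (3, 0) at the laxest threshold to (0, 3) at the strictest: every step that removes a false positive eventually adds a false negative, which is the redistribution the text describes.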
Therefore, striking a balance when setting standards is essential, and this principle applies in many areas beyond drug testing and loan approvals. The ultimate goal should be to minimize errors while maintaining reliability and fairness through well-chosen criteria. This can be done by leveraging relevant data and statistical analysis, and by continuously monitoring and adjusting the criteria to keep them optimized.