The false discovery rate-adjusted p-value, often called the “q-value,” is the minimum false discovery rate that must be tolerated to call a test significant. For example, a q-value of 0.05 means that if that test, and every test with a smaller q-value, is called significant, roughly 5% of those significant calls are expected to be false positives. Computing this metric typically starts with the list of p-values from a set of hypothesis tests and adjusts them to control the expected proportion of false positives among the results declared significant.
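One common way to obtain such adjusted values is the Benjamini–Hochberg step-up procedure; the sketch below (a minimal pure-Python implementation, with the function name `bh_qvalues` chosen for illustration) converts a list of p-values into adjusted values by scaling each sorted p-value by the number of tests over its rank and then enforcing monotonicity from the largest p-value downward:

```python
def bh_qvalues(pvals):
    """Benjamini-Hochberg adjusted p-values.

    For each test, the adjusted value is the smallest false discovery
    rate at which that test would be called significant.
    """
    m = len(pvals)
    # Pair each p-value with its original position, sort ascending.
    indexed = sorted(enumerate(pvals), key=lambda t: t[1])
    qvals = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, keeping the running minimum
    # of p * m / rank so adjusted values never decrease with p.
    for rank in range(m, 0, -1):
        idx, p = indexed[rank - 1]
        running_min = min(running_min, p * m / rank)
        qvals[idx] = running_min
    return qvals

# Illustrative p-values from five hypothetical tests:
qvals = bh_qvalues([0.001, 0.008, 0.039, 0.041, 0.042])
# qvals[0] == 0.005: the smallest p-value is scaled by m/1 = 5.
```

Note that the later, nearly tied p-values (0.039, 0.041, 0.042) all receive the same adjusted value, 0.042, because of the monotonicity step.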
Controlling the false discovery rate has substantial benefits in fields such as genomics, proteomics, and transcriptomics, where large-scale multiple testing is commonplace and an unadjusted p-value threshold would admit many false positives. Historically, family-wise error rate techniques like the Bonferroni correction were used for multiple-comparison adjustment; however, these methods tend to be overly conservative, resulting in a high false negative rate. Procedures that control the false discovery rate offer a balance, increasing the power to detect true positives while keeping the proportion of false positives among the discoveries at a chosen level.
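The difference in conservativeness can be seen on a small example. The sketch below (using ten illustrative, hypothetical p-values) counts how many tests each procedure rejects at a nominal level of 0.05: Bonferroni compares every p-value to alpha/m, while Benjamini–Hochberg rejects the k smallest p-values for the largest k with p_(k) <= k*alpha/m:

```python
pvals = [0.0001, 0.004, 0.006, 0.008, 0.009, 0.01, 0.2, 0.5, 0.7, 0.9]
m = len(pvals)
alpha = 0.05

# Bonferroni: reject any test with p <= alpha / m (here 0.005).
bonferroni_rejections = sum(p <= alpha / m for p in pvals)

# Benjamini-Hochberg step-up: find the largest rank k such that the
# k-th smallest p-value satisfies p_(k) <= k * alpha / m, then reject
# the k smallest p-values.
sorted_p = sorted(pvals)
bh_rejections = max(
    (k for k in range(1, m + 1) if sorted_p[k - 1] <= k * alpha / m),
    default=0,
)

print(bonferroni_rejections)  # 2
print(bh_rejections)          # 6
```

On these values Bonferroni rejects only 2 of the 10 tests, while the Benjamini–Hochberg procedure rejects 6, illustrating the gain in power when the goal is controlling the false discovery rate rather than the family-wise error rate.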