Burton Lee – Numeracy II: The tale of sample sizes, the superstitious student, & superb studies




Maryland CC Project show

Summary: Dr. Burton Lee returns to the Maryland CC Project to discuss the second half of his deeper dive into numeracy in the medical literature (“deep dive” is a bit of a stretch for him, but for us – there’s fractions involved…. FRACTIONS)!  All kidding aside, we are always grateful to spend time with Dr. Lee and lucky to have him teach us how to sift through the morass of literature guiding our medical decision making.

Pearls

Causes for medical literature reversal

* Poor methodology (RCTs, meta-analyses, quantitative reviews, etc.)
* Conflict of interest
* Innumeracy with the medical literature ***

Core concepts of numeracy continued…

* Law of small numbers:  Relative statistics (such as percentages) can be misleading with small sample sizes.
* Regression to the mean:  Extreme results on any test tend to be followed by results closer to the average.  Clinical relevance: An intervention may appear to make a difference when the experimental group simply started out far from the true mean.  The closely related error of expecting chance to “correct” itself is the gambler’s fallacy.
* Apparently effective vs. actually effective:  Remember, even if a perfect RCT shows a significantly positive finding, it is only reporting an apparently effective result.  When both alpha & beta error are taken into account, the chance a paper is reporting an actual truth is often only around 50%!!

Interesting concept – the Number Needed to Read (McKibbon et al., 2004): How many journal articles must you read before finding ONE quality study?  The suggested numbers may surprise you!
<a href="http://marylandccproject.org/wp-content/uploads/2014/04/NNR-Chart.png">NNR chart</a>

So where do we go from here? In his paper, “Why Most Published Research Findings Are False,” John Ioannidis suggests:

Instead of chasing statistical significance, we should improve our understanding of the range of R values—the pre-study odds—where research efforts operate.
Before running an experiment, investigators should consider what they believe the chances are that they are testing a true rather than a non-true relationship. Speculated high R values may sometimes then be ascertained. As described above, whenever ethically acceptable, large studies with minimal bias should be performed on research findings that are considered relatively established, to see how often they are indeed confirmed. I suspect several established “classics” will fail the test.

References

* <a href="http://marylandccproject.org/wp-content/uploads/2014/04/Ioannides-2005-Why-Most-Published-Research-is-False.pdf">Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2(8):e124.</a>
* <a href="http://marylandccproject.org/wp-content/uploads/2014/04/McKibbon-Number-Needed-to-Read1.pdf">McKibbon KA, Wilczynski NL, Haynes RB. What do evidence-based secondary journals tell us about the publication of clinically important articles in primary healthcare journals? BMC Med. 2004;2:33.</a>
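For the numerically curious, the “apparently effective vs. actually effective” pearl can be made concrete with the positive predictive value (PPV) formula from the Ioannidis paper above: given pre-study odds R that the tested relationship is true, PPV = (1 − β)R / (R + α − βR). This is a quick sketch; the specific R values chosen below are illustrative assumptions, not figures from the talk.

```python
def ppv(R, alpha=0.05, beta=0.20):
    """Chance a statistically significant finding reflects a true effect.

    R     -- pre-study odds that the tested relationship is true
    alpha -- type I (false positive) error rate
    beta  -- type II (false negative) error rate, i.e. power = 1 - beta
    """
    return (1 - beta) * R / (R + alpha - beta * R)

# A well-grounded hypothesis (1:1 pre-study odds) holds up fairly well...
print(f"R = 1:1  -> PPV = {ppv(1.0):.2f}")     # 0.94
# ...but a long-shot hypothesis (1:16 odds) is right only half the time,
# even with a "perfect" significant result at alpha = 0.05 and 80% power.
print(f"R = 1:16 -> PPV = {ppv(0.0625):.2f}")  # 0.50
```

In other words, a “significant” result from a study of an unlikely hypothesis is no better than a coin flip, which is exactly the gap between apparently and actually effective.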