P.Mean >> Category >> Critical appraisal (created 2007-07-21).

Critical appraisal is the ability to judge the persuasiveness of the evidence in a research study.  You have to strike the proper balance between being too harsh and being too accepting of research findings. Articles are arranged by date with the most recent entries at the top. You can find outside resources at the bottom of this page.

Most of the new content will be added to my blog (blog.pmean.com).

Review all blog entries related to critical appraisal.

2012

40. What percentage of medical care is evidence based? (January/February 2012)

2011

39. P.Mean: How I became a skeptic (created 2011-06-15). I'm a big fan of the skeptic movement. If you're not familiar with this, it is a group of professional and amateur scientists who critically examine claims of fringe science areas like parapsychology, UFOs, and alternative medicine. So when a blog post on the James Randi Educational Foundation website called for people to share their stories of how they became skeptics, I wrote the following story.

2010

38. P.Mean: Are certain CAM therapies undeserving of further study (created 2010-12-01). I have become something of a celebrity on the Science Based Medicine site, as I have noted in an earlier webpage. In addition to the blog post I noted earlier, there is a new post: Of SBM and EBM Redux. Part I: Does EBM Undervalue Basic Science and Overvalue RCTs? These posts are reminding me how important it is to write precisely, which is good. I largely agree with many of the comments written in these particular entries and in others at the Science Based Medicine site, but there are still areas of fundamental disagreement. One of the major areas where we disagree is over the value of running randomized controlled trials for certain CAM (Complementary and Alternative Medicine) therapies that are biologically implausible.

37. P.Mean: The Science-Based Medicine blog defends itself (created 2010-11-09). I get a few fan letters from people, which are greatly appreciated, but when I get the rare critical response, I am even more grateful. It doesn't matter if the criticism is valid or not. Someone who takes on the unpleasant task of critiquing my work offers some valuable insights on: what I wrote poorly because it was incorrect, or what I wrote poorly because it was misinterpreted, or what I wrote well but there is a dissenting opinion. One of my webpages, P.Mean: Is there something better than Evidence Based Medicine out there (created 2010-09-20), was highlighted and criticized on the Science Based Medicine blog by David Gorski, and here are some of the things I learned from that criticism. This is an expansion of comments I left on their blog entry.

36. P.Mean: Is there something better than Evidence Based Medicine out there (created 2010-09-20). Someone asked me about a claim made on an interesting blog, Science Based Medicine. The blog tries to draw a distinction between Science Based Medicine (SBM) and Evidence Based Medicine (EBM), claiming that SBM is better because "EBM, in a nutshell, ignores prior probability (unless there is no other available evidence) and falls for the p-value fallacy; SBM does not." Here's what I wrote.

35. Is Evidence-Based Medicine too rigid? (August 2010) and P.Mean: Is Evidence-Based Medicine too rigid (created 2010-08-19). Someone asked about a common criticism of Evidence-Based Medicine (EBM): is its reliance on grading schemes and the hierarchy of evidence too rigid, or does EBM provide heuristics that can be adapted as needed? This is a hard question to answer, but an important one. I view checklists and hierarchies as a necessary evil, and sometimes they are applied too rigidly.

34. P.Mean: Is intuition real? (created 2010-02-25). Someone asked if intuition is real. My hunch is that intuition may be real, but it is grossly overrated.

33. P.Mean: Humility is a good thing for researchers to have (created 2010-02-08). I've been writing a series of articles about the seven deadly sins of researchers. One of these sins is pride. I might need to talk about the alternative to pride, which is humility. I believe that researchers should adopt a humble outlook. Humility is often misunderstood as a bad thing. It is not.

2009 

32. P.Mean: Integrating internal and external evidence (created 2006-03-17). This article was originally published on my old website, but it seems to have disappeared. I'm reproducing it here with a few revisions. Evidence based medicine involves combining the best available evidence with your clinical knowledge and the patient's individual values. A good article that emphasizes how this should be done is on the web at: Evidence-based decision making--the six step approach. Franz Porzsolt, Andrea Ohletz, Anke Thim, David Gardner, Helmuth Ruatti, Horand Meier, Nicole Schlotz-Gorton, Laura Schrott. Evid Based Med 2003: 8(6); 165-166.

2008 

31. P.Mean: Reading abstracts instead of the full article (created 2008-11-05). An interesting inquiry on the Evidence-Based Health email discussion group generated a lot of responses. A busy clinician has a limited amount of time to answer a clinical question. They carry out a quick search and find 5 decent abstracts. They have two options: 1) Look at one full-text article. 2) Look at 5 abstracts. Which do people think is preferable? It's a tricky question because both approaches have problems. Here are my thoughts on this issue.

30. P.Mean: Errors in statistical methodology (created 2008-10-19). From those of you who review/appraise articles regularly, I would like to hear what kinds of errors you find most often in the statistical methodology. I will be training nurses to critically appraise the statistical methodology sections of articles, and since their time and knowledge of statistics are limited, I hope to focus mainly on errors they are likely to find in real articles.

29. P.Mean: Godwin's Law (created 2008-10-05). There is a tendency in some discussions (mostly with a political focus, but also in some with a medical focus), to invoke the name of Adolf Hitler in criticizing one's opponents or to compare one's opponents to Nazis. This is sometimes called the Reductio ad Hitlerum fallacy or the Argumentum ad Nazium fallacy.

28. P.Mean: The depths of anti-intellectualism (created 2008-09-05). My brother-in-law is an avid conservative and often sends me political commentary that would make Attila the Hun blush. That's actually a good thing, as it makes me think things through more carefully. He recently made a sarcastic comment about the lack of experience of Barack Obama ("the guy with the resume consisting of two good speeches"). It made me think a bit more about a topic of general interest to me and one that goes well beyond politics: the rise of anti-intellectualism in the United States. Here's what I wrote back to him in response.

27. P.Mean: Where to look for information in a controversial area (created 2008-08-20). I am currently researching vaccinations and the negative effects they may cause. My ex-boyfriend is against vaccines and I am for them. I was wondering if you could give me some references to research supporting the autism link not being caused by vaccines or actually any other theory regarding vaccines being harmful to the body. I went to a seminar and they told us there is a problem with molecular mimicry where the body may attack itself looking for a sequence similar to the disease. Any help you can offer would be greatly appreciated. Desperately seeking the truth.

26. P.Mean: Is there a scientific basis for EBM? (created 2008-08-20). A pair of articles in Chest, along with two rebuttals, examines both sides of the debate over the validity of Evidence-Based Medicine (EBM). Point: evidence-based medicine has a sound scientific base: P. J. Karanicolas, R. Kunz, G. H. Guyatt. Chest 2008: 133(5); 1067-71, and Counterpoint: evidence-based medicine lacks a sound scientific base. M. J. Tobin. Chest 2008: 133(5); 1071-4; discussion 1074-7.

25. P.Mean: Quote on anti-intellectualism (created 2008-08-08). I want to write an article about the growing mistrust of experts in our society. A working title is "The Rising Tide of Anti-Intellectualism." Here's a quote that might be worth starting out with.

Outside resources:

Book: Dan Mayer. Essential Evidence-based Medicine. 2nd ed. Cambridge University Press; 2009.

Book: William A Silverman. Where's the evidence? Controversies in modern medicine. New York: Oxford University Press. Excerpt: "Medicine is moving away from reliance on the proclamations of authorities to the use of numerical methods to estimate the size of effects of its interventions. But a rumbling note of uneasiness underlies present-day medical progress: the more we know, the more questions we encounter about what to do with the hard-won information. The essays in Where's the Evidence examine the dilemmas that have arisen as the result of medicine's unprecedented increase in technical powers. How do doctors draw the line between "knowing" (the acquisition of new medical information) and "doing" (the application of that new knowledge)? What are the long-term consequences of responding to the demand that physicians always do everything that can be done? Is medicine's primary aim to increase the length of life? Or is it to reduce the amount of pain and suffering? And who is empowered to choose when these ends are mutually exclusive? This engaging collection of essays will be of interest to professionals interested in the evidence-based medicine debate, including epidemiologists, neonatologists, those involved in clinical trials and health policy, medical ethicists, medical students, and trainees." Available at: http://lccn.loc.gov/97052058.

Journal article: Angell M, Kassirer JP. Alternative medicine--the risks of untested and unregulated remedies. New England Journal of Medicine 1998: 339(12); 839-41. Excerpt: It is time for the scientific community to stop giving alternative medicine a free ride. There cannot be two kinds of medicine - conventional and alternative. There is only medicine that has been adequately tested and medicine that has not, medicine that works and medicine that may or may not work. Once a treatment has been tested rigorously, it no longer matters whether it was considered alternative at the outset. If it is found to be reasonably safe and effective, it will be accepted. But assertions, speculation, and testimonials do not substitute for evidence. Alternative treatments should be subjected to scientific testing no less rigorous than that required for conventional treatments.

Journal article: Paul Glasziou, Iain Chalmers, Michael Rawlins, Peter McCulloch. When are randomised trials unnecessary? Picking signal from noise. BMJ. 2007;334(7589):349-351. Abstract: "Although randomised trials are widely accepted as the ideal way of obtaining unbiased estimates of treatment effects, some treatments have dramatic effects that are highly unlikely to reflect inadequately controlled biases. We compiled a list of historical examples of such effects and identified the features of convincing inferences about treatment effects from sources other than randomised trials. A unifying principle is the size of the treatment effect (signal) relative to the expected prognosis (noise) of the condition. A treatment effect is inferred most confidently when the signal to noise ratio is large and its timing is rapid compared with the natural course of the condition. For the examples we considered in detail the rate ratio often exceeds 10 and thus is highly unlikely to reflect bias or factors other than a treatment effect. This model may help to reduce controversy about evidence for treatments whose effects are so dramatic that randomised trials are unnecessary." [Accessed on April 4, 2011]. http://www.bmj.com/content/334/7589/349.abstract.
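The core of this abstract is simple arithmetic: compare the event rate with treatment against the expected rate without it. A minimal sketch of that rate-ratio check follows; the function name and the example numbers are hypothetical illustrations, not figures from the paper.

```python
# Rate ratio as a signal-to-noise check, in the spirit of Glasziou et al.:
# a ratio well above ~10 is unlikely to be explained by bias alone.

def rate_ratio(events_treated, n_treated, events_control, n_control):
    """Ratio of the event rate in the treated group to the control rate."""
    return (events_treated / n_treated) / (events_control / n_control)

# Hypothetical dramatic effect: recovery in 90 of 100 treated patients
# versus 5 of 100 untreated patients.
rr = rate_ratio(90, 100, 5, 100)
print(f"rate ratio = {rr:.1f}")  # 18.0 - far above the ~10 threshold
```

A modest effect (say, 12 of 100 versus 10 of 100, a ratio of 1.2) could easily be produced by confounding, which is why the authors reserve the "no trial needed" conclusion for dramatic ratios.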

Website: AGREE Collaboration. AGREE (Appraisal of Guidelines for REsearch & Evaluation) Collaboration. Excerpt: AGREE is an international collaboration of researchers and policy makers who seek to improve the quality and effectiveness of clinical practice guidelines by establishing a shared framework for their development, reporting and assessment. Available at http://www.agreecollaboration.org.

Article makes simple errors and could cause unnecessary deaths. C. Baigent, R. Collins, R. Peto. British Medical Journal 2002: 324(7330); 167. [Medline] [Full text] [PDF]. Description: This article offers a critical review of a critical review (Cleland 2002). Cleland cited issues with a large randomized trial of aspirin for prevention of heart attacks and with a meta-analysis. Baigent et al argue that the claims of Cleland are "wrong for trivial reasons and potentially damaging to patients."

Rafael Campo. “Anecdotal Evidence”: Why Narratives Matter to Medical Practice. PLoS Med. 2006;3(10):e423. Abstract: "In this personal essay, Campo discusses the value of listening to patients' stories." [Accessed November 9, 2010]. Available at: http://dx.doi.org/10.1371/journal.pmed.0030423.

Biases in the interpretation and use of research results. RJ MacCoun. Annu Rev Psychol 1998: 49; 259-87. [Full text] [PDF]. Description: This article provides several fascinating examples of people's tendency to be hypercritical of research findings that they dislike and to overlook the flaws of research that they favor.

Doug Smith. But who's counting? The million-billion mistake is among the most common in journalism. But why? Excerpt: "The difference between a million and a billion is a number so vast that it would seem nearly impossible to confuse the two. Take pennies. At the website of the Mega Penny Project, you can see that a million pennies stack up to be about the size of a filing cabinet. A billion would be about the size of five school buses. Or take real estate. A home in a nice part of Los Angeles might cost a million dollars. A billion dollars would buy the whole neighborhood. But journalists can't seem to keep the two numbers straight. Committed as we are to getting the smallest details right, we seem hopelessly prone to writing "million" when, in fact, we mean "billion."" [Accessed February 4, 2010]. Available at: http://www.latimes.com/news/opinion/commentary/la-oe-smith31-2010jan31,0,2185811.story.

Calling all charlatans. A group of researchers puts companies making scientific claims on the spot. (Andrea Gawrylewski). Description: A short article discussing a group of scientists who examine scientific claims made in advertisements for various products. This website was last verified on 2007-10-12. URL: www.the-scientist.com/news/home/53699/

Cochrane Collaboration. The Cochrane Collaboration estimates that only "10% to 35% of medical care is based on RCTs". On what information is this estimate based? Excerpt: "The Cochrane Collaboration has not actually conducted research to determine this estimate; it is possible that the estimate of 10-35% comes from the following passage in a chapter by Kerr L White entitled 'Archie Cochrane's legacy: an American perspective' in the book 'Non-random Reflections on Health Services Research: on the 25th anniversary of Archie Cochrane's Effectiveness and Efficiency'. This book (published by the BMJ Publishing Group) was edited by Alan Maynard and Iain Chalmers. Iain was formerly Director of the UK Cochrane Centre, and the driving force behind the establishment of The Cochrane Collaboration; he knew Archie Cochrane well." [Accessed February 4, 2010]. Available at: http://www.cochrane.org/docs/faq.htm#q20.

John P. A. Ioannidis. Contradicted and Initially Stronger Effects in Highly Cited Clinical Research. JAMA. 2005;294(2):218-228. Abstract: "Context: Controversy and uncertainty ensue when the results of clinical research on the effectiveness of interventions are subsequently contradicted. Controversies are most prominent when high-impact research is involved. Objectives: To understand how frequently highly cited studies are contradicted or find effects that are stronger than in other similar studies and to discern whether specific characteristics are associated with such refutation over time. Design: All original clinical research studies published in 3 major general clinical journals or high-impact-factor specialty journals in 1990-2003 and cited more than 1000 times in the literature were examined. Main Outcome Measure: The results of highly cited articles were compared against subsequent studies of comparable or larger sample size and similar or better controlled designs. The same analysis was also performed comparatively for matched studies that were not so highly cited. Results: Of 49 highly cited original clinical research studies, 45 claimed that the intervention was effective. Of these, 7 (16%) were contradicted by subsequent studies, 7 others (16%) had found effects that were stronger than those of subsequent studies, 20 (44%) were replicated, and 11 (24%) remained largely unchallenged. Five of 6 highly-cited nonrandomized studies had been contradicted or had found stronger effects vs 9 of 39 randomized controlled trials (P = .008). Among randomized trials, studies with contradicted or stronger effects were smaller (P = .009) than replicated or unchallenged studies although there was no statistically significant difference in their early or overall citation impact. Matched control studies did not have a significantly different share of refuted results than highly cited studies, but they included more studies with "negative" results. 
Conclusions: Contradiction and initially stronger effects are not unusual in highly cited research of clinical interventions and their outcomes. The extent to which high citations may provoke contradictions and vice versa needs more study. Controversies are most common with highly cited nonrandomized studies, but even the most highly cited randomized trials may be challenged and refuted over time, especially small ones." [Accessed February 4, 2010]. Available at: http://jama.ama-assn.org/cgi/content/abstract/294/2/218.

Dickersin K, et al. Development and implementation of a science training course for breast cancer activists: Project LEAD (leadership, education and advocacy development). Health Expect. 2001 Dec;4(4):213-20.

Development of evidence-based clinical practice guidelines (CPGs): comparing approaches. Tari Turner, Marie Misso, Claire Harris, and Sally Green. Implementation Science 2008, 3:45. doi:10.1186/1748-5908-3-45. [Abstract] [PDF] Description: This article identified publications on developing clinical practice guidelines. The review found six relevant publications. All these publications stressed the need for a multidisciplinary panel, consumer involvement, identification of clinical questions, systematic searches for evidence, consultation beyond the development group, and regular reviews and updates.

Distinguishing Association from Causation: A Backgrounder for Journalists (Kathleen Meister). Description: This 24 page report, published on October 29, 2007, by the American Council on Science and Health, argues that randomized trials, if they can be conducted, provide strong evidence for a causal effect. In contrast, animal and in vitro experiments do not provide strong evidence for a causal relationship but rather are useful for establishing biological mechanisms. Observational studies can sometimes establish a causal relationship. The key things to look for are temporality of the relationship, strength of the relationship, a dose-response relationship, consistency across varied conditions, and biological plausibility. This website was last verified on 2007-11-16. URL: www.acsh.org/publications/pubID.1629/pub_detail.asp

Siddhartha Mukherjee. Do Cellphones Cause Brain Cancer? The New York Times. 2011. Description: An excellent overview of the limitations of epidemiological research, illustrated by the perplexing findings about the link (or lack thereof) between cell phone use and brain cancer. My only complaint is that the article did not address how difficult it is to decide when to close the door on further research. How many negative studies can you tolerate before you say, "Enough! Let's move on to something else"? [Accessed on May 4, 2011]. http://www.nytimes.com/2011/04/17/magazine/mag-17cellphones-t.html.

Milne R, Oliver S. Evidence-based consumer health information: developing teaching in critical appraisal skills. International Journal for Quality in Health Care. 1996 Oct;8(5):439-45.

Sharon E. Straus, W. Scott Richardson, Paul Glasziou, R. Brian Haynes. Evidence Based Medicine. 3rd ed. Churchill Livingstone; 2005.

Sharon E. Straus, Finlay A. McAlister. Evidence-based medicine: a commentary on common criticisms. CMAJ. 2000;163(7):837-841. Abstract: "Discussions about evidence-based medicine engender both negative and positive reactions from clinicians and academics. Ways to achieve evidence-based practice are reviewed here and the most common criticisms described. The latter can be classified as "limitations universal to the practice of medicine," "limitations unique to evidence-based medicine" and "misperceptions of evidence-based medicine." Potential solutions to the true limitations of evidence-based medicine are discussed and areas for future work highlighted." [Accessed October 26, 2010]. Available at: http://www.cmaj.ca/cgi/content/abstract/163/7/837.

David L. Sackett, Sharon E. Straus, W. Scott Richardson, William Rosenberg, R. Brian Haynes. Evidence-Based Medicine: How to Practice and Teach EBM. 2nd ed. Churchill Livingstone; 2000.

C Bonell. Evidence-based nursing: a stereotyped view of quantitative and experimental research could work against professional autonomy and authority. J Adv Nurs. 1999;30(1):18-23. Abstract: "In recent years, there have been calls within the United Kingdom's National Health Service (NHS) for evidence-based health care. These resonate with long-standing calls for nursing to become a research-based profession. Evidence-based practice could enable nurses to demonstrate their unique contribution to health care outcomes, and support their seeking greater professionalization, in terms of enhanced authority and autonomy. Nursing's professionalization project, and, within this, various practices comprising the 'new nursing', whilst sometimes not delivering all that was hoped of them, have been important in developing certain conditions conducive to developing evidence-based practice, notably a critical perspective on practice and a reluctance merely to follow physicians' orders. However, nursing has often been hesitant in its adoption of quantitative and experimental research. This hesitancy, it is argued, has been influenced by the propounding by some authors within the new nursing of a stereotyped view of quantitative/experimental methods which equates them with a number of methodological and philosophical points which are deemed, by at least some of these authors, as inimical to, or problematic within, nursing research. It is argued that, not only is the logic on which the various stereotyped views are based flawed, but further, that the wider influence of these viewpoints on nurses could lead to a greater marginalization of nurses in research and evidence-based practice initiatives, thus perhaps leading to evidence-based nursing being led by other groups. In the longer term, this might result in a form of evidence-based nursing emphasizing routinization, thus--ironically--working against strategies of professional authority and autonomy embedded in the new nursing. 
Nursing research should instead follow the example of nurse researchers who already embrace multiple methods. While the paper describes United Kingdom experiences and debates, points raised about the importance of questioning stereotyped views of research should have international relevance." [Accessed October 26, 2010]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/10403976.

Melissa M. Brown, Gary C. Brown, Sanjay Sharma. Evidence-Based To Value-based Medicine. 1st ed. American Medical Association Press; 2005. Excerpt: "Medical research and practice is currently evolving from evidence-based medicine to an even higher quality of patient care: value based medicine. Value-based medicine, which measures the patient-perceived value and integrates relevant costs provided by healthcare interventions, allows a more accurate measure of the overall worth of interventions to a patient and other stakeholders. Evidence-Based to Value-Based Medicine explains this evolution and explains the uses and practice of value-based medicine in today's healthcare environment. Through detailed explanations about how to integrate value-based medicine and macroeconomic issues, this resource will teach practitioners how to deliver a higher quality of clinical care and to measure and demonstrate the value of their interventions to patients, policymakers, and payers. Also included are innovative input variable methodologies, a critical appraisal of health-related quality-of-life instruments, and an overview of healthcare economics. " Available at: https://catalog.ama-assn.org/Catalog/product/product_detail.jsp?productId=prod240179.

Kristin L. Carman, Maureen Maurer, Jill Mathews Yegian, et al. Evidence That Consumers Are Skeptical About Evidence-Based Health Care. Health Aff. 2010;29(7):1400-1406. Abstract: "We undertook focus groups, interviews, and an online survey with health care consumers as part of a recent project to assist purchasers in communicating more effectively about health care evidence and quality. Most of the consumers were ages 18-64; had health insurance through a current employer; and had taken part in making decisions about health insurance coverage for themselves, their spouse, or someone else. We found many of these consumers' beliefs, values, and knowledge to be at odds with what policy makers prescribe as evidence-based health care. Few consumers understood terms such as "medical evidence" or "quality guidelines." Most believed that more care meant higher-quality, better care. The gaps in knowledge and misconceptions point to serious challenges in engaging consumers in evidence-based decision making." [Accessed July 8, 2010]. Available at: http://content.healthaffairs.org/cgi/content/abstract/29/7/1400.

Sarah Winch, Debra Creedy, and Wendy Chaboyer. Governing nursing conduct: the rise of evidence-based practice. Nurs Inq. 2002;9(3):156-161. Abstract: "Drawing on the Foucauldian concept of 'governmentality' to analyse the evidence-based movement in nursing, we argue that it is possible to identify the governance of nursing practice and hence nurses across two distinct axes; that of the political (governance through political and economic means) and the personal (governance of the self through the cultivation of the practices required by nurses to put evidence into practice). The evaluation of nursing work through evidence-based reviews provides detailed information that may enable governments to target and instruct nurses regarding their work in the interest of preserving the health of the population as a whole. Political governance of the nursing population becomes possible through centralised discursive mechanisms, such as evidence-based reviews that present nursing practice as an intelligible field whose elements are connected in a more or less systematic manner. The identity of the evidence-based nurse requires the modern nurse to develop new skills and attitudes. Evidence-based nursing is an emerging technology of government that judges nursing research and knowledge and has the capacity to direct nursing practice at both the political and personal level." [Accessed October 26, 2010]. Available at: http://onlinelibrary.wiley.com/doi/10.1046/j.1440-1800.2002.00148.x/full.

Helping Doctors and Patients Make Sense of Health Statistics. Gerd Gigerenzer, Wolfgang Gaissmaier, Elke Kurz-Milcke, Lisa M. Schwartz, Steven Woloshin. Psychological Science in the Public Interest 2008: 8(2); 53-96. [Abstract] [PDF]. Excerpt: Many doctors, patients, journalists, and politicians alike do not understand what health statistics mean or draw wrong conclusions without noticing. Collective statistical illiteracy refers to the widespread inability to understand the meaning of numbers. For instance, many citizens are unaware that higher survival rates with cancer screening do not imply longer life, or that the statement that mammography screening reduces the risk of dying from breast cancer by 25% in fact means that 1 less woman out of 1,000 will die of the disease. We provide evidence that statistical illiteracy (a) is common to patients, journalists, and physicians; (b) is created by nontransparent framing of information that is sometimes an unintentional result of lack of understanding but can also be a result of intentional efforts to manipulate or persuade people; and (c) can have serious consequences for health.
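The mammography figure in this excerpt is the classic relative-versus-absolute risk confusion, and the arithmetic is easy to check. A minimal sketch follows; the baseline of 4 deaths per 1,000 unscreened women is an illustrative assumption, not a figure from the article.

```python
# Relative vs. absolute risk reduction: the excerpt's mammography example.
# Assumed baseline (illustrative): 4 of 1,000 unscreened women die of
# breast cancer, versus 3 of 1,000 screened women.

def risk_reduction(deaths_control, deaths_treated, n):
    """Return (relative risk reduction, absolute risk reduction)."""
    risk_control = deaths_control / n
    risk_treated = deaths_treated / n
    rrr = (risk_control - risk_treated) / risk_control  # relative
    arr = risk_control - risk_treated                   # absolute
    return rrr, arr

rrr, arr = risk_reduction(deaths_control=4, deaths_treated=3, n=1000)
print(f"relative risk reduction: {rrr:.0%}")                 # 25%
print(f"absolute risk reduction: {arr * 1000:.0f} per 1,000")  # 1 per 1,000
```

A "25% reduction" and "1 fewer death per 1,000 women" describe the same result; Gigerenzer et al.'s point is that the relative framing sounds far more impressive, which is why readers routinely misjudge it.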

Clinical Evidence. How much do we know? Excerpt: "So what can Clinical Evidence tell us about the state of our current knowledge? What proportion of commonly used treatments are supported by good evidence, what proportion should not be used or used only with caution, and how big are the gaps in our knowledge? Of around 2500 treatments covered 13% are rated as beneficial, 23% likely to be beneficial, 8% as trade off between benefits and harms, 6% unlikely to be beneficial, 4% likely to be ineffective or harmful, and 46%, the largest proportion, as unknown effectiveness (see figure 1)." [Accessed February 4, 2010]. Available at: http://clinicalevidence.bmj.com/ceweb/about/knowledge.jsp.

Edzard Ernst. How Much of CAM is Based on Research Evidence? eCAM. 2009:nep044. Abstract: "The aim of this article is to provide a preliminary estimate of how much CAM is evidence-based. For this purpose, I calculated the percentage of 685 treatment/condition pairings evaluated in the 'Desktop Guide to Complementary and Alternative Medicine' which were supported by sound data. The resulting figure was 7.4%. For a range of reasons, it might be a gross over-estimate. Further investigations into this subject are required to arrive at more representative figures." [Accessed February 4, 2010]. Available at: http://ecam.oxfordjournals.org/cgi/content/abstract/nep044v1.

Carol Hagino. How To Appraise Research: A Guide For Chiropractic Students and Practitioners. 1st ed. Churchill Livingstone; 2002.

Interesting quote: "Evidence, which we have means to strengthen for or against a proposition, is our proper means for attaining truth." Florence Nightingale as quoted in www.causeweb.org/cwis/SPT--FullRecord.php?ResourceId=1836.

Steven Epstein. Impure Science: AIDS, Activism, and the Politics of Knowledge. University of California Press; 1996. Description: This book explores the research process from the perspective of AIDS research and notes how this disease has changed much of how we now conduct research, such as the new efforts to obtain community involvement in research, the use of placebos in research involving a serious disease with (back in the 1980s) no effective therapies available, and how decisions are made that a research "fact" has been established and that no further research is needed. I want to write an extensive article about this book in a future newsletter.

John C McLachlan. Integrative medicine and the point of credulity. BMJ. 2010;341:c6979. Excerpt: "So called integrative medicine should not be used as a way of smuggling alternative practices into rational medicine by way of lowered standards of critical thinking. Failure to detect an obvious hoax is not an encouraging sign." [Accessed December 28, 2010]. Available at: http://www.bmj.com/content/341/bmj.c6979.full.

Stephen Gehlbach. Interpreting the Medical Literature: Practical Epidemiology for Clinicians. 5th ed. McGraw-Hill Medical; 2006.

Trevor Sheldon. Managing uncertainty in healthcare. Report of a meeting organised by NICE and AHRQ. 2008. Abstract: "There are certain challenges that confront virtually all health systems, irrespective of the means by which they are funded and administered. One such is the management of uncertainty: specifically, knowing what to do when data on the effectiveness or the cost-effectiveness of new medicines or procedures is incomplete or inadequate, but decisions have nonetheless to be taken on whether to purchase and supply them to patients. A similar issue arises when there is suspicion that a procedure or medicine already in use may be relatively ineffective or represent poor value for money." [Accessed January 15, 2009]. Available at: http://www.nice.org.uk/media/A1A/E6/NICEAHRQWorkshopReportFINAL.pdf.

Dariusz Leszczynski, Zhengping Xu. Mobile phone radiation health risk controversy: the reliability and sufficiency of science behind the safety standards. Health Research Policy and Systems. 2010;8(1):2. Abstract: "There is ongoing discussion whether the mobile phone radiation causes any health effects. The International Commission on Non-Ionizing Radiation Protection, the International Committee on Electromagnetic Safety and the World Health Organization are assuring that there is no proven health risk and that the present safety limits protect all mobile phone users. However, based on the available scientific evidence, the situation is not as clear. The majority of the evidence comes from in vitro laboratory studies and is of very limited use for determining health risk. Animal toxicology studies are inadequate because it is not possible to "overdose" microwave radiation, as it is done with chemical agents, due to simultaneous induction of heating side-effects. There is a lack of human volunteer studies that would, in unbiased way, demonstrate whether human body responds at all to mobile phone radiation. Finally, the epidemiological evidence is insufficient due to, among others, selection and misclassification bias and the low sensitivity of this approach in detection of health risk within the population. This indicates that the presently available scientific evidence is insufficient to prove reliability of the current safety standards. Therefore, we recommend to use precaution when dealing with mobile phones and, whenever possible and feasible, to limit body exposure to this radiation. Continuation of the research on mobile phone radiation effects is needed in order to improve the basis and the reliability of the safety standards." [Accessed February 1, 2010]. Available at: http://www.health-policy-systems.com/content/8/1/2

Trisha Greenhalgh. Narrative based medicine: Narrative based medicine in an evidence based world. BMJ. 1999;318(7179):323-325. Excerpt: "In a widely quoted riposte to critics who accused them of naive empiricism, Sackett and colleagues claimed that "the practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence .... By individual clinical expertise we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice." Sackett and colleagues were anxious to acknowledge that there is an art to medicine as well as an objective empirical science but they did not attempt to define or categorise the elusive quality of clinical competence. This article explores the dissonance between the "science" of objective measurement and the "art" of clinical proficiency and judgment, and attempts to integrate these different perspectives on clinical method." [Accessed December 5, 2009]. Available at: http://www.bmj.com/cgi/content/full/318/7179/323.

Vera Kalitzkus, Peter F. Matthiessen. Narrative-Based Medicine: Potential, Pitfalls, and Practice. The Permanente Journal. 13(1):80-86. Excerpt: "Narratives have always been a vital part of medicine. Stories about patients, the experience of caring for them, and their recovery from illness have always been shared—among physicians as well as among patients and their relatives. With the evolution of “modern” medicine, narratives were increasingly neglected in favor of “facts and findings,” which were regarded as more scientific and objective. Now, in recent years medical narrative is changing—from the stories about patients and their illnesses, patient narratives and the unfolding and interwoven story between health care professionals and patients are both gaining momentum, leading to the creation or defining of narrative-based medicine (NBM). The term was coined deliberately to mark its distinction from evidence-based medicine (EBM); in fact, NBM was propagated to counteract the shortcomings of EBM.1,2 But what is NBM? Is it a specific therapeutic tool, a special form of physician-patient communication, a qualitative research tool, or does it simply signify a particular attitude towards patients and doctoring? It can be all of the above with different forms or genres of narrative or practical approach called for depending on the field of application. In this article we will give a systematic overview of NBM: a short historic background; the various narrative genres; and an analysis of how the genres can be effectively applied in theory, research, and practice in the medical field, with a focus on possibilities and limitations of a narrative approach. " [Accessed December 5, 2009]. Available at: http://xnet.kp.org/permanentejournal/winter09/narrativemedicine.html.

Narrative evidence based medicine. Rita Charon, Peter Wyer, The NEBM Working Group. Lancet 2008: 371; 296-297. [Full text] [PDF]. Description: This article discusses the need to combine the data-based emphasis of evidence based medicine with patient experiences, illness narratives, and other sources such as contemporary novels.

Zachary F. Meisel, Jason Karlawish. Narrative vs Evidence-Based Medicine—And, Not Or. JAMA: The Journal of the American Medical Association. 2011;306(18):2022-2023. Excerpt: "The week before the US Preventive Services Task Force (USPSTF) released its recommendations against routine prostate screening for healthy men, celebrity patients including Joe Torre and Rudy Giuliani had already lined up to challenge the population-based recommendations. To promote their position that screening for prostate-specific antigen is lifesaving, these individuals relied on a powerful tool: their own personal narratives. However, the experts whose goal is to disseminate and translate population-based evidence will, in the name of science, shun individual stories. This one-sided use of narrative has played out repeatedly, from the USPSTF recommendations on screening mammography to the US Food and Drug Administration (FDA) labeling hearings on bevacizumab for advanced breast cancer.1,2 Each time, those who espouse only evidence—without narratives about real people—struggle to control the debate. Typically, they lose." [Accessed on November 9, 2011].

Kimball C Atwood. Naturopathy, Pseudoscience, and Medicine: Myths and Fallacies vs Truth. MedGenMed. 6(1):33-33. Excerpt: "It is clear that my article raised a few hackles. I received more than 60 emails myself, running about 4:1 opposed to what I wrote. Most of those who opposed were doctors of naturopathy (NDs). Not one opposing letter, however, offered an example of an inaccurate statement made in the article itself. Rather, the arguments were based on logical fallacies, mostly the ad hominem (I was “biased,” “hostile,” and “must feel threatened”) and its close relative, the tu quoque (I have no right to criticize naturopathy when regular medicine is so bad, “doesn't have all the answers,” etc.). They were also based on a denial (in the case of NDs) or on an ignorance (in the case of several others) of naturopathic literature. There were several common themes; I will address those and make additional comments specific to the letters printed here."

Ben Goldacre. Objectionable 'objectives'. The Guardian. 2006. Excerpt: "Deconstructing the evidence-based discourse in health sciences: truth, power and fascism. Even from looking at the title, you just know this academic paper from the September edition of the International Journal of Evidence-based Healthcare is going to be an absolute corker. And it uses the word "fascist" (or elaborate derivatives) 28 times in six pages, which even Rik Mayall in The Young Ones might have described as "overdoing it"." [Accessed October 26, 2010]. Available at: http://www.guardian.co.uk/science/2006/aug/19/badscience.uknews.

Kimball C. Atwood. The Ongoing Problem with the National Center for Complementary and Alternative Medicine. Skeptical Inquirer. 2003;27(5). Excerpt: "In spite of statements to the contrary by its director, the NCCAM continues to fund and promote pseudoscience. Political pressures and the Center’s charter would seem to make this inevitable. Ethics and the public interest are compromised." [Accessed December 4, 2010]. Available at: http://www.csicop.org/si/show/ongoing_problem_with_the_national_center.

Gordon C S Smith, Jill P Pell. Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ. 2003;327(7429):1459-1461. Abstract: "Objectives: To determine whether parachutes are effective in preventing major trauma related to gravitational challenge. Design: Systematic review of randomised controlled trials. Data sources: Medline, Web of Science, Embase, and the Cochrane Library databases; appropriate internet sites and citation lists. Study selection: Studies showing the effects of using a parachute during free fall. Main outcome measure: Death or major trauma, defined as an injury severity score > 15. Results: We were unable to identify any randomised controlled trials of parachute intervention. Conclusions: As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute." [Accessed October 26, 2010]. Available at: http://www.bmj.com/content/327/7429/1459.abstract.

Thomas B Newman. The power of stories over statistics. BMJ. 2003;327(7429):1424-1427. Excerpt: "Neonatal jaundice and infant safety on aeroplanes provide two lessons on the power of narrative, rather than statistical evidence, in determining practice." [Accessed December 10, 2009]. Available at: http://www.bmj.com.

John De Simone. Reductionist inference-based medicine, i.e. EBM. J Eval Clin Pract. 2006;12(4):445-449. RATIONALE, AIMS AND OBJECTIVES: Unbeknown to many, reductionist and postmodern worldviews competitively coexist in science and society. The debate on evidence-based medicine (EBM) is at the tip of this 'iceberg'. Via systems thinking and complexity science EBM reveals crucial flaws and its reductionism entails an inability to appreciate (even tolerate) contrasting ideas and/or 'see a bigger picture'. An interdisciplinary approach provides insight into novel explanations. Thereafter, the conceptual barrier shifts to communication, a challenge which mandates attempts to steer the discourse by reframing the debate. METHOD: Interdisciplinary perspectives serve to illustrate a 'bigger picture'. Also, 'wicked' questions stimulate reflection, discern leverage points and dismantle resilient defences. Lastly, a proposal: exploring the value of 'glasses half full'. CONCLUSION: Some may realize that postmodern concepts behind compelling criticisms to EBM have already taken root, being shared by policymakers, practitioners and patients as well. [Accessed October 26, 2010]. Available at: http://www.ncbi.nlm.nih.gov/pubmed/16907688.

Isaac Asimov. The Relativity of Wrong. Originally published in The Skeptical Inquirer, Vol. 14 No. 1, Fall 1989, pages 35-44. Excerpt: "I received a letter the other day. It was handwritten in crabbed penmanship so that it was very difficult to read. Nevertheless, I tried to make it out just in case it might prove to be important. In the first sentence, the writer told me he was majoring in English literature, but felt he needed to teach me science. (I sighed a bit, for I knew very few English Lit majors who are equipped to teach me science, but I am very aware of the vast state of my ignorance and I am prepared to learn as much as I can from anyone, so I read on.) " [Accessed September 13, 2010]. Available at: http://chem.tufts.edu/AnswersInScience/RelativityofWrong.htm.

Ward A. The role of causal criteria in causal inferences: Bradford Hill's "aspects of association". Epidemiologic Perspectives & Innovations. 2009;6(1):2. Available at: http://www.epi-perspectives.com/content/6/1/2 [Accessed June 24, 2009]. Abstract: As noted by Wesley Salmon and many others, causal concepts are ubiquitous in every branch of theoretical science, in the practical disciplines and in everyday life. In the theoretical and practical sciences especially, people often base claims about causal relations on applications of statistical methods to data. However, the source and type of data place important constraints on the choice of statistical methods as well as on the warrant attributed to the causal claims based on the use of such methods. For example, much of the data used by people interested in making causal claims come from non-experimental, observational studies in which random allocations to treatment and control groups are not present. Thus, one of the most important problems in the social and health sciences concerns making justified causal inferences using non-experimental, observational data. In this paper, I examine one method of justifying such inferences that is especially widespread in epidemiology and the health sciences generally - the use of causal criteria. I argue that while the use of causal criteria is not appropriate for either deductive or inductive inferences, they do have an important role to play in inferences to the best explanation. As such, causal criteria, exemplified by what Bradford Hill referred to as "aspects of [statistical] associations", have an indispensible part to play in the goal of making justified causal claims.

Steven Novella, David H. Gorski. Science-Based Medicine Blog. Excerpt: "Science-Based Medicine is a new daily science blog dedicated to promoting the highest standards and traditions of science in medicine and health care. The mission of this blog is to scientifically examine medical and health topics of interest to the public. This includes reviewing newly published studies, examining dubious products and claims, providing much needed scientific balance to the often credulous health reporting, and exploring issues related to the regulation of scientific quality in medicine." [Accessed December 27, 2010]. Available at: http://www.sciencebasedmedicine.org/.

David Isaacs, Dominic Fitzgerald. Seven alternatives to evidence based medicine. BMJ. 1999;319(7225):1618. Excerpt: "We, two humble clinicians ever ready for advice and guidance, asked our colleagues what they would do if faced with a clinical problem for which there are no randomised controlled trials and no good evidence. We found ourselves faced with several personality based opinions, as would be expected in a teaching hospital. The personalities transcend the disciplines, with the exception of surgery, in which discipline transcends personality. We categorised their replies, on the basis of no evidence whatsoever, as follows." [Accessed October 26, 2010]. Available at: http://www.bmj.com/content/319/7225/1618.short.

Richard K. Riegelman. Studying a Study and Testing a Test: How to Read the Medical Evidence. 5th ed. Lippincott Williams & Wilkins; 2004.

Kuna Gupta, Jyotsna Gupta, Sukhdeep Singh. Surrogate Endpoints: How Reliable Are They? 2010. Excerpt: "Surrogate endpoints offer three main advantages to clinical studies: The study becomes simpler. Since surrogates are usually measures of symptoms or laboratory biomarkers, they make it easier to quantify comparisons. The study becomes shorter. It generally takes less time to see the effect of an intervention on a surrogate than on the final clinical outcome, especially if the surrogate marks an intermediate point in the disease process. The study becomes less expensive. Since the study duration is shorter, the cost decreases. Measurement of the surrogate may be less costly than measurement of the true outcome. In addition, waiting for a clinical outcome may involve more medical care for sicker patients." [Accessed May 3, 2010]. Available at: http://www.firstclinical.com/journal/2010/1005_Surrogate.pdf.

Jeremy Genovese. The Ten Percent Solution. Anatomy of an Education Myth. Excerpt: "For many years, versions of a claim that students remember “10% of what they read, 20% of what they hear, 30% of what they see, 50% of what they see and hear, and 90% of what they do” have been widely circulated among educators. The source of this claim, however, is unknown and its validity is questionable. It is an educational urban legend that suggests a willingness to accept assertions about instructional strategies without empirical support." [Accessed March 25, 2010]. Available at: http://www.skeptic.com/eskeptic/10-03-24/#feature.

Ann Evensen, Rob Sanson-Fisher, Catherine D'Este, Michael Fitzgerald. Trends in publications regarding evidence practice gaps: A literature review. Implementation Science. 2010;5(1):11. Abstract: "BACKGROUND: Well-designed trials of strategies to improve adherence to clinical practice guidelines are needed to close persistent evidence-practice gaps. We studied how the number of these trials is changing with time, and to what extent physicians are participating in such trials. METHODS: This is a literature-based study of trends in evidence-practice gap publications over 10 years and participation of clinicians in intervention trials to narrow evidence-practice gaps. We chose nine evidence-based guidelines and identified relevant publications in the PubMed database from January 1998 to December 2007. We coded these publications by study type (intervention versus non-intervention studies). We further subdivided intervention studies into those for clinicians and those for patients. Data were analyzed to determine if observed trends were statistically significant. RESULTS: We identified 1,151 publications that discussed evidence-practice gaps in nine topic areas. There were 169 intervention studies that were designed to improve adherence to well-established clinical guidelines, averaging 1.9 studies per year per topic area. Twenty-eight publications (34%; 95% CI: 24% - 45%) reported interventions intended for clinicians or health systems that met Effective Practice and Organization of Care (EPOC) criteria for adequate design. The median consent rate of physicians asked to participate in these well-designed studies was 60% (95% CI, 25% to 69%). CONCLUSIONS: We evaluated research publications for nine evidence-practice gaps, and identified small numbers of well-designed intervention trials and low rates of physician participation in these trials." [Accessed February 4, 2010]. Available at: http://www.implementationscience.com/content/5/1/11.

Statistics as Principled Argument. Abelson, R. P. (1995) Hillsdale, New Jersey: Lawrence Erlbaum Associates. ISBN: 0805805281. [BookFinder4U link]. Description: There is a wealth of wisdom in this book. Its theme is that statistics provides basic principles to argue (debate might be a nicer word) about scientific claims. In the first chapter, Dr. Abelson argues that a persuasive argument has to have MAGIC--Magnitude, Articulation, Generality, Interestingness, and Credibility. Then he describes probability and randomness, illustrates common fallacies about probability, and shows how these principles can be applied to research findings. Chapter 5, On Suspecting Fishiness, describes some wonderful examples of strange numbers that might indicate fraud. This chapter is especially valuable because it is so rarely covered. The remaining chapters describe the MAGIC components of a persuasive argument with frequent citations of real research. This book is more conceptual than computational, which fits in with one of Abelson's Laws: "Don't talk Greek if you don't know the English translation."

Survey of claims of no effect in abstracts of Cochrane reviews. Phil Alderson, Iain Chalmers. BMJ 2003: 326(7387); 475. [Medline] [Full text] [PDF]. Description: This article notes that claims about "negative" results need to be phrased cautiously. In a review of 989 Cochrane reviews, the authors found 240 poorly worded interpretations of no difference or no effect.

The Taxonomy of Logical Fallacies (Gary N. Curtis). Description: Understanding flaws in the process of advocating a particular viewpoint is an important component of critical thinking. You can understand these flaws better if you can ascribe them to a particular category. This website was last verified on 2007-07-23. URL: www.fallacyfiles.org/taxonomy.html

E. Ernst. Unconventional cancer therapies: What we need is rigorous research, not closed minds. Chest 2000: 117(2); 307-8. [Medline] [Full text] [PDF]. Description: This article notes the popularity of many complementary and alternative medicine techniques, but also warns of the lack of methodological rigor in many evaluations of these techniques. Rather than reject all of these techniques wholesale, the author suggests that rigorous research is needed.

Berger B, Steckelberg A, Meyer G, Kasper J, Mühlhauser I. Training of patient and consumer representatives in the basic competencies of evidence-based medicine: a feasibility study. BMC Med Educ. 2010 Feb 11;10:16.

Creative Commons License All of the material above this paragraph is licensed under a Creative Commons Attribution 3.0 United States License. This page was written by Steve Simon and was last modified on 2011-01-03. The material below this paragraph links to my old website, StATS. Although I wrote all of the material listed below, my ex-employer, Children's Mercy Hospital, has claimed copyright ownership of this material. The brief excerpts shown here are included under the fair use provisions of U.S. Copyright laws.

2008

24. Stats: The post-modern assault on evidence-based medicine, part 3 (January 7, 2008). I have volunteered to give a talk for a group of statisticians discussing an article I am preparing: "The post-modern assault on evidence-based medicine." I might give this talk on the first Monday in February or the first Monday in March. Here is a tentative abstract.

2007 

23. Stats: The post-modern assault on evidence-based medicine, part 2 (December 18, 2007). As mentioned in a December 7, 2007 weblog entry, I sent an email to Michael Shermer, editor of Skeptic Magazine. In it, I included a brief pitch for an article, "The post-modern assault on evidence-based medicine," and he gave me some encouragement to work on this. Guidelines for contributions are at http://www.skeptic.com/the_magazine/contribute.html and they accept articles of varying lengths (500 to 5000 words), though they prefer shorter articles. The current draft is already 2700 words with a lot left to be written, so I'll have to find a way to condense many of these thoughts. Here are some of the talking points in the article. I'm including headings in this draft, but they are just to help me organize my thoughts. This draft still needs a lot of work, but I wanted to have something to show to people.

22. Stats: The post-modern assault on evidence-based medicine (December 7, 2007). I sent a brief email to Michael Shermer, editor of Skeptic Magazine. In it, I included a brief pitch for an article. The title would be "The post-modern assault on evidence-based medicine."

21. Stats: Which expert should you believe? (August 3, 2007). There's a common saying in research circles that goes something like this: "For every PhD, there is an equal and opposite PhD." That saying is certainly true in my experience. For just about any scientific controversy, you can find an expert on either side of the issue. Quite often the experts on both sides know far more about the controversy than you or I will ever get a chance to know. So the question becomes: "Which expert should you believe?"

20. Stats: The research world is not black and white (July 7, 2007). I was asked to review a commentary about studies involving remote healing (which includes prayer studies as a subgroup). It is not available yet on the web, but I'll try to link to it when it becomes available. One of the documents that this commentary did refer to, however, is available on the web. Mistakes in Experimental Design and Interpretation. Peter Norvig, norvig.com/experiment-design.html. This is a good commentary, but it falls into a philosophical trap that plagues most of the discussion on how to critically evaluate research, including (I have to admit) some of my own commentaries.

19. Stats: The Seven Deadly Sins of Researchers (April 4, 2007). I was reading an article written by Steven Goodwin, The Seven Sins of Programmers, published in issue 17 of the Free Software Magazine, and thought it would be fun to use a similar theme in research. So here are the seven deadly sins of researchers. Lest I be accused of the sin of pride, let me admit that everyone, including myself, has been tempted by and has indulged in some of these sins at one point in their research career.

18. Stats: Ignore weak evidence at your own peril (March 13, 2007). I ran across an interesting article recently: Incorporating quality of evidence into decision analytic modeling. R. S. Braithwaite, M. S. Roberts, A. C. Justice. Ann Intern Med 2007: 146(2); 133-41. I have not yet read the full article, but the message seems to be that using evidence from weak data sources is better than ignoring it.

2006 

17. Stats: Ambiguous nature of the word "risk" (September 13, 2006). A recent discussion on the Evidence-Based Health email discussion group focused on the word "risk". It means harmful event (risks versus benefits) as well as a probability (risk of heart attack).

16. Stats: Why do we need research? (May 31, 2006). I help a lot of people perform research, so it's important to understand why research matters. We do research because good research drives out bad medical practices.

15. Stats: No tolerance for ambiguity (May 10, 2006). I was at a meeting tonight and put in a plug for my book, Statistical Evidence in Medical Trials, by mentioning that it was intended to help people understand the controversies and the seemingly contradictory research that appears in the medical journals. A woman talked to me afterwards and wanted to know what I thought about a particular author who had written about hormone replacement therapy. I had to defer any comments because I was unfamiliar with this particular author. She then informed me that she had taken hormone replacement therapy and it gave her breast cancer. Thankfully, the cancer has responded well to treatment, but I was struck by the certainty of her comment about how the estrogen supplements caused her cancer.

14. Stats: Contradictory research (March 30, 2006). The Washington Post published an article that gets to the heart of the difficulty with Evidence Based Medicine, Fat or Fiction? Is There a Link Between Dietary Fat and Cancer Risk? Why Two Big Studies Reached Different Conclusions. Lisa M. Schwartz, Steven Woloshin, H. Gilbert Welch, Published March 14, 2006 in The Washington Post, Page HE01. It starts out with a rather provocative statement, but one that is very hard to argue with, "The public is bombarded with messages about diet and cancer prevention. Unfortunately, the advice is pretty inconsistent. One day a diet prevents cancer, the next day it doesn't. In the early '90s, beta carotene (a vitamin A precursor present in fruits and vegetables) was said to prevent lung cancer. But several years later, headlines read, "Beta carotene pills yield no benefit" (The Post, 1996). And while people have been told for years to eat a high-fiber diet to reduce the risk of colon cancer, recently we were told "High-fiber diets are not anti-cancer miracle" (Montreal Gazette, 2005)."

13. P.Mean: Integrating internal and external evidence (created 2006-03-17). This article was originally published on my old website, but it seems to have disappeared. I'm reproducing it here with a few revisions. Evidence based medicine involves the combining of the best available evidence with your clinical knowledge and the patient's individual values. A good article that emphasizes how this should be done is on the web at: Evidence-based decision making--the six step approach. Franz Porzsolt, Andrea Ohletz, Anke Thim, David Gardner, Helmuth Ruatti, Horand Meier, Nicole Schlotz-Gorton, Laura Schrott. Evid Based Med 2003: 8(6); 165-166.

12. Stats: More on the Emily Rosa experiment (March 10, 2006). One of the more interesting research studies from an Evidence-Based Medicine perspective started out as a simple science fair project by a fourth grade student. Emily Rosa wanted to see if practitioners of Therapeutic Touch could detect the energy fields in a carefully controlled condition. The topic of this project was not too surprising, since her parents both worked for the QuackWatch website, but Emily came up with the idea entirely on her own. The science project received a lot of publicity and Emily was encouraged to publish her results in a medical journal. With the assistance of several adults, the publication, A close look at therapeutic touch. L. Rosa, E. Rosa, L. Sarner, S. Barrett. JAMA 1998: 279(13); 1005-10, appeared, giving Emily Rosa something nice to put on her resume when she applies to college. I'm still waiting for my first publication in an "A journal" like JAMA, so I am quite jealous.

2005 

11. Stats: A nice definition of anecdotal evidence (October 24, 2005). Robert Todd Carroll, author of The Skeptics Dictionary and creator of the website Skepdic.com, mentioned in a recent newsletter that he has updated his definition of anecdotal evidence. He points out that anecdotes are unreliable because they are "prone to contamination by beliefs, later experiences, feedback, selective attention to details, and so on." Although anecdotes are "scientifically worthless," their vivid details make them popular. But Dr. Carroll wisely does not totally rule out their use.

10. Stats: What is critical thinking? (May 17, 2005). A lot of people use the term "critical thinking" in an offhand way. Usually the writer who uses this term is trying to imply that anyone who disagrees with the writer's theory or belief is naive. Critical thinking is also invoked as an excuse to attack a particular theory or to promote "equal time" for a competing belief. Critical thinking, however, is much more complex than this. It involves more than just fault finding.

9. Stats: Effective communication about randomized clinical trials (February 22, 2005). The most recent issue of BMJ has a nice article evaluating a training program for health care professionals. This is an unusual thing to do; most of the time, training programs are just put together with the assumption that because intelligent people are designing the program, it must be effective. I'm just as guilty of this as anyone else, of course. The particular training class is of great interest to me, because it works on how health care professionals communicate to patients about randomized clinical trials.

8. Stats: How good is your intuition? (January 21, 2005). One definition of Evidence Based Medicine is "the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. Sackett et al 1996." In the same article, the authors point out that "Good doctors use both individual clinical expertise and the best available external evidence, and neither alone is enough. Without clinical expertise, practice risks becoming tyrannised by evidence, for even excellent external evidence may be inapplicable to or inappropriate for an individual patient. Without current best evidence, practice risks becoming rapidly out of date, to the detriment of patients." A recently published book by Malcolm Gladwell highlights the value of individual expertise, which you might also call clinical judgment or simply intuition.

2004 

7. Stats: Entrenched beliefs (December 1, 2004). A member of the Evidence Based Health listserv wrote about a workshop he taught on Evidence Based Medicine (EBM). It went pretty well, he thought, until he overheard a conversation during one of the breaks. A doctor was recommending an herbal remedy to a friend with the comment: "It cleared my problem up really quickly." This demonstrated to him how readily we still accept anecdotal evidence instead of EBM. I think it is just a fundamental characteristic of human nature to apply critical thinking skills to everything except a core set of beliefs that you hold near and dear to your heart.

6. Stats: Anecdotal information (November 2, 2004). Anecdotal information is problematic for several reasons, and I have highlighted some of these reasons in a speech I gave at the 2004 meeting of the Midwest Society for Pediatric Research. Catherine Fiorello highlights the weakness of the anecdotal argument "I was spanked as a child and I turned out okay." Such an observation ignores the need for a comparison group. Such a group would help answer the question "What would I have been like if I hadn't been spanked as a child?"

5. Stats: Craniosacral therapy (September 24, 2004). An educational email circulated at our hospital during Pain Awareness Week has two interesting questions that draw a sharp contrast between traditional medicine and alternative medicine.

4. Stats: Statistical nihilism (July 6, 2004). There's an enormous mistrust of statistics in the real world. To the extent that it makes people skeptical, that's good. To the extent it turns them cynical, that's bad. There's a viewpoint, championed by too many people, that statistics are worthless. I call this viewpoint statistical nihilism.

3. Stats: Overlooking one's own flaws (May 3, 2004). Can we ever be truly objective about ourselves? Maybe not. A recent article in Scientific American by Michael Shermer, The Enchanted Glass, talks about the tendency to see ourselves more positively than our peers. For example, when asked the probability that certain people will go to heaven, survey respondents rated Bill Clinton at 52%, Mother Teresa at 79%, and so forth. But these same respondents rated their own probability at 87%.

2000 

2. Stats: Flaws in a research paper (January 27, 2000). Dear Professor Mean: A small group I've been teaching has gotten extremely interested in how to decide when a paper has so many flaws that they completely invalidate it (sending it straight to the circular file).

1. Stats: Jargon in Statistics (January 27, 2000). Dear Professor Mean: I have to review a paper for journal club and I don't understand all the obscure statistical jargon that the authors use.

What now?

Browse other categories at this site

Browse through the most recent entries

Get help

This work is licensed under a Creative Commons Attribution 3.0 United States License. This page was written by Steve Simon.