P.Mean >> Category >> Adverse events in clinical trials (created 2007-06-18).

Date gap methods refer to the analysis of waiting times between discrete events. This category includes methods used to examine the accrual of patients in a clinical trial as well as the frequency of adverse events in a clinical trial or safety study. Also see Category: Accrual problems in clinical trials, Category: Control charts.


21. P.Mean: Calculating NNT for infection rates (created 2009-04-15). I will be leading an EBM teaching session for housestaff on an article about Methicillin-Resistant Staphylococcus aureus infection rates. I was planning to analyze it using the standard questions about therapy from the Users' Guides to the Medical Literature, but I was wondering if there should be any special considerations, given that therapy (MRSA screening & eradication) was given at a hospital-wide level. For example, the results are presented as incidence of nosocomial MRSA infections per person-years -- can I convert this to a percentage, to churn out a number needed to treat (NNT)? Or is this statistically forbidden? Please let me know of any journal articles you're aware of that address the issue of studies taken at a hospital- or population-based level.
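One common way to approach this question, assuming a roughly constant hazard, is to convert each incidence rate into a cumulative risk over a fixed time horizon (risk = 1 - exp(-rate * t)) and then take the reciprocal of the risk difference to get an NNT. The rates below are invented purely for illustration; they are not the figures from the article in question.

```python
import math

def risk_from_rate(rate_per_py, years=1.0):
    """Convert an incidence rate (events per person-year) to a
    cumulative risk over a time horizon, assuming a constant hazard."""
    return 1.0 - math.exp(-rate_per_py * years)

# Hypothetical rates: 0.05 infections per person-year without the
# screening program, 0.02 with it (made-up numbers for illustration).
risk_control = risk_from_rate(0.05)
risk_treated = risk_from_rate(0.02)

arr = risk_control - risk_treated  # absolute risk reduction over one year
nnt = 1.0 / arr
print(f"NNT over one year: {nnt:.1f}")
```

Note that the NNT produced this way is tied to the chosen time horizon, and that with hospital-wide interventions the clustering of patients within hospitals still needs to be kept in mind when interpreting it.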

Outside resources: (also available at http://www.zotero.org/groups/pmeanreferences/items/collection/2959813)

John McKay, Nick Bradley, Murray Lough, Paul Bowie. A review of significant events analysed in general practice: implications for the quality and safety of patient care. BMC Family Practice. 2009;10(1):61. Abstract: "BACKGROUND: Significant event analysis (SEA) is promoted as a team-based approach to enhancing patient safety through reflective learning. Evidence of SEA participation is required for appraisal and contractual purposes in UK general practice. A voluntary educational model in the west of Scotland enables general practitioners (GPs) and doctors-in-training to submit SEA reports for feedback from trained peers. We reviewed reports to identify the range of safety issues analysed, learning needs raised and actions taken by GP teams. METHOD: Content analysis of SEA reports submitted in an 18 month period between 2005 and 2007. RESULTS: 191 SEA reports were reviewed. 48 described patient harm (25.1%). A further 109 reports (57.1%) outlined circumstances that had the potential to cause patient harm. Individual 'error' was cited as the most common reason for event occurrence (32.5%). Learning opportunities were identified in 182 reports (95.3%) but were often non-specific professional issues not shared with the wider practice team. 154 SEA reports (80.1%) described actions taken to improve practice systems or professional behaviour. However, non-medical staff were less likely to be involved in the changes resulting from event analyses describing patient harm (p < 0.05). CONCLUSION: The study provides some evidence of the potential of SEA to improve healthcare quality and safety. If applied rigorously, GP teams and doctors in training can use the technique to investigate and learn from a wide variety of quality issues including those resulting in patient harm. This leads to reported change but it is unclear if such improvement is sustained." [Accessed October 11, 2009]. Available at: http://www.biomedcentral.com/1471-2296/10/61.

Laurent Argaud, Martin Cour, Olivier Martin, et al. Changeovers of vasoactive drug infusion pumps: impact of a quality improvement program. Critical Care. 2007;11(6):R133. Abstract: "BACKGROUND: Hemodynamic instability following the changeover of vasoactive infusion pump (CVIP) is a common problem in the intensive care unit. Several empiric methods are used to achieve CVIP. We hypothesized that the variation in these procedures could generate some morbidity. We sought to assess the effects of the standardization of practice, as a quality improvement program, on the CVIP-induced incidents. MATERIALS AND METHODS: We performed a prospective before-and-after intervention study including all adult patients with a diagnosis of cardiovascular failure who received a continuous infusion of vasoactive drugs or inotropic drugs. After a baseline preimplementation period (phase 1), a standardized 'quick change method' of CVIP using two syringe drivers was implemented in our intensive care unit (phase 2). Endpoints (rate and distribution of incidents: variations of systolic blood pressure >20 mmHg or heart rate >20 beats/min, and arrhythmias) were registered in both 3-month phases. RESULTS: We studied a total of 913 CVIP events (phase 1, 435 events; phase 2, 478 events) from 43 patients. Patient characteristics were not significantly different among phases, with a majority of the patients having septic shock. The frequency of incidents was significantly (P < 0.0001) reduced in phase 2 (5.9%, n = 28) versus phase 1 (17.8%, n = 78). This effect was observed whichever catecholamine was used. More than 98% of incidents were blood pressure variations, with a similar distribution of the nature of incidents in both phases. CONCLUSION: The present study illustrates that adverse events are common following CVIP, and illustrates the positive impact of a quality improvement program to enhance inpatient safety related to this current process of care." [Accessed November 30, 2009]. Available at: http://ccforum.com/content/11/6/R133.

Creative Commons License All of the material above this paragraph is licensed under a Creative Commons Attribution 3.0 United States License. This page was written by Steve Simon. Anything below this paragraph represents material from my old website, StATS. Until recently (June 2012), this material was available through Children's Mercy Hospital, but is no longer available there. Although I do not hold clear copyright for this material, I am reproducing it here as a service. See my old website page for more details.


20. Stats: What is an adverse event?

19. Stats: What is an adverse reaction? (adverse drug reaction)


18. Stats: How do you analyze safety data (January 22, 2008). Someone on the MedStats email discussion group asked how to analyze adverse event data. He noted that adverse event data are not among the primary or secondary outcome measures, and wondered whether it would be appropriate to provide a statistical analysis of these data. Adverse events (and safety data in general) represent a special type of analysis that does not fit neatly into the listing of primary/secondary outcomes. The main reason is that the range of possible adverse event categories is very broad, and it is not always possible to anticipate in advance which types of adverse events will be of greatest interest.


17. Stats: A new and simple approach for monitoring safety data (November 18, 2007). Many hospital administrators collect safety data, and for the most part these data are not analyzed well. The people who collect the data are well-meaning, but the simplistic tables and graphs that they use are typically unable to reveal important trends and patterns in the data. Much of the safety data represents a description of events (usually bad events) that occur. The question that always seemed to be on their minds was: is there a sudden surge of events that we need to take action on? Note: the file for this link (../07/DateGapIntroduction.html) is missing.

16. Stats: The pros and cons of control charts versus data mining (November 17, 2007). In a talk I gave in December 2006, I highlighted how in the analysis of adverse event data, control charts can augment more complex statistical tools like data mining. Here's a summary of the pros and cons of using control charts. Note: the file for this link (../07/ProsControlCharts.html) is missing.

15. Stats: Monitoring adverse events during peritoneal dialysis (November 15, 2007). One of the doctors I was working with had an interesting data set examining adverse events in patients with peritoneal dialysis. These patients start treatment with peritoneal dialysis on a specific day and are followed until they stop this treatment. There were two adverse events examined: exit site infections, and peritonitis. Although I ran several complex analyses on this data set, I thought it might be useful to look at a simpler approach to monitoring the frequency of adverse events using control charts.
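One simple version of that control-chart approach is a t-chart for time between events. The sketch below uses invented gap data, not the dialysis data set: Nelson's 1/3.6 power transformation makes days-between-events approximately normal, after which ordinary individuals-chart limits apply.

```python
# Hypothetical gaps (days) between successive adverse events
# (made-up numbers, not the peritoneal dialysis data).
gaps = [12, 30, 7, 45, 22, 60, 5, 18, 33, 9]

# Nelson's transformation: gap times raised to the 1/3.6 power are
# approximately normal, so individuals-chart limits can be used.
y = [g ** (1 / 3.6) for g in gaps]
mean_y = sum(y) / len(y)

# Average moving range, then the usual 2.66 * MR-bar limits.
mr = [abs(a - b) for a, b in zip(y[1:], y)]
mean_mr = sum(mr) / len(mr)
ucl = mean_y + 2.66 * mean_mr
lcl = max(mean_y - 2.66 * mean_mr, 0.0)

# A transformed gap below the LCL means events are arriving unusually
# fast -- the kind of surge a safety monitor would investigate.
signals = [g for g, t in zip(gaps, y) if t < lcl]
```

With time-between-events charts, short gaps (points near or below the lower limit) are the danger signal, which is the reverse of the intuition from ordinary count charts.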

14. Stats: NNH talk update (November 12, 2007). Last year, I gave a talk for PharmaIQ about continuous monitoring of the number needed to harm. I want to update this talk for a second audience in December.

13. Stats: Tracking adverse events during kidney biopsy, Part 2 (April 5, 2007). This is a major revision of an earlier weblog entry. I have been helping a colleague who is interested in monitoring the safety of kidney biopsy events. He was kind enough to let me use his data set on my web pages in order to illustrate some new methods for monitoring adverse events. This data set allows you to see some examples of the use of control charts to track adverse events. Here is the raw data.

12. Stats: Tracking adverse events during kidney biopsy (March 14, 2007). I have been helping a colleague who is interested in monitoring the safety of kidney biopsy events. He was kind enough to let me use his data set on my web pages in order to illustrate some new methods for monitoring adverse events. This data set allows you to see some examples of the use of control charts to track adverse events.


11. Stats: Two talks for PharmaIQ (September 19, 2006). I may be giving a couple of talks for PharmaIQ, a division of the International Quality & Productivity Center (IQPC). The first has the title "Signal Detection Strategies for Paediatric Treatments" and the second has the title "Control charts for continuous monitoring of the number needed to harm."

10. Stats: Continuous monitoring of the number needed to harm (September 2, 2006). The continuing review of clinical trials has to address "good news" issues. Does one arm of the study show substantially better efficacy? Does one arm of the study have a significantly better safety profile? There are rigorous and well accepted approaches for determining partway through a clinical trial whether one arm has a greater proportion of cured patients or a smaller proportion of harmed patients. Continuing review also has to address "bad news" issues. Is the study falling behind schedule on its planned enrollment rates? Are patients dropping out of the study at an alarming rate? Are certain adverse drug reactions occurring at an unexpected rate? The analysis of "bad news" issues is less well developed. Often decisions about these issues are based on subjective opinions and ad hoc rules. Statistical process control charts and Bayesian statistical methods offer an approach for the ongoing review of rates that are not tied directly to an efficacy or safety comparison.
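As one illustration of the Bayesian side of that approach (with invented interim numbers and an arbitrary 10% threshold), a beta-binomial update turns "are patients dropping out at an alarming rate?" into a posterior probability:

```python
import random

random.seed(1)

# Hypothetical interim data: 7 dropouts among 60 enrolled patients.
dropouts, enrolled = 7, 60

# Beta(1, 1) prior on the dropout rate; a binomial likelihood gives a
# Beta(1 + dropouts, 1 + enrolled - dropouts) posterior.
a = 1 + dropouts
b = 1 + enrolled - dropouts

# Monte Carlo estimate of Pr(dropout rate > 10%), the "alarming
# rate" question, using only the standard library.
draws = [random.betavariate(a, b) for _ in range(100_000)]
p_exceeds = sum(d > 0.10 for d in draws) / len(draws)
```

This posterior probability can be recomputed at every data look without the multiple-testing penalties that complicate repeated frequentist tests, which is part of its appeal for ongoing review.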

9. Stats: Possible sources of funding for my grant (July 6, 2006). The NIH has a Request for Application (RFA) titled Research on Research Integrity (R01). The full text of this announcement is on the web at grants1.nih.gov/grants/guide/rfa-files/RFA-NR-07-001.html. The goal of this RFA is to foster empirical research on research integrity. The sponsoring programs are particularly interested in research that will provide clear evidence (rates of occurrence and impacts) of potential problem areas as well as societal, organizational, group, and individual factors that affect, both positively and negatively, integrity in research. Applications must have clear relevance to biomedical, behavioral health sciences, and health services research.

8. Stats: Applications of the CUSUM chart (June 20, 2006). I am interested in investigating the use of CUSUM charts in monitoring accrual rates, drop out rates, and adverse event rates in a clinical trial. Some references which I might cite in a literature review are listed here.
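The basic mechanics of an upper tabular CUSUM are easy to sketch. The monthly counts, reference value k, and decision interval h below are all made up for illustration; in practice k and h would be chosen from the in-control rate and the desired average run length.

```python
# Hypothetical monthly adverse-event counts from a trial (made up).
counts = [2, 1, 3, 2, 1, 4, 5, 6, 5, 7]

# Tabular upper CUSUM: accumulate deviations above the reference
# value k and signal when the sum crosses the decision interval h.
k, h = 3.0, 5.0
s, signal_month = 0.0, None
for month, x in enumerate(counts, start=1):
    s = max(0.0, s + x - k)
    if signal_month is None and s > h:
        signal_month = month

print(f"First signal at month: {signal_month}")
```

Because the CUSUM accumulates small deviations over time, it detects a sustained modest increase in the event rate sooner than a Shewhart chart, which only reacts to individual extreme points.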

7. Stats: Seminar on control charts and adverse events (June 5, 2006). I took some time to expand my May 30, 2005 weblog entry on accrual rates and developed a seminar which I will present to the Statistics journal club at KUMC today. The handout for this talk combines that weblog entry with a brief tutorial on quality control. I received some valuable feedback.

6. Stats: Upcoming talks about control charts (May 25, 2006). I am working on some ideas for a grant to use control charts to track adverse events in clinical trials. I also envision the possibility of using control charts as a warning of a sudden influx of events that may be an early indicator of a bioterrorism event. I have not fleshed out these ideas very completely yet, but hope to do so soon in the weblog. While reviewing the upcoming talks at the Joint Statistics Meeting in Seattle, August 2006, I noticed several interesting talks that appear to be related to some of the things I might be working on.

5. Stats: Data mining and drug safety (May 4, 2006). I am very interested in safety issues, especially in the continuing review/interim analysis of clinical trials. It turns out that S-plus is targeting drug safety as a particularly important application of its data mining modules. Two recent web seminars addressed this topic.

4. Stats: I want to write a grant (April 25, 2006). I have been mulling over the idea of writing a research grant where I am the primary investigator. I have helped lots of other people write grants, but have never before taken the step of writing a grant myself. I have a rough idea of the form that this grant would take, but I want to use this weblog to flesh out these ideas and articulate them more clearly.

3. Stats: Reporting serious adverse events (updated February 3, 2006). The FDA held a meeting on March 21, 2005 soliciting opinions about how adverse events should be reported to Institutional Review Boards (IRBs). Some of the testimony provided to FDA can be found on the FDA website and in various spots on the Internet, mostly in PDF format. This is something I have been interested in, but have not had the time to work up the details. It seems to me that any system for reporting adverse events has to have information about the accrual of patients into the study. Here's a simple graph that shows the entry and exit times in a research study. It's not exactly a study of adverse events reports per se, but the example is close enough that I can use it to illustrate the general concepts.


2. Stats: Reporting of adverse events (August 5, 2005). Most Institutional Review Boards (IRBs) have difficulty coping with the volume of adverse events that study sponsors report to them. The FDA held a public meeting about this issue recently, and some written responses are available as PDF files at the following location: www.fda.gov/ohrms/dockets/dockets/05n0038/mostrecent.htm.

1. Stats: Control charts for monitoring mortality rates (February 11, 2005). One of the trickiest problems in medicine is trying to identify whether an unusual trend in mortality rates is an indication of an incompetent physician or, worse, a physician who is actively killing patients.
