
Effects of Two Commercial Electronic Prescribing Systems on Prescribing Error Rates in Hospital In-Patients: A Before and After Study

  • Johanna I. Westbrook ,

    J.Westbrook@unsw.edu.au

    Affiliation Centre for Health Systems and Safety Research, Australian Institute of Health Innovation, Faculty of Medicine, University of New South Wales, Sydney, Australia

  • Margaret Reckmann,

    Affiliation Centre for Health Systems and Safety Research, Australian Institute of Health Innovation, Faculty of Medicine, University of New South Wales, Sydney, Australia

  • Ling Li,

    Affiliation Centre for Health Systems and Safety Research, Australian Institute of Health Innovation, Faculty of Medicine, University of New South Wales, Sydney, Australia

  • William B. Runciman,

    Affiliation School of Psychology, Social Work & Social Policy, University of South Australia, Adelaide, Australia

  • Rosemary Burke,

    Affiliation Pharmacy Department, Concord Repatriation General Hospital, Sydney, Australia

  • Connie Lo,

    Current address: Information Management and Technology Division, Sydney South West Area Health Service, Sydney, Australia

    Affiliation Centre for Health Systems and Safety Research, Australian Institute of Health Innovation, Faculty of Medicine, University of New South Wales, Sydney, Australia

  • Melissa T. Baysari,

    Affiliation Australian Institute of Health Innovation, Faculty of Medicine, University of New South Wales, Sydney, Australia

  • Jeffrey Braithwaite,

    Affiliation Centre for Clinical Governance Research, Australian Institute of Health Innovation, Faculty of Medicine, University of New South Wales, Sydney, Australia

  • Richard O. Day

    Affiliation Department of Clinical Pharmacology and Toxicology, St Vincent's Hospital, Sydney, and Faculty of Medicine, University of New South Wales, Sydney, Australia

Abstract

Background

Considerable investments are being made in commercial electronic prescribing systems (e-prescribing) in many countries. Few studies have measured or evaluated their effectiveness at reducing prescribing error rates, and interactions between system design and errors are not well understood, despite increasing concerns regarding new errors associated with system use. This study evaluated the effectiveness of two commercial e-prescribing systems in reducing prescribing error rates and their propensities for introducing new types of error.

Methods and Results

We conducted a before and after study involving medication chart audit of 3,291 admissions (1,923 at baseline and 1,368 post e-prescribing system) at two Australian teaching hospitals. In Hospital A, the Cerner Millennium e-prescribing system was implemented on one ward, and three wards, which did not receive the e-prescribing system, acted as controls. In Hospital B, the iSoft MedChart system was implemented on two wards and we compared before and after error rates. Procedural (e.g., unclear and incomplete prescribing orders) and clinical (e.g., wrong dose, wrong drug) errors were identified. Prescribing error rates per admission and per 100 patient days; rates of serious errors (5-point severity scale, those ≥3 were categorised as serious) by hospital and study period; and rates and categories of postintervention “system-related” errors (where system functionality or design contributed to the error) were calculated. Use of an e-prescribing system was associated with a statistically significant reduction in error rates in all three intervention wards (respectively reductions of 66.1% [95% CI 53.9%–78.3%]; 57.5% [33.8%–81.2%]; and 60.5% [48.5%–72.4%]). The use of the system resulted in a decline in errors at Hospital A from 6.25 per admission (95% CI 5.23–7.28) to 2.12 (95% CI 1.71–2.54; p<0.0001) and at Hospital B from 3.62 (95% CI 3.30–3.93) to 1.46 (95% CI 1.20–1.73; p<0.0001). This decrease was driven by a large reduction in unclear, illegal, and incomplete orders. The Hospital A control wards experienced no significant change (respectively −12.8% [95% CI −41.1% to 15.5%]; −11.3% [−40.1% to 17.5%]; −20.1% [−52.2% to 12.4%]). There was limited change in clinical error rates, but serious errors decreased by 44% (0.25 per admission to 0.14; p = 0.0002) across the intervention wards compared to the control wards (17% reduction; 0.30–0.25; p = 0.40). Both hospitals experienced system-related errors (0.73 and 0.51 per admission), which accounted for 35% of postsystem errors in the intervention wards; each system was associated with different types of system-related errors.

Conclusions

Implementation of these commercial e-prescribing systems resulted in statistically significant reductions in prescribing error rates. Reductions in clinical errors were limited in the absence of substantial decision support, but a statistically significant decline in serious errors was observed. System-related errors require close attention as they are frequent, but are potentially remediable by system redesign and user training. Limitations included a lack of control wards at Hospital B and an inability to randomize wards to the intervention.

Please see later in the article for the Editors' Summary

Editors' Summary

Background

Medication errors—for example, prescribing the wrong drug or giving a drug by the wrong route—frequently occur in health care settings and are responsible for thousands of deaths every year. Until recently, medicines were prescribed and dispensed using systems based on hand-written scripts. In hospitals, for example, physicians wrote orders for medications directly onto a medication chart, which was then used by the nursing staff to give drugs to their patients. However, drugs are now increasingly being prescribed using electronic prescribing (e-prescribing) systems. With these systems, prescribers use a computer and order medications for their patients with the help of a drug information database and menu items, free text boxes, and prewritten orders for specific conditions (so-called passive decision support). The system reviews the patient's medication and known allergy list and alerts the physician to any potential problems, including drug interactions (active decision support). Then after the physician has responded to these alerts, the order is transmitted electronically to the pharmacy and/or the nursing staff who administer the prescription.

Why Was This Study Done?

By avoiding the need for physicians to write out prescriptions and by providing active and passive decision support, e-prescribing has the potential to reduce medication errors. But, even though many countries are investing in expensive commercial e-prescribing systems, few studies have evaluated the effects of these systems on prescribing error rates. Moreover, little is known about the interactions between system design and errors despite fears that e-prescribing might introduce new errors. In this study, the researchers analyze prescribing error rates in hospital in-patients before and after the implementation of two commercial e-prescribing systems.

What Did the Researchers Do and Find?

The researchers examined medication charts for procedural errors (unclear, incomplete, or illegal orders) and for clinical errors (for example, wrong drug or dose) at two Australian hospitals before and after the introduction of commercial e-prescribing systems. At Hospital A, the Cerner Millennium e-prescribing system was introduced on one ward; three other wards acted as controls. At Hospital B, the researchers compared the error rates on two wards before and after the introduction of the iSoft MedChart e-prescribing system. The introduction of an e-prescribing system was associated with a substantial reduction in error rates in the three intervention wards; error rates on the control wards did not change significantly during the study. At Hospital A, medication errors declined from 6.25 to 2.12 per admission after the introduction of e-prescribing whereas at Hospital B, they declined from 3.62 to 1.46 per admission. This reduction in error rates was mainly driven by a reduction in procedural error rates and there was only a limited change in overall clinical error rates. Notably, however, the rate of serious errors decreased across the intervention wards from 0.25 to 0.14 per admission (a 44% reduction), whereas the serious error rate only decreased by 17% in the control wards during the study. Finally, system-related errors (for example, selection of an inappropriate drug located on a drop-down menu next to a likely drug selection) accounted for 35% of errors in the intervention wards after the implementation of e-prescribing.

What Do These Findings Mean?

These findings show that the implementation of these two e-prescribing systems markedly reduced hospital in-patient prescribing error rates, mainly by reducing the number of incomplete, illegal, or unclear medication orders. The limited decision support built into both the e-prescribing systems used here may explain the limited reduction in clinical error rates but, importantly, both e-prescribing systems reduced serious medication errors. Finally, the high rate of system-related errors recorded in this study is worrying but is potentially remediable by system redesign and user training. Because this was a “real-world” study, it was not possible to choose the intervention wards randomly. Moreover, there was no control ward at Hospital B, and the wards included in the study had very different specialties. These and other aspects of the study design may limit the generalizability of these findings, which need to be confirmed and extended in additional studies. Even so, these findings provide persuasive evidence of the current and potential ability of commercial e-prescribing systems to reduce prescribing errors in hospital in-patients provided these systems are continually monitored and refined to improve their performance.

Additional Information

Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001164.

Introduction

It is well over a decade since electronic prescribing systems were first shown to reduce medication errors [1],[2], demonstrating their potential to address this long-standing, costly problem [3]–[5]. However, recent reviews [6]–[9] reveal that many questions remain unanswered regarding the extent to which systems deliver improvements in medication safety in different settings, important contextual and work practice factors associated with effectiveness, and the cost-benefit of systems. To date, evidence of effectiveness rests largely on the experiences of a few hospitals using home-grown systems.

A central question is whether commercial e-prescribing systems can deliver the same benefits as home-grown systems. There is little work comparing commercial systems, or examining the interactions between system design and error rates and types, despite increasing concerns regarding new errors associated with their use [8],[10],[11]. Implementation of these organisation-wide clinical information systems is complex [12],[13], with a multitude of work process and cultural factors [14]–[16] affecting system adoption and use and driving both intended and unintended outcomes [10],[11],[17],[18].

In 2011, the US Agency for Healthcare Research and Quality [8] released a review of the effects of health information technology on medication management and drew attention to the need for research that evaluates systems in everyday settings and allows comparisons between systems and study sites. Our aim was to evaluate two commercial e-prescribing systems with respect to their effectiveness in reducing prescribing errors and their propensities for introducing new types of error.

Methods

Sample and Data Collection

A before and after study design was implemented at two major teaching hospitals in Sydney, Australia. Hospital A had 400 beds and Hospital B 326 beds. At Hospital A data were collected from four wards pre and post e-prescribing system implementation (two geriatric, a renal/vascular, and a respiratory ward). One ward (geriatric) was assigned the intervention and the remaining three wards acted as controls. At Hospital B the intervention was implemented on two wards (psychiatry and cardiology), and error rates were evaluated in the pre and post e-prescribing implementation periods. Figure 1 outlines the study design.

A daily review of all inpatient medication charts (n = 3,291) was conducted by three pharmacists independent of the hospitals for at least two months pre- and postintervention, with the exception of the psychiatric ward (one month pre and post). Data collection at Hospital A was conducted during May–August 2006 (pre) and May–August 2008 (28 wk post e-prescribing system), and at Hospital B during November 2007–March 2008 (pre) and March 2008–February 2010 (16 and 10 wk post system introduction). Data collection was dictated by the hospitals' e-prescribing system implementations, which experienced several delays. Human research ethics approval was received from both hospitals and the University of Sydney.

Error Classification

Errors were classified as procedural (three categories) or clinical (14 categories) errors (Table S1 lists error definitions). Prescribing errors identified in the intervention wards in the postperiod were additionally reviewed to assess whether or not they were "system-related" (see definitions in Table S1). System-related errors were defined as errors where system functionality or design contributed to the error and there was little possibility that another cause, such as a lack of knowledge, produced the error. For example, an order for an inappropriate drug located on a drop-down menu next to a likely drug selection was flagged as a system-related error. Thus all system-related errors underwent dual classification in terms of (1) their manifestation according to one of the 17 procedural or clinical error categories and (2) the system-related mechanism deemed to be associated with the error. In this paper, system-related errors are reported according to their clinical manifestation and are listed in a separate table, as strategies for their prevention are likely to relate to system redesign or improved functionality.

Inter-rater reliability tests were conducted at regular intervals and compared pharmacist reviewers' agreement with respect to number and type of errors. These tests involved double audit of 10% of all admissions and produced kappa scores of 0.82–0.84. In the last stage of the research, 1,097 admissions (33% of the total sample) were re-reviewed in order to ensure consistency of data collection between the early and later data collection periods. Two pharmacists independently rated the actual or potential severity of errors (Box 1); disagreement was settled by consensus with input from a clinical pharmacologist (ROD) when required. Severity review committees involving an emergency physician, hospital pharmacists, and nurses from both hospitals were also given subsets of errors to classify during the study.

Box 1. Severity Assessment Code [47]

Minor errors

1. Insignificant: Incident is likely to have little or no effect on the patient.

2. Minor: Incident is likely to lead to an increase in the level of care, e.g., review, investigations, or referral to another clinician.

Serious errors

3. Moderate: Incident is likely to lead to a permanent reduction in bodily functioning, increased length of stay, or surgical intervention.

4. Major: Incident is likely to lead to a major permanent loss of function.

5. Serious: Incident is likely to lead to death.

Hospital Prescribing and the Interventions

In the preintervention period all wards used paper medication charts in which the prescribing doctors wrote orders. These charts were then used by nursing staff as the medication administration charts. There was no intermediate transcription step between a prescriber's order and the final medication chart entry, as is the case in some countries.

Ward pharmacy services were provided on weekdays but not on weekends. The research pharmacists' daily review of the medication charts may have occurred either before or after the ward pharmacists had completed their rounds. All interventions (corrections) made by the ward pharmacists in patients' medication charts were identifiable and noted (i.e., errors detected by the ward pharmacists were included in the study).

Interventions consisted of the implementation of two e-prescribing systems (Cerner Millennium PowerOrders and iSoft MedChart) integrated with each hospital's computerised order entry system. Prescribers were required to use the systems to prescribe medications in the postperiod.

Hospital A implemented the Cerner system, where prescribing is mainly by menu selection of pre-prepared order sentences that are triggered upon drug selection and that can be modified by the prescriber. "Care sets" allow a group of related orders to be selected and ordered simultaneously with a single click. Unlisted medications and prescribing order comments need to be generated by the prescriber. In the Cerner e-prescribing system, active decision support at the time of the study consisted of allergy alerts and drug–drug interaction alerts set at the most severe level (using the Multum database). Medication orders could not be completed if the patient's allergy status was not recorded. If prescribers wished to override an alert, they needed to select an override reason from a drop-down menu or enter a free-text comment. Passive decision support included a drug information database, the highly structured order sentences, and predefined order sets such as the palliative care set. Further passive decision support gave prescribers a diabetic medication view, an anticoagulant view, and an analgesic view, which integrated patients' laboratory results and drug doses.

Hospital B implemented the iSoft MedChart system. Prescribing could be completed in three ways following selection of a drug: (1) long-hand, where prescribing information is entered via drop-down lists or free text boxes; (2) “quicklists,” or prewritten orders; and (3) “protocols,” where common combinations of prewritten orders can be selected.

MedChart included alerts for allergy checking, pregnancy warnings, therapeutic duplication, some dose-range checking, and a number of local decision-support rules (such as drug and therapeutics committee decisions and antibiotic stewardship guidelines). Drug–drug interaction alerts were not operational during the study. All alerts allowed the prescriber to continue with the order. Alerts were all “pop-ups” on the screen. Approximately half of the alerts were for information only; prescribers were not required to take action and just had to close the alert box. Others required the prescriber to respond by ticking an “override” box. For approximately 10% of the alerts prescribers were required to enter a free-text reason for overriding the alert in order to proceed. Drug information references were available online as passive decision support.

During the intervention periods both sites used paper orders for a small subset of medications. At Hospital A, heparin infusions and patient-controlled analgesia remained on paper charts.

At Hospital B, orders for intravenous (IV) fluids, IV infusions (e.g., heparin infusion), variable dose regimes (such as titrated or reducing doses), insulins, oral anticoagulants (warfarin), chemotherapy, parenteral nutrition, and epidural or patient-controlled analgesia remained on paper charts. The prescriber was required to order an electronic prompt to signal the administration times for these drugs, but the actual drug orders were located on a paper chart. Errors related to these electronic prompts were included in the postperiod data collection.

Statistical Analysis

The error data were linked with the patient admission data, which matched the study periods. Rates of prescribing errors per admission and per 100 patient days were calculated for each error type and category, by period (pre/post), group (intervention/control), hospital, and ward. Serious errors (graded ≥3) (Box 1) were examined by group, error type, and period. System-related error rates per admission were examined for both systems. The 95% CIs for the average error rates per admission and per 100 patient days were calculated using the large-sample approximation of mean ± 1.96 × standard error. For the pre- and postanalysis, two-sample t-tests were used to compare baseline data with post e-prescribing system data, with the level of significance set at 5%. The 95% CIs for percentage changes were calculated using Fieller's method [19]. All statistical analyses were carried out with SAS 9.2 [20].
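To illustrate these calculations, the minimal sketch below shows a large-sample 95% CI for a mean error rate and a Fieller-type CI for the percentage change between periods. It is written in Python for readability (the study itself was analysed in SAS); the function names, example data, and the exact form of the Fieller interval are our own assumptions for illustration, not the authors' code.

```python
import math

def mean_and_se(values):
    """Sample mean and its standard error."""
    n = len(values)
    mean = sum(values) / n
    var = sum((x - mean) ** 2 for x in values) / (n - 1)
    return mean, math.sqrt(var / n)

def rate_ci(values, z=1.96):
    """Large-sample 95% CI for a mean error rate: mean +/- z * SE."""
    mean, se = mean_and_se(values)
    return mean, mean - z * se, mean + z * se

def fieller_ratio_ci(a, se_a, b, se_b, z=1.96):
    """Fieller CI for the ratio a/b of two independent means (post/pre).

    Assumes the denominator mean is well away from zero (z**2 * se_b**2 < b**2).
    """
    disc = (a * b) ** 2 - (a ** 2 - z ** 2 * se_a ** 2) * (b ** 2 - z ** 2 * se_b ** 2)
    denom = b ** 2 - z ** 2 * se_b ** 2
    return (a * b - z * math.sqrt(disc)) / denom, (a * b + z * math.sqrt(disc)) / denom

def percent_change_ci(post, pre, z=1.96):
    """Percentage change (post vs. pre) with a CI derived from the ratio post/pre."""
    post_mean, post_se = mean_and_se(post)
    pre_mean, pre_se = mean_and_se(pre)
    lo, hi = fieller_ratio_ci(post_mean, post_se, pre_mean, pre_se, z)
    return (post_mean / pre_mean - 1) * 100, (lo - 1) * 100, (hi - 1) * 100

# Illustrative use with made-up per-admission error counts for two periods.
pre = [7, 5, 9, 4, 6, 8, 5, 7]
post = [2, 3, 1, 2, 4, 2, 1, 3]
print(rate_ci(pre))                  # baseline rate per admission and its 95% CI
print(rate_ci(post))                 # postintervention rate and its 95% CI
print(percent_change_ci(post, pre))  # percentage change and its 95% CI
```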

Results

Incidence, Type, and Severity of Prescribing Errors at Baseline

The 1,923 admissions across the six wards reviewed at baseline revealed 11,168 prescribing errors, an average of 5.8 per admission. The majority (n = 8,225; 73.6%; 4.28 per admission) were procedural (e.g., unclear, incomplete, or illegal orders), with the remaining 26.4% (n = 2,943; 1.53 per admission) comprising clinical errors. Hospital A had higher procedural and clinical error rates at baseline than Hospital B (Table 1). The rates of serious errors were comparable (respectively, 0.28 per admission; 95% CI 0.22–0.35; n = 296 versus 0.26 per admission; 95% CI 0.21–0.31; n = 226).

Table 1. Summary of baseline prescribing error rates by hospital.

https://doi.org/10.1371/journal.pmed.1001164.t001

Error rates for individual wards within hospitals were similar at baseline (Tables 2 and 3). The four most frequent clinical error types in each ward were also broadly similar. At baseline, duplicate therapy and wrong dose/volume errors appeared among the four most frequent errors on all wards. "Legal/procedural" was the most frequent procedural error category on all wards.

Table 2. Prescribing error rates per admission by hospital, ward type, error category, and error type at baseline.

https://doi.org/10.1371/journal.pmed.1001164.t002

Changes in Prescribing Error Rates Following E-prescribing System Implementation

Total error rates fell significantly (p<0.0001) in each intervention ward following e-prescribing system implementation: by 66.1% (95% CI 53.9%–78.3%) in intervention ward 1; 57.5% (33.8%–81.2%) in intervention ward 2; and 60.5% (48.5%–72.4%) in intervention ward 3. The three Hospital A control wards experienced small decreases in prescribing error rates per admission, none of which were statistically significant (respectively −12.8% [95% CI −41.1% to 15.5%] in control ward X; −11.3% [−40.1% to 17.5%] in control ward Y; and −20.1% [−52.2% to 12.4%] in control ward Z). Table 3 reports error rates in the pre- and postperiods for all wards.

A marked reduction in procedural errors drove this decline. In the intervention ward at Hospital A the procedural error rate fell by 90.2% (from 4.89 per admission to 0.48), and at Hospital B by 93.6% (from 2.66 per admission to 0.17). Hospital A had significantly higher procedural error rates at baseline, and a difference between the sites persisted in the postperiod. The rates of clinical prescribing errors did not change significantly, with the exception of intervention ward 2, where there was a significant increase in the clinical error rate: from 0.99 to 1.70 per admission (p = 0.04) (Table 3).

Table 3. Comparison of prescribing error rates pre- and postelectronic prescribing system implementation.

https://doi.org/10.1371/journal.pmed.1001164.t003

Prescribing error rates per 100 patient days confirmed a significant decline in total error rates. As Table 3 shows, intervention ward 1 experienced a 66.5% decline in error rates, from 51.6 to 17.3 per 100 patient days; intervention ward 2, a 74.1% reduction; and intervention ward 3, a 64.1% reduction.

Changes in the Rates of Serious Prescribing Errors Following E-prescribing System Implementation

We examined the number of serious errors (i.e., severity ≥3) per admission in the intervention wards and the Hospital A control wards in each period. There was a significant 44% reduction in the serious error rate (p = 0.0002) in the intervention wards following system implementation (Table 4). The Hospital A control wards experienced no significant change (16.7% reduction; p = 0.4).

Table 4. Serious errors per admission by study group and period.

https://doi.org/10.1371/journal.pmed.1001164.t004

Changes in Categories of Prescribing Errors Post E-prescribing System Implementation Excluding System-Related Errors

We examined changes in the categories of errors in the intervention wards and Hospital A control wards with system-related errors removed (Table 5), and then examined the ways in which system-related errors manifested themselves at each hospital (Table 6). In the postperiod there were substantial changes in the procedural error rates in the intervention wards, with unclear, incomplete, and legal/procedural orders almost eliminated (90.8% reduction for Hospital A and 93.6% for Hospital B, p<0.0001), while there was little change in these categories in the Hospital A control wards (Table 5).

Table 5. Prescribing errors by type, category, hospital, and period for the intervention and control wards.

https://doi.org/10.1371/journal.pmed.1001164.t005

Table 6. The manifestation of system-related prescribing error rates by type and hospital.

https://doi.org/10.1371/journal.pmed.1001164.t006

The intervention wards also experienced greater changes in the rates of specific categories of prescribing errors compared to the Hospital A control wards. In the control wards (at Hospital A) the most notable changes were a doubling in the rates of wrong timing errors (from 0.12 to 0.26 per admission) and drug–drug interaction errors (0.06 to 0.12). However, there were also considerable reductions in the rates of duplicate therapy errors (0.37 to 0.23) and wrong dose/volume errors (0.43 to 0.25 per admission) (Table 5).

We examined changes in rates of error category by hospital to assess any potential impact of specific system functionality (Table 5). Hospital B experienced a considerably larger increase in the rate of timing errors (0.03 errors/admission to 0.26) than the intervention ward (0.3 pre and post) or control wards (0.12 to 0.26) at Hospital A.

There was some evidence of the effect of the limited decision support in the e-prescribing system at Hospital B, with a marked decline in duplicate therapy error rates (0.20–0.06 per admission; 70% reduction) compared to both the Hospital A control wards (0.37–0.23; 38% reduction) and the intervention ward at Hospital A (0.32 pre and post; no change). Allergy alerts were enabled at both sites but there was little change in allergy error rates, which remained low in both periods (Table 5).

High-level drug–drug interaction alerts were enabled at Hospital A, but there was no evidence of a significant decrease in these errors (0.05–0.07). Hospital A had marked reductions in wrong strength errors (0.27–0.01; 96% reduction) and wrong route errors (0.11–0.01; 91%) in the intervention ward. Hospital B, in addition to the decline in duplicate therapy errors, experienced the largest declines in rates of wrong strength (0.06–0.01; 83%) and "drug not prescribed" errors (0.16–0.08; 50%) (Table 5).

System-Related Prescribing Errors by Hospital

Each of the hospitals experienced prescribing errors associated with the use of the new systems. Combined, the intervention wards experienced 0.57 system-related errors per admission, which accounted for 34.8% (358/1,029) of all prescribing errors in these wards in the postperiod.

Nearly all system-related prescribing errors manifested as clinical errors (99%, n = 353). The clinical error rate (including system-related errors) for the intervention wards increased from 1.02 (n = 1,077) to 1.39 (n = 872) per admission following e-prescribing system implementation. When system-related clinical errors were removed, this rate fell to 0.83 (n = 519) in the postperiod, representing a significant reduction (p = 0.03) in the clinical error rate. Thus, system-related errors were a major reason why the e-prescribing systems did not deliver a significant reduction in the overall rate of clinical errors (Table 3).
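For clarity, the short sketch below re-derives the adjusted rate from the counts and rates reported in this paragraph; the implied number of admissions is a back-calculation added for illustration, not a figure reported by the authors.

```python
# Minimal sketch, using only the figures reported above.
post_clinical_errors = 872       # clinical errors in intervention wards, postperiod
system_related = 353             # of which were system-related
rate_with_system_errors = 1.39   # reported clinical errors per admission

# Implied number of postperiod admissions (back-calculated, not reported directly).
admissions = post_clinical_errors / rate_with_system_errors

# Clinical error rate with system-related errors excluded.
rate_without = (post_clinical_errors - system_related) / admissions
print(round(rate_without, 2))    # ~0.83, matching the rate reported in the text
```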

The rate and categories of system-related errors differed by hospital. At Hospital A these errors occurred at a rate of 0.73 (95% CI 0.53–0.92) per admission; on the two wards at Hospital B the rates were 0.75 (95% CI 0.44–1.06) and 0.48 (95% CI 0.36–0.60) per admission. A low percentage of these system-related errors were serious (3%; n = 11).

Table 6 shows the distribution of “system-related” errors across error categories by hospital. Hospital A had higher rates of seven error types compared to Hospital B. System-related errors that resulted in wrong strength errors were markedly higher at Hospital B (0.23 per admission versus 0.03 at Hospital A).

Discussion

Both commercial e-prescribing systems were associated with statistically significant reductions of more than 55% in total prescribing error rates, driven by substantial reductions in incomplete, illegal, and unclear orders. While there was little change in the rate of clinical errors in the intervention wards (and an increase in one intervention ward), the rate of serious prescribing errors decreased by 44%, compared with a decline of 17% in the Hospital A control wards. Thus, while these e-prescribing systems with limited decision support were not associated with a substantial reduction in the rate of clinical errors, they were associated with a reduction in some of the most potentially serious errors.

Other studies have evaluated home-grown e-prescribing systems. For example, Bates et al. [2] reported a 55% reduction in serious nonintercepted medication errors (prescribing, dispensing, and administration errors) following the introduction of a home-grown system, although, as they had no control wards, the change attributable to the e-prescribing system could not be determined. Major difficulties in comparing effectiveness studies of e-prescribing systems have been consistently highlighted [8],[9],[21].

Although both systems in our study had only limited decision support enabled, there was some evidence that this was effective in reducing some error types. For example, the MedChart system had duplicate therapy alerts and was associated with a fall in these error rates, consistent with other studies [22]–[28] of decision-support interventions. However, designing effective organisation-wide decision support is challenging [29]–[36]. Additional research at one of the study sites has shown, for example, that the effectiveness of decision support is compromised during ward rounds, because the senior clinicians making the prescribing decisions instruct junior clinicians on the round to enter the orders. The alerts were thus not seen by the decision-makers, and the doctors entering the orders ignored most alerts received during this process [37]. Responses to decision support alerts outside ward rounds, particularly at night by junior doctors, may be quite different. There remains much to understand about how decision support can be integrated into clinical work processes and lead to safer and more effective prescribing.

An important starting point is to obtain baseline data on the incidence and severity of prescribing errors to facilitate the design of targeted decision support. Few organisations have such data, and prescribers are rarely provided with feedback regarding errors. Behaviour change is unlikely in such situations. e-prescribing systems offer enormous capacity to deliver real-time feedback on prescribing behaviour; this should be examined together with efforts to embed decision support and alerts.

The increases in wrong timing errors found in the control wards in Hospital A are likely to be attributable to a new paper-based standard national inpatient medication chart, which was introduced in the postperiod. This new chart required specific timing information from prescribers and compliance was modest, an effect noted at other Australian hospitals [38]. Timing errors also increased substantially in the intervention wards at Hospital B. These errors are likely to be associated with the design of the e-prescribing system, which required prescribers to modify the default administration times when necessary. For example, with an order for metformin (500 mg tablet, dose 500 mg oral in the morning), the timing defaults to 0800, and the local rule in the e-prescribing system states that prescribers should change this default time to 0700 (breakfast time at the hospital) because the drug is an oral hypoglycaemic and should be taken with food. Timing errors were logged when prescribers failed to change such default times. This situation was in contrast to the e-prescribing system at Hospital A where administration times were linked to specific order sentences. For example, the order sentence for the metformin example above would be: metformin 500 mg, oral, tab, mane (morning) after food. The “mane after food” defaults the time to 0730 (breakfast time at the hospital), thus avoiding a potential timing error.

There was a high rate of system-related errors at both hospitals, accounting for 35% of prescribing errors in the intervention wards in the postperiod. Without these system-related errors, the overall clinical error rate in the intervention wards would have declined significantly in the postperiod. The types of system-related errors varied considerably by hospital, likely due to differences in system design and in the structuring of prescribing tasks. Work is underway to examine the relationships between specific system functionalities and types of system-related errors. For example, the disparity in the rates of system-related errors resulting in "wrong strength" errors at Hospital B (0.23 per admission) compared to Hospital A (0.03), and in the rate of "wrong route" errors at Hospital A (0.16 per admission) compared to almost none at Hospital B, suggest specific system features that predispose to these error types. Such findings provide a focus for examining the redesign of system features and/or the training of prescribers, and more generally the degree to which such systems reflect ways of working within these clinical environments.

While several studies [10],[11],[39] have described types of system-related errors, few have systematically classified them and quantified their occurrence or severity. Their high volume indicates that they should be targeted; our experience suggests that a high proportion is amenable to remediation through minor system redesign, such as listing the most frequently used option first on drop-down menus, or creating prestructured orders to reduce the need for users to construct complex order sentences. Where system changes cannot be made, areas for targeted training can be identified [40]. This illustrates the importance of identifying what errors are occurring, and when, and highlights the improvements that can be achieved once these types of errors are reduced. Hospitals must allocate sufficient resources to detect and respond to such issues as they arise [41].

Beyond answering the central question regarding the effectiveness of e-prescribing systems in reducing errors, the study has produced comprehensive data on prescribing errors in hospitals in the absence of these systems, with longitudinal data across three control wards in Hospital A. The findings showed considerable similarities in error rates at baseline despite the very different clinical areas represented, from geriatrics to cardiac surgery and psychiatry. This suggests that the underlying mechanisms of prescribing errors are generic rather than specialty specific. There was no substantial change in error rates in the control wards over an average of 2 y, notwithstanding the fact that medication errors were targeted by a range of interventions during this time, including the introduction of a standard national inpatient medication chart designed to reduce errors [38]. These findings confirm how difficult it is to reduce medication error rates and are consistent with the findings of the EPOC Cochrane collaboration series, which demonstrate the relative ineffectiveness of conventional initiatives in changing clinical practice [42]. They also highlight the value of the e-prescribing systems in achieving the outcomes they did.

The complexity of undertaking "real-world" studies should not be underestimated [43]–[45]. The research was subject to substantial delays in system implementation at both sites. The postimplementation data collection periods were different at the two sites and it is possible that this time difference influenced the results. We consulted with clinical and other staff at the sites to seek advice about the required "settling in" period prior to postintervention data collection. At Hospital B, which had the shorter postintervention periods, the system had already been implemented on several other wards and thus many problems had been dealt with in these earlier implementations. There is limited evidence from other studies to clearly identify the effects of time from intervention to outcome measurements and this should be a consideration for future studies.

We were unable to randomise our intervention wards, and because of a change in implementation plans we were unable to obtain a control ward at Hospital B. The availability of three control wards at Hospital A proved to be a major strength, given potential confounders such as other safety initiatives that may have affected prescribing error rates. We had no control over the selection of the intervention wards. At Hospital A, intervention ward 1 was the first ward in the hospital to use the system, and one factor in ward selection was a willing clinician leader. At Hospital B several wards had the e-prescribing system implemented before the study intervention wards. The study had a wide range of specialties represented, which posed an additional challenge for comparison, but the baseline prescribing error rates by type across the wards suggest that specialty was not strongly associated with any particular error type. Some wards, such as the psychiatry ward, would have had a narrower range of drugs prescribed than other wards. We are confident of the quality of our data because of the extensive inter-rater reliability testing applied throughout the study.

This study provides persuasive evidence of the current and potential value of commercial e-prescribing systems to significantly and substantially reduce prescribing errors in hospital in-patients. However, as other studies have demonstrated [40],[43],[44], success in achieving this outcome depends upon many contextual and organisational factors, and multimethod studies are of great value for understanding the mechanisms by which e-prescribing systems affect prescribing behaviours [12]. Our qualitative studies at the study sites revealed that clinicians' greatest concern regarding the introduction of e-prescribing systems was the associated work practice changes [46], and qualitative and observational studies may best identify the nature of these changes. Experience has shown that embedding systems into everyday practice is a long-term project [13]. Importantly, the results highlight the need to continually monitor and refine the design of these systems to increase their capacity to improve both the safety and appropriateness of medication use in hospitals.

Supporting Information

Table S1.

Definitions of prescribing error categories used in the study.

https://doi.org/10.1371/journal.pmed.1001164.s002

(DOCX)

Acknowledgments

We thank the hospital sites and staff for their support in conducting this study.

Author Contributions

Conceived and designed the experiments: JIW ROD JB WBR. Analyzed the data: JIW LL. Wrote the first draft of the manuscript: JIW. Contributed to the writing of the manuscript: JIW MR LL WBR MTB JB ROD. ICMJE criteria for authorship read and met: JIW MR LL WBR RB CL MTB JB ROD. Agree with manuscript results and conclusions: JIW MR LL WBR RB CL MTB JB ROD. Collected the data: MR CL. Provided technical advice regarding the systems being evaluated: RB MTB CL.

References

  1. Bates D, Teich J, Lee J, Segar D, Kuperman G, et al. (1999) The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc 6: 313–321.
  2. Bates D, Leape L, Cullen D, Laird N, Peterson L, et al. (1998) Effect of computerized order entry and a team intervention on prevention of serious medication errors. JAMA 280: 1311–1316.
  3. Institute of Medicine (2007) Preventing medication errors. Washington (D.C.): National Academy Press.
  4. Roughead E (1999) The nature and extent of drug-related hospitalisations in Australia. J Qual Clin Pract 19: 19–22.
  5. Westbrook J, Woods A, Rob MI, Dunsmuir WTM, Day R (2010) Association of interruptions with increased risk and severity of medication administration errors. Arch Intern Med 170: 683–690.
  6. Black A, Car J, Pagliari C, Anandan C, Cresswell K, et al. (2011) The impact of eHealth on the quality and safety of health care: a systematic overview. PLoS Med 8: e1000387.
  7. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, et al. (2006) Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 144: 742–752.
  8. McKibbon K, Lokker C, Handler S, Dolovich LR, Holbrook A, et al. (2011) Enabling medication management through health information technology. Rockville (Maryland): Agency for Healthcare Research and Quality. pp. 1–925.
  9. Reckmann M, Westbrook J, Koh Y, Lo C, Day R (2009) Does computerized provider order entry reduce prescribing errors for hospital inpatients? A systematic review. J Am Med Inform Assoc 16: 613–623.
  10. Ash JS, Berg M, Coiera E (2004) Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. J Am Med Inform Assoc 11: 104–112.
  11. Koppel R, Metlay J, Cohen A, Abaluck B, Localio A, et al. (2005) Role of computerized physician order entry systems in facilitating medication errors. JAMA 293: 1197–1203.
  12. Westbrook J, Braithwaite J, Georgiou A, Ampt A, Creswick N, et al. (2007) Multi-method evaluation of information and communication technologies in health in the context of wicked problems and socio-technical theory. J Am Med Inform Assoc 14: 746–755.
  13. Day R, Roffe D, Richardson K, Baysari M, Brennan N, et al. (2011) Implementing electronic medication management at an Australian teaching hospital. Med J Aust 195: 498–502.
  14. Ash JS, Stavri PZ, Kuperman GJ (2003) A consensus statement on considerations for a successful CPOE implementation. J Am Med Inform Assoc 10: 229–234.
  15. Callen J, Braithwaite J, Westbrook J (2007) Cultures in hospitals and their influence on attitudes to, and satisfaction with, the use of clinical information systems. Soc Sci Med 65: 635–639.
  16. Callen J, Braithwaite J, Westbrook J (2008) Context implementation model: a model for assisting clinical information system implementation. J Am Med Inform Assoc 15: 255–262.
  17. Han Y, Carcillo J, Venkataraman S, Clarke R, Watson R, et al. (2005) Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics 116: 1506–1512.
  18. Lo C, Burke R, Westbrook J (2010) Comparison of pharmacists' work patterns on hospital wards with and without an electronic medication management system (eMMS). J Pharm Pract Res 40: 108–112.
  19. Fieller E (1954) Some problems in interval estimation. J R Stat Soc Series B Stat Methodol 16: 175–185.
  20. SAS Institute. SAS Proprietary Software Release 9.2. Cary (North Carolina): SAS Institute.
  21. Ammenwerth E, Schnell-Inderst P, Machan C, Siebert U (2008) The effect of electronic prescribing on medication errors and adverse drug events: a systematic review. J Am Med Inform Assoc 15: 585–600.
  22. Strom B, Schinnar R, Bilker W, Hennessy S, Leanard C, et al. (2010) Randomized clinical trial of a customized electronic alert requiring an affirmative response compared to a control group receiving a commercial passive CPOE alert: NSAID-warfarin co-prescribing as a test case. J Am Med Inform Assoc 17: 411–415.
  23. Terrell D, Perkins K, Dexter A, Hiui P, Callahan C, et al. (2009) Computerized decision support to reduce potentially inappropriate prescribing to older emergency department patients: a randomized controlled trial. J Am Geriatr Soc 57: 1388–1394.
  24. Bennett JW, Glasziou P, Del Mar C, De Looze F (2003) A computerised prescribing decision support system to improve patient adherence with prescribing. A randomised controlled trial. Aust Fam Physician 32: 667–671.
  25. Garg A, Adhikari N, McDonald H (2005) Effects of computerized clinical decision support systems on practitioner performance and patient outcomes. A systematic review. JAMA 293: 1223–1238.
  26. Hunt D, Haynes B, Hanna S, Smith K (1998) Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA 280: 1339–1346.
  27. Khorasani R (2001) Computerized physician order entry and decision support: improving the quality of care. Radiographics 21: 1015–1018.
  28. Wolfstadt JI, Gurwitz JH, Field TS, Lee M, Kalkar S, et al. (2008) The effect of computerized physician order entry with clinical decision support on the rates of adverse drug events: a systematic review. J Gen Intern Med 23: 451–458.
  29. Sittig DF, Krall MA, Dykstra RH, Russell A, Chin HL (2006) A survey of factors affecting clinician acceptance of clinical decision support. BMC Med Inform Decis Mak 6: 6.
  30. Bates DW, Kuperman GJ, Wang S, Gandhi T, Kittler A, et al. (2003) Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc 10: 523–530.
  31. Baysari M, Westbrook J, Day R (2011) Narrative review: errors in selecting medicines for prescription and the role of computerized decision support. Drug Saf 34: 289–298.
  32. Bobb AM, Payne TH, Gross PA (2007) Viewpoint: controversies surrounding use of order sets for clinical decision support in computerized provider order entry. J Am Med Inform Assoc 14: 41–47.
  33. Coiera E, Westbrook J, Wyatt J (2006) The safety and quality of decision support systems. Methods Inf Med 45: S20–25.
  34. Colombet I, Bura-Riviere A, Chatila R, Chatellier G, Durieux P (2004) Personalized versus non-personalized computerized decision support system to increase therapeutic quality control of oral anticoagulant therapy: an alternating time series analysis. BMC Health Serv Res 4: 27.
  35. Elwyn G, Legare F, van der Weijden T, Edwards A, May C (2008) Arduous implementation: does the Normalisation Process Model explain why it's so difficult to embed decision support technologies for patients in routine clinical practice. Implement Sci 3: 57.
  36. Osheroff JA, Pifer EA, Teich JM, Sittig DF, Jenders RA (2005) Improving outcomes with clinical decision support: an implementer's guide. Chicago: Healthcare Information and Management Systems Society.
  37. Baysari M, Westbrook J, Richardson K, Day R (2011) The influence of computerized decision support on prescribing during ward-rounds: are the decision-makers targeted? J Am Med Inform Assoc 18: 754–759.
  38. Coombes I, Stowasser D, Reid C, Mitchell C (2009) Impact of a standard medication chart on prescribing errors: a before and after audit. Qual Saf Health Care 18: 478–485.
  39. Savage I, Cornford T, Klecun E, Barber N, Clifford S, et al. (2010) Medication errors with electronic prescribing (eP): two views of the same picture. BMC Health Serv Res 10:
  40. Cornford T, Savage I, Jani Y, Dean Franklin B, Barber N, et al. (2010) Learning lessons from electronic prescribing implementations in secondary care. In: Safran C, Reti S, Marin H, editors. 13th World Congress on Medical Informatics. Amsterdam: IOS Press.
  41. Catwell L, Sheikh A (2009) Evaluating eHealth interventions: the need for continuous systematic evaluation. PLoS Med 6: e1000126.
  42. Cochrane Effective Practice and Organisation of Care (EPOC) Group. Ottawa, Canada: Cochrane Collaboration.
  43. Greenhalgh T, Stramer K, Bratan T, Byrne E, Russell J, et al. (2010) The devil's in the detail: final report of the independent evaluation of the Summary Care Record and HealthSpace programs. London: University College London.
  44. Greenhalgh T, Russell J (2010) Why do evaluations of eHealth programs fail? An alternative set of guiding principles. PLoS Med 7: e1000360.
  45. Lilford R, Foster J, Pringle M (2009) Evaluating eHealth: how to make evaluation more methodologically robust. PLoS Med 6: e1000186.
  46. Georgiou A, Ampt A, Creswick N, Westbrook J, Braithwaite J (2009) Computerized provider order entry: what are health professionals concerned about? A qualitative study in an Australian hospital. Int J Med Inform 78: 60–70.
  47. New South Wales Health Department (2005) Severity Assessment Code (SAC) Matrix. Sydney: NSW Health.