France Boucher, Sylvie Schryve, André Bonnici, Carl Desparois, François Paradis, Julie Racicot, Diem Vo, Benoît Lemire, and Linda Vaillant

To cite: Boucher F, Schryve S, Bonnici A, Desparois C, Paradis F, Racicot J, et al. Piloting a hospital pharmacy performance model in the face of province-wide implementation of activity-based funding in Quebec health care centres. Can J Hosp Pharm. 2024;77(4):e3590. doi: 10.4212/cjhp.3590
ABSTRACT
Background
In the face of province-wide implementation of activity-based hospital funding in Quebec, a need arose to effectively measure pharmacists’ contributions along the patient care trajectory and to enable pharmacy benchmarking using valid performance indicators.
Objectives
A 3-phase project was initiated to measure the performance and impact of pharmacists and pharmacy departments. Phases 2 and 3, described here, focused on gradually implementing, in various health care centres, the priority indicators selected in phase 1.
Methods
The project involved multiple committees overseeing the implementation, data collection, analysis, and documentation of 18 performance indicators. Specific tools were developed to facilitate data collection and encourage pharmacists’ participation. A feedback survey was used to document pharmacists’ experiences.
Results
Substantial data were gathered over 3 years (2017 to 2020), involving 358 pharmacists from 6 health care centres. The overall contribution rate to the daily data collection from front-line pharmacists was 55%. The feedback survey revealed that, of the various communication tools used to promote the project, in-person events were better perceived by the front-line pharmacists than online tools. Of the 183 respondents to the survey, most (94%, n = 172) believed it was important to collect data to document pharmacists’ activities, and 82% (n = 150) saw the project as relevant to the upcoming activity-based funding system.
Conclusions
Despite challenges, progress was made in defining relevant indicators, adjusting the list generated during phase 1, and reaching a consensus on 16 indicators. Stakeholders expressed interest, emphasizing the importance of documenting pharmacists’ activities. The project has laid the foundation for demonstrating the value of pharmacists along the patient care trajectory and measuring pharmacy departments’ performance. However, more integrated technological solutions are needed for province-wide implementation.
KEYWORDS: performance indicators, hospital pharmacy benchmarking, quality improvement, pharmacy performance measurement
INTRODUCTION

Performance measurement in health care is the subject of ongoing research.1 Large-scale performance measurement helps identify areas for improvement, streamline processes, and ultimately enhance patient outcomes.2,3 It also helps in comparing practices across different health care centres and against national and international standards, fostering the sharing of knowledge to drive best practices.4 The surge in activity-based funding for hospitals underscores the significance of comprehensive activity measurement, challenging administrators and various caregivers to justify their value in contributing to patient outcomes.5–7 To tackle this issue, many authors in the United States and Europe have proposed various sets of indicators.8–14 These initiatives reflect differing priorities and methods across countries. In Canada, no nationwide system exists for benchmarking hospital pharmacy practices. About a decade ago, the Canadian Society of Hospital Pharmacists introduced clinical pharmacy key performance indicators (cpKPIs) to evaluate pharmaceutical care, although these do not cover all aspects of pharmacy practice.3
Continuous, large-scale measurement of hospital pharmacy practice has proven difficult due to the heterogeneity of data sources, the lack of specific, automated collection tools, and the intensity of resources required.15–17 As such, most attempts at performing measurements have focused on a narrower scope of activities, such as medication error rates, adherence to clinical guidelines, or the efficient management of pharmaceutical resources.18–24 Many efforts have centred on aspects of workload or technical efficiency, whereas a patient-centric approach might work better to justify the added value of pharmacists.25–28
In the face of province-wide implementation of activity-based hospital funding in Quebec, a need emerged to better quantify hospital pharmacy activity, impact, and performance. With valid and relevant performance indicators representing all areas of professional practice, pharmacy departments would be better equipped to measure and benchmark their performance, demonstrate their value throughout the patient care trajectory, and provide input for an activity-based funding system. In a recent study,29 our team used a consultative approach to develop a performance framework and associated indicators, emphasizing the need for balanced, easily documented indicators. The pilot project described here was undertaken to validate these indicators, which are proposed for use in hospital activity-based funding systems.
METHODS

The initiative was led by a steering committee of 3 senior pharmacists (F.B., F.P., and L.V.) and 3 health care data analysis consultants (including S.S.), with support from an external advisory committee, consisting of a former hospital Chief Executive Officer (CEO), 5 Chief Pharmacy Officers, a quality assurance coordinator, and a regulatory authority observer, all selected for their expertise and strategic roles in the health care system. The central steering committee met monthly to review the project’s progress and to direct the work according to the issues and problems raised. At the beginning and the end of each of the 3 phases, the advisory committee gave its opinion on the direction to be taken.
Phase 1, described in our previous article,29 ended with the steering committee’s selection of a limited set of indicators that can be used to demonstrate the benefits of pharmacy activities along the patient care continuum. The development of a performance framework, a literature review, and an extensive consultation process supported the choice of indicators. Selection rounds ensured representation of 5 framework dimensions (appropriateness, quality and safety, efficiency, innovation and continuous improvement, and organizational structure) and 5 professional roles (pharmaceutical care, drug distribution, education of trainees and colleagues, research, and management and professional matters). Of the 150 indicators initially identified during phase 1, 24 were selected. With the experimental phases in mind, 13 indicators were prioritized for their low measurement complexity, their immediate relevance to managers, their potential to concretely improve the performance assessment tools in use, and their suitability for benchmarking between institutions (Table 1; see column for indicators at start of the pilot project). The prioritized indicators covered all 5 professional roles but only 4 of the 5 dimensions of the framework: indicators in the organizational structure dimension had received high complexity scores and a long-term priority level, so they could not be used in the pilot project.
TABLE 1 Performance Indicators Tested in the Pilot Study
Quebec’s health care centres coordinate and deliver services in designated regions, encompassing facilities with specific mandates such as acute or long-term care. Each health care centre has one integrated pharmacy department, which serves all of the centre’s facilities, from one or multiple sites. From among all health care centres, those suitable to serve as pilot sites were identified through interest expressed by department heads and anticipated resource availability, with representation of academic and non-academic centres, diverse clienteles (short-term, long-term, and ambulatory), and a variety of pharmacy information systems. The participating health care centres treated this project as a quality improvement initiative and therefore did not seek exemptions from their respective research ethics boards. Each health care centre’s CEO signed a commitment and confidentiality agreement. Results from the pilot sites were anonymized to comply with confidentiality requirements.
Phases 2 and 3 are referred to jointly as the “pilot project”, which was set up according to the following principles:
On-site working committees were created to deploy the indicators at their respective sites. Each working committee comprised the head of the pharmacy department, a project manager, and a clinical–administrative information systems manager.
With the support of the on-site working committees, the steering committee carried out the following activities sequentially and repeatedly throughout the deployment, by indicator bundle and pilot site:
Figure 1 displays the timeline of the major deployment steps of the pilot project, which ran from June 2017 to October 2020. Phase 2 marked the start of experimentation, with the first bundle of indicators (Q1, Q2, A3, Q3, E2, and I2; see Table 1) being deployed in certain facilities at the first pilot site. The first bundle was then deployed in a few facilities at 2 other pilot sites. Subsequently, bundles 1 and 2 (A2, A1, A4, and Q4) were deployed in all short- and long-term facilities at the 3 sites. This second phase of the overall project lasted 16 months. Phase 3 saw the addition of indicators from bundle 3 (A7, E7, E8, E5, Q10, E1, E3, and I1) and, more importantly, aimed to validate or question the previously adopted solutions and to experiment on a larger scale, over 2 years and across 6 pilot sites.
FIGURE 1 Timeline of the major deployment steps of the pilot project. The measurement of indicators was deployed in 3 bundles at the pilot sites. Green refers to activities at pilot site 1; blue to activities at pilot sites 2 and 3; and purple to activities at pilot sites 4, 5, and 6.
Master indicator fact sheets included the definition of each indicator, data sources, calculation and interpretation methods, and update frequency. For 9 of the 18 indicators tested, manual data collection by pharmacists was required. A data collection tool was built using a stand-alone web survey form (SurveyGizmo) with embedded definitions and instructions, allowing pharmacists to report their activities on a daily basis. Pharmacist participation remained voluntary throughout the whole pilot project. With the support of the advisory committee, the steering committee set an ambitious, arbitrarily chosen target of 70% for the daily participation rate at each pilot site, calculated as the average number of forms submitted divided by the average number of expected pharmacist shifts during weekdays. During phase 3, individual participation was disclosed to the on-site working committees, so that they could follow up with less assiduous pharmacists. The actual data collected by each pharmacist remained confidential, including for pharmacy administration, in accordance with the partnership agreements.
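As a minimal sketch, the participation-rate calculation described above can be expressed as follows. This is not the project’s actual tooling, and all figures are invented for illustration:

```python
# Minimal sketch of the daily participation-rate calculation described above:
# average forms submitted divided by average expected weekday pharmacist shifts.
# All figures are hypothetical; the project's actual tooling is not shown.

def participation_rate(forms_per_day: list[int], shifts_per_day: list[int]) -> float:
    """Return the participation rate as a percentage."""
    avg_forms = sum(forms_per_day) / len(forms_per_day)
    avg_shifts = sum(shifts_per_day) / len(shifts_per_day)
    return 100 * avg_forms / avg_shifts

# One hypothetical week (Monday to Friday) at a pilot site:
forms = [22, 25, 20, 24, 19]   # forms submitted each weekday
shifts = [40, 40, 40, 40, 40]  # expected pharmacist shifts each weekday
rate = participation_rate(forms, shifts)  # 55.0, below the 70% target
```

A site-level rate below the target would be the kind of signal passed to the on-site working committees for follow-up.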
Other specific tools were developed and updated as the pilot project progressed. For example, paper checklists enabled pharmacists to collate requested data over the course of each day. The on-site working committees received monthly updates, with tables and graphs summarizing pharmacists’ participation. In response to the participation rate observed during phase 2, the steering committee implemented various communication tools to engage pharmacists in data collection during phase 3. These included a dedicated webpage, an animated video, an interactive instructional document, a quarterly newsletter, systematic weekly email reminders, access to the history of self-submitted forms and personal statistics, and in-person meetings to promote the project and obtain feedback.
Quarterly data extraction and compilation were completed by the on-site working committees using pharmacy and financial information systems, along with accident–incident registries and some ad hoc data collection based on forms provided by the steering committee. All results were integrated into a master reporting template. Upon receiving the data, the steering committee supplemented them with information from the web-based daily data collection tool. The data were also validated against the health care centres’ financial and statistical reports to ensure quality. Consistency checks were performed by comparing results across the 6 pilot sites, and corrections were applied with support from the on-site working committees when necessary. Standardized tables and graphs facilitated clear presentation and analysis. During phase 2, each pilot site had access only to its own results, to comply with confidentiality agreements. Data were analyzed by facility and care sector where this could be done without compromising respondents’ anonymity. Only the steering committee could perform benchmarking. Following requests from the pilot sites and amendment of project agreements, consolidated data were disclosed to the on-site working committees in phase 3, which allowed the pilot sites to benchmark their respective results.
After daily data collection ended, all pharmacists at the pilot sites, regardless of their participation in data collection, were invited to complete an online survey concerning their experience of the project. The purpose of this survey was to gather relevant information to complete the analysis and support recommendations at the conclusion of the project.
RESULTS

The pilot project encompassed 358 hospital pharmacists across 6 pilot sites, representing different types of hospitals (with various missions, including academic and nonacademic) and different health regions (Table 2). Data collection lasted 1 to 3 years, depending on the pilot site. By the end of the project, the overall participation rate by pharmacists in daily data collection stood at 55%, falling short of the preset target of 70%. Notably, only a small percentage of pharmacists (7%, n = 25) never submitted a form. Analysis of the participation rate over time showed a downward trend at several pilot sites, reflecting a gradual loss of momentum.
TABLE 2 Characteristics of Pilot Sites
Data extraction from health care centres’ information systems posed several difficulties owing to the number of systems involved and their lack of integration. In most cases, the extracted files had to be manipulated before the data could be integrated into the indicator calculation tool. Additionally, substantial modifications to the definitions of indicators and their component variables were required. For instance, the indicator reporting on admission medication reconciliation (indicator Q1) was initially measured only for inpatients, including those in long-term care. However, feedback from the pilot sites highlighted the need to extend the scope of this indicator to encompass medication reconciliations conducted during hemato-oncology outpatient visits, to align with Required Organizational Practices issued by Accreditation Canada for ambulatory care visits when medication management is a major component of care.30
Two indicators were withdrawn at the end of phase 2. Despite its significance for care continuity, the indicator measuring medication reconciliation at discharge (indicator Q2) was withdrawn after nearly a year of the pilot due to challenges with data collection. According to the project definitions established in phase 1, this indicator was intended to account only for medication reconciliation directly involving a pharmacist at some point in the process. However, the tools and definitions used at the pilot sites did not enable differentiation between medication reconciliation exercises that did and did not involve a pharmacist. The second indicator that was withdrawn related to hours worked on drug utilization reviews (indicator A4). It was deemed to concern too few hospitals and too few worked hours to justify manual data collection by pharmacists.
Feedback during phase 2 revealed that several front-line pharmacists felt that the measured activities did not reflect a substantial portion of their practice. To address this issue and increase participation, 5 new indicators measuring pharmaceutical care and drug distribution were added in phase 3. During phase 1, indicator E5, which reports on drug therapy problems resolved by pharmacists, was initially assigned a lower priority due to the anticipated intensity of data collection and thus was not included at the start of the pilot project. However, it was reinstated to better highlight the clinical activities of front-line pharmacists. Among the new indicators, 4 required manual data collection by pharmacists (indicators A7, E5, E7, and E8). One of these (indicator E8) was calculated using data already recorded by pharmacists for other indicators (Table 1).
Table 3 shows a few selected results from the 16 final indicators. Results must be interpreted with caution, as they are subject to many contextual factors across the various pilot sites, such as staff shortages and the level of participation from front-line pharmacists. Figure 2 shows an example of results for the indicator reporting the distribution of hours worked by pharmacists in each professional role (indicator A2). Time devoted to pharmaceutical care ranged from 30% to 55% across pilot sites, which could reflect available human resources rather than deliberate prioritization of certain activities by administrators.
TABLE 3 Selected Results from Testing Indicators during the Pilot Project
FIGURE 2 Average distribution of hours worked by pharmacists according to professional roles. These results cover a 10-month period across all 6 pilot sites. The breakdown of hours of care with and without supervision of trainees was added during phase 3 to obtain a broader picture of teaching activities, beyond the hours worked in association with the education role, and to reflect the additional workload generated by hosting students.
Indicators calculated as a ratio of manually collated data to data from facilities’ information systems are subject to under-reporting bias. To test and illustrate a method for adjusting the calculation to reduce the impact of this bias, a corrective calculation was applied to the indicator reporting the amount of time devoted to pharmaceutical care per volume of clientele (indicator A1). This correction relied on identifying discrepancies between the hours documented by pharmacists via the web-based form and the hours allocated by the respective pharmacy departments. For instance, if, in a specific facility and care sector, the recorded hours for pharmaceutical care within the indicators project represented only 60% of the allocated hours, the indicator result was adjusted proportionally to account for the under-reporting.
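The proportional correction described above can be illustrated with a small sketch. The numbers are hypothetical and the functions are illustrative only, not the project’s actual calculation tool:

```python
# Sketch of the under-reporting correction described above for indicator A1
# (hours of pharmaceutical care per volume of clientele).
# All figures are hypothetical; the project's actual tooling is not shown.

def coverage_ratio(recorded_hours: float, allocated_hours: float) -> float:
    """Fraction of allocated pharmaceutical-care hours actually
    documented by pharmacists via the web-based form."""
    return recorded_hours / allocated_hours

def adjusted_indicator(raw_value: float, coverage: float) -> float:
    """Scale a raw ratio indicator up by the inverse of the coverage
    to compensate for under-reporting."""
    return raw_value / coverage

# Pharmacists in one facility documented 120 of the 200 allocated hours:
coverage = coverage_ratio(120, 200)           # 0.60, i.e., 40% under-reporting
raw = 30 / 100                                # 30 recorded hours for 100 patients in one sector
adjusted = adjusted_indicator(raw, coverage)  # 0.30 raw -> 0.50 h per patient
```

Applying the facility-wide coverage ratio to each sector’s raw value assumes under-reporting is roughly uniform across sectors, which is the simplification this kind of proportional correction makes.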
The feedback survey was sent to 337 pharmacists, of whom 183 submitted responses, yielding a 54% response rate. Most respondents (94%, n = 172) stated their belief that it is important to collect data to document pharmacists’ activities, and 82% (n = 150) saw the project as relevant to the upcoming activity-based funding system. Almost all (97%, n = 177) took part in daily data collection, but some (28%, n = 51) admitted to dropping out during the project. Many (65%, n = 119) mentioned that nonparticipation on certain days was due to omission or oversight. For those who dropped out or did not participate (n = 51), three-quarters (n = 38) said the data collected did not represent their clinical practice, and one-third (n = 17) cited excessive workload.
The primary barriers to participation in data collection were the effort and time required, along with difficulties related to definitions and data compilation, as reported by 65% (n = 119) and 59% (n = 108) of respondents, respectively. Among pharmacists who undertook daily data collection, the most challenging aspect was reported to be tracking the number of drug therapy problems resolved (68%, n = 86/126). Respondents noted that it was difficult to remember all of the day’s activities for proper reporting.
The majority of survey participants reported that they knew about the available tools (84% [n = 154] to 98% [n = 179], depending on the tool), and a good number used them (56% [n = 102] to 80% [n = 146], depending on the tool). The most popular tool was the weekly reminder to access the form (80%, n = 146). The most helpful communication methods were meetings and face-to-face interactions with facility managers (64%, n = 117) and the steering committee (57%, n = 104). Webcasts, promotional emails (excluding reminders), and an interactive slideshow explaining data collection were useful for 45% (n = 82) of respondents. A promotional video on the pilot project was the least useful (17%, n = 31).
In general, 55% (n = 101) of the 183 survey respondents were satisfied with the pilot project, while 39% (n = 71) were dissatisfied. Better communication of results and more frequent project updates were the top suggestions for increased engagement (35% [n = 64] and 26% [n = 47], respectively). Only 7% (n = 13) would have increased their participation if their department head had made it mandatory.
DISCUSSION

Despite its challenges, the pilot project allowed for significant progress toward a consensus on the most relevant indicators and their definitions for pharmacy departments in Quebec health care systems. For many of the selected indicators, data on their frequency of occurrence or extent of application across all facilities had never previously been collected. The phased deployment of multiple indicators across various pilot sites proved effective. The project’s large scale allowed the collection of substantial data over 3 years, involving approximately 20% of the provincial workforce. Also, the project generated interest among province-level stakeholders and hospital pharmacists, reinforcing the importance of documenting the added value associated with pharmacists’ activities.
Although many studies have detailed the careful selection of performance indicators for hospital pharmacy activities, few have demonstrated the practicality of implementing such a model on a large scale. Lo and others15 reported on 5 different experiences in Canadian hospitals where the well-known cpKPIs were implemented to various extents. These authors proposed solutions to the barriers identified, focusing mainly on simplifying processes, increasing the use of automation, and enhancing transparency for stakeholders. Our project’s developers believed in the usefulness of cpKPIs, and half of the 8 Canadian cpKPIs were tested (indicators A3, Q1, Q2, and E5). However, these indicators required manual data collection by front-line pharmacists, which contributed to perceptions of significant added workload and undue burden.
One of the main challenges of this pilot project was getting front-line pharmacists to participate diligently in daily data collection. Mandatory participation could have improved the validity and acceptability of the proposed performance indicators; however, because the steering committee was external to the participating centres, it could not impose data collection, and pharmacists’ participation remained voluntary. Nonetheless, it is noteworthy that the majority of pharmacists persisted in their participation despite the long data collection period. Encouraging participation beyond a certain level nevertheless proved difficult. The feedback survey revealed the challenge of changing organizational culture, with the main reason for nonparticipation being the difficulty of integrating the new habit of completing the data collection form into pharmacists’ daily activities.
The culture of performance measurement is a challenge in itself, which may be rooted in negative perceptions. Front-line pharmacists who are mandated to collect data routinely may perceive that their time would be better invested in direct patient care. In their conclusions following focus group discussions to explore pharmacists’ perceptions of the barriers to and facilitators of cpKPI implementation, Minard and others16 found that, despite facing challenges, front-line pharmacists generally supported cpKPI measurement. Another group surveying Canadian hospital pharmacists found that involvement in cpKPI activities was positively correlated with overall job satisfaction.31 Similarly, our project achieved some success in changing negative perceptions: 94% of pilot site pharmacists surveyed at the project’s end recognized the importance of collecting data to document their activities.
In a foundational paper published in 1978, Donabedian divided quality indicators into 3 categories: structure, process, and outcome.32 All but one of the Canadian cpKPIs are process indicators. These are most valuable when there is strong evidence associating processes with clinically meaningful outcomes.15 Although most of the indicators selected in our exercise were of a structure or process nature, many belonged to the outcome category (specifically indicators E3, E5, E7, Q3, and Q10). Outcome indicators are essential because concrete results such as costs or medication errors help the public and stakeholders understand the impacts of clinical pharmacists. However, the analysis of outcome indicators is inherently limited, because outcomes are most often influenced by multiple factors that do not depend entirely on pharmacists’ activities.
Amid a movement toward privatization of the Saudi Arabian health care system, Al-Jazairi and Alnakhli33 tested a series of 18 indicators similar to ours over 1 year in a single tertiary care hospital. Instead of daily data collection, they asked pharmacists to collect data on a monthly basis, which may be more acceptable in the long run. Indeed, their participation rate reached 95%, and they were able to show the value of clinical pharmacists and associated cost savings. As they note, a health care institution must then benchmark the collected indicators against national or international indicators to determine where their services stand when compared with other institutions. Attempting to do so, our project highlighted another significant challenge of the benchmarking exercise, that is, the need to validate the data and account for contextual factors across various sites. This requirement presents a potential hurdle to large-scale deployment due to the meticulous analysis and understanding of the field’s reality that is required.
CONCLUSION

This project has laid the foundation for demonstrating the value of pharmacy activities along the patient care trajectory and measuring the performance of individual pharmacy departments. Although the tools developed for the pilot project worked well within the project’s resource limits, they may not be suitable for province-wide use. This caveat applies especially to the requirement for daily data collection by pharmacists and the technologies used for data collection and processing. Scaling up and deploying certain indicators at the provincial level will require technological solutions more advanced than those used in the pilot project, ones that minimize or even eliminate manual data collection, along with centralized measurement to ensure system-wide consistency. Achieving large-scale deployment that aligns with government requirements for system development and financing will require support from public authorities.
REFERENCES

1 Levesque JF, Sutherland K. Combining patient, clinical and system perspectives in assessing performance in healthcare: an integrated measurement framework. BMC Health Serv Res. 2020;20(1):23.
2 Connor L, Dean J, McNett M, Tydings DM, Shrout A, Gorsuch PF, et al. Evidence-based practice improves patient outcomes and healthcare system return on investment: findings from a scoping review. Worldviews Evid Based Nurs. 2023;20(1):6–15.
3 Fernandes O, Gorman SK, Slavik RS, Semchuk WM, Shalansky S, Bussières JF, et al. Development of clinical pharmacy key performance indicators for hospital pharmacists using a modified Delphi approach. Ann Pharmacother. 2015;49(6):656–69.
4 Papanicolas I, Smith P, editors. Health system performance comparison: an agenda for policy, information and research. McGraw-Hill Education; 2013.
5 Laberge M, Brundisini FK, Champagne M, Daniel I. Hospital funding reforms in Canada: a narrative review of Ontario and Quebec strategies. Health Res Policy Syst. 2022;20(1):76.
6 Vailles F. Révolution dans le financement des hôpitaux. La Presse; 2022 May 9 [cited 2024 Jan 23]. Available from: https://www.lapresse.ca/affaires/chroniques/2022-05-09/revolution-dans-le-financement-des-hopitaux.php#
7 Kerr R. Time for a revolution in funding public hospital capacity. In: InSight+ [newsletter]. Australasian Medical Publishing Company; 2022 Nov 28 [cited 2024 Jan 23]. Available from: https://insightplus.mja.com.au/2022/46/time-for-a-revolution-in-funding-public-hospital-capacity/
8 ASHP Practice Advancement Initiative 2030: new recommendations for advancing pharmacy practice in health systems. Am J Health Syst Pharm. 2020;77(2):113–21.
9 Rough S, Shane R, Armitstead JA, Belford SM, Brummond PW, Chen D, et al. The high-value pharmacy enterprise framework: advancing pharmacy practice in health systems through a consensus-based, strategic approach. Am J Health Syst Pharm. 2021;78(6):498–510.
10 Cillis M, Spinewine A, Krug B, Quennery S, Wouters D, Dalleur O. Development of a tool for benchmarking of clinical pharmacy activities. Int J Clin Pharm. 2018;40(6):1462–73.
11 Rodríguez-González CG, Sarobe-González C, Durán-García ME, Mur-Mur A, Sánchez-Fresneda MN, Pañero-Taberna MLM, et al. Use of the EFQM excellence model to improve hospital pharmacy performance. Res Social Adm Pharm. 2020;16(5):710–6.
12 Lopes H, Lopes AR, Farinha H, Martins AP. Defining clinical pharmacy and support activities indicators for hospital practice using a combined nominal and focus group technique. Int J Clin Pharm. 2021;43(6):1660–82.
13 Vermeulen LC, Moles RJ, Collins JC, Gray A, Sheikh AL, Surugue J, et al. Revision of the International Pharmaceutical Federation’s Basel Statements on the future of hospital pharmacy: from Basel to Bangkok. Am J Health Syst Pharm. 2016;73(14):1077–86.
14 Lyons K, Blalock SJ, Brock TP, Manasse HR Jr, Eckel SF. Development of a global hospital self-assessment tool and prioritization tier system based on FIP’s Basel Statements. Int J Pharm Pract. 2016;24(2):123–33.
15 Lo E, Rainkie D, Semchuk WM, Gorman SK, Toombs K, Slavik RS, et al. Measurement of clinical pharmacy key performance indicators to focus and improve your hospital pharmacy practice. Can J Hosp Pharm. 2016;69(2):149–55.
16 Minard LV, Deal H, Harrison ME, Toombs K, Neville H, Meade A. Pharmacists’ perceptions of the barriers and facilitators to the implementation of clinical pharmacy key performance indicators. PLoS One. 2016;11(4):e0152903.
17 Flynn AJ, Fortier C, Maehlen H, Pierzinski V, Runnebaum R, Sullivan M, et al. A strategic approach to improving pharmacy enterprise automation: development and initial application of the Autonomous Pharmacy Framework. Am J Health Syst Pharm. 2021;78(7):636–45.
18 Rough SS, McDaniel M, Rinehart JR. Effective use of workload and productivity monitoring tools in health-system pharmacy, part 1. Am J Health Syst Pharm. 2010;67(4):300–11.
19 Rough SS, McDaniel M, Rinehart JR. Effective use of workload and productivity monitoring tools in health-system pharmacy, part 2. Am J Health Syst Pharm. 2010;67(5):380–8.
20 Gupta SR, Wojtynek JE, Walton SM, Botticelli JT, Shields KL, Quad JE, et al. Monitoring of pharmacy staffing, workload, and productivity in community hospitals. Am J Health Syst Pharm. 2006;63(18):1728–34.
21 Shawahna R. Quality indicators of pharmaceutical care for integrative healthcare: a scoping review of indicators developed using the Delphi technique. Evid Based Complement Alternat Med. 2020;2020:9131850.
22 Bialas C, Revanoglou A, Manthou V. Improving hospital pharmacy inventory management using data segmentation. Am J Health Syst Pharm. 2020;77(5):371–7.
23 Negro Vega E, Álvarez Díaz AM, Queralt Gorgas M, Encinas Barrios C, De la Rubia Nieto A. Quality indicators for technologies applied to the hospital pharmacy. Farm Hosp. 2017;41(4):533–42.
24 Reichard JS, Garbarz DM, Teachey AL, Allgood J, Brown MJ. Pharmacy workload benchmarking: establishing a health-system outpatient infusion productivity metric. J Oncol Pharm Pract. 2019;25(1):172–8.
25 Petersen AE, Zeeman JM, Vest MH, Schenkat DH, Colmenares EW. Development of a system-wide pharmacy operational weighted workload model at a large academic health system. Am J Health Syst Pharm. 2022;79(13):1103–9.
26 Hamzah NM, See KF. Technical efficiency and its influencing factors in Malaysian hospital pharmacy services. Health Care Manag Sci. 2019;22(3):462–74.
27 Rattanachotphanit T, Limwattananon C, Limwattananon S, Johns JR, Schommer JC, Brown LM. Assessing the efficiency of hospital pharmacy services in Thai public district hospitals. Southeast Asian J Trop Med Public Health. 2008;39(4):753–65.
28 Barnum DT, Shields KL, Walton SM, Schumock GT. Improving the efficiency of distributive and clinical services in hospital pharmacy. J Med Syst. 2011;35(1):59–70.
29 Boucher F, Lemire B, Schryve S, Vaillant L. Selecting performance indicators for hospital pharmacy practice: a Canadian initiative. J Pharm Pract Res. 2023;53(5):282–90.
30 Required organizational practices – 2020 handbook. Accreditation Canada; 2020.
31 Losier M, Doucette D, Fernandes O, Mulrooney S, Toombs K, Naylor H. Assessment of Canadian hospital pharmacists’ job satisfaction and impact of clinical pharmacy key performance indicators. Can J Hosp Pharm. 2021;74(4):370–7.
32 Donabedian A. The quality of medical care. Science. 1978;200(4344):856–64.
33 Al-Jazairi AS, Alnakhli AO. Quantifying clinical pharmacist activities in a tertiary care hospital using key performance indicators. Hosp Pharm. 2021;56(4):321–7.
Address correspondence to: Benoît Lemire, Pharmacy Department, BRC.6004, McGill University Health Centre, 1001 Décarie Boulevard, Montréal QC H4A 3J1, email: benoit.lemire@muhc.mcgill.ca
Competing interests: Benoît Lemire and Julie Racicot have served as members of the Board of Directors of the Association des pharmaciens des établissements de santé du Québec (A.P.E.S.), which funded this study. Julie Racicot has also received payment for travel expenses from A.P.E.S. Carl Desparois served as a member of the Board of Directors of the Ordre des pharmaciens du Québec. No other competing interests were declared.
Funding: This study was funded by the Association des pharmaciens des établissements de santé du Québec (A.P.E.S.).
Acknowledgments: The authors thank the managers from all pilot sites, the heads of the various pharmacy departments, the project pharmacists at the pilot sites, and the facility employees who extracted and transmitted data, without whom this project could not have become a reality. They also thank the pharmacists who contributed through their participation and comments to the success of this project.
Submitted: January 31, 2024
Accepted: June 26, 2024
Published: November 13, 2024
© 2024 Canadian Society of Healthcare-Systems Pharmacy | Société canadienne de pharmacie dans les réseaux de la santé
Canadian Journal of Hospital Pharmacy, VOLUME 77, NUMBER 4, 2024