Ask the Metrics Experts: Metrics Questions and Answers
ICH E6 R2 Requirements
Q: ICH E6 R2 requires us to focus particularly on critical data and processes in quality risk management. Do you have any metrics that help assess whether our efforts are focused in this way? (Metric Insights Aug. 2021)
A: This was a topic of discussion in the Centralized & Site Monitoring Process Metrics Work Group meetings earlier this year – specifically, whether centralized monitoring is focused on critical data and processes. After much discussion, we included some metrics in the recently published Centralized & Site Monitoring Process Metrics Toolkit 1.0. They assess the proportion of risks/issues identified through centralized monitoring, and any associated actions, that relate to critical data and/or processes. We describe these as ‘aspirational’ metrics: the work group thought they would be very valuable to measure, but that with current system configurations it would not be possible to measure them. Yet without measuring in some way, how can we know whether our efforts are focused on these critical items? Our work groups include members from CROs and vendors, as well as sponsors, and are a great place to cross-fertilize ideas such as this. When vendors hear and take part in such discussions, they gain an understanding of user perspectives and can prioritize items like these for development and inclusion in future releases.
Vendor Oversight
Q: One of our vendor oversight metrics measures whether our CRO has billed us the amount that we expect each quarter. We are finding that this is sometimes green even though we know there are problems with costs not matching the work that should have been completed. Or it might be red, but when we drill down, the costs are actually in line with what has been completed. Can you recommend a metric that might work better for oversight of costs with the CRO? (Metric Insights, Mar. 2021)
A: Organizations typically measure whether costs are as expected. However, even when costs are as expected, there may be problems because not all the activities that were expected to be completed to date have happened. For example, the costs might be as expected for the date, but the site activation and patient enrollment activities are far behind. What you really want to know is whether costs are as expected compared with the work that has been completed. In this example, costs are higher than expected for the work completed because not enough sites have been activated and patients enrolled. There is a metric that calculates this – the Schedule Performance Index, which we have incorporated into our Vendor Oversight Finance metrics. As contracts are typically based on a price per unit, this measurement is what the sponsor and CRO should review from a cost accounting perspective. The Schedule Performance Index takes the cost of all the work that will be charged to the sponsor (the units completed to date) and divides it by the cost of the units expected to date. This approach focuses on the actual costs that will be charged to the sponsor – complexities due to invoicing and payment schedules are not included. If the Schedule Performance Index is 1, everything is on track. If it is above 1, more work has been completed than expected and you are ahead of schedule according to the costs. If it is less than 1, less work has been completed than expected and you are behind schedule according to the costs. The Schedule Performance Index allows you to link the time and cost dimensions of the work and, together with other metrics such as % milestones complete, will help you and the CRO manage costs.
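The calculation described above can be sketched in a few lines. This is a minimal illustration only: the unit types, unit prices, and counts are hypothetical assumptions, not figures from any real contract.

```python
# Hypothetical worked example of the Schedule Performance Index (SPI):
# cost of units completed to date divided by cost of units expected to date.
# Unit names, prices, and counts below are illustrative assumptions.

def schedule_performance_index(completed_units, planned_units, unit_prices):
    """SPI = cost of work completed to date / cost of work expected to date."""
    earned = sum(completed_units[u] * unit_prices[u] for u in unit_prices)
    planned = sum(planned_units[u] * unit_prices[u] for u in unit_prices)
    return earned / planned

unit_prices = {"site_activated": 5000.0, "patient_enrolled": 2000.0}
planned = {"site_activated": 20, "patient_enrolled": 50}    # expected by this date
completed = {"site_activated": 12, "patient_enrolled": 30}  # actually completed

spi = schedule_performance_index(completed, planned, unit_prices)
print(round(spi, 2))  # -> 0.6: less work completed than expected, behind schedule
```

With an SPI of 0.6, only 60% of the expected cost of work has been earned to date, matching the "behind schedule according to costs" reading described above.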
Risk-Based Monitoring
Q: Since we implemented Risk-Based Monitoring we have been trying to monitor the process to determine how well it is working. We have developed metrics such as whether alerts from Centralized Monitoring are closed on time but are uncertain what to use as a target. How are other organizations approaching this? (Metric Insights, Jan. 2021)
A: With a new or significantly revised process, you should always be asking the question “How will we know whether it is working?” You need to establish measurements that evaluate the process. We applaud your approach! The Centralized & Site Monitoring Process Metrics Work Group has been discussing this exact topic. The work group developed a high-level process map that includes the centralized monitoring and remote/onsite site monitoring processes. The work group identified a series of questions about the process (aka Key Performance Questions) it wished to answer and is working on defining metrics to answer those questions. This approach ensures that our metric sets are made up of metrics that provide actionable data.
The measurement you describe in your question is a type of time metric we call an “on time” or “timeliness” metric. The work group defined an alternative time metric, a “cycle time” metric: instead of measuring whether the task is completed within the timeframe you establish as the target, it measures the actual time it takes to complete the task.
Key Performance Question | Draft Metric | Performance Target |
How long does it take to determine if risks identified in centralized monitoring are issues? | Average time from risk identification to issue confirmation | TBD |
The work group explored the pros and cons of each approach and concluded that the industry doesn’t yet have enough experience with the new process to determine what the target should be. Using a cycle time metric gives the industry the opportunity to gather data on the new process and, in time, decide whether it is feasible and useful to have a timeliness metric that measures against a target. Additionally, upon review of the results, we may determine that critical risks should be assessed faster than others. Finally, we don’t want to encourage premature closure of risks (prior to proper investigation) in order to meet timeliness targets that are not grounded in an understanding of the process. Whenever metrics are defined and used, it is important to think critically about the purpose of the metrics and the behaviors they might drive.
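The draft cycle-time metric in the table above can be illustrated with a short sketch. The dates and field names are hypothetical assumptions for demonstration, not a real risk log.

```python
# Illustrative sketch of the draft metric: average time from risk
# identification to issue confirmation. Dates and field names are
# hypothetical assumptions.
from datetime import date

risks = [
    {"identified": date(2021, 1, 4),  "confirmed": date(2021, 1, 8)},
    {"identified": date(2021, 1, 6),  "confirmed": date(2021, 1, 13)},
    {"identified": date(2021, 1, 11), "confirmed": date(2021, 1, 15)},
]

# Cycle time for each risk, in days, then the average across risks
cycle_times = [(r["confirmed"] - r["identified"]).days for r in risks]
average_cycle_time = sum(cycle_times) / len(cycle_times)
print(average_cycle_time)  # -> 5.0 days
```

Because this records actual durations rather than pass/fail against a target, the data gathered can later inform whether a timeliness target is feasible, as discussed above.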
KRI Thresholds
Q: We are accelerating implementation of our Centralized Monitoring program due to the COVID-19 pandemic. We’ve started with a small number of Key Risk Indicators (KRIs) including AEs/patient. We have upper and lower threshold limits, so we can look for outlier sites – those that are over- or under-reporting relative to others. But what we’ve noticed is that we are getting lots of false signals that are taking time to investigate. For example, a site that has recently been activated and only has enrolled one patient has no AEs/patient and looks like an outlier. Do you have any suggestions for how to set thresholds to take account of this? (Metric Insights, Apr. 2020)
A: With KRIs such as AEs/patient, you should use leading metrics – metrics that provide information that you can use to course correct on the current study. The problem you describe is not due to the KRI threshold levels – it is the metric that you are tracking. Cumulative AEs/patient is a lagging metric. During the study, as patients progress through their treatment, the likelihood that they will experience AEs increases – in other words, the opportunity for AEs to occur increases. In the example of the site that just enrolled its first patient, there are no AEs reported because the study is just getting started. The leading metric cumulative AEs per patient-week is a better measurement because it accounts for the length of time (number of weeks) each patient has been on the study. Using a version of the measurement that accounts for the time the patient is on the study can significantly impact how early a signal is detected and can help to reduce the number of false signals.
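The contrast between the lagging and leading versions of this KRI can be sketched as follows. The patient counts, AE counts, and weeks on study are hypothetical assumptions chosen to mirror the newly activated site in the question.

```python
# Sketch contrasting cumulative AEs/patient (lagging) with cumulative
# AEs per patient-week (leading). All patient data are hypothetical.

def aes_per_patient(patients):
    """Lagging: total AEs divided by number of patients."""
    return sum(p["aes"] for p in patients) / len(patients)

def aes_per_patient_week(patients):
    """Leading: total AEs divided by total patient-weeks of exposure."""
    total_aes = sum(p["aes"] for p in patients)
    total_weeks = sum(p["weeks_on_study"] for p in patients)
    return total_aes / total_weeks

# Newly activated site: one patient, one week on study, no AEs yet
new_site = [{"aes": 0, "weeks_on_study": 1}]
# Established site: patients with much longer exposure
established_site = [{"aes": 3, "weeks_on_study": 12},
                    {"aes": 2, "weeks_on_study": 10}]

print(aes_per_patient(new_site))                         # -> 0.0
print(round(aes_per_patient_week(established_site), 3))  # -> 0.227
```

The per-patient-week rate makes explicit that the new site has only one patient-week of exposure, so its zero AE count reflects limited opportunity for AEs rather than under-reporting, which is exactly the false signal described above.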
Additional details about defining KRIs are available in our recently published paper, Measure the Right Things at the Right Time: Design Key Risk Indicators and Key Performance Indicators that Provide Timely Insights During Study Conduct. We suggest you read the paper and review the definitions of your other KRIs to make sure they are leading metrics too.
Private Virtual Community for Avoca Quality Consortium Members
The AQC Knowledge Center includes an online community for members to collaborate on issues and topics. Ask your question today to leverage the experience of industry peers and gain different perspectives.