Does real-world evidence have a place in regulatory decision making? An interview with David Klonoff
In this feature, as part of our editorial ‘in focus’ surrounding the regulatory use of real-world evidence (RWE), David Klonoff (Diabetes Research Institute, Mills-Peninsula Medical Center; CA, USA) discusses the benefits and challenges of utilizing RWE in regulatory decision making.
Please could you introduce yourself?
I am an endocrinologist by training and am the current Medical Director of the Diabetes Research Institute at the Mills-Peninsula Medical Center (CA, USA), where I have acted as Principal Investigator of 120 clinical trials of diabetes devices and drugs. I am also a Clinical Professor of Medicine at the University of California, San Francisco (CA, USA).
I have previously worked with members of academia, industry and regulatory agencies to develop standards and guidelines for various types of technologies used in diabetes devices. I currently Chair a standards committee for the Institute of Electrical and Electronics Engineers (NJ, USA), developing a cybersecurity standard for diabetes devices, as well as a technical standard committee for the Clinical and Laboratory Standards Institute (PA, USA), concerning continuous glucose monitors.
What are some of the benefits and challenges associated with conducting and acting on the outcomes of real-world evidence (RWE) studies?
RWE is the clinical evidence about benefits or risks associated with medical products, derived from analyzing real-world data (RWD) – data collected during routine clinical practice from multiple sources that can be linked together to reveal meaningful patterns. In other words, RWE draws conclusions from observations of subjects in ordinary healthcare settings, rather than from subjects participating in randomized controlled trials (RCTs) in a research environment.
In RCTs, subject characteristics are clearly predefined, and subjects are then randomly allocated to either intervention or standard care study arms. The intervention is applied in the same way to each subject and participants are encouraged to adhere to the intervention. All trial subjects, in both treatment groups, are assessed with standardized measurements at the completion of the study. This type of study is expensive; the outcomes demonstrate the efficacy of the intervention under ideal circumstances for a well-defined population.
By contrast, in an RWE study, the subjects might be randomized at the onset – in what is called a pragmatic RWE study – or the subjects might be identified and allocated to intervention or standard care arms retrospectively, from an analysis of medical records, in what is termed an observational study. In an RWE trial, there is no restriction on exactly how an intervention is applied, and there is no formal follow-up or activity to promote adherence to the intervention. The outcomes demonstrate the real-world effectiveness of the intervention.
Results of RWE trials can be affected by various biases, particularly the following four. First, physicians tend to prescribe certain treatments to certain types of subjects, so the two treatment groups might not be at equal risk of particular outcomes; this is known as selection bias. Second, physicians might not apply the named intervention in the same way to each cohort; this is known as investigator bias. Third, subjects using the intervention might also have other healthy behaviors; this is known as adherence bias. Fourth, not all outcomes are clearly or accurately reported in medical records, so there can appear to be a misleading preponderance of certain outcomes that reflects how outcomes are reported rather than how frequently they actually occur; this is known as reporting bias.
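The first of these biases can be made concrete with a toy simulation (my illustration, not drawn from the interview): if an effective treatment is preferentially given to sicker patients, a naive observational comparison of outcomes can even reverse the sign of the treatment effect, while randomized allocation recovers it. All names and parameters below are hypothetical.

```python
import random

random.seed(0)

TRUE_EFFECT = -1.0  # treatment lowers a risk score by 1 point on average


def outcome(severity, treated):
    # Outcome worsens with baseline severity; treatment helps by TRUE_EFFECT.
    noise = random.gauss(0, 0.5)
    return severity + (TRUE_EFFECT if treated else 0.0) + noise


def simulate(n, assign):
    """Return mean(treated outcomes) - mean(control outcomes).

    `assign(severity)` decides whether a given patient is treated.
    """
    treated, control = [], []
    for _ in range(n):
        severity = random.uniform(0, 10)
        if assign(severity):
            treated.append(outcome(severity, True))
        else:
            control.append(outcome(severity, False))
    return sum(treated) / len(treated) - sum(control) / len(control)


# RCT-style allocation: a coin flip, independent of severity.
rct_diff = simulate(20000, lambda s: random.random() < 0.5)

# Observational allocation: physicians preferentially treat sicker patients.
obs_diff = simulate(20000, lambda s: random.random() < s / 10)

print(f"true effect:           {TRUE_EFFECT:+.2f}")
print(f"randomized estimate:   {rct_diff:+.2f}")  # close to the true -1.00
print(f"observational (naive): {obs_diff:+.2f}")  # biased; sign is reversed
```

The randomized estimate lands near the true effect, while the naive observational difference is pushed well above zero because the treated group is sicker at baseline – the kind of distortion that RWE analyses must adjust for.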
How has the use of RWE studies in regulatory and healthcare decision making evolved since the implementation of the 21st Century Cures Act and US FDA RWE program?
The 2016 21st Century Cures Act was intended to promote more rapid development of drugs and biologics by requiring the US FDA to develop a framework and guidance for evaluating RWE in approving these types of products. On 6 December 2018, the then-FDA Commissioner, Scott Gottlieb, announced that the FDA was initiating a new strategic program to utilize RWE, and the agency released a document describing a framework to promote its use. This framework discussed how investigators could improve the quality of RWE studies to provide better comparative effectiveness data, helping planners and payers select the most effective and cost-effective options from a variety of possible treatments.
RWE trials are less costly than RCTs and often provide information about an entire population, rather than just the small segment that would be represented in an RCT. If RWE data collection and analysis can be standardized in the future, it will therefore be a good investment of research money to improve the applicability of RWE studies, so that they do not merely suggest hypotheses – which must then be tested in RCTs – but actually test hypotheses themselves.
Have we realized the full potential of RWE use in regulatory decision making?
Currently, RWE can study many thousands or hundreds of thousands of subjects – a scale that would be far too costly for a pharmaceutical or biologics company to study in an RCT. These outcome data can be used to identify rare side effects of treatments that are unlikely to appear in the numbers of subjects typically enrolled in Phase III pivotal clinical trials. Post-marketing studies of large populations can be used for label expansions if the basic safety and effectiveness of a product has already been established with RCTs.
RWE studies will become most valuable if investigators are able to overcome biases and incomplete data sources, assemble data from available non-medical record sources and recreate outcomes data that match those of RCTs. There is evidence that outcomes data from unstructured electronic health records might be more accurate than structured data from electronic health records.
In a multicenter study recently reported in JAMA Network Open, only 15% of a set of prespecified RCTs could feasibly be replicated by ascertaining 1) the intervention, 2) the indication, 3) the inclusion and exclusion criteria and 4) a primary end point from real-world data sources such as structured electronic health records or claims data. This study demonstrated that observational methods are currently insufficiently accurate to replace traditional RCTs.
However, last year, a group at Harvard carefully studied real-world data and created a RWE cohort and protocol that very closely predicted the outcome of a cardiovascular outcomes trial for diabetes drugs. This group is now working with the FDA to develop a new method for creating RWE studies to replicate RCTs and minimize differences between RWE and RCT study outcomes due to differences in study design.
What regulations need to be implemented to increase trust in RWE and expand use of RWE?
RWE studies will become more reproducible and useful, first, if a national registry for post-market RWE trials is created and, second, if journals will only publish RWE studies that were enrolled in this registry prior to data collection. Standardization of the various steps in developing RWE studies will help make these studies more reproducible; however, some steps in an RCT will never be fully replicated in an RWE study, because of missing information in the health records about either the subjects or the study procedure.
How do you see this field evolving in the future, especially in light of further US FDA guidance on RWE to be released in 2020/2021?
RWE studies will become increasingly refined to more closely replicate RCTs as new sources of outcomes data become available and better standardized for mining. I do not see RWE ever replacing RCTs, which will remain the gold standard for testing hypotheses about the safety and effectiveness of medical products.