Assessing the feasibility of real-world data

In this original editorial piece, Manuela Di Fusco (Pfizer; NY, USA) explores the feasibility of real-world data and evidence as they pertain to healthcare decision making.

Real-world data (RWD) and real-world evidence (RWE) are playing an increasing role in healthcare decision making [1–2].

Conducting an RWD study involves many interconnected stages: defining research questions of high scientific interest, designing a study protocol and statistical analysis plan, and carrying out the analyses, quality reviews, publication and presentation to the scientific community. Every stage requires extensive knowledge, expertise and effort from the multidisciplinary research team.

There are a number of well-accepted guidelines for good procedural practices in RWD [3–15]. Although these guidelines stress the importance of data reliability, relevance and fit-for-purpose studies, their recommendations generally focus on methods/analyses and on transparent reporting of results. Feasibility concerns at the early stages of a study often receive little attention; ongoing RWD initiatives, too, focus on improving standards and practices for data collection and analyses [16].

The availability and use of new data sources capable of storing health-related data, such as mobile technologies, electronic patient-reported outcome tools and wearables, have been growing globally [1].

As data sources exist in various formats, and are often created for non-research purposes, they carry inherent limitations, such as missing data. Determining the best approach for collecting complete, quality data is of critical importance. At study conception, it is not always clear whether the research question of interest can be fully answered and all analyses carried out. Numerous methodological and data collection challenges can emerge during study execution. Some of these downstream challenges, however, can be proactively addressed through an early feasibility assessment conducted alongside protocol development. During this exploratory work, datasets can be examined carefully to ensure that the data points deemed relevant for the study are routinely ascertained and sufficiently captured, despite potential missing data and/or other data source limitations.

This feasibility assessment serves primarily as a first step to gain knowledge of the data and ensure that realistic assumptions are included in the protocol; relevant sensitivity analyses can then test those assumptions, laying the groundwork for successful study development.

Below is a list of key feasibility questions that may guide the technical exploration and conceptualization of a retrospective RWD study. The list is based on experience supporting observational studies on a global scale and is not intended to be exhaustive or representative of all preparatory activities. This technical feasibility analysis should be carried out while considering other relevant aspects, including the novelty and strategic value of the study relative to the existing evidence (randomized controlled trial data and other RWE), the intended audience, data access/protection, reporting requirements and external validity.

The list may support early discussions among study team members during the preparation and design of an RWD study.

  • Can the population be accurately identified in the data source?

Diagnoses and procedures can be identified through International Classification of Diseases (ICD) codes; published code validation studies on the population of interest can be a useful guide.
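As a minimal sketch of this step, the snippet below flags patients whose claims carry diagnosis codes from a code family of interest. The dataset, the column names (`patient_id`, `dx_code`) and the ICD-10 prefix are all illustrative assumptions, not a reference to any specific data source or validated algorithm.

```python
# Illustrative sketch: identify a study population from diagnosis codes
# in claims-like data. All names and codes here are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3],
    "dx_code": ["E11.9", "I10", "I10", "E11.65", "Z00.00"],
})

# Prefix matching mirrors how ICD-10 code families are often specified
# (here E11.* is used as an example family of interest).
cohort = (
    claims.loc[claims["dx_code"].str.startswith("E11"), "patient_id"]
    .unique()
    .tolist()
)

print(sorted(cohort))  # → [1, 3]
```

In practice, the code list would come from a published, validated algorithm rather than a hard-coded prefix, and its sensitivity/specificity in the chosen data source would be part of the feasibility review.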

  • How generalizable is the population of the data source?

Generalizability issues should be recognized upfront. For example, the patient population for which data is available in the data source might be restricted to a specific geographic region, health insurance plan (e.g. Medicare or commercial), system (hospital/inpatient and ambulatory) or group (e.g. age, gender).

  • Are all the details related to the treatment interventions accurately captured in the data source?

Data for over-the-counter drugs might not be available in sources such as claims data. For prescription medications, details of interest include dose, strength, route of administration, fill date and days of supply, amongst others.
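To illustrate why fill date and days of supply matter, the hypothetical sketch below derives supply windows and refill gaps from pharmacy-claims-style records; the field names, dates and values are assumptions for illustration only.

```python
# Illustrative sketch: derive supply end dates and refill gaps from
# fill date and days of supply. Field names and values are hypothetical.
import pandas as pd

fills = pd.DataFrame({
    "patient_id": [1, 1],
    "fill_date": pd.to_datetime(["2019-01-01", "2019-02-05"]),
    "days_supply": [30, 30],
})

# End of each supply window
fills["supply_end"] = fills["fill_date"] + pd.to_timedelta(
    fills["days_supply"], unit="D"
)

# Gap between the end of one fill and the start of the next; large gaps
# may signal discontinuation rather than continuous exposure.
fills["gap_days"] = (
    fills.groupby("patient_id")["fill_date"].shift(-1) - fills["supply_end"]
).dt.days

print(fills[["fill_date", "supply_end", "gap_days"]])
```

If such fields are incomplete or inconsistently populated in the candidate data source, exposure definitions built on them may not be feasible as planned.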

  • Are the variables and outcomes of interest for the study measured accurately and consistently in the data source?

Some details regarding patient demographics and clinical characteristics might not be available in a given dataset; for example, potentially relevant laboratory values are typically not captured in claims data.

If outcomes of interest occur in a hospital setting, the events can be identified from inpatient hospital claims using International Classification of Diseases codes for primary and/or secondary hospital discharge diagnoses. Published code validation studies on the outcomes of interest can be a useful guide.

  • Does the data source capture additional variables that may be used to control for potential confounding?

Such variables could include comorbid conditions and/or other factors experienced by the study population that may influence the outcomes of interest.

  • Does the data source continuously cover the time period needed to conduct the study?

RWD studies of new treatment initiators require a washout period (usually 6–12 months) during which patients did not receive prior treatment and had continuous enrollment in medical and pharmacy insurance plans.

Additionally, after cohort identification, data should be available for a follow-up period long enough to observe and capture the outcomes of interest.
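The washout check above can be sketched as a simple eligibility test: does continuous enrollment already span the full washout window before each patient's index date? The 183-day (6-month) washout and all field names and dates below are hypothetical assumptions.

```python
# Illustrative sketch: verify that enrollment covers a washout window
# before each patient's index date. Values are hypothetical.
import pandas as pd

WASHOUT_DAYS = 183  # an illustrative 6-month washout

index_dates = pd.DataFrame({
    "patient_id": [1, 2],
    "index_date": pd.to_datetime(["2019-06-01", "2019-06-15"]),
})
enrollment = pd.DataFrame({
    "patient_id": [1, 2],
    "enroll_start": pd.to_datetime(["2019-03-01", "2018-01-01"]),
})

check = index_dates.merge(enrollment, on="patient_id")

# Eligible only if continuous enrollment already spanned the full
# washout window before the index date.
check["eligible"] = (
    check["index_date"] - check["enroll_start"]
).dt.days >= WASHOUT_DAYS

print(check[["patient_id", "eligible"]])
```

A real implementation would also confirm the absence of prior fills of the study drug during the washout window and check for enrollment gaps, both of which depend on what the data source actually records.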

  • After applying the inclusion and exclusion criteria, does the data source have sufficient sample size to support the study?

Small sample sizes can make it more challenging to answer comparative research questions. Pooling multiple databases may increase the sample size, generalizability and depth of the data; however, any database linkage should itself be subject to a feasibility assessment.
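A common way to make this check concrete is an attrition table that records the remaining sample size as each inclusion/exclusion criterion is applied. The criteria and data below are hypothetical assumptions for illustration.

```python
# Illustrative sketch: an attrition table tracking sample size as each
# inclusion/exclusion criterion is applied. Criteria are hypothetical.
import pandas as pd

patients = pd.DataFrame({
    "patient_id": range(1, 11),
    "age": [34, 71, 45, 52, 67, 29, 80, 55, 62, 48],
    "continuous_enrollment": [True] * 7 + [False] * 3,
})

steps = []
cohort = patients
steps.append(("All patients in data source", len(cohort)))

cohort = cohort[cohort["age"] >= 50]
steps.append(("Age >= 50 at index", len(cohort)))

cohort = cohort[cohort["continuous_enrollment"]]
steps.append(("Continuous enrollment", len(cohort)))

for label, n in steps:
    print(f"{label}: n = {n}")
```

Running such a table against the candidate data source during the feasibility stage reveals early whether the criteria leave enough patients to support the planned comparisons.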


References:

[1] U.S. Department of Health and Human Services Food and Drug Administration. Real-World Evidence.
https://www.fda.gov/science-research/science-and-research-special-topics/real-world-evidence
[Accessed 02/13/2020]

[2] European Medicines Agency. Promote use of high-quality real-world data (RWD) in decision making. https://www.ema.europa.eu/en/documents/presentation/presentation-ema-regulatory-science-2025-promote-use-high-quality-real-world-data-rwd-decision_en.pdf
[Accessed 02/13/2020]

[3] Berger ML, Sox H, Willke RJ et al. Good practices for real-world data studies of treatment and/or comparative effectiveness: recommendations from the Joint ISPOR–ISPE Special Task Force on Real-World Evidence in Health Care Decision Making. Value in Health. 20(8): 1003–1008; (2017)

[4] Berger ML, Martin BC, Husereau D, et al. A questionnaire to assess the relevance and credibility of observational studies to inform health care decision making: an ISPOR–AMCP-NPC Good Practice Task Force Report. Value in Health. 17(2): 143–156; (2014)

[5] Berger ML, Mamdani M, Atkins D, Johnson ML. Good research practices for comparative effectiveness research: defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report – Part I. Value in Health. 12(8): 1044–1052; (2009)

[6] Cox E, Martin BC, Van Staa T, Garbe E, Siebert U, Johnson ML. Good research practices for comparative effectiveness research: approaches to mitigate bias and confounding in the design of nonrandomized studies of treatment effects using secondary data sources: the International Society for Pharmacoeconomics and Outcomes Research Good Research Practices for Retrospective Database Analysis Task Force Report – Part II. Value in Health. 12(8): 1053–1061; (2009)

[7] The European Network of Centres for Pharmacoepidemiology and Pharmacovigilance. Guide on Methodological Standards in Pharmacoepidemiology (Revision 7).
www.encepp.eu/standards_and_guidances
[Accessed 11/06/2018] 

[8] GRACE Principles. A validated checklist for evaluating the quality of observational cohort studies for decision-making support. GRACE checklist v5.0.
www.graceprinciples.org/doc/GRACE-Checklist-031114-v5.pdf
[Accessed 02/13/2020]

[9] Johnson ML, Crown W, Martin BC, Dormuth CR, Siebert U. Good research practices for comparative effectiveness research: analytic methods to improve causal inference from nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report – Part III. Value in Health. 12(8): 1062–1073; (2009)

[10] In: Developing A Protocol For Observational Comparative Effectiveness Research: A User’s Guide. Velentgas P, Dreyer NA, Nourjah P, Smith SR, Torchia MM. Agency for Healthcare Research and Quality, MD, USA, (2013)

[11] von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. The Strengthening The Reporting of OBservational studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. J Clin Epidemiol. 61(4): 344–349; (2008)

[12] U.S. Department of Health and Human Services Food and Drug Administration. Best Practices for conducting and reporting pharmacoepidemiologic safety studies using electronic healthcare data.
https://www.fda.gov/files/drugs/published/Best-Practices-for-Conducting-and-Reporting-Pharmacoepidemiologic-Safety-Studies-Using-Electronic-Healthcare-Data-Sets.pdf
[Accessed 02/13/2020]

[13] U.S. Department of Health and Human Services Food and Drug Administration. Guidance for industry: good pharmacovigilance practices and pharmacoepidemiologic assessment.
www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm071696.pdf
[Accessed 11/06/2018]

[14] Willke RJ, Mullins CD. ‘Ten commandments’ for conducting comparative effectiveness research using ‘real-world data’. J Manag Care Pharm. 17(9 Suppl A): S10–S15; (2011)

[15] Garrison LP, Neumann PJ, Erickson P, Marshall D, Mullins CD. Using real-world data for coverage and payment decisions: the ISPOR Real-World Data Task Force Report. Value in Health. 10(5): 326–335; (2007)

[16] ISPOR. Strategic Initiatives. Real-World Evidence.
www.ispor.org/strategic-initiatives/real-world-evidence
[Accessed 02/13/2020]


Disclosures:

Manuela Di Fusco is a paid employee of Pfizer Inc., with ownership of stock in Pfizer Inc. The views expressed are her own.

The opinions expressed in this feature are those of the interviewee/author and do not necessarily reflect the views of The Evidence Base® or Future Science Group.

The Evidence Base

Community, Future Medicine

The Evidence Base is a community site covering the latest news, opinion and insight into the collection and application of real-world data to real-world problems.