ISPOR 2020 – inside the second plenary: moving from population health to 'precision population health'

How can artificial intelligence (AI) help us progress from the provision of healthcare based on population science and medicine to precision medicine? Here, I share some of my takeaways from the second plenary session at Virtual ISPOR 2020 (18–20 May).


Due to the COVID-19 outbreak, the 2020 ISPOR Annual Meeting, due to be held in Orlando (FL, USA) over 18–20 May 2020, is instead taking place virtually over these same dates!

Missed our summary of takeaways from the first plenary session? Catch up now>>

Interested in partnering with The Evidence Base®? Contact us today>>

The second plenary session of the virtual event was moderated by Jan Hansen (Genentech, CA, USA) and featured Marc Boutin (National Health Council, DC, USA), Rachael Callcut (University of California, San Francisco, CA, USA) and Nigam Shah (Stanford University, CA, USA) as panelists.

How can AI help progress from the provision of healthcare based on population science and medicine to precision medicine? Can AI help lower healthcare costs? Where are patients placed in the algorithm-filled, ‘abstract’ space of AI? Here, I share some of my takeaways from the meeting’s second virtual plenary!

Want regular updates direct to your inbox? Become a member on The Evidence Base, for FREE, now>>

AI can be functional, not purely abstract

Hansen kicked off the session by emphasizing that the analysis of real-world data – data collected at the point of care – and the application of these data to drive clinical decision making is the pinnacle of a learning healthcare system. Leveraging AI to make this faster, better and more efficient raises many cost and value questions and challenges, meaning health economics and outcomes research will play an ever more critical role in decisions surrounding the implementation of AI in healthcare.

Shah emphasized that AI is not purely abstract; it can have valuable functional uses that impact patient care directly, too. He suggested that there are two overarching ways in which AI can be useful. First, AI can help ‘intra-encounter’, as Shah described it – both by automating tedious tasks and by providing novel insights that can inform proactive action.

Second, Shah described how many healthcare decisions can be boiled down to a simple analysis of whether the risk associated with inaction – e.g., not taking a medication – rises above some threshold and outweighs the risk of taking that action. If it does, the action will be taken. AI can help predict this risk, thereby informing whether one should act, as well as help determine how to act, by sourcing and analyzing vast amounts of data on the outcomes of patients in similar circumstances and how they responded to a series of ‘potential actions’.
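This framing reduces to a simple rule: predict the risk, compare it to a threshold, and act if the threshold is exceeded. A minimal sketch follows – the function name, risk scores and threshold value are illustrative assumptions, not details from the session:

```python
# Hypothetical sketch of the threshold-based decision rule described above:
# a model predicts the risk of an adverse outcome if no action is taken;
# if that predicted risk exceeds a chosen threshold, intervention is
# recommended. All names and numbers here are illustrative.

def recommend_action(predicted_risk: float, threshold: float = 0.2) -> bool:
    """Return True if the predicted risk of inaction warrants intervening."""
    return predicted_risk > threshold

# Illustrative patients with model-predicted risks of an adverse outcome.
patients = {"A": 0.05, "B": 0.35}
for patient, risk in patients.items():
    decision = "intervene" if recommend_action(risk) else "monitor"
    print(f"Patient {patient}: predicted risk {risk:.2f} -> {decision}")
```

In practice, the harder questions – which Shah's 'how to act' point speaks to – are how the risk model is built and how the threshold is chosen, since the threshold encodes the trade-off between the harms of acting and of not acting.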

Callcut further stressed that AI can have important practical applications; she described her involvement in developing and implementing AI-based technologies to help make critical point-of-care decisions, demonstrating the value of these technologies in real-life clinical practice. Later, Callcut also raised the point that engineering transparency into how AI outputs are generated could be key to building trust in the technology, by communicating the confidence of its predictions.

Moving from population health to personalized health  

Callcut raised the fact that, until relatively recently, the majority of healthcare research – and, thus, the decisions it informs – has focused on the ‘average’ patient.

As Boutin concurred, patients with chronic conditions often feel health is something that is done to them, not with them. No patient is average: individuals with a given chronic condition – even a relatively rare one, where it might be assumed that its impact on patients is fairly uniform – are not a homogenous population.

AI, Callcut explained, may be able to help progress from population medicine to ‘precision population health’ by providing a faster, more efficient way of collecting large amounts of data and identifying the phenotypic variations that may explain why any single patient differs from the average and may respond differently.

Boutin stressed that AI coupled with behavioral science could be key to helping achieve this precision health. He explained that whilst some healthcare providers may understand the factors that are important to patients and that they value, they often fail to understand the balance of these; this is where behavioral science may have a role to play.

As Callcut corroborated, we need to consider societal and other factors beyond biology alone.

Shah explained that we cannot have personalized medicine while we have a standardized approach to healthcare delivery; though it is unclear exactly how, this misalignment will need to be addressed in the future, with AI leveraged to do so.

AI and COVID-19: two big buzzwords

As was true of the first plenary of the meeting, it is impossible to have some of these big conversations outside the context of the COVID-19 pandemic – the reason this meeting is taking place virtually in the first place.

When Hansen directed the conversation to how AI could be applied in the context of COVID-19 to help drive more informed decision making, Shah raised an interesting point: in his opinion, there is a disconnect between the way AI has been used in the COVID-19 response to date and how it could have been used, perhaps more effectively and efficiently.

So far, many AI efforts in the response to COVID-19 have involved using AI to perform risk stratification and build epidemiological models. However, if we do not know what to do with their outputs, their benefits may be limited.

Instead, Shah explained, perhaps we have missed an opportunity to use AI to ‘weed out the noise’ and progress evidence-based decision making. He described how there are many thousands of publications detailing research into COVID-19, many ahead of print and not peer reviewed. AI could be valuable for helping sort through these and determining which are most useful for informing action.

Boutin also highlighted that an opportunity has been missed in the response to COVID-19 to engage patients and discuss what risks – regarding experimental treatments, for example – they are willing to take.

Privacy: less of a buzzword than you might think?

When Boutin was questioned on whether patients are concerned about data privacy with regards to AI, he made an interesting point: a more prudent worry is, in fact, that big data collection may lead some to forget to engage patients at all; we must not let this happen.

Shah corroborated that we cannot let big technology companies view health as just another billion-dollar industry to sell products to; we need to always have the end consumer in mind and make sure that quality is delivered.

How cost effective are AI technologies?

As Hansen explained, as we consider what the use of AI, machine learning and advanced technologies in clinical practice and decision making may look like in the future, these are interventions that will need to be priced and evaluated in terms of the value they deliver for money. The question therefore arises: are these technologies cost effective?

Describing the Stanford ‘Green Button’ project, in which he has been heavily involved, Shah attested that ‘production’ costs are low; however, more in-depth analyses are needed – and are planned – to determine the ‘return’, as he put it, on the reports the system produces and, thus, what value for money is achieved.

Callcut described how AI technologies have the potential to streamline many processes in care delivery, as well as to identify which patients need care that can only be provided in hospitals and which could be cared for at other facilities – as we have seen with the recent increase in the use of telemedicine. If this can be translated into less time spent in hospitals, the result will be increased efficiency and, ultimately, lower costs.

In addition, many believe that, in the future, these technologies could actually help deliver safer care, which could reduce preventable deaths.

Callcut also raised the point that, when assessing value and cost–effectiveness, outcomes researchers will need to start taking into account wider aspects of care costs that can often be forgotten or overlooked, such as the cost savings associated with sparing patients long journeys.

The discussion ended with optimism that, although many questions will remain regarding the regulation and approval of these technologies, in the future we will not so much be asking ‘if’ or ‘when’ with regards to AI, but rather ‘how’.

Register to join The Evidence Base®, for free, to be the first to hear about 'Look behind the lecture' interviews>>


Ilana Landau

Editor, Future Science Group
