
AI in health care 2024: Experts weigh in

[Here are some thoughts from major players in the field on the growth of artificial intelligence in health care, as covered in the Feb. 5, 2024, issue of Part B News.]
 
Bob Rogers, data scientist and CEO of Oii.ai, a supply chain AI company: [ICD-10 diagnosis] coding with AI is ready for prime time… [DRG coding with AI] is close to being solved as well. I understand from AI practitioners in the space that the current accuracy limitations are due to access to data for training AI algorithms. This makes sense, since grouping data reduces the number of examples that can be used for training, and the way each hospital or health system documents conditions will affect how they are grouped as well.
 
Sri Velamoor, president & COO of NextGen Healthcare, Los Angeles, makers of the AI-assisted “ambient listening solution” Ambient Assist: The sheer scale, scope and speed of the computations and probabilistic analysis that we can perform, and access to domain-specific datasets, make the current versions of AI models more accurate and effective.
 
OpenAI and others that published their work gave a significant leg up to companies that are now able to build off those foundational models. Folks did the hard work of going out and securing the available information, testing their models using millions of parameters, and pre-training these models for a variety of tasks.
 
So, if you're starting a new company in the space now, you can start with a foundational model and significantly accelerate the work of adapting that model and training it on your domain-specific data. And of course, the key difference is the speed that the ability to use foundational models gives you.
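[For readers who want to see what that acceleration looks like in practice, here is a minimal sketch of adapting a pre-trained foundation model to domain-specific text. The model checkpoint, example records and labels are placeholders chosen for illustration; they are not anything NextGen Healthcare has described.]

```python
# Illustrative sketch only: fine-tuning a pre-trained foundation model on
# domain-specific text with Hugging Face transformers. The checkpoint, the
# example notes and the labels below are hypothetical placeholders.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical domain-specific training examples (e.g., snippets of clinical notes).
data = Dataset.from_dict({
    "text": ["patient reports intermittent chest pain",
             "routine follow-up, no new complaints"],
    "label": [1, 0],
})

checkpoint = "distilbert-base-uncased"  # any pre-trained foundation model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

tokenized = data.map(tokenize, batched=True)

# Starting from pre-trained weights is what accelerates the work: only a short,
# domain-specific fine-tuning run is needed instead of training from scratch.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
)
trainer.train()
```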
 
Jessica Jarvis, principal and digital transformation lead at ZS, a management consulting and technology firm in Los Angeles: After decades of struggling to move the needle, the potential of AI to greatly improve patient adherence is here, and some leading-edge companies have actually quantified the material value of these solutions.
 
For example, across therapeutic areas we see as much as 40% to 50% patient loss post-prescription, and leading-edge companies leveraging AI solutions to personalize patient support are reducing that patient loss by as much as 15% to 25%.
 
Velamoor: There are areas where AI still needs to get better, and one is establishing cause and effect relationships. It is trained on making statistically probable associations. If the way in which that model is trained doesn't account for what we would consider common rules and constraints, then it's essentially giving you possibilities vs. definitive associations. So you have to take any instance where it’s tasked with a causal relationship with a grain of salt.
 
Another thing it's not capable of is what we might call ‘common sense’. In the real world there’s unbounded complexity. Our ability to translate the “general rules” about the world that we all intuitively learn over our lifetimes (what you might call common-sense guidelines), and to train models to use those rules, is far from efficient today.
 
Susan Boisvert, senior patient safety risk manager with The Doctors Company in Jacksonville, Fla.: [Think about] electronic medical records: When they came out, they were supposed to fix everything. And then we discovered a whole set of new problems.
 
Fortunately for us, the generative AI models are still sort of caged in academic medical centers, think tanks and big companies. So the input/database for the generative AI is curated, and the models are not out in the wild yet.
 
[In those settings] we can look at error and bias and trace it back. If there's control of the input and the program is transparent, and if the developers are adhering to the guidelines suggested by the AI community and the government (NIST, ONC) and aligned with [fair, appropriate, valid, effective, and safe (FAVES) AI principles], then we'll be able to study that and measure AI bias and its effects on patient care.
 
Rogers: There can be security challenges with some of these systems, since healthcare organizations can have concerns about putting clinical data directly into large language model (LLM) services such as ChatGPT. Privacy-enhancing technologies (PETs) such as those called out in the President’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence can dramatically improve the security of AI-powered clinical data analysis systems.
 
For example, at BeeKeeperAI, algorithm owners can test their systems against data owned by other enterprises. The entire system runs encrypted, in a separate, secure enclave. A report of the performance of the algorithm is generated, and that’s the only info that leaves the enclave. The algorithm and data are then destroyed, eliminating any chance of either being stolen or manipulated.
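[To make that workflow concrete, here is a minimal, hypothetical sketch of the secure-enclave pattern Rogers describes. The function names and the toy model are illustrative assumptions, not BeeKeeperAI's actual API: each party encrypts its asset before it enters the enclave, plaintext exists only inside the evaluation function, and only an aggregate performance report is released.]

```python
# Hypothetical sketch of the secure-enclave evaluation pattern described above
# (illustrative only; not BeeKeeperAI's actual implementation).
import json
from cryptography.fernet import Fernet  # assumes the 'cryptography' package is installed

def evaluate_in_enclave(encrypted_model: bytes, encrypted_data: bytes,
                        model_key: bytes, data_key: bytes) -> dict:
    """Decrypt the algorithm and the data only inside the enclave, score the
    algorithm, and release nothing but an aggregate performance report."""
    model = json.loads(Fernet(model_key).decrypt(encrypted_model))
    records = json.loads(Fernet(data_key).decrypt(encrypted_data))
    # Toy "algorithm": flag any record whose lab value exceeds a threshold.
    correct = sum(1 for r in records
                  if (r["lab_value"] > model["threshold"]) == bool(r["label"]))
    report = {"accuracy": correct / len(records), "n_records": len(records)}
    del model, records  # plaintext model and data never leave the enclave
    return report       # the only artifact returned to the algorithm owner

# Each party encrypts its own asset with its own key before it enters the enclave.
model_key, data_key = Fernet.generate_key(), Fernet.generate_key()
enc_model = Fernet(model_key).encrypt(json.dumps({"threshold": 6.5}).encode())
enc_data = Fernet(data_key).encrypt(json.dumps(
    [{"lab_value": 7.1, "label": 1}, {"lab_value": 5.2, "label": 0}]).encode())

print(evaluate_in_enclave(enc_model, enc_data, model_key, data_key))
# {'accuracy': 1.0, 'n_records': 2}
```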