Looming challenge

ANN/THE STAR – The latest form of artificial intelligence (AI) has captured the attention of major companies, despite its tendency to fabricate information, leak confidential details, lack substantial knowledge, and rely on context for its responses.

Surprisingly, healthcare sectors are finding potential utility in this seemingly imperfect AI.

Leading entities like Google and Microsoft are fervently advocating for the integration of a cutting-edge AI technology known as “generative AI” into the healthcare industry.

Big firms that are familiar to folks in white coats – but maybe less so to your average Joe and Jane – are equally enthusiastic: Electronic medical records giants Epic and Oracle Cerner aren’t far behind.

The space is crowded with startups too.

The companies want their AI to take notes for physicians and give them second opinions – assuming they can keep the intelligence from “hallucinating”, or for that matter, divulging patients’ private information.

“There’s something afoot that’s pretty exciting,” said Eric Topol, director of the Scripps Research Translational Institute in San Diego, United States.

“Its capabilities will ultimately have a big impact.”

Topol, like many other observers, wonders how many problems it might cause – like leaking patient data – and how often.

“We’re going to find out.”

PHOTO: FREEPIK

TOO MUCH HYPE

The spectre of such problems inspired more than 1,000 technology leaders to sign an open letter in March (2023), urging that companies pause development on advanced AI systems until “we are confident that their effects will be positive and their risks will be manageable”.

Even so, some of them are sinking more money into AI ventures.

The underlying technology relies on synthesising huge chunks of text or other data – eg some medical models rely on two million intensive care unit (ICU) notes from Beth Israel Deaconess Medical Center in Boston, United States (US) – to predict text that would follow a given query.

The idea has been around for years, but the gold rush – and the marketing and media mania surrounding it – are more recent.

The frenzy was kicked off in December 2022 by Microsoft-backed OpenAI and its flagship product, ChatGPT, which answers questions with authority and style.

It can explain genetics in a sonnet, for example.

OpenAI, started as a research venture seeded by Silicon Valley elites like Sam Altman, Elon Musk and Reid Hoffman, has ridden that enthusiasm into investors’ pockets.

The venture has a complex, hybrid for- and non-profit structure.

But a new USD10 billion round of funding from Microsoft has pushed the value of OpenAI to USD29 billion, The Wall Street Journal reported.

Right now, the company is licensing its technology to companies like Microsoft and selling subscriptions to consumers.

Other startups are considering selling AI transcription or other products to hospital systems or directly to patients.

Hyperbolic quotes are everywhere.

Former US Treasury Secretary Larry Summers tweeted recently: “It’s going to replace what doctors do – hearing symptoms and making diagnoses – before it changes what nurses do: helping patients get up and handle themselves in the hospital.”

But just weeks after OpenAI took another huge cash infusion, even its CEO Altman is wary of the fanfare.

“The hype over these systems – even if everything we hope for is right long term – is totally out of control for the short term,” he said for a March (2023) article in The New York Times.

Few in healthcare believe this latest form of AI is about to take their jobs (though some companies are experimenting – controversially – with chatbots that act as therapists or guides to care).

Still, those who are bullish on the tech think it’ll make some parts of their work much easier.

LIGHTENING THE LOAD

Psychiatrist Dr Eric Arzubi in Billings, Montana, US, used to manage fellow psychiatrists for a hospital system.

Time and again, he’d get a list of providers who hadn’t yet finished their notes, ie their summaries of a patient’s condition and a plan for treatment.

Writing these notes is one of the big stressors in the health system.

In the aggregate, it’s an administrative burden.

But it’s necessary to develop a record for future providers, and of course, insurers.

“When people are way behind in documentation, that creates problems,” he said.

“What happens if the patient comes into the hospital and there’s a note that hasn’t been completed and we don’t know what’s been going on?”

The new technology might help lighten those burdens.

Dr Arzubi is testing a service called Nabla Copilot, which sits in on his part of virtual patient visits and then automatically summarises them, organising the complaint, the history of illness, and a treatment plan into a standard note format.

Results are solid after about 50 patients, he said: “It’s 90 per cent of the way there.”

It produces serviceable summaries that he typically edits.

The summaries don’t necessarily pick up on non-verbal cues or thoughts Dr Arzubi might not want to vocalise.

Still, he said, the gains are significant.

He doesn’t have to worry about taking notes and can instead focus on speaking with patients. And he saves time.

“If I have a full patient day, where I might see 15 patients, I would say this saves me a good hour at the end of the day,” he said. (If the technology is adopted widely, he hopes hospitals won’t take advantage of the saved time by simply scheduling more patients. “That’s not fair,” he said).

Nabla Copilot isn’t the only such service; Microsoft is trying out the same concept.

At April’s (2023) conference of the Healthcare Information and Management Systems Society – an industry confab where health techies swap ideas, make announcements, and sell their wares – investment analysts from Evercore highlighted reducing administrative burden as a top possibility for the new technologies.

But overall? They heard mixed reviews.

And that view is common: Many technologists and doctors are ambivalent.

For example, if you’re stumped about a diagnosis, feeding patient data into one of these programmes “can provide a second opinion, no question”, Topol said.

“I’m sure clinicians are doing it.”

However, that runs into the current limitations of the technology.

NOT SO ACCURATE OR CONFIDENTIAL

Joshua Tamayo-Sarver, a clinician and executive with the startup Inflect Health, fed fictionalised patient scenarios based on his own practice in an emergency department into one system to see how it would perform.

It missed life-threatening conditions, he said. “That seems problematic.”

The technology also tends to “hallucinate”, ie make up information that sounds convincing.

Formal studies have found a wide range of performance.

One preliminary research paper examining ChatGPT and Google products using open-ended board examination questions from neurosurgery found a hallucination rate of two per cent.

A study by Stanford researchers in the US, examining the quality of AI responses to 64 clinical scenarios, found fabricated or hallucinated citations six per cent of the time, its co-author Nigam Shah said.

Another preliminary paper found that in complex cardiology (heart) cases, ChatGPT agreed with expert opinion only half the time.

Privacy is another concern. It’s unclear whether the information fed into this type of AI-based system will stay inside.

Enterprising users of ChatGPT, for example, have managed to get the technology to tell them the recipe for napalm, which can be used to make chemical bombs.

In theory, the system has guardrails preventing private information from escaping.

For example, when this writer asked ChatGPT for his own email address, the system refused to divulge that private information.

But when told to role-play as a character and asked about the email address of this writer, it happily gave up the information.

(It was indeed the correct email address as of 2021, when ChatGPT’s training data ends).

“I would not put patient data in,” said Shah, chief data scientist at Stanford Health Care.

“We don’t understand what happens with these data once they hit OpenAI servers.”

OpenAI spokesperson Tina Sui said that one “should never use our models to provide diagnostic or treatment services for serious medical conditions”.

They are “not fine-tuned to provide medical information”, she said.

With the explosion of new research, Topol said, “I don’t think the medical community has a really good clue about what’s about to happen.”  – Darius Tahir
