Future Tech

Here's why AI could be listening in on your next doctor's appointment

Tan KW
Publish date: Sat, 10 Aug 2024, 01:03 PM

The next time you go to the doctor, don't be surprised if an artificial intelligence program is listening in and transcribing what you and your doctor say. And if a summary of your next X-ray or MRI scan pops up more quickly than expected in your health app, AI could be the reason why.

Technology boosters talk of AI's potential to improve and accelerate medicine, from constructing new proteins to sharpening disease diagnostics. Many such uses remain fraught with risk - and are still locked away in labs or awaiting government approval.

But short of such esoteric breakthroughs, AI is starting to entrench itself in the parts of health care that involve communicating with patients: uses that take advantage of the technology's strengths in parsing and condensing language.

Some doctors are already harnessing AI to get their work done faster and keep burnout at bay, although, as with any new technology, there are occasional glitches.

"It is life changing," said Dr Alice Woo, a plastic surgeon with Sutter Health in San Francisco who specialises in hand injuries and has used a software tool from a company called Abridge AI since April.

The program automatically records, transcribes, and condenses Woo's patient visits, cutting down on the time she has to spend dictating and documenting what happened during her roughly 40 patient visits a day. After reviewing the summaries, Woo checks and signs off on them before sending them out to patients.

Abridge AI, based in Pittsburgh, recently opened a San Francisco office and is working with health providers across the US, including UC Irvine and University of Chicago Medicine, in addition to Sutter.

Kaiser Permanente Ventures, the venture capital arm of the Oakland health care giant, has invested in the company, but Kaiser's hospital system wouldn't confirm whether it is using Abridge software, saying only that it is "currently evaluating ambient listening technology," with patient consent.

The time savings are immediate. Woo said she used to spend up to half her workday writing up the results of patient meetings with the help of a nurse. Now it's more like 25%.

Time savings also translate to cost savings. A 2019 Mayo Clinic study found that ER doctors spend more than two hours during and after their shift "charting" patient visits if they don't have an assistant's help. That can add up to a cost of US$600 per day, the study found.
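Back of the envelope, the study's numbers imply a loaded cost of roughly US$300 per physician-hour spent charting. The snippet below only makes that derived rate explicit; the two-hour and US$600 figures are the study's, and the hourly rate follows from them.

```python
# Implied hourly cost of unassisted charting, derived from the
# Mayo Clinic figures quoted above (2+ hours/day at US$600/day).
charting_hours_per_day = 2
cost_per_day_usd = 600

implied_hourly_rate = cost_per_day_usd / charting_hours_per_day
print(f"Implied charting cost: ~${implied_hourly_rate:.0f}/hour")  # ~$300/hour
```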

Abridge also delivers more than just transcription. As it listens in on patient appointments, the software can capture discussion of previous medical problems, mentions of medications, and even information about issues such as drug and alcohol use - comments the doctor might have missed during the meeting. When the visit is over, Abridge summarises any exams performed and helps write up the physician's assessment and treatment plan.
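Abridge has not published how its pipeline works internally, but the workflow the article describes - record, transcribe, flag clinically relevant mentions, then draft a summary the physician must sign off on - can be sketched in outline. Everything below (the function names, the keyword lists, the data model) is an illustrative placeholder, not Abridge's actual software or API.

```python
# Hypothetical sketch of an ambient clinical-documentation pipeline,
# following the steps the article describes. Not Abridge's implementation.
from dataclasses import dataclass, field

# Categories the article says the software listens for during a visit.
CUES = {
    "medications": ["mg", "prescribed", "refill", "tablet"],
    "medical_history": ["history of", "previously diagnosed", "prior surgery"],
    "substance_use": ["alcohol", "drinks per week", "smokes", "cigarettes"],
}

@dataclass
class VisitNote:
    transcript: str
    mentions: dict = field(default_factory=dict)
    summary: str = ""
    signed_off: bool = False   # physician review is the final gate

def extract_mentions(transcript: str) -> dict:
    """Naive keyword pass standing in for a trained clinical NLP model."""
    lowered = transcript.lower()
    return {
        category: hits
        for category, cues in CUES.items()
        if (hits := [c for c in cues if c in lowered])
    }

def draft_note(transcript: str) -> VisitNote:
    note = VisitNote(transcript=transcript)
    note.mentions = extract_mentions(transcript)
    # A production system would call a medically tuned language model here;
    # this placeholder only stitches together what the keyword pass found.
    note.summary = f"Draft visit summary; flagged: {sorted(note.mentions)}"
    return note

def sign_off(note: VisitNote) -> VisitNote:
    """Per the article, nothing reaches the patient until the doctor
    reviews the draft and approves it."""
    note.signed_off = True
    return note
```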

A different kind of AI is helping Dr Nicholas Galante, a radiologist and the chief medical informatics officer at Radiology Associates of North Texas, which includes around 300 practitioners.

Galante and his group have been using a program from San Francisco company Rad AI that runs his dictated findings from a scan through an AI language model, condensing them into a summary, called an impression, that is passed on to the patient after a thorough check by the physician.
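Rad AI's model and prompting are proprietary, but the findings-to-impression step described here maps naturally onto a single call to a general-purpose language model. The sketch below uses the OpenAI Python client purely as a stand-in; the model choice and prompt wording are assumptions, not Rad AI's actual setup.

```python
# Hypothetical findings-to-impression step, standing in for the
# proprietary pipeline the article describes. Uses the OpenAI Python
# client only as an example of a summarising LLM call.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_impression(findings: str) -> str:
    """Condense dictated findings into a short draft impression."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0.2,      # keep the summary close to the dictation
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarise these radiology findings into a concise "
                    "impression. Never add findings that were not dictated."
                ),
            },
            {"role": "user", "content": findings},
        ],
    )
    return response.choices[0].message.content

# As the article notes, the draft is only a starting point: the
# radiologist thoroughly checks it before it reaches the patient.
```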

"We're all facing this intense tsunami of imaging," Galante said. Using tools like those made by Rad AI means he can spend more time reading images and diagnosing issues and less time writing.

The program saves Galante and his colleagues around an hour of work time a day, penciling out to a 10% to 20% productivity gain, he said.

Efficiencies of that sort help doctors like Galante and Woo avoid overwork, which is a growing problem in the US, studies have found. In the past, they might have written up clinical notes after hours and on weekends. "Burnout is a very real thing," said Galante.

Of course, the AI programs are not infallible. They can and do make mistakes in transcribing speech or condensing information. Doctors like Woo and Galante said they are careful to read back over what the programs spit out to make sure it's accurate.

Galante said he has never actually seen the Rad AI program trip up. "If the summary is off, that means your findings were a bit off," he said, recalling a time the program caught him conflating a patient's lateral and medial meniscus when dictating what he saw in a scan.

Woo said sometimes Abridge doesn't write down everything perfectly, especially in Mandarin and Cantonese, languages she speaks fluently and which the program does not always master.

"Sometimes it types up gibberish," she said, showing one such transcript in Chinese on her office monitor. "It is better at capturing conversations in English."

While Abridge might occasionally "hallucinate," Woo said, it lets her focus on what the patient is saying instead of having to make hurried notes and parse them later. It also captures things that may slip through the cracks during a hectic day in the clinic.

Twice since April, Woo said, she forgot something a patient said during an appointment, realising it only when Abridge picked up the information and added it to the visit recap. That caused her to update the patients' treatment plans in both cases, she said.

Just how often AI programs more generally get it wrong in medical contexts is something Stanford researchers have been looking into. A recent study used chatbots made by OpenAI and Meta to summarise findings from radiology scans.

A panel of five Stanford radiologists rated the summaries on completeness, consistency, and correctness. Researchers asked the participants in a blind test which they thought were generated by real doctors, which were spun up by the bots, and which they preferred.
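The study itself is not reproduced here, but its design - blinded ratings on completeness, consistency, and correctness, plus a guess at whether each summary was human- or machine-written - is straightforward to encode. The field names and the 1-5 rating scale below are assumed stand-ins, not the researchers' actual protocol.

```python
# Hypothetical harness mirroring the blinded evaluation described above:
# raters score each summary on three axes and guess its origin without
# being told which summaries were written by people.
import random
from dataclasses import dataclass

@dataclass
class Summary:
    text: str
    source: str          # "human" or "model"; hidden from raters

@dataclass
class Rating:
    completeness: int    # assumed 1-5 scale; the study's may differ
    consistency: int
    correctness: int
    guessed_source: str  # the rater's "human" vs "model" call

def blinded_order(summaries: list[Summary], seed: int = 0) -> list[Summary]:
    """Shuffle so raters cannot infer origin from presentation order."""
    shuffled = summaries[:]
    random.Random(seed).shuffle(shuffled)
    return shuffled

def guess_accuracy(pairs: list[tuple[Summary, Rating]]) -> float:
    """Share of origin guesses that were right; a value near 0.5 means
    the model's output was indistinguishable from human writing."""
    correct = sum(s.source == r.guessed_source for s, r in pairs)
    return correct / len(pairs)
```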

The physicians preferred the AI-generated summaries over those written by people more often than not, said Akshay Chaudhari, one of the study's authors.

Programs made by Abridge and Rad AI are fine-tuned with medical terminology. Even without that speciality training, ChatGPT was able to fool trained radiologists by sounding convincingly human, the study found.

"Still, I don't think I would feel comfortable as a company providing these (responses) without human oversight," Chaudhari said of the results of his study, which did not use Abridge or Rad AI products. Trusting the summaries from his study blindly without a doctor checking them "would make me not sleep well at night," he said.

For now, Sutter Health is using Abridge in pilot form only, asking patients to consent to the program being used during their visits. Woo said so far out of hundreds of patients, only a couple have opted out. Sutter wouldn't say how many of its doctors are now using Abridge.

Getting doctors to opt in could be another issue.

While Woo was enthused about the time-saving technology, Sutter Health manager Angela Bates cited an older doctor she knew who still preferred to work up patient reports the old-fashioned way, with the help of an assistant.

Not every patient is thrilled about allowing Abridge to eavesdrop on their appointments, either.

A man in the parking lot outside Woo's office, who gave his name only as Dan and sported a bandaged hand he said was from a laceration, said he was one of Woo's patients.

He said he signed the Abridge consent form, adding "It's going this way anyway" and comparing it to the nearly unavoidable use of facial recognition technology at the airport.

Asked if the use of Abridge during his appointment with Dr Woo bothered him, he shrugged. "It does, but it doesn't matter," he said before sidling off.

 

 - TNS
