Mayo Clinic Testing AI Tool For Administrative Tasks and Searches

Jun 13, 2023 | Blog

Source: Post Bulletin

Written By: Dené K. Dryden

ROCHESTER — Mayo Clinic is testing a new generative artificial intelligence tool from Google Cloud that could someday help doctors and medical researchers compile information without spending hours poring through documents and patient records.

Google Cloud announced this week that the health system is testing a new service, Enterprise Search on Generative AI App Builder. Clients like Mayo Clinic can use the AI as a chatbot that can receive a question, pull information from internal web pages and documents, and summarize that information into an answer.

Vish Anantraman, chief technology officer for Mayo Clinic, said this tool could help staff find information more quickly and reduce their administrative workload. The chatbot — or chatbots — could be used to answer queries such as “How do I set up my company cell phone?” or “Does this patient have a history of smoking?”

“Imagine I can ask — maybe not too (far) in the future — the search engine, ‘Does a 60-year-old male who is suffering from sickle cell anemia and has XYZ gene eligible for any new clinical trials?'” Anantraman said. “If this technology works as promised … it would simply go and search through hundreds of web pages of criteria for this clinical trial and instantly give you back an answer.”

The generative AI program is compliant with the Health Insurance Portability and Accountability Act, or HIPAA, meaning that it can pull information from patients’ electronic health records. The electronic health record of one patient could contain 7,000 to 8,000 data points, Anantraman said, and sifting through that much information to answer one question can be overwhelming for clinicians.

“What we think search and generative AI can do with this is take all of that information and allow clinicians to ask ad hoc questions,” Anantraman said.

One example, Anantraman said, is a clinician asking the AI chatbot “Is this patient a smoker?” If the words “smoker” or “smoking” don’t appear in the patient’s record, a typical search engine would not show that the patient has a history of smoking. But because Enterprise Search looks for the meaning of words — called semantic searching — rather than just specific keywords, Anantraman said the AI could find text in the patient’s record that reads “patient consumed tobacco five years ago.”

“Generative search takes the document, summarizes that, yes, the patient is a smoker, here’s the evidence why the search engine thinks that the patient is a smoker,” Anantraman said.
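
To make that contrast concrete, here is a minimal sketch of keyword matching versus semantic (embedding-based) matching. It is not Google Cloud’s Enterprise Search; it uses the open-source sentence-transformers library and an invented chart note purely for illustration.

```python
# Illustration only: NOT Google Cloud's Enterprise Search API.
# Contrasts keyword matching with embedding-based (semantic) matching
# using the open-source sentence-transformers library and toy notes.
from sentence_transformers import SentenceTransformer, util

notes = [
    "Patient consumed tobacco five years ago.",
    "Blood pressure within normal range at last visit.",
    "No known drug allergies.",
]
query = "Is this patient a smoker?"

# Keyword search: no note contains "smoker" or "smoking", so nothing matches.
keyword_hits = [n for n in notes if "smoker" in n.lower() or "smoking" in n.lower()]
print("Keyword hits:", keyword_hits)  # -> []

# Semantic search: compare meanings via embeddings rather than exact words.
model = SentenceTransformer("all-MiniLM-L6-v2")
query_vec = model.encode(query, convert_to_tensor=True)
note_vecs = model.encode(notes, convert_to_tensor=True)
scores = util.cos_sim(query_vec, note_vecs)[0]
best = int(scores.argmax())
print("Best semantic match:", notes[best], float(scores[best]))
# The tobacco note scores highest even though it never uses the query's words.
```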

Additionally, the AI uses a pattern called retrieval-augmented generation, which means the chatbot’s answers should be based only on the facts available to it. The goal is that the AI won’t have the “hallucination aspects” of other large language models, such as ChatGPT, in which the model fills gaps in its answer by speculating or guessing and can produce incorrect information.

“It would be searching information about a patient’s record, it would be searching information about our intranet website, but all grounded in facts that we’ve supplied the search engine with rather than facts that this language model might have been trained on,” Anantraman said.
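
The grounding step Anantraman describes can be sketched in a few lines. Everything below is a hypothetical stand-in (the `retrieve` scoring and the `call_llm` client are placeholders, not Mayo Clinic’s or Google Cloud’s actual components), but it shows the retrieval-augmented generation shape: fetch relevant internal text first, then instruct the model to answer only from that text and to show its evidence.

```python
# Hypothetical sketch of the retrieval-augmented generation (RAG) pattern.
# Neither the retriever nor call_llm is a real Mayo Clinic or Google Cloud
# component; they are placeholders that show where the grounding happens.

def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Stand-in retriever. In practice this would be a semantic search index
    over internal documents or a patient's record, as described above."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def call_llm(prompt: str) -> str:
    """Stand-in for a hosted large-language-model client."""
    raise NotImplementedError("plug in a real model client here")

def answer_with_rag(query: str, documents: list[str]) -> str:
    context = retrieve(query, documents)
    # Grounding: the model is told to answer ONLY from the supplied facts and
    # to quote its evidence, rather than guessing from its training data.
    prompt = (
        "Answer the question using only the context below. "
        "Quote the passage that supports your answer. "
        "If the context does not contain the answer, say you do not know.\n\n"
        "Context:\n"
        + "\n".join(f"- {c}" for c in context)
        + f"\n\nQuestion: {query}\nAnswer:"
    )
    return call_llm(prompt)
```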

Because the new tool could draw answers from patient information, Anantraman said safeguards will be used to ensure that only authorized clinicians can access specific data.

“The intent of searching the patient information is for clinicians who are providing care for that patient,” he said.

Though it is a Google product, Anantraman said Mayo’s Enterprise Search AI is installed within Mayo Clinic’s IT systems, so it only pulls information from internal documents and doesn’t connect to Google’s online search engine.

Right now, Mayo Clinic is still experimenting with the Enterprise Search tool. One early experiment, Anantraman said, is having the chatbot help Mayo Clinic’s Help Desk staff answer questions from employees.

“You can take all the knowledge articles and policy documents we have and allow it to be summarized into a very nice, single paragraph so that you can get your work done much more efficiently,” he said.

Testing could last up to 18 months to ensure that the chatbot is producing answers that are sound and factual.

“We have to make sure that it is safe, responsible — it’s not as much about whether there’ll be patient information leakage, but we want to be providing trusted information,” Anantraman said.

At the end of the day, Anantraman said his team expects the chatbot to be a time-saving tool, whether it’s compiling a one-paragraph answer from hundreds of pages of internal policy documents or sifting through clinical trial criteria to find a match for one particular patient.

“I know, speaking with lots of clinicians, it’s going to be a big win for clinician satisfaction, allow them to avoid ‘pajama time,'” Anantraman said, “which is sitting in the night and trying to find information and respond to patient queries and so on, in a much more efficient way so they can actually spend more time with the patient face-to-face.”