Automated medical scribe

Automated medical scribes based on large language models (LLMs, a technology commonly called "AI", short for "artificial intelligence") rose sharply in popularity in 2024.

Some vendors say their tools use zero-knowledge encryption (meaning that the service provider cannot access the data).

Others explicitly say that they use patient data to train their AIs, or rent or resell it to third parties; the nature of the privacy protections used in such situations is unclear, and they are likely not fully effective.[2][1][3][4]

Most providers have not published any safety or utility data in academic journals,[1] and are not responsive to requests from medical researchers studying their products.[3]

It is intrinsically impossible to prevent an LLM from correlating its inputs; LLMs work by finding similar patterns across very large data sets.

Zero-knowledge encryption implies that the only unencrypted copy is at the client, and that the server cannot decrypt the data any more easily than a monster-in-the-middle attacker could.[3]

A survey found that most doctors preferred, in principle, that scribes be trained on data reviewed by medical subject-matter experts.[18]

Software trained on thousands of real clinical conversations generated transcripts with lower word error rates.
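The zero-knowledge arrangement described above can be sketched with a toy client-side cipher. Everything below is illustrative only: the note text is invented, and a real product would use an audited cipher such as AES-GCM rather than this throwaway XOR one-time pad.

```python
# Toy sketch: the note is encrypted on the client with a key the server never
# sees, so the server -- like a monster-in-the-middle attacker -- holds only
# ciphertext. XOR one-time pad used purely for illustration.
import secrets


def client_encrypt(note: str) -> tuple[bytes, bytes]:
    """Encrypt on the client; the key stays with the client."""
    plaintext = note.encode("utf-8")
    key = secrets.token_bytes(len(plaintext))  # fresh random pad, same length
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key


def client_decrypt(ciphertext: bytes, key: bytes) -> str:
    """Reverse the XOR using the client-held key."""
    return bytes(c ^ k for c, k in zip(ciphertext, key)).decode("utf-8")


note = "Patient reports sleeping only four hours per night."
ciphertext, key = client_encrypt(note)

# The server stores only `ciphertext`; without `key` the note is unrecoverable.
assert ciphertext != note.encode("utf-8")
assert client_decrypt(ciphertext, key) == note
```

Because the key is generated and kept on the client, the service provider stores only bytes it cannot decrypt, which is what distinguishes this model from server-side encryption where the provider also holds the key.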

Medical professionals are generally considered to have a duty to review the terms and conditions of the user agreement and to identify such data reuse.[21]

Some vendors market scribes specialized for specific branches of medicine (though most target general practitioners, who make up about a third of doctors[where?]).[18][2]

Extracting information from the conversation to autopopulate a form, for instance, may be problematic, with symptoms incorrectly auto-labelled as "absent" even when they were repeatedly discussed.

Models failed to extract many indirect descriptions of symptoms, like a patient saying they could only sleep for four hours (instead of using the word "insomnia").
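As a toy illustration of this failure mode (no relation to any vendor's actual pipeline; the symptom label, keyword list, and phrase rule are all invented here), a literal keyword matcher labels the symptom "absent" even though it was discussed, while a simple phrase rule catches the indirect wording:

```python
import re

# Hypothetical keyword list of the kind a naive form-filler might use.
KEYWORDS = {"insomnia": ["insomnia", "can't sleep"]}

# A hand-written rule recognizing one indirect phrasing:
# "only ... <N> hours of sleep".
RULE = re.compile(r"\bonly\b.{0,40}?\b(one|two|three|four|five|\d+)\s*hours? of sleep\b")


def keyword_label(transcript: str) -> str:
    """Mark the symptom present only if a literal keyword appears."""
    text = transcript.lower()
    hit = any(kw in text for kw in KEYWORDS["insomnia"])
    return "present" if hit else "absent"


def rule_label(transcript: str) -> str:
    """Mark the symptom present if the indirect-phrasing rule matches."""
    return "present" if RULE.search(transcript.lower()) else "absent"


transcript = "Most nights I only manage about four hours of sleep."
print(keyword_label(transcript))  # absent -- the word "insomnia" never appears
print(rule_label(transcript))     # present -- the indirect phrasing matches
```

The patient clearly describes insomnia, but because the word itself never appears, the keyword approach auto-labels it absent; hand-written phrase rules recover some of these cases at the cost of maintaining a rule for every paraphrase.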

The use of templates and rules can make them more reliable at extracting semantic information,[21] but "confabulations" or "hallucinations" (convincing but wrong output) are an intrinsic part of the technology.[25][26]

By taking advantage of a decentralized ledger, DeepCura AI aims to maintain transparent records of document creation and modification, which can be critical for compliance and auditing purposes.

The company is also recognized for pioneering the concept of “ai gridhooks,” an evolution of traditional webhooks designed to facilitate complex, scalable interactions between healthcare systems.