
Published on April 18, 2023


Will AI replace doctors? How new tech could harm healthcare



Dr. Marc Siegel

Artificial intelligence can be a useful scientific tool, but it also could threaten a doctor’s essential role. 

Medical school provided a similar kind of intelligence for me in that my brain was bombarded with a billion factoids, which laid a tapestry of information. Buried in my unconscious mind were the minutiae of nephrology, which I could bring to bear when I had a sick kidney patient, or the physiology of a failing heart when my patient's lungs filled with fluid.

But the medical applications of computerized artificial intelligence are different. AI relies on a more precise kind of pattern recognition, such as detecting diabetes or heart disease from retinal analysis before symptoms occur, or identifying the kind of mutations in a glioma (one of the worst kinds of brain tumors) while surgery is still going on.


Recent studies also have examined pre-cancerous stem cells in the blood as well as other factors for AI to analyze that could help with diagnosis and treatment. 

Nonetheless, what AI will always lack is my clinical judgment, formed from years of experience, not to mention my empathy for my patients. Increasingly, AI threatens both.

Boundary between computer and doctor is easily blurred

You have only to look at a study recently published in Nature Biomedical Engineering, which examined intraoperative video monitoring used to instruct surgeons, to understand how easily the boundary between doctor and computer is blurred in a way that could intimidate or even undermine a surgeon's abilities.


And what about malpractice risk? If you are a radiologist, a dermatologist or a surgeon who decides to disagree with your AI feed, based on years of clinical judgment and experience, and you end up being proven wrong in retrospect, what is to prevent your patient from suing you and using the AI recommendations as evidence?

Visitors observe a remote-controlled medical robot at the Mobile World Congress in Barcelona, Spain, on Feb. 27, 2023.

This might well intimidate doctors from going against AI recommendations for treatment, even if their judgment tells them to do so.

Consider that AI, when applied to clinical medicine, can give you only general answers. It cannot know the nuances of your case or history.

ChatGPT is a popular new AI bot that answers users' questions. Patients already are using it for medical advice. 






As Dr. Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard, told the New England Journal of Medicine: "Now with these large language models like ChatGPT, you see patients going to [them] and asking questions that they've wanted to ask their doctors – because, as is commonly the case, you forget things when you go to the doctor, perhaps because you're stressed, and because, unfortunately, doctors don't have that much time."

I'm concerned about how patients will use advice from AI

Kohane is excited about this advancement, but I am deeply concerned about it. While he is right that my availability in the office for face time with my patients is limited, especially because of electronic health records documentation requirements, the solution is definitely not after-visit consultations with artificial intelligence, which could easily provide information that misleads rather than helps a patient.

In the same New England Journal of Medicine article, another AI expert, Dr. Maia Hightower, chief digital and technology officer at University of Chicago Medicine, pointed out the growing role of AI as an administrative tool in the opaque interface among doctors, patients and insurance companies.

“So in order to communicate with payers, with our insurance companies, we’ll often have bots or automation that transfers information from the health system to the insurance company and back," Hightower said. "In the case of insurance companies, we know that they often will use AI algorithms for prior authorization of procedures, whether or not to cover a particular medication or test. And in those cases, there isn’t much transparency on our side as a provider organization.”


As a practicing internist, I have a big problem with this and can envision a future where fights over insurance coverage escalate even beyond today's, and where personalized medicine is replaced by algorithms. What's to stop insurance companies from replacing me with a cheaper, more predictable AI robot that practices some of the science but none of the art of medicine?

At best, AI will be like the automatic pilot in a commercial jet. It can help fly the plane, but passengers still want a human pilot in the cockpit ready to take control when needed.

Dr. Marc Siegel, a member of USA TODAY's Board of Contributors, is a professor of medicine and medical director of Doctor Radio at New York University's Langone Health. His latest book is "COVID: the Politics of Fear and the Power of Science." Follow him on Twitter: @DrMarcSiegel




