The other day, I believe I was bitten on the arm by an insect, which left a fairly large red blotch with a couple of small puncture marks. I assumed it happened while working in our yard. I bought some over-the-counter hydrocortisone to apply to it. After a week, though, it started to look more like a rash.
I decided to visit a walk-in clinic. The physician assistant examined it and advised me to mix hydrocortisone cream with Benadryl cream and cover the area with plastic wrap twice a day. I was also given a prescription for antibiotics to take if the redness didn’t improve within a day.
My wife had doubts about that treatment plan. She immediately went to her laptop and typed the situation into ChatGPT. The AI advised against mixing the two creams and warned that wrapping the area with plastic could trap moisture and worsen the irritation. Instead, it recommended continuing with hydrocortisone cream and mentioned stronger topical corticosteroids like triamcinolone acetonide 0.1% and clobetasol propionate as possible prescription options.
The next morning, I called my dermatologist, not expecting to get in before our trip next week. But as luck would have it, there was a cancellation that morning.
Dr. Olson examined the area and prescribed triamcinolone acetonide—one of the medications ChatGPT had suggested. He also told me I didn’t need the antibiotics the walk-in clinic had prescribed. After about a week of using the new medication, the red blotch cleared up.
ChatGPT can be helpful for information, but it shouldn’t replace medical advice. That said, I do believe AI can help make doctors better, leading to improved care for their patients.
We’ve already seen how the internet has made healthcare more accessible and connected. Patients can quickly research symptoms, access test results, and communicate with doctors through telemedicine and online portals.
I remember visiting my primary care physician in the 1970s for stomach pains. He asked if I was taking any medication. I told him I was—something for a foot ailment. He pulled out a thick reference book, looked up the medication, and concluded it was causing my pain. I was thankful that drug was listed in his book, because back then there was no internet to offer quick access to that kind of information.
Can AI further the advancement of medical care? I believe it can—and already is. It can read scans with remarkable accuracy, assist in analyzing tissue samples, and flag abnormalities faster than traditional methods. It can even help reduce billing errors and prevent fraud.
AI isn’t a replacement for doctors, but it can help them make faster, better decisions. My recent experience showed that when used wisely, AI can be a valuable part of modern healthcare—and a helpful second opinion.
I am having a similar experience. I noticed three red bumps on my arm that I attributed to bug bites. After a week with the bumps starting to spread, I went to Urgent Care and received a prescription for Mupirocin ointment. This seemed to make the problem worse. I returned to Urgent Care and saw a different doctor. He prescribed Triamcinolone Acetonide and an oral antibiotic. Not sure if this is working yet, and still do not know what the cause is. Next stop will likely be a dermatologist.
Get into the habit of asking about your “doctor’s” credentials.
Physician Assistants (or nowadays "Associates," for PC reasons) and APRNs are NOT physicians. Don't expect them to be good at diagnosis, or treatment, or generally anything beyond their basic daily work. Their training and knowledge base are much inferior to physicians', regardless of what they say.
You want credentials like MD, DO, MBBS. That’s a real doctor, not a “provider” playing doctor.
I strongly disagree. I have received superb care over the years from PAs and nurse practitioners, and experienced specialty nurses often have more knowledge about day-to-day health issues than the physicians they work for. These people are not “playing doctor” and I find such a characterization unnecessarily derisory.
It’s fine to ask about credentials, but dismissing PAs and APRNs as “playing doctor” is misleading. They’re licensed, trained professionals who provide high-quality care—often with outcomes similar to MDs.
I love ChatGPT. But "trust, but verify." The verify is the difficult part of the equation. I use Google + ChatGPT and still question some results. The more the results agree, the better I feel. "50,000 flies can't be wrong."
Interesting. My limited experience with urgent care has varied from very good to questionable. It's not at the level of an ER or a regular doctor's visit.
I have found AI medical advice most useful to educate me so that I understand my situation better and can ask good questions of medical providers. Tread softly because doctors can easily get annoyed if they feel we are questioning their advice.
I’m also no fan of Urgent Care. In fact, after two misdiagnoses and one correct diagnosis accompanied by a prescription that could not solve my problem, I don’t think I’ll ever go to another UC.
AI has great potential. I live in Madison, WI, the home of Epic Systems. They're currently working on an application that will take into account all the personal information available in MyChart to arrive at personalized diagnoses. They expect it will be available within two years.
I am not "over-enthusiastic," but I am a firm believer that AI can help, by orders of magnitude, with educating oneself about the health problem at hand. Doctors have limited time to spend with patients and have to "summarize." AI helps you delve deeper, as deep as you wish, to understand. At the very least, you can go into the consultation prepared to "fill in the gaps."
And yes, in a few years the medical profession will change. Tremendously, I believe.
What might happen when people trust AI words more than the genuine human voice?
While I share the enthusiasm and optimism about AI's role in the medical field, I worry about how large language models (LLMs) may reshape public trust in healthcare. These models generate human-like language but lack the authentic empathy essential for healing. Increasingly, people treat AI suggestions like a seasoned physician's second opinion rather than what they are: a synthesis of unvetted web data, without peer review or expert consensus. As queries vary, LLMs often yield inconsistent or contradictory answers, reflecting statistical patterns rather than clinical judgment. Remember GIGO: garbage in, garbage out.
Words shape trust, and well-earned confidence is the heart of medicine. LLMs, especially those lacking the wisdom of medical practitioners, risk disrupting the fragile doctor-patient trust. After all, when you're vulnerable, it's comforting to be able to trust a doctor with a genuine heart, not just one that possesses artificial intelligence. Life and death may turn on words, yet the soul of living rests in trust between humans.
AI won’t replace human trust — it supports it. Used right, it helps doctors focus more on care, not less. For instance, AI can reduce repetitive and administrative tasks, giving doctors more time to spend with patients.
I truly hope it is true that "AI won't replace human trust — it supports it." But we are not there yet, and I haven't seen any sign that AI will lead us there. Currently, LLMs are built and trained to please users, reinforcing patients' expectations instead of sparking critical thinking or challenging cognitive biases.
Of course, the key is "used right." In medicine, drugs and medical devices are approved and used only in restricted cases, when proven effective and safe. By contrast, LLMs have been released without instructions for right use, much less safe use.
It is not unlike testing a powerful tool on the population to see if it harms anyone.
Natural Stupidity vs. Artificial Intelligence. Maybe we should have solved the former before moving on to the latter.
Wicked wit, Mark. Here’s my echo.
To be human is hard – to be or not to be.
We once shunned artificial ingredients, then embraced ‘better living through chemistry’, and we’re now flirting with artificial intelligence.
Psychologists Robyn Dawes and Paul Meehl demonstrated the superiority of statistical/actuarial decision-making over clinicians' decision-making in the 1980s, so the superiority of AI isn't surprising.
Clinical Versus Actuarial Judgment (1989)
Professionals are frequently consulted to diagnose and predict human behavior; optimal treatment and planning often hinge on the consultant's judgmental accuracy. The consultant may rely on one of two contrasting approaches to decision-making—the clinical and actuarial methods. Research comparing these two approaches shows the actuarial method to be superior. Factors underlying the greater accuracy of actuarial methods, sources of resistance to the scientific findings, and the benefits of increased reliance on actuarial approaches are discussed.
https://meehl.umn.edu/sites/meehl.umn.edu/files/files/138cstixdawesfaustmeehl.pdf
I’m confused. What “superiority of AI” are you referring to?
Superiority in making the correct judgment/prediction/diagnosis.
Thank you, but “superiority” would imply that AI actually did that—and did that consistently. Maybe someday the word will apply, but Dennis merely raised an interesting situation where AI bested one urgent care PA.
My original comment referred to the fact that statistical/actuarial models have been shown to outperform the judgment of individual experts for many years and to do this consistently. Dennis’ example is simply consistent with research which actually dates back to the 1950s. Below is an AI summary of Meehl’s work.
Paul Meehl advocated for the use of actuarial (or statistical) decision-making over the more intuitive clinical decision-making by human judgment, a concept he detailed in his 1954 work comparing clinical and statistical prediction. He demonstrated, and subsequent research confirmed, that statistical prediction rules (SPRs) are often more accurate in making predictions in fields like clinical diagnosis than unaided human intuition, which is prone to biases. His work highlighted the ethical imperative to use available statistical models to provide the best possible predictions for clients and to avoid the pitfalls of relying on overconfidence or flawed intuitive judgments.
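For anyone unfamiliar with the term, a statistical prediction rule can be remarkably simple. As a toy illustration (my own made-up numbers, not one of Meehl's actual rules), an SPR for predicting, say, relapse risk might be nothing more than a fixed weighted sum:

risk score = 0.5 × (test result) + 0.3 × (number of prior episodes) + 0.2 × (age bracket)

with the weights estimated once from past outcome data and then applied mechanically to every new case. Meehl's finding was that even crude rules like this, applied consistently, tend to match or beat case-by-case expert intuition.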
More recently, Daniel Kahneman, who won the Nobel Prize for his work on decision-making biases, detailed a selection of fields in which human judgment has proved inferior to algorithms in his book Thinking, Fast and Slow.
All well and good but large language models (LLMs) aren’t necessarily pulling from the relatively manicured gardens of information that statistical/actuarial models do. LLMs pull from sources like Reddit threads and other online content—which are often bastions of the “inferior human judgment” you referenced above. Hence my concern when you apply the term “superiority” to AI. Can AI be helpful? Yes. (Especially when trained on the appropriate content.) Is it superior? Well, query an LLM and check its referenced sources yourself.
Johns Hopkins now offers a 10-week AI in Healthcare program, taught by its faculty, that includes using AI skills for clinical decision-making and better patient outcomes.
That big reference book your doctor pulled out in the 1970s was likely the Physicians' Desk Reference, aka "the PDR," an essential guide to all medications. Not sure if they're still printing copies of it, but it's available here: https://www.pdr.net/. It's incredibly useful.
Glad you’re feeling better, Dennis!
David,
Thanks for the link. That probably was the reference book he was looking at.
Dennis, I’m glad that your wife had your back and that AI worked as it should.
I worry that AI could incorporate the dangerous quackery that exists on the internet, thereby providing us with dangerous information.
AI might be a good leveler for the variability of medical advice, a first line of reference and opinion as such, and a way to standardize best practices. I’ve experienced many instances of this divergence depending on the provider I consulted. I’m glad you’re responding well and hope you’re fighting fit for the trip.
Mark,
Thanks for your response and kind words.
After a visit to the ED due to my watch alerting me to possible A-Fib, I had numerous tests, and their reports appeared on the patient portal shortly thereafter. I used AI to explain the reports to me as if I were ten years old. It was wonderful.
I do not have A-Fib, but the ER doc kept me overnight for observation, with a follow-up cardiac catheterization a week later. And more tests after my cardiologist returned from vacation.
It was interesting to see how seriously the ED medical staff took the alert from my Apple Watch. They feel the alerts are about 90% accurate. The watch happened to be wrong in my case, but I definitely had elevated troponin levels, which generally indicate a heart attack. Tests confirmed that, although those levels are elevated, my 80-year-old heart is functioning fine.
My doctor is calling now, coincidentally. Her first words were, “You were right.” I asked her to hold on so I could put her on speaker. Since I seldom hear those words, I wanted my wife to listen to them.
The blood tests confirmed my theory as to why the tests were showing a false positive—one more thing to add to my iPhone medical records.
I use AI daily and love it. It is such a time saver.
Richard,
Thanks for sharing—glad to hear your heart’s doing well! Amazing how tech and AI played a role.
In the middle of the night, I had an incident of arrhythmia once, years ago, bad enough to wake me. Pulled on my Apple Watch, took an EKG as it was happening, and sent a PDF of it to my PCP. He looked at it and confirmed it was premature atrial contractions (PACs) which can happen in otherwise healthy hearts. I was under a lot of stress at the time, likely dehydrated, and at a higher altitude than usual.
Because Apple Watch is pretty limited in what it’s able to detect, I got one of these after my PAC experience. Kardia uses AI and a six-lead sensor to detect more types of cardiac events than the Watch can: https://kardia.com/products/kardiamobile6l
The ED doctor recommended Kardia, so I got one. Good recommendation.
It’s impressive technology and a great user experience. Wish they could do CT cardiac calcium scores with a gadget like this.
Five years ago, after receiving a very high calcium score, I visited my brother's cardiologist. Trying to be funny, I asked if I had the highest score she ever encountered. She said no, but said I was the highest score that day. It was only 10 in the morning.
I retired from medicine years ago and know little about AI. However, any tool can be useful, provided it's used appropriately.