Can’t wait for vibedoctoring!
Just allow AIs to write prescriptions. I’m sure it will be fine.
/s
One step closer to Idiocracy.
But if ChatGPT is able to pass med school it must mean AI is good?
Jokes on Future Doctor because we’re closing down the hospitals, cancelling the research grants, and taking all the sick people to jail for the crime of being unemployable.
We’re so cooked.
Take 1 moment to imagine the enshittification of medicine.
We’re gonna long for the days of sawbones and “you got ghosts in your blood. You should do cocaine about it”
Medicine has already been enshittified…
PE initiated takeovers of provider groups in the early 2010s.
Consolidation by PE and health insurance parasites is about complete.
Nurses and mid-level providers are being pressured. Doctors are next on the chopping block.
Service quality is down across the board and they haven’t even started squeezing in earnest.
You would get better service at 2005 McDonalds than at 2025 urgent care 🤡
What is “PE”?
Private equity. These are funds administered by fund managers for a fee, but the capital comes from high-net-worth individuals.
You have to be a “qualified investor” to even access it.
They are used for risky plays, but the general play is: buy up a market, consolidate it, start extracting, then exit for a fat profit. This generally results in low-quality services and a business that’s unsustainable after the extraction.
You prolly participated in markets they ruined. For example, the play for Sears was about looting the employees’ pension fund. That one was extra nasty.
I really like Harvard’s Nutrition Source for science-based nutrition info that’s easy to understand.
“I’m sorry, but you have Fistobulimia. You may want to put your affairs in order.”
“Oh my god, Doc! That’s terrible. I came here for a runny nose. Are you sure?”
“Pretty sure. It lists… runny nose, tightness of jaw, numb toes, and a pain in your Scallambra region as typical symptoms.”
“I don’t have any of those other things and what the heck is a Scallambra?”
“You don’t have those? Hmm, let me double-check.”
(types)
“Good news! It’s likely a type of Fornamic Pellarsy. Says 76.2387% recovery rate by leeching. System’s already sent a referral.”
Doctors right now Google or ask ChatGPT what they don’t know (at least the googling part is good, if it’s that rather than barking out false stuff).
I’m a doctor and I google all the time. There’s nothing inherently wrong with googling; the question is what source you’re using from there.
Therrre it is. Or, here it comes maybe.
“Why are you sitting on your sandwich?”
“ChatGPT said the healthiest way to eat was through the anus.”
I have had absolutely terrible luck with PCPs believing my symptoms or looking at them holistically - even just to get a referral to the right specialist. In this moment AI has been better at pointing me in the right direction than my previous PCPs. 🤷
Doctors doing this has been a historical issue, but it’s only becoming obvious that they behave like this thanks to the internet.
Many doctors view the patient with contempt
This is what people miss. If you’ve experienced a chronic condition that doctors don’t know what to do with, then trying alternatives seems pretty attractive.
I call it the “witchcraft part of my health journey.”
Ok but my counter argument is that if they pass their exam with GPT, shouldn’t they be allowed to practice medicine with GPT in hand?
Preferably using a model that’s been specifically trained to support physicians.
I’ve seen doctors that are outright hazards to patients; hopefully this would limit the amount of damage from the things they misremember…
I love having a doctor who offloaded so much of their knowledge onto a machine that they can’t operate without a cell phone in hand. It’s a good thing hospitals famously have impenetrable security, and have never had network outages. And fuck patient confidentiality, right? My medical issues are between me, my doctor, and Sam Altman
And the people Sam Altman sold your info to.
It is our info now, comrade.
That might be okay if what said GPT produces were reliable and reproducible, not to mention backed by valid reasoning. It’s just not there, far from it.
It’s not just far off. LLMs inherently make stuff up (aka hallucinate). There is no cure for that.
There are some (non-LLM, but neural network) tools that can be somewhat useful, but a real doctor needs to do the job anyway because all of them have various chances of being wrong.
Not only is there a cure, it’s already available: most models right now provide sources for their claims. Of course this requires of the user the gargantuan effort of clicking on a link, so most don’t and complain instead.
Why bother going to the doctor then? Just use WebMD.
“just replace developers with ai”
You bother going to the doctor because an expert using a tool is different than Karen using the same tool.