Doctors are using AI, and why I am OK with that, to a degree…
CNN just posted an article pointing out that AI is being used by doctors. The title of the article is:
5 ways your doctor may be using AI chatbots — and why it matters
Specialized medical AI chatbots have quickly become a go-to source for many doctors and trainees. The CEO of one of these medical chatbot companies recently claimed that more than 100 million Americans were treated by a doctor who used their platform last year.
You know what? If a doctor is using AI to help diagnose an issue, I am OK with this. But if the doctor is using the AI as a replacement for their own diagnosis, I am against that. The challenge with AI is using it in a way that does not replace the doctor’s own agency.
One thing that must be addressed head-on: the doctor should tell you up front how they are using the AI and what data is being used. If you are like me, you want a transparent doctor. I’ve explained my conditions to doctors and I see it every time. Mid-explanation, the hand drops to pocket level, and, like Sen no Sen in martial arts, you can read the move before it lands: they are reaching for the phone to google what you just told them. They will excuse themselves in the moment to go google my condition.
My normal reaction is to call the doctor out directly. I tell them: if you are going to google this, do it in front of me, and don’t be embarrassed. I am the zebra in your career. I do not need the illusion of mastery just because you are a doc. I personally want you to accept that you are not the god of your position and that every instance as a doctor is a learning experience. I am not going to look down on a doctor who doesn’t know a rare genetic condition. I will look up to a doctor who uses the moment as a “classroom” moment, where the doctor becomes the learner and I am the master. Because as doctor/patient relationships go, this is the highest praise you can give a doctor: it shows the “master” is willing to learn.
“ChatGPT is like your crazy uncle,” said Dr. Ida Sim, a professor at the University of California, San Francisco, who studies how to use data and technology to improve health care.
Any AI can be turned into your crazy uncle if you feed it enough information. But if doctors collaborate with both the patient and the AI, I think a more diverse diagnosis can be reached, without the “symptom checker” fatigue that AIs can dump on any doctor, patient, or third party.
As for AIs, they are not great doctors. They are the median doctor, fine with anything that only slightly drifts from the center; they will be good for health upkeep or catching things before they happen. But on major issues, AIs are so far out in left field that they become irrelevant, turning into crazy uncle Bob (pun intended), who will start diagnosing diabetes before neuropathy in a chemical-exposure case if the context is done wrong.
The most common use case
Millions of research papers are published every year — and keeping up with them all is impossible.
“You’d need like 18 hours a day to stay up to date,” said Dr. Jared Dashevsky, a resident physician at the Icahn School of Medicine at Mount Sinai.
But doctors are expected to stay current on new research and guidelines to maintain their licenses. Many say they now use medical chatbots as a reference tool to help them stay updated.
Yes, there are millions of papers, but Dr. Jared Dashevsky doesn’t need to keep up with all of them; that would be insane. Millions of papers come out each year, and by the end of that year 400,000 of them have been revised or phased out by newer research. CNN and the doctor are wrong here: if you have a patient with a rare condition, AI can be used to contextualize the papers and distill a mean average of the output to give the doctor a clue. I am not expecting the doctor to read all of the papers, because they would rabbit-hole down so many roads that treatment and diagnostics would become a mess.
Save the rare-research papers for the specialist; your GP doesn’t need to know the ins and outs of a million papers that half the time fail in the real world, because lab controls do not equal real-world observation. The doctor who is slightly questioning a diagnosis and inputs some weird statistical drift will get a better answer out of AI and will know which specialist to hand the information to. The doctor can use the AI as a tool to make information available faster. The google-search method, by contrast, leads to bullshit claiming that vitamins and sunning your butthole are a cure.
But many doctors use unauthorized chatbots called shadow AIs, according to doctors CNN spoke with. Some of these shadow AIs also advertise HIPAA compliance features.
HIPAA is a federal law that requires certain organizations that maintain identifiable health information — such as hospitals and insurers — to protect it from being disclosed without patient consent.
Here’s where doctors can win: create a system that strips out all PII, a pre-processor that removes the identifying information and gets down to the numbers. Otherwise, the companies on the other end treat the data as resaleable material and ignore HIPAA. The healthcare entity should maintain an end-to-end chain of ownership that shows the patient where their data begins and ends. The second an LLM uses data that is protected by HIPAA, the LLM operator should be charged if they sell it to insurance companies, or to Walmart to figure out sales trends. I’m not saying AI should not be used; I’m saying accountability should be transparent.
We’ve been through this bullshit with the human genome, with everyone attempting to copyright the DNA of the human body. Now we are at the same precipice with the code of the human condition itself. We already have Named Entity Recognition (NER) systems to strip out names, and differential privacy to ensure that even if the AI “learns” from your data, it cannot be reverse-engineered to identify you. We need this institutionalized across the system.
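To make that concrete, here is a minimal sketch of what an NER-based scrubber could look like, in Python with spaCy’s small English model. Everything here is illustrative, my own assumptions rather than any certified de-identification standard: the label set, the placeholder format, and the sample note.

```python
# Minimal sketch of an NER-based PII scrubber that runs before any text
# reaches an LLM. Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
# The label set and placeholder format are illustrative choices, not a
# HIPAA-certified de-identification pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")

# Entity types treated as identifying. Real clinical de-identification
# would need a tuned, domain-specific model and a broader rule set.
REDACT_LABELS = {"PERSON", "ORG", "GPE", "FAC", "DATE"}

def strip_pii(text: str) -> str:
    """Replace identifying entities with placeholders like [PERSON]."""
    doc = nlp(text)
    redacted = text
    # Work backwards so earlier character offsets stay valid as we edit.
    for ent in reversed(doc.ents):
        if ent.label_ in REDACT_LABELS:
            redacted = (
                redacted[:ent.start_char]
                + f"[{ent.label_}]"
                + redacted[ent.end_char:]
            )
    return redacted

if __name__ == "__main__":
    note = "John Smith, seen at Mercy General Hospital on March 3, reports worsening neuropathy."
    print(strip_pii(note))
    # Expected output (model-dependent), roughly:
    # "[PERSON], seen at [ORG] on [DATE], reports worsening neuropathy."
```

Only the scrubbed text would ever leave the building; pair that with an audit log of every outbound call and you start to get the end-to-end chain of ownership I described above.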
Otherwise we are creating a dangerous system, a human credit score, where insurance will put a value on a child before it is born and revive methods that have been used in the past to make people uninsurable.
Give or take, Google Classroom makes Google a school admin, and if you look into Common Core, most people don’t realize it is effectively a job application to corporations across America. We do not need this to happen again. Common Core itself can feed LLMs and create HIPAA-adjacent issues: an IEP, typically the most powerful force in a child’s education, can be identified later in life by LLMs whose operators are the technical admins. And if the information from Common Core and the human condition ever meet, you have an identifiable path to unmasking the user. A child who had suicidal ideation in school over a temporary, low-stress issue could be weighted by such a system and see their insurance go through the roof as an adult.
Dr. Carolyn Kaufman — a resident physician at Stanford Medicine — and other doctors say that patient information is making its way into unauthorized chatbots, potentially opening the door to new ways of commodifying patient data.
“Data is money,” Kaufman said, noting that she has never uploaded HIPAA-protected information onto an unapproved chatbot. “If we’re just freely uploading those data into certain websites, then that’s obviously a risk for the individual patient and for the institution, as well.”
This statement is a perfect reflection of the point above. In the end, IEPs, Common Core assessments, and the like need to be air-gapped, and when a student leaves school, an agreement should be made with the student (or parents) on who can access the information.
AI chatbots have also stepped in to help doctors draft summaries of patient visits and long hospital stays. These notes are viewable on online patient portals and help doctors track a patient’s course and communicate plans across the care team.
I am not worried here. If anything, AI could be useful for suggesting additions to the file and giving the doctor a treatment idea. But no doctor should take this as gospel.
“From a med student perspective … you’re seeing a lot of things for the first time,” said Evan Patel, a fourth-year medical student at Rush University Medical College. “AI chatbots sort of help orient me to what possibilities it could be.”
Just no. First-year, fourth-year, or fortieth-year: you should never go in with AI. Ending with AI as a counterpoint or a co-researcher is OK, but the doctor should not cognitively offload the diagnosis to the AI. Because if that becomes standard, the cognitive process of diagnostics goes out the window and dies.
Med students, out of the gate, should be regulated so that AI is a non-negotiable no-go at any point before, during, or while patient contact is being made. If a student uses AI afterward for confirmation or as a research node, that can be agreed to, but using the AI as the attending physician is career suicide.
This preserves the agency of the physician, and Occam’s razor. The problem with AI is that humans come in 8.3 billion variations, while AI tends to use only the mean average. That leaves many doctors with zebras that AI will hallucinate about to high hell, and that is dangerous.
The final word here: AI is OK, but only when used correctly, not shoehorned into the medical spectrum.
Acknowledgements: Article from CNN.com, “5 ways your doctor may be using AI chatbots — and why it matters.”