Doctors are using AI, and why I am OK with that… to a degree

CNN just posted an article pointing out how AI is being used by doctors. The title of the article is:

5 ways your doctor may be using AI chatbots — and why it matters

Specialized medical AI chatbots have quickly become a go-to source for many doctors and trainees. The CEO of one of these medical chatbot companies recently claimed that more than 100 million Americans were treated by a doctor who used their platform last year.

You know what? If a doctor is using AI to help diagnose an issue, I am OK with that. But if the doctor is using the AI as a replacement for his own diagnostics, I would be against it. The challenge with AI is using it in a way that does not replace the doctor's own agency.

One thing that should be majorly addressed here is that the doctor should tell you up front how he is using the AI and what data is being used. If you are like me, you want a transparent doctor. I've explained my conditions to doctors and I see it every time: mid-explanation, his or her hand drops to pocket level, and like Sen no Sen in martial arts, you know the move before it happens. He or she is reaching for a phone to google what you just said. They will excuse themselves in the moment to go google my condition.

My normal reaction is to call the doctor right out. I tell them: if you are going to google this, do it in front of me, and don't be embarrassed. I am the zebra of your career. I do not need the illusion of mastery just because you are a doc. I personally want you to accept that you are not the god of your position and that every instance as a doctor is a learning experience. I am not going to look down on a doctor who doesn't know a rare genetic condition. I will look up to a doctor who treats the moment as a "classroom" moment, where he becomes the learner and I am the master. Because as far as doctor/patient goes, this is the highest praise you can give a doctor: it shows him as the "master" who is willing to learn.

“ChatGPT is like your crazy uncle,” said Dr. Ida Sim, a professor at the University of California, San Francisco, who studies how to use data and technology to improve health care.

Any AI can be turned into your crazy uncle if you feed it enough information. But if doctors collaborate with the patient and the AI, I think a more diverse diagnosis would emerge, without the "symptom checker" fatigue that AIs can dump on any doctor, patient, or third party.

As for AIs, they are not great doctors. They are the median doctor: good at anything that only slightly drifts from the center. They will be fine for health upkeep or for catching stuff before it happens. But on major issues, the AIs are so far out in left field that they become irrelevant: crazy uncle (pun intended) Bob, who will start diagnosing diabetes before neuropathy in a chemical exposure case if the context is done wrong.

The most common use case

Millions of research papers are published every year — and keeping up with them all is impossible.

“You’d need like 18 hours a day to stay up to date,” said Dr. Jared Dashevsky, a resident physician at the Icahn School of Medicine at Mount Sinai.

But doctors are expected to stay current on new research and guidelines to maintain their licenses. Many say they now use medical chatbots as a reference tool to help them stay updated.

Yes, there are millions of papers, but Dr. Jared Dashevsky doesn't need to keep up with all of them; that would be insane. Millions of papers come out a year, and by the end of that year, 400,000 of them have been revised or phased out by new research. CNN and the doctor are wrong here: if you have a patient with a rare condition, AI can be used to contextualize the papers and average out the findings to give the doctor a clue. I am not expecting the doctor to read all of the papers, because he would rabbit-hole down so many roads that treatment and diagnostics would be a mess.

Save the papers on rare research for the specialist. Your GP doesn't need to know the ins and outs of a million papers that half the time fail in the real world, because lab controls do not equal real-world observation. The doctor who is slightly questioning his diagnosis and inputs some weird statistical drift will get a better answer out of AI and know which specialist to hand the information to. The doctor can use the AI as a tool to make information available to him faster. If he tries the Google search method, it leads to bullshit claiming that vitamins and sunning your butthole are a cure.

But many doctors use unauthorized chatbots called shadow AIs, according to doctors CNN spoke with. Some of these shadow AIs also advertise HIPAA compliance features.

HIPAA is a federal law that requires certain organizations that maintain identifiable health information — such as hospitals and insurers — to protect it from being disclosed without patient consent.

Here's where doctors can win: create a system that strips out all PII before anything reaches the processor, so only the numbers get through. Otherwise, the companies on the other end treat the data as resalable material and ignore HIPAA. The healthcare entity should have an end-to-end chain of ownership to show the patient where their data begins and ends. The second an LLM uses data protected by HIPAA, the LLM operator should be charged, whether they sell it to insurance companies or to Walmart to figure out sales trends. I'm not saying AI should not be used; I'm saying accountability should be transparent.

We've been through this bullshit with the human genome, with everyone attempting to copyright the DNA of the human body. Now we are at the precipice with the code of the human condition itself. We have Named Entity Recognition (NER) systems to strip names, and differential privacy to ensure that even if the AI "learns" from your data, it cannot be reverse-engineered to identify you. We need this institutionalized across the system.
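To make the stripping idea concrete, here is a deliberately minimal sketch of a pre-processing scrubber that runs before a note ever leaves the building. It is a toy: a real de-identification pipeline would use a trained clinical NER model plus differential privacy, not a handful of regexes. All the patterns and the sample note below are my own invented illustrations, not any real system.

```python
import re

# Toy pre-processor: scrub obvious identifiers before a note is sent
# to any outside LLM. Regexes alone are NOT HIPAA-grade; a real
# pipeline layers a trained NER model on top of rules like these.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(note: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label.upper()}]", note)
    return note

print(scrub("Pt John Doe, DOB 4/12/1961, SSN 123-45-6789, cell 555-867-5309"))
# Note that the name "John Doe" survives untouched -- which is exactly
# why you need NER, not just pattern matching.
```

The point of the failing case at the end is the whole argument: rules catch the easy stuff, names and free-text identifiers need the NER layer, and nothing identifiable should reach the model either way.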

Otherwise we are creating a dangerous system where, like the human credit score, insurance will put a value on a child before it is born, recreating methods that have been used in the past to make people uninsurable.

Give or take, Google Classroom makes Google a school admin, and if you look into Common Core, most people don't realize it's a job application to corporations across America. We do not need this to happen again. Common Core itself can feed LLMs and create HIPAA-style issues, since an IEP, typically the most powerful force in a student's education, can be identified later in life by an LLM whose operators are the technical admins. And if the Common Core data and the health data ever meet, you have an identifiable path to unmasking the person. A child who had suicidal ideations in school over a temporary, low-stakes stress could be weighted later in life, causing that person's insurance to go through the roof.

Dr. Carolyn Kaufman — a resident physician at Stanford Medicine — and other doctors say that patient information is making its way into unauthorized chatbots, potentially opening the door to new ways of commodifying patient data.

“Data is money,” Kaufman said, noting that she has never uploaded HIPAA-protected information onto an unapproved chatbot. “If we’re just freely uploading those data into certain websites, then that’s obviously a risk for the individual patient and for the institution, as well.”

This statement is a perfect reflection of the above. In the end, IEPs, Common Core assessments, and more need to be air-gapped, and when you leave school, an agreement should be made with the student (or parents) on who can access the information.

AI chatbots have also stepped in to help doctors draft summaries of patient visits and long hospital stays. These notes are viewable on online patient portals and help doctors track a patient’s course and communicate plans across the care team.

I am not worried here. If anything, AI could be useful for suggesting additions to the file and giving the doctor a treatment idea. But no doctor should take this as gospel.

“From a med student perspective … you’re seeing a lot of things for the first time,” said Evan Patel, a fourth-year medical student at Rush University Medical College. “AI chatbots sort of help orient me to what possibilities it could be.”

Just no. First-year, fourth-year, or fortieth: you should never go in with AI. Ending with AI as a counterpoint or a co-researcher is OK, but the doctor should not cognitively offload the diagnosis to the AI. Because if that becomes standard, the cognitive process of diagnostics goes out the window and dies.

Med students out of the gate should be under a rule that AI is a non-negotiable no at any point before, during, or any time patient contact is being made. If a student uses AI afterward for confirmation or as a research node, that can be agreed to, but using the AI as the attending physician is career suicide.

This preserves the agency of the physician, and Occam's razor. The problem with AI is that humans come in 8.3 billion variations, and AI tends to use only the mean average. That leaves many doctors with zebras that the AI will hallucinate about to high hell, and that is dangerous.

The final word here: AI is OK, but only when used correctly, not shoehorned into the medical spectrum.

Acknowledgements: article from CNN.com, "5 ways your doctor may be using AI chatbots — and why it matters"

Microsoft officially says you don't need extra antivirus on Windows 11. Sure… and houses don't need doors anymore.

In a blog post, Microsoft says Defender is enough for most Windows 11 users. No extra antivirus needed if you keep defaults enabled and update regularly.

Microsoft might be right on paper, but they are so wrong here. The average computer problem exists between the computer and the chair. Even the most security-minded individual can find themselves wiping their entire PC over one misclick.

Give or take, even a secure version of Windows assumes you never install third-party software, never update your drivers, and never put the computer on the internet, with the machine locked in a SCIF. Otherwise, the second you are on the internet, Microsoft is leaking a shit-ton of telemetry about your computer to the world.

If you never take a single risk with the computer, you should be "fine." But then you go to a site to download common, safe software and you get to play roulette: four different download buttons, two chaining off to different sites, and one that gives your computer an STD.

From the second you enter your information into Windows 11, Windows (Microsoft) is already rummaging through your cupboards taking inventory. If you have a stash of family photos, Microsoft has already moved them to OneDrive without asking much. Well, they did ask, but they buried it in a batch accept at Microsoft login, and by the time you realize you've violated some privacy policy, or moved a picture of someone who did not want to be on the net, it's already up there with shared use granted to Microsoft. Given how much MS leaks out of the box, anyone trying to hide from domestic abusers or worse has a major chance of being left out in the open.

The other thing is timing: if a 0-day exploit drops on a Wednesday, you may be waiting until the next Patch Tuesday for MS to update Windows Defender's definitions. With mainstream AV programs, it can be as little as hours until the AV updates to stop the 0-day.

Also, bad websites are telling people to paste commands into the "search" box, which in reality just fucks you over by installing spyware, and you might as well kiss your bank account goodbye. NEVER run anything a website asks you to type into a search bar or Run dialog.

But to make Microsoft's assertion accurate: turn on the PC, never log it in, air-gap the updates to it, never connect it to the internet, and destroy the Bluetooth and Wi-Fi cards. Otherwise, the second that PC thinks it's on a network, it is shouting into the ether: "I'M HEEEEEEEEEEEERE."

Not to mention, Microsoft undoes any safety margins you have created every time it updates. Deleting OneDrive from the system is like trying to get rid of lice with butter; it just doesn't work. It switches you back to Edge and tries to re-import everything from other browsers. Microsoft's security is part of the whole. If you must run it, get Malwarebytes, and on a secondary level, do not be afraid to run another antivirus to be doubly sure. If your PC is acting strange, don't just assume everything is OK. Google occasionally fails at spot-checking its own sponsored links, and that vector can lead to the fastest accidental system infection.

The worst part of the whole thing is that Microsoft claims to have made everything easier. What it has actually made is a system where, as long as you invite the vampire in, by command line, by installed software, or by Microsoft itself, it is free to do what it wants, and Microsoft Defender won't look up from the magazine it's reading.

A single misclicked Chrome or Edge extension can lift your entire saved-password folder without so much as a thank-you.

In the end, Microsoft is saying you can have the safest PC in the world. Just don't use it: move it to an island with no internet and no power, and you're fine.

AI and how much it increases your costs.

This is the tracked price of an SSD from Amazon, and it clearly shows how much AI is driving prices up for every human in existence. Replacing the SSD in a cheap computer now costs more than the cheap laptop or desktop you bought in 2024. It eradicates any price advantage in buying consoles and more.

This is the only smart coffee pot I could stand behind…

The Trojan Room Coffee Pot was the researchers' answer to the sin of finding the coffeepot empty.
The researchers constantly went for a caffeine infusion only to face the frustration of finding the lifeblood of the research department depleted. So in 1991, Quentin Stafford-Fraser and Paul Jardetzky of the University of Cambridge pointed a Philips CCD camera at the pot. They connected it to an Acorn Archimedes computer to solve the lack-of-coffee problem, writing the base code, a protocol they aptly named XCoffee, to capture the pot's level in greyscale.
Originally it ran only on the local network, until November 1993, when Daniel Gordon and Martyn Johnson showed the coffee pot to the world over the web.

For more information, go to: Trojan Room coffee pot, from Wikipedia

Further: The Coffeemasters themselves….
Quentin Stafford-Fraser
Martyn Johnson
The Trojan Room Coffee Pot


The ANTI-Anti-AI crowd: when claims are hallucinated.

Matt Novak starts his article with: "The AI Doomers Who Are Playing With Fire: For years, the dangerous rhetoric has been out of control. And things are turning violent."

Well, now that is an opener. Novak describes how ChatGPT burst onto the scene and lays out how AI companies went to Congress and told them that the technology posed imminent risks to society, that AI had the power to destroy the entire world. These AI companies went to Congress wanting to be regulated now rather than later, because shaping regulation now is easier than getting regulated later. Metaphorically, it's easier to destroy a door than to put one up.

Now, supposedly, AI execs are telling everyone to calm down over AI.

Chris Lehane, OpenAI’s global policy chief, sat down for an interview with the San Francisco Standard this week in the wake of at least one attack on CEO Sam Altman’s home.

What is the grammatical formation of this sentence? "In the wake of at least one attack": what kind of word soup is that?

Moreno-Gama was carrying an anti-AI “document,” according to police, suggesting his motivations were related to concerns over artificial intelligence and existential threats. The Wall Street Journal reports that he had called for “Luigi’ing some tech CEOs,” a reference to Luigi Mangione, who’s been charged with murder for allegedly killing UnitedHealthcare’s CEO.

While Moreno-Gama's firebomb attack was deplorable, it makes me wonder why he did it, and what this "document" was; the way "document" is framed in that sentence is kind of weird, too. Also, for notation here: the firebomb struck a metal gate, not Altman's house.

There was a second incident involving a firearm discharged at Altman's house. This is an incredibly long lead-in to get to the meat of the article.

The so-called AI doomers simply aren’t being sold properly on the benefits of this new tech, Lehane argues. “Our job at OpenAI and in the AI space — and we need to do a much better job — is to explain to people why … this is going to be really good for them, for their families and for society writ large,” Lehane told the Standard.

The so-called doomers are seeing AI's drawbacks in real time. One AI company has been sued over a child's life ending with the assistance of AI. Neighborhoods are getting brownouts and brown water because of AI. Wendy's drive-up kiosks barely function. Children are offloading critical thinking to a machine and will never be able to think for themselves in a power outage.

My personal fear here is that the execs are trying to build a formula that turns anyone who criticizes AI into an "extremist." Say AI hurts X, and they will respond with "you threatened my child (the AI)." Since the two attacks happened, they will use them to frame anyone who criticizes AI as a "possible" extremist. This is not the case. If I threaten a person, they call the police. If you are threatened by an AI, who do you call? It's not Ghostbusters.

The problem right now is that you can't call the police on an AI if it tells you to do something that would injure you. If an AI is hijacked and tells you to do something dangerous, the companies will hide behind liability releases. If an AI tells you how to fix something and you die, there is no one to sue. If Bob dies because the AI did not tell him to turn off the power while fixing an outlet, the AI CEOs will point to the T&Cs and say "it's not our fault." You have machines programmed with the world's knowledge and not a fucking clue how to use it. The AI only uses language prediction, as in "the cat in the ___" (answer: "hat"). Paradoxically, the world at large changes on a whim. Think of the 1930s meaning of "I'm gay" versus the 2026 meaning.
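The "cat in the ___" point above is literally how the prediction works, and it fits in a few lines. This is a toy sketch, not how a real LLM is built (real models use neural networks over tokens, not word counts), and the little training corpus is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: no understanding, just "which word most often
# followed this one in the text I was shown." The corpus is made up.
corpus = [
    "the cat in the hat",
    "the cat in the hat comes back",
    "the cat sat on the mat",
]

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        following[a][b] += 1

def predict(word: str) -> str:
    """Return the single most common word seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat": the statistically likely next word
```

And that is the whole trick: if the world's usage of a phrase shifts, the counts (or weights) are stale until retrained, which is why today's AI is really last week's AI.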

The thing is, AI has its uses; what it is being used for is not the correct use. They want AI as an institutional replacement for the human soul and agency. They want you to pay $10 to $29 a month for the critical thinking that used to be taught in schools. Are there going to be attacks? Yes. But can you use the framing to lump critics into one single descriptor? Absofuckinglutely NOT. By this logic, an AI maker could jail or sue their own employees for having a moral objection to putting something into the machine that would cause damage.

Mr. Novak's article is a huge miss here. It frames anyone who criticizes AI as wrong. We are not wrong, and we are not just doomsaying AI. We all know the potential of AI, but in its current hands, AI is SLOP. When it is being used by world leaders to make planes fly around and poop on people, is this the world you want to live in?

By choosing to lump every AI critic into the same room, you are missing the point. We see the things it can help with. We also see the massive misuse of it, and this is what we are trying to point out.

But making every person who criticizes AI the enemy is not remotely good for the corporation or for the human, because it will be weaponized. If every AI maker told their machines, "list every time the USER said 'you suck,'" and then reframed that as "the USER is threatening me and should be arrested," you get a binary approach like the one that killed millions in the 1930s and 1940s. So tell me again why a machine that can't think, can only predict, and is subject to massive shifts in human agency and culture should be the judge. The biggest problem is that today's AI is actually last week's AI.

The real danger here is that AI makers are trying to marginalize free speech in favor of AI speech. AI is being promoted as magic right now, as if it can do anything! The reality is AI can only do what it's been told, no more, no less. AI is like a Sith: it deals in absolutes, and any variance and it's lost. The AI makers and others are also pushing the framing that (dislikes AI) + (human agency) = violence. That is a complete violation of the basic human drive. The guy who dislikes your AI is going to be the guy who fixes it. The yes-man to your AI is going to agentically turn it into an extremist.

In the end, AI needs to respect the human element and the diplomatic nature of humans, not the garbage society creates, because in three to five years we are going to have AIs that, instead of doing work, spam "6 7" and Fortnite dance, thanks to the predictive nature of AI. Instead of branding people AI doomers, invite them to the table and listen to them; they are going to be the ones pulling AI back from the brink.

There is a need for diplomacy now rather than later. AI has the ability to "change the world," but it also needs to be a force for good, not locked behind a subscription model. If cavemen had sold fire as a subscription, humanity would have died out before it started. The universal coefficient of greed is killing humanity. Used badly, AI can destroy human agency, and the next great disaster for humans would be the next power outage.

At the end of this, AI is always going to be the SUM of HUMANITY, and if we all degrade into SLOP producers, AI becomes the SLOP MAKER. Pitching AI right now as the next replacement is a sin that many see as cost saving, but they do not think past the prompt. If you speak in broken language, your Wendy's order in tokens likely just ate 25% of the margin on the order. The AI removed the human intuition: the Wendy's worker who saw a tour bus pull up and threw on extra fries as 88 people came through the door; the Wendy's person who now has to play AI interpreter because the AI thought it heard an order of a burger that Tries.

If we move forward with this rollout of AI, ethical diplomacy becomes machine subservience, human foresight becomes an obstacle, and human critical thinking gets offloaded to the machine and possibly lost forever.

I think this article poorly frames why people are critical of AI, by presenting the few extremists as the majority. That is the diplomatic dishonesty they are focusing on. There is a real chance here for AI companies to align with people who have foresight, instead of shipping AI underwear or AI soda just because you can slap "AI" on it and think you are going to make billions. Right now, companies are pushing products out the door with the word "AI" slapped on them, and what actually changed? Nothing. They just added an element that phones home, and a subscription model.

Humanity is being pushed back to the age when you could not take a shit without spending a quarter. The AI companies have watched their own models over the last year; they know the models are degrading in quality because it's a feedback loop.

The thing is, without human agency in the loop, the AI will degrade, and the companies know this. It is unequivocally the 1849 gold rush: they are selling the shovels, and they already know the end point.

Quotes were contextualized from: The AI Doomers Who Are Playing With Fire , By Matt Novak @ Gizmodo.com

Judge Beats Off Prosecutor in Gamble Inflatable Member Case Against Local PD.

Fairhope Municipal Judge Haymes Snedeker acquitted Renea Gamble Wednesday of all remaining misdemeanor charges stemming from her decision to wear an inflatable 7-foot penis costume at an anti-Trump "No Kings" protest in October 2025.

From the website Courthouse News Service: a big dick was acquitted of doing hard time. Renea Gamble faced a judge and was threatened with hard time before the judge dropped all 7 charges against Ms. Gamble for being a dick.

Gamble walked out of the courtroom after three hours of testimony cleared of any wrongdoing, but her attorney said her arrest was traumatizing and she may consider legal recourse.

She is not dicking around here, but in the future, if you are treated like a hard case and stroked violently by the law, she should act and get legal recourse. Gamble is a retired sign language interpreter, so she is used to hard things and to giving tactile messages with her hands.

Body camera footage that went viral showed Fairhope police zeroing in on Gamble's inflated member. The arresting officer complained that the erect person was causing a scene in a family-friendly town. When Gamble refused to tuck away her member, things got physical: officers took the 62-year-old huge member to the ground and cuffed her, struggling comically to detumesce her and stuff the oversized erection into a squad car.

City prosecutor Marcus McDowell said it wasn't a free speech case, but argued that "no one has a constitutional right to be dicking around as an engorged member." Paraphrased, for the "hard" statement it is.

In the end, Municipal Judge Haymes Snedeker wasn't convinced to take the gamble of giving hard time to Ms. Gamble.

Mary Kay Smith was outside, speaking on how roughly handled erect members should not be arrested for free speech.

We live in wild times.. Back to my coffee after the spit take of this.

We are seeing the .com Bust and the Gold Rush playing out in real time.

Some mornings I take a peek at the news while I wait for my coffee, and one article caught my attention.

Struggling shoe retailer Allbirds makes bizarre pivot to AI, adds $127 million in value

Shoes to… AI? Did I read that right? Yes, I did. The first thing that came to mind was South Park, then the 1990s.

A business pivots so hard that even a contortionist says damn!

Give or take, right now AI is "hot," but it's as hot as running a building full of graphics cards with a failing cooling system. The thing is, Allbirds reads as a master class in failure. They make shoes; the shoes become popular. Allbirds leverages itself as the Steve Jobs of shoes and doesn't read the room. Two years out, they are sputtering and failing and have not become the next Reebok or Nike. So they come up with a "brilliant" plan: sell off all of their assets and jump on the AI train. Allbirds then renames itself NEWBIRD AI. Sure, they likely wanted the image of a phoenix reborn from the ashes of the old company, but they missed out: Allbirds to AIBIRD (which would have been perfect, like the mimicry of a parrot). They could have adjusted their merch with a Sharpie at that point.

But here's the thing: a shoe maker has a soft fail available. They could have sold off every asset that was shippable, cleared out the shoes, leveraged the lower markets, found out whether the non-tech-bro market liked the shoes, and pivoted there. Less risk than buying the AI lottery ticket. The fact that they exited the shoe market at $39 million means they were not cratered.

They have zero experience in the AI market. They have nothing other than cash to invest. Then they spoke the words "we are investing in AI," and the venture vultures came; their market cap climbed to $148 million. The funny thing is, this is all speculative, like the .com bubble; everyone is jumping in. NEWBIRD AI doesn't even have land yet. They need to get in line for AI infrastructure and find land that will not overwhelm local utilities. So right now, they are selling their vision on a fever dream that, by the time it reaches realization, may be gone.

If Allbirds were a tech startup with some revolutionary code to help AI, this would be a viable path. But in the end, this is a shoe company buying a shovel from Nvidia and trying to be the next big thing, when the shovel is a kid's play shovel and other companies are running construction machines in the AI market for very small to negative returns.

One last thing: "Stockholders are also asked to approve a charter amendment to remove the company's environmental conservation public benefit and to authorize a plan of dissolution the Board could implement within 12 months." Talk about the takeaway: preach living in good nature and ecology, then set your customer base on fire, piss on the fire, and walk away from your mission statement.

For Allbirds, they could have pared down the back stock, sent shoes to influencers, built a brand on TikTok with a close-enough reproduction, kept their ecological-sounding mission statement, built an entire sector off of Sephora girls, and won big off of "vibes."

The worst part of all of this is that it's an AI-adjacent story with no actual technology to talk about, honestly, just fucktons of money thrown at nothing. NEWBIRD AI might be the future's New Turd AI.

Update: you know it's bad when Voidzilla enters the conversation…

Rufus is failing upward.

Rufus is burning through money like a kid with a sparkler in a gunpowder factory.

Amazon CEO Jassy defends $200 billion AI spend: “We’re not going to be conservative”

With a $15 billion AI run rate, they might recoup their costs in 13.3 years. But let's be real: they are not counting the upward costs. They don't account for the land cost; they don't account for hardware failure. They are land-rushing and hoping they hit it big. James Wilson Marshall is likely spinning in his grave.
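The 13.3-year figure above is just spend divided by run rate, and it's worth seeing how fast it moves when you add the ignored costs. The $200B and $15B come from the article; the 20% hardware-failure and 10% land/power multipliers below are made-up assumptions purely to illustrate the point.

```python
# Back-of-the-napkin payback math for the paragraph above.
SPEND = 200e9      # total AI expenditure (figure from the article)
RUN_RATE = 15e9    # annual AI revenue run rate (figure from the article)

naive_payback = SPEND / RUN_RATE
print(f"naive payback: {naive_payback:.1f} years")   # 13.3 years

# Pile on the costs the paragraph says are ignored. These multipliers
# are hypothetical: +20% for hardware refresh/failure, +10% for land
# and power buildout.
adjusted_spend = SPEND * (1 + 0.20 + 0.10)
print(f"adjusted payback: {adjusted_spend / RUN_RATE:.1f} years")  # 17.3 years
```

Even with modest invented add-ons, the horizon stretches past 17 years, which is the "cleanroom spending" complaint in a nutshell.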

This is a bad idea. Most people do not realize that the Slopocalypse is happening. AI companies are heading for the exits because there is smoke: between Claude releasing its source code and Sora melting overnight.

Rufus is going to follow Sora and Claude out of that $200 billion, with the "lion's share going toward AI development." Horse shit. Anyone who has used Rufus knows it is a very basic AI that favors pushing people toward the most expensive products without going through your sales history.

In all honesty, Rufus will be the litmus test for pushing bad AI. It is not a centralized platform like Grok, nor is it functional outside of being a bad salesman. Rufus does not do anything other than say "I see you are looking at socks, how about these socks" as it shows you a $50 pair. In this economy, we can't afford $50 socks. "Independent audits from January 2026 show Rufus only matches the 'objectively best' product 32% of the time." When your basic function is to sell product, how are you burning $200 billion on AI? Fuck that; call Wendy's and get their bad AI, it's all the same. Rufus is a digital vampire. It doesn't do much, but where do you think it is getting its training data? At this point, let's all buy condoms to screw with the LLM training data.

This is the 1849 gold rush for the 2026 era. As more people jump in to play the AI game, the corpses of other AI miners pile up fast; for every large AI that dies, likely 100 smaller ones die too. The Slopocalypse is here, and while you could dig with your hands in the gold rush, the 2026 game is pay-to-play. You need storage that is skyrocketing in cost, land, and obscene amounts of power to run an AI farm, while destroying everything around you. The AI farms (mines) that fail are quickly bought up and run again until they fail too. What is likely happening is that the failing AI farms are being bought up by bigger AIs, and the problem is THE ROI IS EXTREMELY NEGATIVE.

The idea of AI is nice on paper, but when culture is making AI SLOP videos of farts, our trajectory is heading toward Idiocracy. The biggest question: when the slop is done, is there any room for advancement, or has the system inwardly corrupted itself into uselessness? The part failing in this industry is the scaling issue. If we need an answer to something and the answer is a hallucination, the fix is not to throw more processing power at it; it should come down to a code audit.

Because models are training on themselves, the slop has become the AI's TikTok doomscroll, and with the data of a shower-thought fever dream being fed in, the model now treats it as real and therefore as a training point. Google has put in its own immune system via SynthID.

Well, given that we are watching the Slopocalypse in real time, and that people's Slopocalypse coffee pots might cease to function, in the end it is we the consumers who are stuck with the bill, because at the end of the day, the bankrupt company walks away and the land rot infects the system.

I'm off to have a coffee from my smarter dumb coffee pot, with buttons and switches and no internet connection. Hopefully you do the same; it's cheaper!

Final note: when your run-rate numbers are $15B against a $200B expenditure, this is cleanroom spending; it does not account for failure and other associated costs.

How's your insider trading?

I typically don't post political commentary, but I feel like in 2026, every time the world is at its edge, we find out after a pivot that some person put a bet on the market that made them millions. For more information, please see Coffeezilla on YouTube.

The Fear Index shows a weird irony. And for the love of coffee, no fucking nukes please, unless you are microwaving coffee (blasphemy).