It would seem my desire for a good non-AI coffeepot has been reinforced yet again: the IoT coffeepot has been caught spying.
A cluster of seemingly unrelated incidents ranging from exposed enterprise AI tools to a breached coffee machine has revealed the daunting reality that modern cyber risk is no longer confined to servers, endpoints or even employees. It now increasingly spans ecosystems, vendors and even the delivery mechanisms for the very tools designed to drive organizational productivity.
The problem with AI is that it is veiled in a ton of secrecy, and that is no good for anyone, because once the bad actors start figuring it out, we are in deep trouble. The convenience of the AI coffeepot might be nice, but it comes with a ton of drawbacks most people don't account for.
A digital forensics investigator, identified only as TR, was called in when a client suspected a rival had infiltrated their systems after a data breach. Instead of finding malicious software, TR discovered that an internet-enabled espresso machine, equipped with a default password, an outdated operating system, and no firewall, was the source of the leak. Threat actors exploited this device, which was connected to the client’s secure network, to exfiltrate sensitive data. The machine was sending packets internationally every time someone brewed a cup, bypassing all the client’s advanced security measures.
This underlines the fact that IoT machines need just as much vetting as the computers on a network. If your IT guy doesn't find every IoT device on the network, he is creating a leak, and the corporate motto of "just buy the cheapest thing" is normally a recipe for disaster.
Firstly, keep all of your IoT shit on its own network. If you have a store named BOBcorp, put all IoT devices on an isolated network like BIoT29384 that never touches the main network. Second, you want a network monitor. IoT devices are chatty by nature, but if your network traffic jumps, sniff it out and make sure it's sanitized. IoT companies should publish a master list of every endpoint their devices connect to; that way, if your AI coffeepot is connecting to Nigeria, you know something is wrong. Either that, or give Google, Apple, Amazon, and the other hub vendors a way to route every device through a master server on the hub of your choice. If all of the devices go through the hub, IT staff have an easy way to poke at what the IoT stuff is doing.
With a master hub list of devices, if a device starts misbehaving or an attack vector is found, the hub can deauth the device. That stops companies from just vomiting out "smart" everything devices: if they lose their auth, they will act fast to restore trust in their devices.
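The "master list of endpoints" idea above can be sketched in a few lines: compare observed egress flows against a vendor-published allowlist per device. Everything here (device names, hostnames, the allowlist itself) is hypothetical, for illustration only.

```python
# Hypothetical sketch: flag IoT egress flows that are not on the vendor's
# published allowlist of expected destinations. All names are made up.

# Vendor-supplied "master list" of hosts each device is expected to reach
ALLOWLIST = {
    "espresso-01": {"updates.examplebrew.com", "telemetry.examplebrew.com"},
    "thermostat-02": {"cloud.examplehvac.com"},
}

def flag_unexpected_flows(flows):
    """Return (device, destination) pairs not covered by the allowlist."""
    suspicious = []
    for device, destination in flows:
        expected = ALLOWLIST.get(device, set())  # unknown devices get an empty set
        if destination not in expected:
            suspicious.append((device, destination))
    return suspicious

# Example: the coffee pot phoning somewhere it has no business being
flows = [
    ("espresso-01", "updates.examplebrew.com"),
    ("espresso-01", "unknown-host.example.ng"),
]
print(flag_unexpected_flows(flows))  # [('espresso-01', 'unknown-host.example.ng')]
```

In practice the flow records would come from the router or a tap rather than a hard-coded list, but the comparison logic is this simple, which is exactly why vendors publishing such a list would be cheap for them and valuable for IT.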
Another thought on security layers: most IoT devices ship with BLE enabled by default. After pairing, there should be a DIP switch to turn off the BLE until it's needed for re-pairing to the network. BLE is notorious for sniffing what is around it.
FirstNet Trusted™ could really do something to come out on top here, since they are part of AT&T and have the network know-how. The corporate laziness of "just buy the cheapest thing" is what leads to the problem in the first place.
Even past that, cellphones on corporate networks need to be on their own network. Workers who place IoT devices or cellphones on the larger corp network need to be taken off it and trained in network safety. That would create a top-down security culture that would even extend to the workers' homes afterward, rather than finding out too late that their beloved AI coffeepot has been stealing secrets for months.
In the end, you are better off with a coffeepot with a switch. And if you need it smart, add a smart plug to it; then you can control it from afar without so much bloatware that you never know what it is connecting to.
Today will be interesting: we will learn how much the large corps are going to play the shell game with earnings.
Amazon (AMZN) will report its first quarter earnings alongside rivals Google (GOOG, GOOGL), Meta (META), and Microsoft (MSFT) on Wednesday, with investors looking for more signs that the company’s massive artificial intelligence spending is paying off.
My personal feeling? No. However, that does not stop them from playing the shell game of hiding costs and contracts that have not been put into action yet. These companies account for $650 billion of capex spending.
The problem here is that the market is betting the farm on a large loss leader. Big Tech knows this, and they are trying to engineer their way through the problem by throwing more money and more power at it. AI as it stands right now is about as efficient as a 16-cylinder engine with only one spark plug working. With AI being a subsidized land grab at the moment, the scale is not fit for its current market: AI slop is the top output, and your average query wastes enough power to light a lightbulb for months. That is in part why the Sam Altmans and Bill Gateses of the world are looking at nuclear power plants to offset costs running into thousands of GWh of power.
Right now, with power costs soaring, the cost per query is not sustainable; your average British person could warm their tea 50 times over on the power of one slop query. The rate of return on LLMs is also degrading: as LLMs hunt for more training data, they are getting flooded with the very slop they are creating. People jailbreaking and hijacking AIs to act like SpongeBob SquarePants the sexy pirate are filling AI systems with irrelevant data to the point that it is becoming its own fever dream. So the $650B investment is poisoning the future well of returns.
For the quarter, Amazon is expected to report earnings per share (EPS) of $1.62 on revenue of $177.2 billion, according to Bloomberg analyst consensus estimates. The company saw earnings per share of $1.59 and revenue of $155.6 billion in Q1 last year.
Sure, revenue of $177.2 billion, but they are spending like they have a blank check. When that check finally clears, will Amazon have enough in the bank to cover it? When the return on AI is only $15B a quarter, the rate of return is much slower than the spending. They are building out now and hoping the machine will pay off later, or that they get bailed out in the end. We've heard this all before: "too big to fail." Anyone who knows what that line means just clenched their anus.
But in the end, these calls will be interesting. If the earnings calls come in positive, it shows that these companies are playing the shell game. Amazon getting only a $15B return per quarter shows that the math is flawed: getting that expense back would take 3.33 years, and that is only if Amazon stopped investing today.
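The back-of-the-envelope payback math above is easy to check. The ~$200B figure for Amazon's cumulative AI build-out is an assumption implied by the 3.33-year claim, not a number from the earnings report.

```python
# Payback-period sketch for the paragraph above. The ai_spend figure is an
# assumption inferred from the 3.33-year claim, not a reported number.
ai_spend = 200e9          # assumed cumulative AI capex (USD)
quarterly_return = 15e9   # AI-related return per quarter (USD), per the text

quarters_to_break_even = ai_spend / quarterly_return   # ~13.3 quarters
years_to_break_even = quarters_to_break_even / 4

print(round(years_to_break_even, 2))  # 3.33 years, and only if spending stops today
```

And that is the optimistic case: if capex keeps growing while the quarterly return stays flat, the break-even point keeps sliding out.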
And the final flaw: what if some other game changer comes out of a garage, a homegrown AI that makes all these data centers look like nothing more than space heaters for towns? DeepSeek has repeatedly matched the large LLMs for less than 5% of the money, and that's a secret the hyperscalers hope you don't see.
Post mortem: it would appear that all of the companies are posting strong numbers. How long this will last is the greatest question. And further: were Meta's numbers juiced by the 8K layoffs and 6K closed positions?
Every company beat their expectations. What does that mean to the little guy? Absolutely nothing. No lowered prices; it just means some CEO got to light their cigar with a $100 bill today.
Putting #1, 2, 4, 5, 6, 7, 8, and 12 all in the same room? There is something wrong with the USSS.
There were enough members in that room that, had there been an MCE, for the first time in 249 years the government would have been headless and left with no way to certify. #3 can't appoint a new #7; Congress would have had to cobble something together to get a replacement in place. #7 is also acting in a dual role, which is bad because it was already a contested position.
Holy hell.. this almost sounds like a bad Marvel Cinematic Universe film…
In order to show some sort of profit, Meta is firing roughly 8,000 people and closing 6,000 open positions. This is biting off your arm to save your foot.
Meta said on Thursday it plans to lay off roughly 10% of its workforce, or about 8,000 people, the latest in a string of tech industry layoffs fueled in part by artificial intelligence.
The company is also closing around 6,000 open roles, Janelle Gale, Meta’s chief people officer, wrote in a memo published by Bloomberg that Meta confirmed to CNN.
This is insane. They are firing workers to replace them with AI. The problem is that AI can't walk, it can't improvise in its position, and without human workers the only innovation they get is AI hallucination.
The company has also been splurging on talent for its superintelligence lab and has acquired buzzy AI startups like Moltbook and Manus as part of its ongoing efforts to compete with OpenAI and others.
The problem here is that, in the past, Meta had people working on all sorts of things. Now they are doing a ground-level over-focus, putting all their eggs into one basket, and I feel the payoff is not going to be what Meta wants. Give or take, every company is jockeying for position to hit the innovation jackpot on AI; the problem is that most of the companies playing the capex game are doing it by scaling, not code.
Amazon said in January it would lay off 16,000 workers, its second large-scale layoffs in three months, emphasizing the need for efficiency. Fintech firm Block announced in February that it would lay off 40% of its workforce, more than 4,000 people. Meta CEO Mark Zuckerberg hinted at the start of this year that the company, which has invested heavily in AI, could see workforce changes because of the technology. On Meta's January earnings call, he called 2026 "the year that AI starts to dramatically change the way that we work."
“We’re starting to see projects that used to require big teams now be accomplished by a single very talented person,” Zuckerberg said.
Here's my counter-point: you fired 16,000 positions. If one person is supposedly replacing 14,000 workers, what happens when your Very Talented Person gets sick? Or there's a power outage? Or LLM data loss? Now if your Very Talented Person gets sick, your output goes from 14,000 to… 1, whereas before, when one person got sick, your output went from 14,000 to 13,999. And let's say the person who would have replaced your Very Talented Person makes an innovation that triples efficiency: those 13,999 workers become the equivalent of 41,997.
Like many big tech companies, Meta eliminated tens of thousands of jobs in 2022 and 2023, reductions that were largely attributed to right-sizing after Covid-era spikes in usage and hiring. Last year, the company said it would cut about 5% of what it called its “lowest performers,” although it planned to backfill many of those roles.
This is not right-sizing; this is full-on cannibalization. Everyone is jumping for the AI gold rush while some guy in a garage in China is laughing at the capex and DeepSeeking the best coffee ideas.
CNN just posted an article pointing out how doctors are using AI. The title of the article is:
5 ways your doctor may be using AI chatbots — and why it matters
Specialized medical AI chatbots have quickly become a go-to source for many doctors and trainees. The CEO of one of these medical chatbot companies recently claimed that more than 100 million Americans were treated by a doctor who used their platform last year.
You know what? If the doctor is using AI to help diagnose an issue, I am OK with this. But if the doctor is using the AI as a replacement for his own diagnostics, I am against that. The challenge with AI is using it in a way that is not a replacement for the doctor's own agency.
One thing that should be majorly addressed here is that the doctor should tell you right out how he is using the AI and what data is being used. If you are like me, you want a transparent doctor. I've explained my conditions to doctors and I see it every time: mid-explanation, his or her hand drops to pocket level, and, like sen no sen in martial arts, you know the move before it lands. He is reaching for his phone to google what you just told him. They will excuse themselves in the moment to go google my condition.
My normal reaction is to call the doctor right out. I tell them: if you are going to google this, do it in front of me, and don't be embarrassed; I am the zebra of your career. I do not need the illusion of mastery just because you are a doc. I want you to accept that you are not the god of your position and that every instance as a doctor is a learning experience. I am not going to look down on a doctor who doesn't know a rare genetic condition. I will look up to a doctor who uses the moment as a "classroom" moment, where he becomes the learner and I am the master. As far as the doctor/patient relationship goes, this is the highest praise you can give a doctor: it shows that the "master" is willing to learn.
“ChatGPT is like your crazy uncle,” said Dr. Ida Sim, a professor at the University of California, San Francisco, who studies how to use data and technology to improve health care.
Any AI can be turned into your crazy uncle if you input enough information into it. But if doctors collaborate with the patient and the AI, I think a more diverse diagnosis would be made, without the "symptom checker" fatigue that AIs can load onto any doctor, patient, or third party.
As for AIs, they are not great doctors; they are the median doctor, struggling with anything that drifts even slightly from the center. They will be good for health upkeep or catching stuff before it happens, but on major issues the AIs are so far out in left field that they are irrelevant and become crazy unclue (pun intended) Bob, the uncle who will start diagnosing diabetes before neuropathy in a chemical-exposure case if the context is done wrong.
The most common use case
Millions of research papers are published every year — and keeping up with them all is impossible.
“You’d need like 18 hours a day to stay up to date,” said Dr. Jared Dashevsky, a resident physician at the Icahn School of Medicine at Mount Sinai.
But doctors are expected to stay current on new research and guidelines to maintain their licenses. Many say they now use medical chatbots as a reference tool to help them stay updated.
Yes, there are millions of papers, but Dr. Jared Dashevsky doesn't need to keep up with all of them; that would be insane. Millions of papers come out a year, and by the end of that year, 400,000 of them have been revised or phased out by new research. CNN and the doctor are wrong here. If you have a patient with a rare condition, AI can be used to contextualize the papers and distill their output into a clue for the doctor. I am not expecting the doctor to read all of the papers, because he would rabbit-hole down so many roads that treatment and diagnostics would be a mess.
Save the papers on rare research for the specialist; your GP doesn't need to know the ins and outs of a million papers that half the time fail in the real world, because lab controls do not equal real-world observation. The doctor who is slightly questioning his diagnosis and inputs some weird statistical drift will get a better answer out of AI and will know which specialist to hand the information to. The doctor can use the AI as a tool to get information in front of him faster. If he tries the Google-search method, it leads to bullshit that starts saying vitamins and sunning your butthole are a cure.
But many doctors use unauthorized chatbots called shadow AIs, according to doctors CNN spoke with. Some of these shadow AIs also advertise HIPAA compliance features.
HIPAA is a federal law that requires certain organizations that maintain identifiable health information — such as hospitals and insurers — to protect it from being disclosed without patient consent.
Here's where doctors can win: create a system that strips out all PII before anything reaches the processor, so only the numbers get through. Otherwise, the companies on the other end treat the data as resaleable material and ignore HIPAA. The healthcare entity should have an end-to-end chain of ownership to show the patient where their data begins and ends. The second an LLM uses data that is protected by HIPAA, the LLM operator should be charged, whether they sell it to insurance companies or to Walmart to figure out sales trends. I'm not saying AI should not be used; I'm saying accountability should be transparent.
We've been through this bullshit with the human genome, with everyone attempting to copyright the DNA of the human body; now we are at the precipice with the code of the human condition itself. We have Named Entity Recognition (NER) systems to strip names, and privacy techniques to ensure that even if the AI "learns" from your data, it cannot be reverse-engineered to identify you. We need this institutionalized across the system.
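The strip-before-send idea can be sketched simply. A real de-identification pipeline uses trained NER models and covers HIPAA Safe Harbor's full list of identifier categories; the regexes below are illustrative assumptions that only catch a few obvious patterns.

```python
import re

# Minimal sketch of a de-identification pass run before any text reaches an
# LLM. Real systems use NER models; these regexes only show the shape of it.
PATTERNS = {
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[DATE]":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "[MRN]":   re.compile(r"\bMRN[: ]?\d+\b"),   # hypothetical record-number format
}

def scrub(note: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        note = pattern.sub(token, note)
    return note

print(scrub("Pt MRN:48213, DOB 04/12/1967, callback 555-867-5309."))
# Pt [MRN], DOB [DATE], callback [PHONE].
```

The point is where this runs: inside the healthcare entity, before anything crosses the wire, so the chain of ownership the paragraph above describes never has raw identifiers leaving the building in the first place.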
Otherwise we are creating a dangerous system, a human credit score, where insurance will put a value on a child before it is born and revive the old tricks once used to make people uninsurable.
Give or take, Google Classroom makes Google a school admin, but if you look into Common Core, most people don't realize it is effectively a job application to corporations across America. We do not need this to happen again. Common Core data can itself feed LLMs and create HIPAA-adjacent issues: an IEP, typically the most powerful record in a student's education, can be identified later in life by an LLM whose operators are technical admins. Further, if Common Core data and health data meet, you have an identifiable path to unmasking the user. A child who had suicidal ideation in school over low-grade stress could be weighted for that temporary issue and see their insurance go through the roof as an adult.
Dr. Carolyn Kaufman — a resident physician at Stanford Medicine — and other doctors say that patient information is making its way into unauthorized chatbots, potentially opening the door to new ways of commodifying patient data.
“Data is money,” Kaufman said, noting that she has never uploaded HIPAA-protected information onto an unapproved chatbot. “If we’re just freely uploading those data into certain websites, then that’s obviously a risk for the individual patient and for the institution, as well.”
This statement is a perfect reflection of the above. In the end, IEPs, Common Core assessments, and more need to be air-gapped, and when you leave school, an agreement should be made with the student (or parents) on who can access the information.
AI chatbots have also stepped in to help doctors draft summaries of patient visits and long hospital stays. These notes are viewable on online patient portals and help doctors track a patient’s course and communicate plans across the care team.
I am not worried here. If anything, AI could be useful in suggesting additions to the file and giving the doctor treatment ideas, but no doctor should take this as gospel.
“From a med student perspective … you’re seeing a lot of things for the first time,” said Evan Patel, a fourth-year medical student at Rush University Medical College. “AI chatbots sort of help orient me to what possibilities it could be.”
Just no. First year, fourth year, or fortieth, you should never go in with AI. Ending with AI as a counterpoint or co-researcher is OK, but the doctor should not cognitively offload the diagnosis to the AI. If that becomes standard, the cognitive process of diagnostics goes out the window and dies.
Med students out of the gate should be under a non-negotiable rule: no AI at any point before or during patient contact. Using AI afterward, for confirmation or as a research node, can be agreed to, but using the AI as the attending physician is career suicide.
This preserves the agency of the physician, and Occam's razor. The problem with AI is that humans come in 8.3 billion variations while AI tends to use only the mean average. That leaves many doctors with zebras that AI will hallucinate about to high hell, and that is dangerous.
The final word here: AI is OK, but only used correctly, not shoehorned into the medical spectrum.
In a blog post, Microsoft says Defender is enough for most Windows 11 users. No extra antivirus needed if you keep defaults enabled and update regularly.
Microsoft might be right, and yet they are so wrong here. The average computer problem exists between the computer and the chair. Even the most security-minded individual can find themselves having to wipe their entire PC over one misclick.
Give or take, even if you have a secure version of Windows, they are assuming you never install third-party software, never update your drivers, and never put the computer on the internet, with the machine locked in a SCIF. Otherwise, the second you are on the internet, Microsoft is leaking a shit ton of telemetry about your computer to the world.
If you never take a single risk with the computer, you should be "fine." But go to a site to download common, safe software and you get to play roulette: four different download buttons, two chaining off to different sites, and one giving your computer an STD.
From the second you enter your information into Windows 11, Microsoft is already rooting around in your cupboards taking inventory. If you have a stash of family photos, Microsoft has already moved them to OneDrive without really asking. Well, they did ask, but it was a batch accept buried in the Microsoft login flow, and by the time you realize you've violated some privacy policy, or moved a picture of someone who did not want to be on the net, it is already there under a shared-use grant to Microsoft. Given how much MS leaks out of the box, anyone trying to hide from a domestic abuser or worse has a major chance of being left out in the open.
The other thing is update cadence. If a 0-day exploit drops on a Wednesday, you can be stuck waiting on Microsoft's patch cycle before Windows and Defender fully catch up. With mainstream AV programs, it can be as little as hours until the AV updates to stop the 0-day.
Also, bad websites are telling people to paste commands into the "search" box, which in reality just fucks you over by installing spyware, and you might just kiss your bank account goodbye. NEVER run anything a website asks you to type into a search bar or Run dialog.
But to make Microsoft's assertion accurate: turn on the PC, never log it in, air-gap its updates, never connect it to the internet, and destroy the Bluetooth and Wi-Fi cards. Otherwise, the second that PC thinks it's on a network, it is shouting into the ether: I'M HEEEEEEEEEEEERE.
Not to mention, Microsoft undoes any safety margins you have created every time it updates. Deleting OneDrive from the system is like trying to get rid of lice with butter: it just doesn't work. Updates switch you back to Edge and try to import everything from other browsers into Edge. Microsoft's security is only part of the whole. If you must run it, get Malwarebytes, and on a secondary level, do not be afraid to run another antivirus to be doubly sure. If your PC is acting strange, don't just assume everything is OK. Google occasionally fails at spot-checking its own sponsored links, and that vector can lead to the fastest accidental system infection.
The worst part of the whole thing is that Microsoft claims to have made everything easier, when really it has made it so that as long as you invite the vampire in, by command line, by installed software, or even by Microsoft themselves, they are free to do what they want, and Microsoft Defender won't look up from its magazine.
A single misclicked Chrome or Edge extension can lift your entire saved-password folder without so much as a thank-you.
In the end, Microsoft is saying you can have the safest PC in the world: just don't use it. Move it to an island without internet or power and you're fine.
This is a tracked price of an SSD from Amazon that clearly shows how much AI is driving up prices for every human in existence. Replacing the SSD in a cheap computer now costs more than the cheap laptop or desktop you bought in 2024. It eradicates any advantage in buying consoles and more.
The Trojan Room Coffee Pot was the researchers' answer to the sin of finding the coffeepot empty: the recurring frustration of going for a caffeine infusion only to find the lifeblood of the research department depleted. In 1991, Quentin Stafford-Fraser and Paul Jardetzky of the University of Cambridge pointed a Philips CCD camera at the pot and connected it to an Acorn Archimedes computer. They wrote the base code to capture the coffee level, a protocol they aptly named XCoffee, which showed the pot in greyscale. Originally it ran only on the local network, until Daniel Gordon and Martyn Johnson put the coffee pot on the web and showed it to the world in November 1993.