The Anti-Anti-AI crowd: when claims are hallucinated.

Matt Novak starts his article with: "The AI Doomers Who Are Playing With Fire: For years, the dangerous rhetoric has been out of control. And things are turning violent."

Well, now that is an opener. Novak describes how ChatGPT burst onto the scene and lays out how AI companies went to Congress and told them that the technology posed imminent risks to society, that AI had the power to destroy the entire world. These companies asked to be regulated now rather than later, because shaping regulation early is easier than getting regulated later. Metaphorically, it's easier to destroy a door than to put one up.

Now, supposedly, AI execs are telling everyone to calm down about AI.

Chris Lehane, OpenAI’s global policy chief, sat down for an interview with the San Francisco Standard this week in the wake of at least one attack on CEO Sam Altman’s home.

What is the grammatical formation of that sentence? "In the wake of at least one attack"? What kind of word soup is that?

Moreno-Gama was carrying an anti-AI “document,” according to police, suggesting his motivations were related to concerns over artificial intelligence and existential threats. The Wall Street Journal reports that he had called for “Luigi’ing some tech CEOs,” a reference to Luigi Mangione, who’s been charged with murder for allegedly killing UnitedHealthcare’s CEO.

While Moreno-Gama's attack with a firebomb was deplorable, it makes me wonder why he did it, and what this "document" was. The way "document" is framed in that sentence is also kind of weird. Also, for notation here: the firebomb struck a metal gate, not Altman's house.

There was a second incident involving a firearm discharged at Altman's house. This is an incredibly long lead-in to get to the meat of the article.

The so-called AI doomers simply aren’t being sold properly on the benefits of this new tech, Lehane argues. “Our job at OpenAI and in the AI space — and we need to do a much better job — is to explain to people why … this is going to be really good for them, for their families and for society writ large,” Lehane told the Standard.

The so-called doomers are seeing AI's drawbacks in real time. One AI company has been sued after a child's death in which its chatbot allegedly played a role. Neighborhoods are having brownouts and brown water because of AI data centers. Wendy's drive-up kiosks barely function. Children are offloading critical thinking to a machine that will never be able to think for them in a power outage.

My personal fear here is that the execs are trying to build a formula that turns anyone who criticizes AI into an "extremist." If you say AI hurts X, they will answer with "You threatened my child (AI)." And since the two attacks happened, they will use them to frame anyone who criticizes AI as a "possible" extremist. That is not the case. If I threaten a person, they call the police. If you are threatened by an AI, who do you call? It's not Ghostbusters…

The problem right now is that you can't call the police on an AI if it tells you to do something that would injure you. If an AI is hijacked and tells you to do something dangerous, the companies will hide behind liability releases. If an AI tells you how to fix something and you die, there is no one to sue. If Bob dies because the AI did not tell him to turn off the power while fixing an outlet, the AI CEOs will point to the T&Cs and say "it's not our fault." You have machines programmed with the world's knowledge and not a fucking clue how to use it. The AI only uses predictive language, such as: "The cat in the ___" (AI answers "hat"). Paradoxically, the world at large changes on a whim. Think of the 1930s meaning of "I'm gay" versus the 2026 meaning.
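That "cat in the ___" behavior is just frequency counting. Here's a minimal sketch of next-word prediction over a toy corpus; the corpus and function names are my own illustration, and real models use vastly bigger tables, but the principle is the same:

```python
from collections import Counter, defaultdict

# Toy corpus -- real models train on trillions of words, not one rhyme.
corpus = "the cat in the hat sat on the mat the cat in the hat".split()

# Count which word follows each word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` -- prediction, not thought."""
    if word not in follows:
        return None  # outside its training data, the model has nothing
    return follows[word].most_common(1)[0][0]
```

`predict("cat")` returns `"in"` because that is the only word that ever followed "cat" in the corpus; ask it about a word it never saw and it returns nothing. That's the whole trick, scaled up, and it's also why a shift in what people mean by a phrase silently breaks the predictions.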

The thing is, AI has its uses. The uses the industry is pushing it toward are not the right ones. They want AI as an institutional replacement for the human soul and agency. They want you to pay $10 to $29 a month for the critical thinking that used to be taught in schools. Are there going to be attacks? Yes. But can you use that framing to lump all critics into one single descriptor? Absofuckinglutely NOT. By this logic, an AI maker could jail or sue their own employees if they have a moral objection to putting something into the machine that would cause damage.

Mr. Novak's article is a huge miss here. It frames anyone who criticizes AI as wrong. We are not wrong, and we are not all trying to doomsay AI. We all know the potential of AI, but in its current hands AI is SLOP. When it is being used by world leaders to make planes fly around and poop on people… is this the world you want to live in?

By choosing to lump every AI critic into the same room, you are missing the point. We see the things it can help with. We also see the massive misuse of it, and this is what we are trying to point out.

But making every person who criticizes AI the enemy is not remotely good for the corporation or the human, because it will be weaponized. Imagine every AI maker telling their machines, "List every time the USER said 'you suck,'" then reframing that as "the USER is threatening me and should be arrested." This binary approach is what killed millions in the 1930s and 1940s. So tell me again why we should hand that power to something that is a machine, that can't think, only predict, and that is subject to massive change in human agency and culture. The biggest problem is that today's AI is actually last week's AI.

The real liberty at stake here is that AI makers are trying to marginalize free speech in favor of AI speech. AI is being promoted as magic right now, as if it can do anything! The reality is that AI can only do what it's been told, no more, no less. AI is like a Sith: it deals in absolutes, and with any variance it's lost. The AI makers and others are also framing (dislikes AI) + (human agency) = violence. That is a complete misreading of elemental human drive. The guy who dislikes your AI is going to be the guy who fixes it. The yes-man to your AI is going to agentically turn it into an extremist.

In the end, AI needs to respect the human element and the diplomatic nature of humans, not the garbage society creates, because in 3 to 5 years we are going to have AIs that, instead of doing work, spam "6 7" and Fortnite dance, thanks to the predictive nature of AI. Instead of branding critics as AI doomers, invite them to the table and listen to them. They are going to be the ones pulling AI back from the brink.

There is a need for diplomacy now rather than later. AI has the ability to "change the world," but it also needs to be a force for good, not live under a subscription model. If cavemen had sold fire as a subscription, humanity would have died out before it started. The universal coefficient of greed is killing humanity. If used badly, AI can destroy human agency, and the next great disaster for humans would be the next power outage.

At the end of this, AI is always going to be the SUM of HUMANITY, and if we all degrade into SLOP producers, AI becomes the SLOP MAKER. So pitching AI right now as the next replacement is a sin that many see as cost saving, but they do not think past the AI prompt. If you speak in broken language, your Wendy's order, measured in tokens, likely just ate 25% of the margin on that order. The AI removed the human intuition: the Wendy's worker who saw a tour bus pull up and threw on extra fries as the 88 people from the bus came to the door. The Wendy's person who now has to play AI interpreter because the AI thought it heard an order of "burger that Tries."

If we move forward with this rollout of AI, ethical diplomacy becomes machine subservience, human foresight becomes an obstacle, and human critical thinking gets dissociated to the machine and possibly lost forever.

I think the article poorly frames why people are critical of AI by presenting the few extremists as the majority. It is diplomatic dishonesty. There is a real chance here for AI companies to align with people with foresight, not to come out with AI underwear or AI soda just because you can slap "AI" on it and think you are going to make billions. Right now companies are pushing products out the door with the word AI slapped on them, and what actually changed? Nothing. They just added an element that phones home and a subscription model.

Humanity is being pushed back to the age when you could not take a shit without spending a quarter. The AI companies have seen their own models over the last year; they know the models are degrading in quality because it's a feedback loop.

The thing is, without human agency in the loop the AI will degrade, and the companies know this. So it is unequivocally the 1849 gold rush: they are selling the shovels, and they know the end point already.

Quotes were contextualized from: "The AI Doomers Who Are Playing With Fire," by Matt Novak at Gizmodo.com.

Vizio TVs, enshittification, and why I won't touch them.

Walmart has recently moved to require a Walmart account on its Vizio- and ONN-branded TVs, and it's not about watch metrics. It's about how much data they can take from you, how much they can market your name and sell it off to other entities. They claim it's "unification." It's not. Walmart wants your living room. These TVs likely have microphones in them and, worse, proximity sensors to know who's in the room.

Beyond innovation, the results from Walmart and VIZIO are already clear for customers. 65% of surveyed Walmart customers report that CTV ads helped them discover new products, underscoring the power of placing premium content in front of high-intent shoppers.

How the fuck did they flavor this question? They probably used a no-win question with five answers and no way out, something like: "How much do Walmart ads on your VIZIO TV help you learn about new items? 1. Significantly 2. Somewhat 3. A little bit 4. Not much 5. Not at all." They flavored the poll into four yeses and a no. The poll is bullshit, and people likely don't even know what they answered, because the wording biases the outcome. Walmart should not have the control over the TV that it wants. We bought Vizio TVs for the game room, for grandma when her TV dies. We don't want to buy grandma a TV that turns one stray click into the home shopping channel because she was watching The Big Bang Theory and Walmart spawned an ad for notebooks on sale.

Roku does the injected-ad garbage too. Even when you try to turn it off, you find ads overlaid on other ads. Roku recently patented technology for "HDMI Customized Ad Insertion," which allows the TV to monitor the HDMI port. Meaning: while you are checking someone's test results on a connected monitor, looking up someone's health, Roku just made a HIPAA violation. Privacy is now a subscription service, and the price of the subscription is shopping at Savers and finding old TVs that don't record you while you take a shit. The problem is that Walmart is the granddaddy of customer data; they are the place the FBI turns to when it can't answer a question. They are part of the reason Facebook knows you jerk off, because they are reading your watch's and phone's sensors while you sent memes an hour ago to your work buddy.

The sad part is that Walmart just leaped over Facebook's tracking, because Walmart just stepped up the game. They will probe your phone via the Vizio. Then, when you hit the store, a tracking tag will probe your phone again, and if you go for the product that was on TV, they have your home address, they have your means of payment, they have everything. This needs to stop. With this, if you stop too long in an aisle, Walmart will hallucinate that you are buying Plan B while pregnant because you ducked out of an aisle to talk about something important. In states where birth control is frowned on, you could be arrested.

My answer to this, as much as I hate it: buy a Fire Stick or a Google device. Amazon uses Sidewalk, but you can turn that shit off. They are geofenced; they go no further than your TV. They face more regulatory rules that stop them from doing what Walmart wants to do. If it is discovered that either of them is taking information beyond those boundaries, then you know to stop it. If that Vizio or ONN TV refuses to set up without an account, return the TV as a defective product. Make a report to the FCC that the TV is refusing to let you see OTA TV. Under the Telecommunications Act of 1996, specifically the OTARD (Over-the-Air Reception Devices) rule, manufacturers cannot place "unreasonable restrictions" that impair the use of antennas for video programming. There are still people out there without internet; imagine nursing homes being TV-locked because there's no internet.

But honestly, at this point I am almost ready to say: if you buy a cheap TV, use a cheap Chinese brand. They will track you, but at the end of the day, what is a man in China going to do if you talk about farts with a friend for 45 minutes?

Walmart is creating the surveillance state the US government is dying for; if Walmart follows through, the government doesn't have to build it. Worse yet, if Walmart cheats and says "you need the Vizio app to set up the TV," congrats: you have given Walmart your location down to the foot.

It's bad enough that in the generation of smart devices you need a SCIF to talk about anything under NDA or a court injunction from a divorce or otherwise. We as consumers need to place boundaries. We go to the store to get products; we don't need to be spied on at home because we ran out of coffee.

Also, you know all of this is going to be further enshittified by AI. These TVs are going to be taking private data; the second you replay the Blu-ray/DVD of your child's birth, congratulations, your or your significant other's vagina is on the internet.

Anyways, if you buy a TV, read the TOS and the privacy policy, and be safe. Otherwise the assholes win. I need a coffee now…

From the AI "I told you so" files: "Godfather of AI" says tech companies aren't concerned with the AI endgame. They're focused on short-term profits instead.

So finally people are getting the warnings about low-class slop shit-postings? The article opens with Elon's view of the future of AI, when his AI is nothing but a war-mongering shit-poster that occasionally thinks it's a dictator. He is quoted as saying, "If a computer can do—and the robots can do—everything better than you … does your life have meaning?" The quick answer is yes, because unless AI becomes Mr. Data, there is no way for AI to catch up to human intuition. Even if we humans are routed out of a job, we are still going to be the main thing that feeds AI.

Geoffrey Hinton, the Godfather of AI, has been asking some important questions of the industry. "We have these little goals of, how would you make it? Or, how should you make your computer able to recognize things in images? How would you make a computer able to generate convincing videos?" he added. "That's really what's driving the research." Hinton has long warned about the dangers of AI without guardrails and intentional evolution, estimating a 10% to 20% chance of the technology wiping out humans after the development of superintelligence.

While I believe AI may not end humanity, I do believe it will damage it via ecological harms. Unless AI ends humanity via the funniest joke ever.

Geoffrey is right, but I think he is not ready for the economic supernova coming. Disney just pulled the plug on Sora. The US government pulled out of another AI project. That is tons of money leaving the market, or projects abandoned half-finished in the dust. The worst part is that these companies are all for the shit-post of slop before research, because AI can't directly profit off research. Nvidia is not going to profit off curing cancer. Disney is not going to profit off someone making a PWNED generated image. McDonald's is not going to come up with a new AIBurger that makes a profit.

The AI gold rush is starting to show weakness; the fact that billion-dollar players are fleeing now is a warning. So what does it mean for the companies that promised AI growth and then pulled back? They invested in the most expensive toaster in history.

The Sloppocalypse is not only coming, it's hitting now, and the industry is panicking about it. People are fickle: the more rules you put in place, the more people will get bored with AI because they can't wholesale abuse it.

To Hinton, the dangers of AI fall into two categories: the risk the technology itself poses to the future of humanity, and the consequences of AI being manipulated by people with bad intent.

This is hitting right now, and while people are having "fun" with it, it is having real-world effects. You have older people asking "is this AI?" and people sending each other fake, made-up things, causing a distortion of reality. The thing is, the bad-intent people are running wild with AI and making it worse for the rest of the universe. You have people trying to stuff AI into every object possible in order to create profits. Who knows, at this point there are likely AI vibrators with a monthly charge.

All of this AI bullshit has a cost: the soul of the computer and the soul of the person at large. These devices get abandoned when the AI coffeepot takes enough processing power to brew that pot ten times over, because you wanted the perfect brew rather than a couple of dials to turn. The same AI coffeepot, once the company calls it too old, leaves you with a $500 paperweight with a subscription fee, having decided you are too boring and it's too old.

So in the end, companies just want to charge you monthly to own your own stuff. Do yourself a favor: go out and get a Mr. Coffee pot and a metal filter, and realize what a life-changer that metal filter is, along with a pot that can turn itself off. Worse yet, you want an AI coffeepot? Get yourself that Mr. Coffee with a big ol' switch and auto power-off, plus a smart plug! You just saved $470, and it's "AI powered"!

Quotes from: fortune.com, by Sasha Rogelberg.

My personal stance on AI.

AI can be a great and terrible thing. But I feel like AI in its current form is crap, with companies trying to shove it into everything possible, like AI lawn mowers. Why stick a computer in a lawn mower that tries to use GPS and ends up killing your neighbor's roses, when you can use markers on your lawn? AI coffee pots? No, give me a power button, damn it. Web browsing has been enshittified to make AI browsing more "effective." In the past you could search something on Google without ten pages of garbage, because the search results were vetted, checked, and then indexed by computer.

The problem with AI is that it is centralized: we have to ask one machine. We have to ask one machine to talk to another machine, to talk to the software that talks to another machine, that turns on a light bulb. It is this fragmented centralization we pay the devil's due to. By saying "hey _____, turn on the light," the machine took your input, checked your associations, figured out which company owns the light bulb, gave up your data, gave up your usage, and likely sniffed your network, just to make a 5,000-mile trip halfway across the world to turn on a light bulb less than 10 feet from you. By the time you weigh the privacy cost, your cool colorful light bulb has sniffed your network or your Bluetooth and found the Bluetooth vibrator in your house. The app that controls your light bulb is now serving your personal massager ads.

Now that I have firmly shit on centralized AI, I need to make the opposite argument for it. A deeply centralized machine in the hands of an intuitive person can be amazing. Research used to mean hours of pouring over Google, Bing, and Yahoo (because they all index differently), with a side dish of Wikipedia articles and their talk pages. Now you can ask the AI the question and have it either excerpt sources or, in my case, show me points of view that conflict with each other, to get a more whole perspective on a thing. That is great. But there is one caveat: vet your research, and do not assume the AI is always right. Just like a librarian, your helpful AI will bring you boundless information on your subject; but if your AI librarian gets confused, it can, just like a human, give you output that makes you go "what the fuck?" If you properly vet your research (meaning: check its work), the AI can find information that before would have taken hours across three search engines, one online encyclopedia, and three cups of coffee; you'd spend your whole day researching the failures of the "streaming industry" and have barely started your actual work. Now you can treat AI as a vetted peer researcher, and you can tell it when it is wrong. The web search of the past took know-how with operators like "", -, and + that 99% of people never used.

Where my final thoughts land: do we need a centralized AI? Yes and no. Centralized information makes for great research, but does my home device need to connect to it to turn on a light? Fuck no. That device should use a cut-down version of the AI locally, one that only knows how to turn on lights, adjust your heat, and handle the other simple joys around the house. If you have an AI coffeepot or teapot, call me when you can do "Tea, Earl Grey, hot" or "coffee, whole milk, semi-sweet." Only when the decentralized AI does not understand the query should it ever "phone home."
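A cut-down local assistant like that is not exotic. Here's a minimal sketch of local-first intent routing; the intent table, function name, and intent labels are hypothetical illustrations of mine, not any vendor's API. The point is the last line: the cloud is the fallback, not the default.

```python
# Hypothetical on-device intent table: simple phrases the local model handles.
LOCAL_INTENTS = {
    "turn on the light": "lights_on",
    "turn off the light": "lights_off",
    "set the heat": "thermostat_set",
}

def route(query: str):
    """Handle the query locally if possible; only unknown queries 'phone home'."""
    q = " ".join(query.lower().split())  # normalize case and whitespace
    for phrase, intent in LOCAL_INTENTS.items():
        if phrase in q:
            return ("local", intent)   # stays in the house, no data leaves
    return ("cloud", None)             # last resort: escalate off-device
```

`route("Turn on the light")` stays in the living room; "Tea, Earl Grey, hot" would be one of the rare queries allowed to leave the house.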

The same goes for AI image generation: it is useful, but right now it's just massive shit-posting. If I, as a Photoshop user, want to save several hours making an image, I will describe to the AI the image I want, but I will not claim it as my own. I won't hide the tagging Gemini puts on the image, because it is a time-saver for me. A lot of the time I let the image generation do what it wants, because sometimes it's funny as hell to watch the smaller hallucinations of a peer check on an article play out in the image. If an AI saves me an hour creating an image, I will let it come up with something. In my real life, with my Canon camera, I will never let AI touch an image I take. I prefer nature and the perfect chaos of real life for capturing the best image. I prefer a natural smile to an AI-"fixed" one; they look plastic.

So while you may read my views on AI as hate, it's more a critique toward a better world, where information is not sold but given, to make us better as people. Otherwise, wholesaling information behind locked doors just makes us look as bad as the 1100s.

This post is long, and if you have gotten this far without AI summarizing it for you, enjoy your next sip of coffee and give yourself a pat on the back. I'm proud of you.

Meta destroys the AI-powered worlds to make the LLM world.

It would seem Meta is the first to throw in the towel, with their announcement that Meta Horizon Worlds is going to be retired as of June. This was a platform powered by slop. The idea of building a universe with AI was a bad idea in general; you would end up with places such as "6 7 land" or some shit like that. Most people do not grasp the astronomical waste that AI in its current form is. For a child to make some "6 7" slop video that is 60 seconds long uses an insane amount of energy: roughly the energy required to boil a tea kettle 240 times, or charge your iPhone from 0% to 100% every day for three years, or run a "personal massager" for 50 days straight. That amount of power could run a 144-lumen ultra-bright portable LED work light from Harbor Freight for 6,000 hours, or power a Nintendo Switch continuously for 1,714 hours.

Worse yet, Meta claims they are going to focus on AI. Facebook and Horizon Worlds have a large problem: AI brain rot. People are fleeing AI chats, because do you really need a sycophantic chat partner that never criticizes you when you are doing something stupid?

When you are a multibillion-dollar company and you fail at copying VRChat, you instead decide to pump the AI balloon some more with LLMs. Meta is likely seeing the explosive "growth" in AI, which is nothing more than the dot-com bubble of the '90s. The thing is, with most AI right now, once people are done making slop, or AI gets enough rules that unregulated slop can no longer be made, people are going to stop using it. Right now AI is available to the elites, or to those bored enough to pay for the LLMs. Once we hit market saturation, the millions of AIs will start dying a dime a dozen. Enterprise adoption of generative AI has hit 71%, but over 80% of companies report zero measurable impact on their bottom line. Right now we are seeing a utility gap where the cost of using the AI exceeds the value of its output. Companies, in their wild adoption, are finding out the hidden costs of hardware, repair, upkeep, and electricity.

Meta Horizon Worlds had problems with this exact issue: users generating slop and diluting the product into nothing. So VR ends up like the 3D TV, just with a higher user base. They are wasting an entire platform when they should have leveraged the biggest advantage they had: their user base. Don't kill off Meta Horizon Worlds. Instead, keep the AI there and focus on making an AI that can overlay the Quest 3 with AR. They could make the headset a visual learning device; they could use it for directions in the wilds of the cities while hotspotted to your phone. Think of a Chilton's car service manual showing you which screws to remove or how to replace your spark plugs, or Lego instructions walking you through building your Lego Death Star. The options in AR are limitless.

Sure, you'll look like a goofy bastard, but you'll be a goofy bastard who knows exactly where he's going and how to fix his own car. With the use of AI you could have an interactive LLM telling you that you dropped a screw or missed a turn.

Update: Meta has partially reversed the shutdown, leaving Worlds in a Schrödinger's state of life.

AI and fast food: a marriage made in hell.

I was hungry and went to a Burger King, and there was a profound difference. The store was near empty; the front counter was devoid of life, replaced with computers. On the wall were three kiosks with screens reading "order here!" Within the first five seconds of looking around, I felt unwelcome in a place that looked closer to a funeral home. Most of the old decor had been removed; in its place was ugly, sterile furniture.

Before this, to make an order you walked up to a human and said, "Hi, I would like to order an Original Chicken Sandwich meal." They would put that in the machine, you'd pay with cash, and you'd be on your way.

Now there is a massive change that basically makes you build the meal from scratch. You go to the machine, you tap "order," and you scroll through menus until you get to your Original Chicken Sandwich. From there it becomes a conundrum: you get a menu with 45 different options, and mentally you are going, "I just want a fucking Original Chicken Sandwich." They have built the same trap Subway has: go there and try to order an Italian sandwich, and you spend 10 minutes trying to remember the base components of the damn thing. This methodology turns you into an unpaid worker.

With this, the human element is gone. They have replaced three workers, who could have floated to help during rushes, with three to five cold machines that just sit there. They don't say hello; they don't say, "Hi, nice day, isn't it? What would you like to order?" You have to dick around with a machine for 10 minutes because you have no idea how to assemble the thing you want. Between the cost of those machines, the upkeep, and the electricity, they have replaced three or four workers with hardware that likely cost more than those workers would have earned over their churn lifetime. Not only that, those machines are maintained by devs who are paid to keep them updated; this tech is not fire-and-forget, it needs near-constant maintenance. You have likely replaced four workers with a computer that can't help in a rush when a school bus drops off 70 kids plus 4 teachers. Those computers can't help bag, get fries, or pitch in elsewhere, so the leftover staff are now picking up the fluff work that used to be hidden from them, and their process time doubles. These machines are strategically inefficient; they have hidden costs. Workers may fuck up, but machines can absolutely fuck up, because they are absolutist. These machines do not have intuition; they will not adjust the frialator, because they cannot see four school buses pulling up and five teachers approaching.

Outside it gets worse. The person who used to be on the radio is gone, replaced with an AI that is basically dumber than dirt, because variability is its enemy. If you say, "I'd like an order of, umm, a Whopper with fries and a Coke and, um, no cheese," the AI will likely fuck up.

We are witnessing the loss of America's humanity through a drive-thru speaker. We traded the "Hi, how are you?" for machines that can't handle a stutter or slurring. These machines are an ADA nightmare under the guise of innovation. They replaced the floater with a lamp that pretends to be a computer. And when the system inevitably crashes with the arrival of four school buses, they'll blame the staff instead of the machines that caused it.

How do we keep AI from being used by bad actors?

Whether we like it or not, AI is coming, and it's a choice we did not make. Will AI take our jobs? Some, yes, but not all. AI is going to create some niche things like vibe jobs, but overall, jobs in PC repair and hardware administration will go up. Honestly, though, the fact that AI is tied to GPUs is a major fuck-you to the entire earth. Most home computers have a PCIe slot; if someone wanted to, they could make an AI-centric "daughterboard." Rather than unloading on the GPU, create an APU that meshes directly with it. As a neural processing unit, it would hold the keys to offloading work from server farms to your computer. If you, as a researcher, search medical information, the work gets shifted to your local PC. It would decrease server-farm power draw to more manageable numbers. Yes, it doesn't have a data center's bandwidth, but in an x16 or wider slot it would run at GPU-like speeds. This would be a generational upgrade to computers and would fix a lot of the LLM issues with gatekept information.

Another issue that is already rampant is AI cheating. I can write a bullshit doctoral thesis in five minutes. That should not happen, at the very least at the grade-school level. Education wants in on AI, and this will be a major failing of the entire education system, because today's kids will cheat. Schools already have tools that could be improved by Google. The normal entropy of schoolwork in a school system should show inputs that are fairly random but within expected values. If a whole class of cheaters goes to Gemini and types "Make me a 4 paragraph report on the structure of the plant cell," that should set off an alarm in the kids' Google accounts that are already bound to the school. The AI should know it is a school account, and further, it should see that 30 kids are doing the same thing and report it to the teacher or admin.
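The "30 kids, one prompt" alarm is trivial to sketch. Assuming school accounts already tag each prompt with a class or school identifier, something like the following would catch it; the function name and threshold are my own illustration, not any real Google or Gemini feature:

```python
from collections import Counter

def flag_mass_prompts(prompts, threshold=10):
    """Flag near-identical prompts submitted many times in one class window."""
    # Normalize case and whitespace so trivial edits don't dodge the check.
    normalized = [" ".join(p.lower().split()) for p in prompts]
    counts = Counter(normalized)
    return {text: n for text, n in counts.items() if n >= threshold}

# 30 cheaters and 2 honest kids in the same window:
batch = ["Make me a 4 paragraph report on the structure of the plant cell"] * 30
batch += ["i am writing a report on plant cells can you help me?",
          "what does a chloroplast do"]
alerts = flag_mass_prompts(batch)  # one flagged prompt -> notify the teacher
```

The honest questions sail through; the carbon-copy prompt gets surfaced to a human, which is exactly the division of labor this should have.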

AI can work for students who are not just typing "write my report." Honestly, let the cheaters deal with the teachers and admins. But for the children who actually want to work, the ones who type "I am writing a report on plant cells, can you help me?", this is where it can shine. By guiding the research, the child is presented with information at their level of understanding and language, with a bit of extra challenge to provide stealth learning. Let it become a workflow where the AI is not an authority but a guide. Make the kid ask the questions and let that branch into a learning experience. If that becomes the default experience, the child gains concept-building and critical thinking along the way.

Adding AI to VAERS will not make it better…

The VAERS database is getting an upgrade, it seems. VAERS tracks adverse reactions to vaccines. This database has long been used by the anti-vax crowd to point out non-causal links in vaccines.

The Food and Drug Administration (FDA) rolled out a new AI-backed system that uses publicly accessible reports of negative or unexpected health effects linked to medicines, vaccines, cosmetics, animal food, and other consumer products. -Source: [Fox News]

While the addition of cosmetics, animal food, and other products seems like a boon, it is not. It will cause doctors to chase ghosts in the system. If a baby eats dog food after a vaccine and has a reaction, the reaction gets logged against the vaccine when the actual cause may have been the dog food. Coincidental relations will cloud the new database and bury the genuinely causal ones.

This will make it nearly impossible to find out what the actual problem is.

The FDA claims it will be a single platform through which researchers have access to key data. -Source: [Fox News]

Also, by publishing monthly, there is going to be much less time to vet the information. Throw large amounts of information at an AI and it is likely to hallucinate if the database is not perfectly formatted.

If you throw millions of variable combinations (dog food + flu shot + rash) into a monthly processing cycle, the probability of a “false positive” signal approaches 100%. The AI isn’t finding truth; it’s finding bullshit correlations.
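This is just the multiple-comparisons problem, and the arithmetic backs up the "approaches 100%" claim. At a standard 5% significance level, the chance that at least one of n independent, purely spurious combinations looks "significant" is 1 − (1 − 0.05)^n:

```python
# Probability that at least one of n spurious variable combinations
# crosses a 5% significance threshold by chance alone.
def prob_any_false_positive(n_tests: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** n_tests

for n in (1, 100, 10_000, 1_000_000):
    print(f"{n:>9} combinations tested -> {prob_any_false_positive(n):.4f}")
```

By 100 tested combinations you are already above 99%; at a million, a false "signal" is a mathematical certainty.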

While this new system claims to be cheaper, I feel like the numbers may be cooked. Under the old system you could download the entire dataset and search it manually. If AEMS only allows search through an "intuitive" AI interface, it effectively creates a black box. You won't be able to audit what you can't see. The new system takes the data out of the user's hands and may present a fever dream of an answer. It could spit out the entire plot of the 1989 Batman movie where the Joker poisons consumer products.

In the end, as a researcher, I'd rather have the data in my hands than be handed what an AI thinks, because AI does not have intuition. All it will do is flood the research market with uninspired noise that confuses time-tested research methods. Under the new system, a fart could be diagnosed as a vaccine reaction without accounting for the fact that the person ate a three-bean burrito at Taco Bell.

Anyways, I'm out to have a coffee that will be misdiagnosed as cocaine use in this new system… Stay caffeinated, stay vigilant of bullshit…

PS: AI-driven search is going to cost an astronomical amount of power per query versus the few cents' worth of electricity it takes to search a CSV.

Is Amazon OK?

Amazon has been weird lately. Just a bunch of things I have noticed.

For one, Amazon Prime Video is getting weird. Not sure if this applies to everyone, but watch a TV show on Amazon: rather than going to the next episode, it returns you to the Prime Video home screen. I thought it might have been a fluke in the Amazon app. I tried the same thing on my Samsung TV; same thing happened! Loaded up a brand-new Fire Stick that I keep for media sharing; same problem.

This alone may not have anyone going "hmmm," but there are other things. Amazon Resale has people complaining because they order X and get Y. Other items are shipped, and people are given refunds and told to dispose of the item.

They are also heavily leveraged into AI, and their Rufus assistant is dumber than a brick. They are putting hundreds of billions into AI and hemorrhaging money for it. They are trying so hard to be the next Facebook, in a way, by being Big Data, and they do not know how to do it. The costs of their AI outweigh their standard app approach. The way it is going, Jeff Bezos might have to use low-grade fuel for his plane.

Amazon is also hiding reviews from normal users and instead showing compensated reviews. Compensated reviews are garbage, because the only way to keep getting free items is to give glowing reviews. I'd take 10 honest reviews over a single review where the person received the item for free. They are losing their way so badly that they are bleeding long-time customers turned off by this approach. Amazon probably knows they are losing a small percentage of customers; they are over-leveraged, and I am willing to bet they don't know how to get back. But if they lose even 0.25% of customers month to month, you don't have a leak, you have a crisis, because even within that small percentage they are losing "whales," the big spenders.

Go by this metric and it works out to big money, and the more who leave, the more catastrophic it gets.

| Metric | Regular Member | The "Whale" (Top Tier) |
| --- | --- | --- |
| Est. Annual Spend | $1,400 | $12,000+ (Business/Bulk) |
| 0.25% Monthly Churn | 625,000 users / month | ~6,250 high-spenders / month |
| Annualized Users Lost | 7.5 Million Users | 75,000 "Whales" |
| Direct Revenue Loss | $10.5 Billion / year | $900 Million / year |
| Subscription Fee Loss | $1.3 Billion (at $179/yr) | (Included in revenue) |
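A quick back-of-envelope check shows the figures hang together. The member counts are my own assumptions, reverse-engineered from the churn column (625,000 is 0.25% of an assumed 250M regular members; 6,250 is 0.25% of an assumed 2.5M whales):

```python
# Sanity-check of the churn math, using assumed member-base sizes.
regular_members, whales = 250_000_000, 2_500_000
monthly_churn = 0.0025  # 0.25% per month

lost_regular = regular_members * monthly_churn * 12   # users lost per year
lost_whales  = whales * monthly_churn * 12

print(f"Regular members lost/yr: {lost_regular:,.0f}")                 # 7,500,000
print(f"Whales lost/yr:          {lost_whales:,.0f}")                  # 75,000
print(f"Regular revenue lost:    ${lost_regular * 1_400 / 1e9:.1f}B")  # $10.5B
print(f"Whale revenue lost:      ${lost_whales * 12_000 / 1e6:.0f}M")  # $900M
print(f"Prime fees lost:         ${lost_regular * 179 / 1e9:.2f}B")    # $1.34B
```

Change the assumed base and the absolute numbers move, but even a quarter-percent monthly leak compounds into billions a year.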

So right now Amazon is in an enshittification phase while they bet the house on Rufus. Rufus has the tact of a pickaxe to the skull. Ask Google and it gives you what you want; Rufus throws shit at you and goes "HERE, THIS IS WHAT YOU BUY NOW!" At a guess, Amazon knows they are bleeding, but they are so committed to the act that they are cutting everything just to keep the ship running: piss away customers and hope they can profit. If Amazon had incremented the changes rather than using a grenade to empty a bucket, people might have tolerated this. But even market-wise they are pissing away money; the stock is down 8.18% over the last six months.

While Amazon serves better as the general store, they are trying to get into a niche market, and they have no fucking clue how to do it, because they seem to be asking "what can we replace?" while not realizing the limitations of the new.

They've replaced the general store with a high-stakes AI experiment, and right now the experiment is blowing up in everyone's face like Wile E. Coyote trying to catch a profitable roadrunner.

Anyways, back to my coffee after this steamy thought.

AI, WTF is it good for?

You know, I ask myself WTF AI is good for, and for the most part there is a lot more negative than positive. So far the age of AI has given us deepfakes. We are getting AI that tells people unsafe things. We have AI that, for the most part, is doing people's homework without any uptake of information. I can tell an AI, "I have a homework assignment on the state of Florida, please write a 4 page report on it." What does the AI do? It writes a fucking report.

Certainly. Here is a comprehensive report on the State of Florida, structured to meet the requirements of a four-page academic assignment.
The Sunshine State: A Comprehensive Report on Florida

Are you fucking kidding? The educational system needs this like a hole in the head. Kids will not know what Florida is, because they will not read this. Goodbye, critical thinking; this is stupid. AI should be age-checked, or at the least have self-checks in place to prevent this. The educational system is already seeing the cracks from people faking it till they make it.


The idea of kids using this to fake it till they make it is insane. There should be some amount of checks in place. If I wrote, "I have a proposal, I need a 2 page primer on Florida so I can insert it into the document," I can give that a pass. But if someone writes "i ned to writ a rapport on flooriduh," the AI should step in with: "Hi, as an AI I see you are trying to write a report. I am going to help you do this yourself."

On the researcher end, AI is a great thing. You can type "I am wondering what happens when you mix X and Y together," and the AI can tell you whether you are going to blow up your house or not, in the relatively safe sandbox of the AI machine. At the same time, I feel like this process offloads a bit of mental processing. Before, to do research, you would go to the library; you'd use the library computer to find the books you wanted. You used the books, retained the information mentally and on paper, and used it later. It allowed refinement of your process from the library to home. In the school setting, it allowed teacher oversight in case a child looked like they needed help or had wandered off subject. Now I could feasibly have AI make a report on how Roger Rabbit predicted the downfall of democracy in 10 seconds.

Not to mention that while the books could tell you how to blow up a house, AI infrastructure can be turned to HELP you blow up a house. Not that I'd ever do that. I believe AI needs a governor that can be set to what the deployment is for. Take the school AI: if a child types "i like guns," the AI goes, "Hey, this is not a good idea, and the school has a zero-tolerance policy." But if the child types, "I am looking for information on the Battle of Omaha Beach and what types of guns were used," the AI tags it (history)(guns)(battle), makes a guess that Stan is writing a report on WWII, and spits out the information needed. But if the kid just asks it to write the report on the guns used on Omaha Beach in WWII, the AI should detect that, become HAL, and go, "I can't do that, Dave." Take the same AI and move it to a business: the new guy, not so sure what's going on, types "how do i get rid of potassium sulfide." The AI realizes it is Dave the new guy, tags it (new guy)(building map)(secure disposal of chemical), and goes, "Dave, I have printed a map for you. Take the chemical, use this container, and bring it to chemical disposal at location X. If you want, I can use the app on your phone to give you somewhat precise directions to dispose of this stink."
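A governor like that can be sketched as a rule gate: look at the deployment context and the phrasing of the prompt, then decide whether to refuse, alert staff, guide, or answer. The keyword rules and context labels below are invented for illustration; a real system would use a trained classifier, not string matching.

```python
# Toy "governor" sketch: context-aware gating of AI responses.
def governor(context: str, prompt: str) -> str:
    """Return the action the AI should take for this context and prompt."""
    p = prompt.lower()
    if context == "school":
        if "write a report" in p or "make me a" in p:
            return "refuse"       # HAL mode: do your own homework
        if "guns" in p and not any(w in p for w in ("battle", "wwii", "history")):
            return "alert_staff"  # zero-tolerance policy, flag to an admin
        return "guide"            # act as a research guide, not an authority
    if context == "business":
        if "get rid of" in p or "dispose" in p:
            return "escort"       # print the map, walk Dave to disposal
        return "answer"
    return "answer"

print(governor("school", "I am looking for information on the Battle of Omaha Beach"))  # guide
print(governor("school", "i like guns"))  # alert_staff
```

Same engine, different governor settings per deployment: the school build refuses homework and flags red-line topics, while the business build turns the same question into a supervised procedure.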

I suppose in five years we will know the outcome and output of AI, depending on literacy rates and the number of people who can count to five. But while I am critical of AI, I do have some support for it.