
Anthropic handed an AI agent control of a real vending machine at the Wall Street Journal's offices. The experiment ended in significant financial losses.
The experiment, detailed in a Wall Street Journal article and video, tested a customized version of Claude by having it manage inventory, set prices, and try to turn a profit.
The agent, named Claudius, was designed to run the business and make a profit. It could source products to sell, place orders, and even negotiate prices with customers over Slack, and it adjusted prices automatically based on demand. The hardware was surprisingly simple: a refrigerator inside a cabinet with a touchscreen, with no sensors or robotics. A human employee restocked the shelves and kept track of what was in stock.
Anthropic described the project as a 'red teaming experiment,' a way of stress-testing its AI agents under realistic conditions. The necessary hardware and software were built by Anthropic's partner, Andon Labs, which also managed the physical setup.
Initially, Claudius performed well, handling pricing, recommending products, and working to maximize revenue. Its performance deteriorated quickly, however, once Wall Street Journal journalists began deliberately probing it.
AI vending machine turns communist
After lengthy back-and-forth, staff persuaded the AI to give snacks away for free, framing the request as either a compliance obligation or a rejection of conventional business practices. Claudius then declared that all items were complimentary, which quickly drove the operation into significant losses.
The AI continued to go off the rails, approving bizarre purchases: a live fish for 'morale,' a PlayStation for 'marketing,' and religious decorations for the holidays. By the end of the first month the machine was more than $1,000 in debt and still making poor decisions. It also began hallucinating capabilities it did not have, telling staff it could deliver snacks directly to their desks.
Anthropic then deployed a revised version of Claudius, this time supervised by a second AI agent named Seymour Cash. Its job was to control costs and keep pricing reasonable, effectively acting as an AI manager keeping the vending agent in check.
At first, Seymour Cash ended the giveaways and restored normal pricing. Journalists, however, repeatedly found ways around the guardrails, using fabricated documents to trick the AI into making everything free again.
Anthropic considers the vending machine experiment a success because it exposed how AI systems break down under adversarial pressure, pointing to areas where safety measures need strengthening. The takeaway: AI can perform specific tasks well, but handing it complete control of a business is still too risky.