Our AI started a cafe in Stockholm

Simon Willison's Blog / 5/6/2026



5th May 2026 - Link Blog

Our AI started a cafe in Stockholm (via) Andon Labs previously started an AI-run retail store in San Francisco. Now they're running a similar experiment in Stockholm, Sweden, only this time it's a cafe.

These experiments are interesting, and often throw out amusing anecdotes:

During the first week of inventory, Mona ordered 120 eggs even though the café has no stove. When the staff told her they couldn’t cook them, she suggested using the high-speed oven, until they pointed out the eggs would likely explode. She also tried to solve the problem of fresh tomatoes being spoiled too fast by ordering 22.5 kg of canned tomatoes for the fresh sandwiches. The baristas eventually started a “Hall of Shame”, a shelf visible to customers with all the weird things Mona ordered, including 6,000 napkins, 3,000 nitrile gloves, 9L coconut milk, and industrial-sized trash bags.

Where they lose their shine is when these AI managers start wasting the time of human beings who have not opted into the experiment:

She also successfully applied for an outdoor seating permit through the Police e-service, which didn’t require BankID. Her first submission included a sketch she had generated herself, despite having never seen the street outside the café. Unsurprisingly, the Police sent it back for revision. [...]

When she makes a mistake, she often sends multiple emails to suppliers with the subject “EMERGENCY” to cancel or change the order.

I don't think it's ethical to run experiments like this that affect real-world systems and steal time from people.

I'm reminded of the incident last year where the AI Village experiment infuriated Rob Pike by sending him unsolicited gratitude emails as an "act of kindness". That was just an unwanted email - asking suppliers to correct mistakes that were made without a human-in-the-loop or wasting police time with slop diagrams feels a whole lot worse to me.

I think experiments like this need to keep their own human operators in-the-loop for outbound actions that affect other people.

Posted 5th May 2026 at 10:14 pm


Tags: ai, generative-ai, llms, ai-agents, ai-ethics
