The US Government Fired 40% of an Agency, Then Asked AI to Do Their Jobs

Dev.to / 4/17/2026

Key Points

  • The U.S. General Services Administration (GSA) cut nearly 40% of its workforce after October 2024 and shuttered key digital services teams, creating a major staffing gap.
  • In April 2026, GSA unveiled a plan to use an AI chatbot called “USAi” to automate one million work hours, aligned with an “Eliminate, Optimize, Automate” strategy.
  • GSA has identified about 400,000 automatable hours so far and launched “GSA Labs,” where selected employees build and train AI tools on top of their regular duties, without extra pay.
  • USAi is a ChatGPT-style assistant backed by multiple model providers (including Anthropic Claude variants and Meta’s Llama) for tasks like drafting emails, summarizing documents, and writing basic code; employees are barred from entering non-public, personal, or confidential data.
  • Early user reports suggest the system produces generic outputs and may be “intern”-level, with critics arguing it lacks the planning, ethics, and trust-building seen in more established AI government efforts.

Fire First, Automate Later

Here’s a timeline that reads like a corporate dystopia speed-run. The U.S. General Services Administration (GSA) has lost nearly 40% of its workforce since October 2024. Entire teams vanished. The digital services unit 18F, home to almost 100 tech specialists who actually built things for the government, was shuttered completely. The Public Buildings Service shed 45% of its staff between September 2024 and November 2025.

And now, in April 2026, GSA has announced its bold plan to fix the mess: an AI chatbot called USAi, tasked with automating one million work hours. That’s roughly a year’s worth of labor from 500 full-time employees. The ones they already fired.
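A quick back-of-the-envelope check on that claim, assuming a standard 2,080-hour work year (40 hours × 52 weeks; GSA hasn’t said which figure it uses):

```python
# Sanity-check "one million hours = roughly 500 full-time employees".
# Assumes a standard 2,080-hour work year; GSA has not published
# the conversion it used.
HOURS_PER_FTE_YEAR = 40 * 52          # 2,080 hours

target_hours = 1_000_000              # the "Million Hours Challenge"
identified_hours = 400_000            # automatable hours found so far

print(f"FTE-years in target:  {target_hours / HOURS_PER_FTE_YEAR:.0f}")  # ~481
print(f"Progress toward goal: {identified_hours / target_hours:.0%}")    # 40%
```

About 481 FTE-years, so “roughly 500 employees” holds up, and the 400,000 hours identified so far really is 40% of the target. The math is the easy part.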

Let that sink in for a second.

The Million Hours Challenge (Yes, They Actually Called It That)

GSA Deputy Director Michael Lynch revealed the initiative at an industry conference, framing it under the agency’s “EOA” playbook: Eliminate, Optimize, Automate. So far, they’ve identified about 400,000 hours of automatable work, which puts them roughly 40% of the way to their goal. Lynch says the agency wants to “start with ourselves and expand as we go forward,” which is either admirably self-aware or mildly threatening, depending on your perspective.

To power this effort, GSA created “GSA Labs,” recruiting around 300 interested employees. An initial cohort of 30 is tackling five priority problems selected from 17 proposals. These employees do this work on top of their regular duties, with no extra pay. Nothing says “we value you” like asking surviving staff to train their own AI replacement during lunch breaks.

What USAi Actually Does (and Doesn’t)

The tool runs through a ChatGPT-style interface and draws on multiple AI models, including Anthropic’s Claude 3.5 Haiku and Claude 3.5 Sonnet, and Meta’s Llama 3.2. Its approved tasks include drafting emails, creating talking points, summarizing documents, and writing basic code. Employees are explicitly barred from feeding it non-public government data, personal information, or confidential work products.
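GSA hasn’t published USAi’s internals, but a multi-model setup like the one described typically amounts to a thin routing layer over provider APIs. A minimal sketch, with stub backends standing in for the real model calls (the routing table and every function name here are hypothetical, not GSA’s code):

```python
# Hypothetical sketch of a multi-model chat gateway like USAi's.
# Stub functions stand in for real provider API calls.
from typing import Callable

def claude_haiku(prompt: str) -> str:
    return f"[claude-3.5-haiku] {prompt[:40]}..."

def claude_sonnet(prompt: str) -> str:
    return f"[claude-3.5-sonnet] {prompt[:40]}..."

def llama(prompt: str) -> str:
    return f"[llama-3.2] {prompt[:40]}..."

# Map each approved task type to a backend: a cheap, fast model for
# short drafting tasks, stronger models for summarization and code.
ROUTES: dict[str, Callable[[str], str]] = {
    "draft_email": claude_haiku,
    "talking_points": claude_haiku,
    "summarize": claude_sonnet,
    "basic_code": llama,
}

def usai(task: str, prompt: str) -> str:
    if task not in ROUTES:
        raise ValueError(f"Task not on the approved list: {task}")
    return ROUTES[task](prompt)

print(usai("draft_email", "Remind the team about Friday's deadline."))
```

Nothing exotic, which is rather the point: the hard part of automating 400,000 hours isn’t the plumbing.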

Early feedback from actual users? It delivers “generic and guessable answers” and works “about as good as an intern.” Forrester analyst Charlie Dai was more pointed, noting that the approach “lacks the careful planning, ethical considerations, and public trust-building seen in other global efforts.” Which is a polite way of saying: this feels rushed.

If you’ve been following how AI companies are raising absurd amounts of money to build these tools, the contrast is striking. Billions flow into building the technology, but the agencies deploying it can barely staff a pilot program.

The DOGE Connection

None of this happens in a vacuum. GSA was a focal point for the Department of Government Efficiency (DOGE), Elon Musk’s government cost-cutting initiative. DOGE pushed to slash GSA’s real estate portfolio in half and was directly involved in the workforce reductions. Former 18F employees have filed a class-action appeal, claiming they were specifically targeted.

The Government Accountability Office has already flagged that deep staffing cuts at GSA’s Public Buildings Service created real problems: property sales stalled, access was restricted, and vetting procedures fell apart. Now the plan is to patch those gaps with AI that, by its own users’ admission, performs at intern level.

GSA isn’t alone in this approach. The EPA and IRS, also hit hard by layoffs, have announced similar plans to “rebuild capacity through AI.” It’s becoming a pattern across the federal government: cut the people, then scramble to replace institutional knowledge with language models that can’t access the actual institutional data.

The Real Problem Nobody’s Talking About

There’s a fundamental contradiction here that deserves more attention. The AI tools being deployed are explicitly restricted from handling sensitive government information. But the work that actually needs doing (property management, procurement, building operations) is inherently tied to that sensitive information. You can’t automate building disposal paperwork if the chatbot isn’t allowed to see building disposal records.
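To make the contradiction concrete, here’s an illustrative pre-flight guard of the kind a “public data only” policy implies. The sensitivity markers are invented for this sketch (a real system would rely on classification labels, not keyword matching), but the failure mode is the same:

```python
# Illustrative guard for a "public data only" policy. The marker list
# is made up for this example; it is not GSA's actual filter.
NON_PUBLIC_MARKERS = (
    "building disposal record", "tenant roster",
    "procurement file", "controlled unclassified",
)

def allowed(prompt: str) -> bool:
    """Return True only if the prompt avoids non-public material."""
    text = prompt.lower()
    return not any(marker in text for marker in NON_PUBLIC_MARKERS)

# The generic task sails through...
print(allowed("Draft an email announcing the office move."))          # True
# ...but the work that actually needs automating is blocked by design.
print(allowed("Summarize the building disposal record for lot 42."))  # False
```

The guard isn’t a bug; it’s the policy working as intended. Which means the million automatable hours have to come from somewhere other than the agency’s core workload.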

This is the gap between AI hype and operational reality. Language models are genuinely useful for certain tasks. AI finding hundreds of security vulnerabilities in open-source code is impressive and verifiable. But drafting generic emails and summarizing documents you could have read in five minutes isn’t going to replace 500 full-time employees. It’s not even close.

Meanwhile, the agency is now trying to rehire. The Public Buildings Service plans to bring on 400 new employees over six months and has invited roughly 400 previously laid-off staff to return. So the timeline goes: fire people, deploy AI, realize AI can’t do the job, try to rehire the people you fired. There has to be a German word for this.

What This Tells Us About AI in 2026

The GSA story is a perfect case study for where we are with AI right now. The technology is real, but the deployment strategy matters enormously. Using AI to augment existing workers, helping them handle repetitive tasks faster so they can focus on complex decisions, is a genuinely good idea. Using AI to justify mass layoffs and then hoping the chatbot figures it out is not.

The best AI applications in 2026 are the ones that work alongside humans, not the ones shoved into the gap where humans used to be. A million hours of automation sounds impressive on a conference slide. But when the tool you’re betting on gets reviewed as “about as good as an intern,” maybe the first step shouldn’t have been firing 40% of the agency.

Just a thought.

🐾 Visit [the Pudgy Cat Shop](https://pudgycat.io/shop/) for prints and cat-approved goodies, or find our [illustrated books on Amazon](https://www.amazon.it/stores/author/B0DSV9QSWH/allbooks).

Originally published on Pudgy Cat