I’ve been thinking a lot about how broken testing workflows feel right now.
Most of the time, writing end-to-end tests is slow, brittle, and honestly kind of painful. You write selectors; they break when the UI changes, and suddenly half your tests are useless.
Recently, I came across Autonoma AI, and it feels like a completely different approach.
Instead of writing test scripts, you just describe what you want in plain English.
Something like:
“Open the login page, enter credentials, and verify the dashboard loads.”
And that’s it.
Autonoma handles:
- Running tests on real browsers and devices
- Detecting elements using AI instead of fragile selectors
- Automatically fixing tests when the UI changes (self-healing)
That last part is huge. Anyone who’s worked with tools like Selenium or Cypress knows how annoying broken selectors can be.
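To make the fragility concrete, here is a minimal, generic sketch (not Autonoma's or Selenium's actual API, just the Python standard library) of why tests pinned to exact CSS classes break: the same lookup passes on the original markup and fails after a harmless class rename.

```python
# Generic illustration of selector fragility. The markup strings and the
# "btn-login" class name are hypothetical examples, not from any real app.
from html.parser import HTMLParser


class ClassFinder(HTMLParser):
    """Collects tag names whose class attribute contains a target class."""

    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self.matches = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if self.target_class in classes:
            self.matches.append(tag)


def find_by_class(html, cls):
    """Return the tags in `html` carrying CSS class `cls`."""
    finder = ClassFinder(cls)
    finder.feed(html)
    return finder.matches


old_markup = '<button class="btn-login">Sign in</button>'
# A routine UI refactor renames the class; the button still works for users.
new_markup = '<button class="btn-primary">Sign in</button>'

print(find_by_class(old_markup, "btn-login"))  # ['button'] -> test passes
print(find_by_class(new_markup, "btn-login"))  # []        -> same test now fails
```

An AI-based locator would instead match on intent ("the sign-in button"), which is exactly the gap self-healing tools claim to close.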
What’s interesting is that this isn’t just a testing tool; it feels like part of a bigger shift toward LLM-native development.
Instead of writing code for everything, we’re starting to describe intent and let AI handle execution.
I can see this being useful for:
- Startups that don’t want to maintain complex QA pipelines
- Teams shipping fast, where the UI changes constantly
- Solo devs who just want basic coverage without overhead
I haven’t fully integrated it into a production project yet, but even the idea of replacing brittle tests with something adaptive is pretty exciting.
Curious to see how this evolves.




