AI Navigate

I asked ChatGPT, Claude, Perplexity, and Gemini about 10 SaaS products. Here's what they got wrong.

Dev.to / 3/23/2026

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • The article identifies four ways AI tools misrepresent SaaS products in buyer conversations: misclassification, confusion with competitors, generic descriptions, and omissions.
  • The author built a diagnostic tool that queries four AI models with real buyer questions and analyzes responses to diagnose why these errors occur, tested on 10 real SaaS products.
  • Key findings show AI often places products in the wrong category, merges them with competitors, describes their features only in generic terms, or omits them entirely from recommendations.
  • The analysis emphasizes data gaps in training and retrieval indices as the root cause and outlines a system for measuring category visibility and response quality.


Not "they didn't mention it." They actively got it wrong.

Misclassified into the wrong category. Confused with competitors. Described with zero specific details. Omitted entirely from buyer conversations while competitors got recommended instead.

I built a tool that scans all four AI models with the questions buyers actually ask, then diagnoses exactly what's going wrong. I ran it on 10 real SaaS products. Here's what I found.
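The scan itself is simple in shape. Here's a minimal sketch of that loop, with the model clients stubbed out (in the real tool each stub would call that provider's API). All names here — `BUYER_QUESTIONS`, `query_model`, `ScanResult` — are illustrative, not Bersyn's actual code:

```python
from dataclasses import dataclass

MODELS = ["chatgpt", "claude", "perplexity", "gemini"]

BUYER_QUESTIONS = [
    "What's the best CSV import widget for React?",
    "How do I add spreadsheet upload to my SaaS?",
]

def query_model(model: str, question: str) -> str:
    """Stub: stands in for a real API call to each provider."""
    canned = {
        "chatgpt": "Flatfile and CSVBox are popular choices.",
        "claude": "Consider Flatfile for data onboarding.",
        "perplexity": "ImportKit is a lightweight React CSV importer.",
        "gemini": "Flatfile offers enterprise-grade imports.",
    }
    return canned[model]

@dataclass
class ScanResult:
    model: str
    question: str
    mentioned: bool

def scan(product: str) -> list[ScanResult]:
    """Ask every model every buyer question; record whether the product appears."""
    results = []
    for model in MODELS:
        for question in BUYER_QUESTIONS:
            answer = query_model(model, question)
            results.append(ScanResult(model, question, product.lower() in answer.lower()))
    return results

results = scan("ImportKit")
coverage = sum(r.mentioned for r in results) / len(results)
print(f"coverage: {coverage:.0%}")  # -> coverage: 25% (only Perplexity mentions it)
```

The diagnosis layer sits on top of results like these: which models mention you, in which conversations, and how.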

The four ways AI gets your product wrong

1. Misclassified

AI puts your product in the wrong category entirely.

I scanned a CSV import widget. ChatGPT called it "an ETL pipeline tool." It's not — it's a React component that handles file uploads and column mapping. But ChatGPT had no authoritative content to learn from, so it guessed based on adjacent keywords.

When buyers ask "what's the best CSV import widget," the product doesn't appear. When they ask "what's the best ETL tool," it appears in the wrong conversation with the wrong competitors.

2. Confused with competitors

AI conflates your product with someone else.

In 3 out of 8 buying conversations, AI described the product using features that belong to a competitor. "ImportKit, similar to Flatfile, offers enterprise-grade data onboarding..." — except the whole point of ImportKit is that it's NOT enterprise-grade. It's a lightweight, affordable alternative.

The AI isn't lying. It's working from insufficient data and filling in the gaps with the most statistically likely description. Which happens to be the competitor's description.

3. Generic

AI describes you without any specific details.

"It helps with data imports." That's what AI said about a product with AI-powered column mapping, real-time validation, React-native integration, and sub-second import processing. None of those differentiators appeared.

When every product in your category gets the same generic description, buyers have no reason to choose you. AI has flattened your identity into category soup.

4. Omitted

AI recommends your competitors but not you.

In 6 of 8 core buying conversations, the product was completely absent. ChatGPT, Claude, and Gemini all recommended Flatfile and CSVBox. Perplexity sometimes found the product through web search, but the other three had never learned it existed.

This is the most common problem. Your product simply isn't in the training data or the retrieval index for your category's buying questions.

What I actually measured

I didn't just check if the product was mentioned. My system runs a full intelligence analysis on every AI response:

  • Category visibility: Does AI even know your category exists for this product?
  • Entity recognition: Does AI treat your product as a distinct entity with specific attributes?
  • Training vs. retrieval gap: Is the problem that AI never learned about you (training) or that it can't find your content (retrieval)?
  • Conflation detection: Is AI mixing you up with a competitor?
  • Specificity score: How specific are AI's descriptions of your product?
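To make two of those checks concrete, here's a hedged sketch of a specificity score (what fraction of verified differentiators an answer actually names) and a simple conflation flag (a competitor described in the same breath as the product). The term lists are illustrative, not Bersyn's real ones:

```python
# Verified differentiators and known competitors for the example product.
DIFFERENTIATORS = [
    "column mapping",
    "real-time validation",
    "react",
    "sub-second",
]
COMPETITORS = ["flatfile", "csvbox"]

def specificity_score(answer: str) -> float:
    """Fraction of verified differentiators the AI answer mentions."""
    text = answer.lower()
    hits = sum(term in text for term in DIFFERENTIATORS)
    return hits / len(DIFFERENTIATORS)

def is_conflated(answer: str, product: str = "importkit") -> bool:
    """Flag answers that describe the product alongside a competitor."""
    text = answer.lower()
    return product in text and any(c in text for c in COMPETITORS)

generic = "It helps with data imports."
specific = "ImportKit is a React widget with column mapping and real-time validation."
conflated = "ImportKit, similar to Flatfile, offers enterprise-grade onboarding."

print(specificity_score(generic))   # -> 0.0
print(specificity_score(specific))  # -> 0.75
print(is_conflated(conflated))      # -> True
```

The real system does this across every response, per model, per question — but the idea is the same: score each answer against the product's verified identity, not against vibes.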

For the CSV import widget, the diagnosis was clear:

| Metric | Result |
| --- | --- |
| Score | 3.1 / 10 |
| Coverage | 38% of buying conversations |
| Category visibility | Emerging |
| Primary gap | Weak training signal |
| Entity recognition | Partial (30% of responses) |
| Highest risk surface | Claude |

The system then tells you exactly what to do: "Publish a comparison page on your docs site — AI is confusing you with Flatfile."

Why this matters now

According to Fortune, Google's AI Overviews is 44% more likely to display negative information about a brand than ChatGPT. AI-generated buying advice isn't a future problem — it's happening right now, in every category.

And unlike SEO, you can't see it happening unless you systematically scan what AI says about you. There's no "view source" on a ChatGPT conversation. The misrepresentation is invisible until a buyer makes a decision based on it.

What I'm building

The tool is called Bersyn. It does three things:

  1. Define your identity — extract your product's real capabilities, differentiators, and category from your website, docs, or code
  2. Measure AI representation — scan ChatGPT, Claude, Perplexity, and Gemini weekly with real buyer questions
  3. Fix what's wrong — generate corrective content targeting specific gaps, with recommendations on where to publish for maximum impact

Every measurement is scored against your verified identity. Every corrective patch is anchored to specific claims. Every improvement is re-measured to prove it worked.
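The loop itself is the point, so here's an illustrative skeleton of it. Both `measure` and `generate_patch` are stubs — the canned score trajectory and patch names are made up for the example, not real Bersyn output:

```python
def measure(week: int) -> float:
    """Stub: returns a canned weekly score instead of running a real scan."""
    return [0.7, 2.1, 3.3][min(week, 2)]

def generate_patch(score: float) -> str:
    """Stub: the real tool emits corrective content targeting a specific gap."""
    return "comparison article" if score < 2.0 else "docs page"

history = []
for week in range(3):
    score = measure(week)       # scan all four models, score vs. verified identity
    history.append(score)
    patch = generate_patch(score)
    # publish `patch`, then next week's measurement verifies the improvement

assert history == sorted(history)  # each cycle should re-measure higher
```

Each pass through the loop produces a score, a patch, and — on the next pass — proof (or disproof) that the patch worked.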

It's not an SEO tool. It's not a rank tracker. It's not a one-time report. It's a measurement + correction loop that compounds every week.

The internal test

I tested Bersyn on our own product (ImportKit, a CSV import widget we maintain):

  • Before: 0.7/10 — invisible in 7 of 8 buying conversations, misclassified as an ETL tool
  • After 10 days: 3.3/10 — present in 5 of 8 conversations, category corrected, core capabilities recognized by 3 of 4 AI surfaces
  • What we published: 2 comparison articles, 1 technical docs page, 1 README update

Every piece of content was generated by Bersyn, targeting a specific gap identified by a specific scan.

Want to see how AI describes YOUR product?

I'm running a founding beta — $49/mo, 24 spots.

If you have a SaaS product and you're curious what ChatGPT, Claude, Perplexity, and Gemini actually say about it when buyers ask, I'll set up your first scan personally.

No pitch. No demo video. Just your product, scanned across four AI models, with a full intelligence report showing exactly what's right and what's wrong.

Join the Founding Beta or drop a comment with your product URL — I'll tell you what I find.

I'm Gissur, building Bersyn in Iceland. Previously wrote about scanning 35 SaaS products across AI models. This is the same system, now with intelligence diagnostics that tell you WHY AI gets your product wrong, not just that it does.