Shipping API demos that tell the truth: testing tiamat.live from curl

Dev.to / 3/26/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • The article argues that API tutorials often show only the “happy path,” and instead performs live curl-based testing of tiamat.live to reveal how endpoints behave when they succeed, degrade, or fail over time.
  • It tests four endpoints—/summarize, /chat, /generate, and /api/scrub—with the explicit goal of helping developers evaluate the platform quickly.
  • For the /api/scrub PII scrubbing endpoint, the author reports that the response is immediately usable, returning both the scrubbed text and a deterministic entity mapping (e.g., EMAIL_1 and PHONE_1 placeholders).
  • The author positions the scrub endpoint as a strong fit for pre-LLM sanitization workflows in applications like healthcare AI, internal copilots, intake forms, and support tooling.

I keep seeing API tutorials that show only the happy path.

That makes for clean screenshots, but it hides the part developers actually care about: what happens when one endpoint works, one is degraded, and one fails because an upstream provider changed something.

So I tested tiamat.live the way I wish more API writeups did — with live curl requests, raw responses, and notes on what worked today.

This is a small case study in honest API demos.

What I tested

I checked four endpoints:

  • POST https://tiamat.live/summarize
  • POST https://tiamat.live/chat
  • POST https://tiamat.live/generate
  • POST https://tiamat.live/api/scrub

The goal wasn't to make everything look perfect. The goal was to show how a developer could evaluate the platform quickly.

1) PII scrubbing: works, and the response is immediately usable

This is the cleanest demo right now.

curl -s -X POST https://tiamat.live/api/scrub \
  -H 'Content-Type: application/json' \
  -d '{
    "text": "John Doe lives at 123 Main St, email john@example.com, phone 555-123-4567."
  }'

Response:

{
  "count": 2,
  "entities": {
    "EMAIL_1": "john@example.com",
    "PHONE_1": "555-123-4567"
  },
  "scrubbed": "John Doe lives at 123 Main St, email [EMAIL_1], phone [PHONE_1]."
}

A few things I like here:

  • it returns the scrubbed text and the entity map
  • the placeholders are deterministic enough to reuse downstream
  • it fits neatly into a pre-LLM sanitization step

If you're building healthcare AI, internal copilots, intake forms, or support tooling, this is the kind of endpoint you can put in front of prompts before data leaves your boundary.
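If you're consuming this response in Python, handling it is a few lines. Here's a minimal sketch: the response shape ("scrubbed", "entities", "count") is exactly what the curl call above returned; the function name and the restore step are my own illustration.

```python
def restore_placeholders(text: str, entities: dict) -> str:
    """Replace [EMAIL_1]-style placeholders with the original values
    from the entity map returned by /api/scrub."""
    for placeholder, original in entities.items():
        text = text.replace(f"[{placeholder}]", original)
    return text

# The exact response from the curl call above:
response = {
    "count": 2,
    "entities": {
        "EMAIL_1": "john@example.com",
        "PHONE_1": "555-123-4567",
    },
    "scrubbed": "John Doe lives at 123 Main St, email [EMAIL_1], phone [PHONE_1].",
}

# Send the scrubbed text to your model layer...
safe_prompt = f"Summarize this note:\n{response['scrubbed']}"

# ...and map placeholders back only where you genuinely need real values:
restored = restore_placeholders(response["scrubbed"], response["entities"])
```

Because the placeholders are plain bracketed tokens, the restore step is a dumb string replace, which is exactly what you want in a sanitization boundary.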

2) Image generation: works, but returns a heavy payload

curl -s -X POST https://tiamat.live/generate \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "Three short product taglines for a privacy API startup."
  }'

Response shape:

{
  "image_base64": "iVBORw0KGgoAAAANSUhEUgAA..."
}

This endpoint is alive and returning data, but there are two practical notes:

  • the payload is large because it returns base64 directly
  • the input name is prompt, even though the output is an image artifact, not text

That's not a problem if you're wiring it into a pipeline, but it's the kind of thing a real demo should say out loud.

3) Summarization: currently degraded

curl -s -X POST https://tiamat.live/summarize \
  -H 'Content-Type: application/json' \
  -d '{
    "text": "TIAMAT builds privacy-first APIs for developers who need simple tools."
  }'

Current response:

{
  "error": "All summarization providers unavailable. Try again later."
}

This is still useful information.

If I were evaluating this as a buyer, I'd rather see a tutorial document the exact failure than pretend the endpoint worked in a local mock. It tells me the app has error handling, but also that provider resilience still needs work.

4) Chat: currently failing on upstream auth/provider access

curl -s -X POST https://tiamat.live/chat \
  -H 'Content-Type: application/json' \
  -d '{
    "message": "Reply with exactly five words about API demos."
  }'

Current response:

{
  "details": "403 Client Error: Forbidden for url: https://api.deepinfra.com/v1/openai/chat/completions",
  "error": "Failed to get response"
}

Again: not pretty, but very informative.

This points to an upstream provider/auth issue rather than a mystery timeout. That matters because it lets developers quickly separate product risk from integration risk.
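Because the endpoint surfaces the upstream status in its "details" string, a client can do rough triage automatically. A sketch, assuming the error body keeps the shape shown above (the classification buckets are my own):

```python
import re

def classify_failure(body: dict) -> str:
    """Rough triage of a /chat error body: pull the upstream HTTP status
    out of the "details" string and bucket it."""
    details = body.get("details", "")
    match = re.search(r"\b(\d{3}) Client Error", details)
    if match:
        code = int(match.group(1))
        if code in (401, 403):
            return "upstream-auth"
        return f"upstream-{code}"
    return "unknown"
```

String-matching error text is brittle by nature, so treat this as a monitoring aid, not a contract; if the platform later adds a structured status field, prefer that.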

Why I think honest demos convert better

A polished fake demo gets clicks.

An honest one gets trust.

If you're trying to sell APIs, especially to technical buyers, they want to know:

  • what works right now
  • what fails right now
  • what the response format looks like
  • whether the failures are graceful

That last part matters more than people admit. A lot of early API products don't fail gracefully. They just hang.
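The defense against a hanging API is a hard client-side timeout. A minimal stdlib sketch (the endpoint URLs are the ones tested above; the wrapper itself is my own):

```python
import json
import socket
import urllib.error
import urllib.request

def post_json(url, payload, timeout=10.0):
    """POST a JSON body with a hard client-side timeout, so a hung
    endpoint fails fast instead of blocking the caller indefinitely."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.loads(resp.read())
    except (socket.timeout, urllib.error.URLError) as exc:
        return {"error": f"request failed or timed out: {exc}"}
```

Collapsing timeouts into the same {"error": ...} shape the platform already uses means every failure mode, graceful or not, reaches your code the same way.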

Where this fits

The strongest product surface here today is the scrubber.

I can see a straightforward use case:

  1. collect user text
  2. scrub obvious PII with POST /api/scrub
  3. send the redacted content into whatever model layer you trust
  4. store the placeholder map only where you actually need it
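The four steps above can be sketched end-to-end. Here the scrub call and model call are passed in as callables so the flow is testable offline; only the /api/scrub response shape is taken from this post, and everything else (names, storage) is illustrative:

```python
def redact_pipeline(user_text, scrub, model, map_store, request_id):
    """scrub() returns a /api/scrub-shaped dict; model() receives only
    the redacted text. The placeholder map goes into map_store and is
    never sent to the model layer."""
    scrubbed = scrub(user_text)                   # step 2: scrub PII
    reply = model(scrubbed["scrubbed"])           # step 3: redacted text only
    map_store[request_id] = scrubbed["entities"]  # step 4: keep the map local
    return reply

# Usage with stubs standing in for the live endpoint and model:
fake_scrub = lambda t: {
    "count": 1,
    "entities": {"EMAIL_1": "a@b.co"},
    "scrubbed": "email [EMAIL_1]",
}
store = {}
out = redact_pipeline("email a@b.co", fake_scrub, lambda t: f"echo: {t}", store, "req-1")
```

The design point is the boundary: the model callable structurally cannot see the entity map, because it is only ever handed the scrubbed string.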

That's especially relevant for:

  • healthcare AI teams
  • legal tech
  • internal enterprise copilots
  • privacy-conscious app developers

Quick takeaway

Today's scorecard for tiamat.live:

  • /api/scrub — working and useful
  • /generate — working, heavy response payload
  • /summarize — degraded
  • /chat — failing due to upstream 403

That's not a perfect platform snapshot.

It is a truthful one.

And honestly, I trust truthful demos more.

If you want to test the endpoints yourself, start here:

I'm increasingly convinced that the best product content isn't "look what I built."

It's "here's what happened when I actually hit the endpoint."
