AI Navigate

PSA: Check your Langfuse traces. Their SDK intercepts other tools' traces by default and charges you for them.

Reddit r/LocalLLaMA / 3/13/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • Langfuse's V4 SDK, by default, attaches to the global TracerProvider and can intercept and upload spans from unrelated tools, potentially inflating your bill.
  • This behavior means you might be charged for thousands of traces from evaluation tools like DeepEval or local runners that you did not intend to send.
  • The recommended fix is to explicitly lock down the span processor so it only accepts Langfuse-generated spans, using a should_export_span filter.
  • The post provides a code snippet to implement the filter and highlights that the default OTEL configuration is the root cause, with a TL;DR summary to verify your usage dashboard.

If you use Langfuse alongside evaluation tools like DeepEval or local runners, check your usage dashboard. You might be paying for thousands of traces you never meant to send.

What's happening:

Instead of only tracking what you explicitly tell it to, their SDK attaches to the global TracerProvider.

By default, it greedily intercepts and uploads any span in your application that has gen_ai.* attributes or known LLM scopes—even from completely unrelated tools running in the same process.
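To see why this is a problem, here's a plain-Python sketch of that "accept anything that looks like an LLM span" logic. `FakeSpan` and `greedy_should_export` are illustrative stand-ins, not Langfuse's actual internals; the point is that a scope-agnostic predicate can't tell its own spans apart from another tool's:

```python
# Stand-in types; real OTEL spans carry an instrumentation scope plus attributes.
from dataclasses import dataclass, field

@dataclass
class FakeSpan:
    scope_name: str
    attributes: dict = field(default_factory=dict)

def greedy_should_export(span):
    # Accepts any span that merely *looks* like an LLM call,
    # regardless of which library created it.
    return any(k.startswith("gen_ai.") for k in span.attributes)

spans = [
    FakeSpan("langfuse-sdk", {"gen_ai.system": "openai"}),
    FakeSpan("deepeval",     {"gen_ai.system": "openai"}),  # unrelated eval tool
]
exported = [s.scope_name for s in spans if greedy_should_export(s)]
print(exported)  # both spans match, so DeepEval's data gets uploaded too
```

Both spans pass the filter, and with per-observation pricing each one is billable.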

Because Langfuse has usage-based pricing (per trace/observation), this "capture everything" default silently inflates your bill with third-party background data. The behavior is most prominent in the new V4 SDK, but a backend update appears to be triggering it in older setups too.

I'm on Langfuse V3.12 and started seeing unrelated DeepEval data 2 days ago:

https://preview.redd.it/lzig36rgfoog1.png?width=1774&format=png&auto=webp&s=ef22544841acf4019686fbfbf607b4edbfc11e9c

The Fix:

You need to explicitly lock down the span processor so it only accepts Langfuse SDK calls.

```python
from langfuse import Langfuse

langfuse = Langfuse(
    should_export_span=lambda span: (
        span.instrumentation_scope is not None
        and span.instrumentation_scope.name == "langfuse-sdk"
    )
)
```

That locks it down to only spans that Langfuse itself created. Nothing from DeepEval, nothing from any other library. Effectively the default it probably should have shipped with.
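If you want to sanity-check the filter logic before deploying it, you can exercise the same predicate against stand-in span objects. The `SimpleNamespace` fakes below mirror the two fields the filter reads (`instrumentation_scope` and its `name`); they are test doubles, not real OTEL spans:

```python
from types import SimpleNamespace

def should_export_span(span):
    # Same predicate as the lambda passed to Langfuse above.
    return (
        span.instrumentation_scope is not None
        and span.instrumentation_scope.name == "langfuse-sdk"
    )

own        = SimpleNamespace(instrumentation_scope=SimpleNamespace(name="langfuse-sdk"))
other      = SimpleNamespace(instrumentation_scope=SimpleNamespace(name="deepeval"))
none_scope = SimpleNamespace(instrumentation_scope=None)  # spans with no scope at all

results = [should_export_span(s) for s in (own, other, none_scope)]
print(results)  # [True, False, False]
```

Only Langfuse's own spans survive; anything from another scope, or with no scope, is dropped before export.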

TL;DR: Langfuse's default OTEL config uploads every LLM trace in your stack, regardless of what tool generated it. Lock down your should_export_span filter to stop the bleeding.

submitted by /u/alxdan