AI Navigate

What happens if the LLMs are sabotaged?

Reddit r/artificial / 3/21/2026

💬 Opinion · Ideas & Deep Analysis

Key Points

  • LLMs' quality depends on training data, so deliberately feeding garbage or poorly written code could degrade model performance and reliability.
  • The post highlights concerns about data poisoning and the need for data provenance, curation, and robust training pipelines.
  • It questions what guardrails and defenses exist to prevent data sabotage and ensure models remain trustworthy.
  • As a Reddit discussion rather than a formal announcement, the piece frames broader safety and governance questions surrounding AI training data and robustness.

Asking because I'm just curious.

LLMs are only as good as the data they are trained on. Take coding, for example: if, as an attack, the sources of these LLMs' training data were flooded with garbage or deliberately poorly written code, what happens to these frontier models? I'm reading that more and more businesses, in travel and elsewhere, are getting paranoid about AI taking over because of how good the models trained on real data have become. What if someone deliberately floods those sources with bad data to sabotage training? What guardrails are in place to prevent such a thing from happening?
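One common guardrail the key points allude to is data curation before training: checking where a sample came from and whether it is even valid code. Here is a minimal, hypothetical sketch of such a pre-training filter — the source labels, field names, and trust list are all illustrative assumptions, not any lab's actual pipeline:

```python
import ast

# Hypothetical allow-list of trusted origins; a real pipeline would rely on
# signed provenance metadata and much richer quality signals, not string tags.
TRUSTED_SOURCES = {"internal-repo", "vetted-oss"}

def keep_sample(sample: dict) -> bool:
    """Toy curation filter: drop code with untrusted provenance or that fails to parse."""
    if sample.get("source") not in TRUSTED_SOURCES:
        return False  # unknown origin -> possible poisoning vector
    try:
        ast.parse(sample["code"])  # cheap syntactic sanity check
    except SyntaxError:
        return False  # garbage code never reaches training
    return True

corpus = [
    {"source": "vetted-oss", "code": "def add(a, b):\n    return a + b\n"},
    {"source": "scraped-web", "code": "def add(a, b):\n    return a + b\n"},  # untrusted origin
    {"source": "vetted-oss", "code": "def broken(:\n"},  # deliberately bad code
]

clean = [s for s in corpus if keep_sample(s)]
print(len(clean))  # only the first sample survives
```

This only illustrates the idea: syntactic checks catch crude garbage, but well-formed yet subtly malicious code would need deeper defenses (deduplication, anomaly detection, human review).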

submitted by /u/Life-is-beautiful-