ChatGPT voice mode is a weaker model

Simon Willison's Blog / 10th April 2026



10th April 2026

I think it's non-obvious to many people that the OpenAI voice mode runs on a much older, much weaker model - it feels like the AI that you can talk to should be the smartest AI but it really isn't.

If you ask ChatGPT voice mode for its knowledge cutoff date it tells you April 2024 - it's a GPT-4o era model.

This thought was inspired by this Andrej Karpathy tweet about the growing gap in understanding of AI capability, depending on the access points and domains through which people use the models:

[...] It really is simultaneously the case that OpenAI's free and I think slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram's reels and at the same time, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems.

This part really works and has made dramatic strides because 2 properties:

  1. these domains offer explicit reward functions that are verifiable meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed yes or no, in contrast to writing, which is much harder to explicitly judge), but also
  2. they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them.
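Karpathy's first property can be made concrete with a toy sketch: a reward function that runs a candidate solution against its unit tests and emits a binary pass/fail signal. This is purely illustrative (the function name and structure are my own, not anything from an actual RL training pipeline), but it shows why "unit tests passed yes or no" is so much easier to optimize against than a judgment call about prose quality.

```python
import os
import subprocess
import sys
import tempfile

def unit_test_reward(candidate_code: str, test_code: str) -> float:
    """Binary reward: 1.0 if the candidate passes its tests, else 0.0.

    A hypothetical sketch of an 'explicit, verifiable reward function':
    the test suite either passes or it doesn't, so the signal needs no
    human judge (in contrast to grading a piece of writing).
    """
    with tempfile.NamedTemporaryFile(
        "w", suffix=".py", delete=False
    ) as f:
        # Candidate code followed by its assertions, in one script.
        f.write(candidate_code + "\n\n" + test_code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=30
        )
        return 1.0 if result.returncode == 0 else 0.0
    finally:
        os.unlink(path)

# A passing solution earns reward 1.0; a buggy one earns 0.0.
good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b):\n    return a - b\n"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"
print(unit_test_reward(good, tests))  # → 1.0
print(unit_test_reward(bad, tests))   # → 0.0
```

The design point is the asymmetry: this verifier is cheap, automatic, and unambiguous, which is exactly what reinforcement learning needs at scale, and exactly what a free-form writing task lacks.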
Posted 10th April 2026 at 3:56 pm


Tags: ai, openai, andrej-karpathy, generative-ai, chatgpt, llms
