Every AI coding assistant forgets what it was doing the moment you close the terminal. Codex just fixed that.
OpenAI shipped v0.128.0 on April 30th with two features that matter more than they sound: /goal for persistent cross-session objectives, and /pet for ambient agent status feedback.
## The Session Amnesia Problem
You ask your AI assistant to refactor a module. It gets halfway through. You close the terminal, grab coffee, come back -- and it has zero memory of what it was doing.
You re-explain the task. It starts over. You lose 15 minutes of context every single time.
This is the intent persistence problem. Not context window size -- the model simply forgets your objective when the session ends.
## /goal: Define It Once, Codex Keeps Going
/goal lets you set a persistent objective that survives across sessions:
```
/goal create "Increase test coverage in src/auth/ from 62% to 90%"
```
Close the terminal. Reboot. Come back tomorrow. The goal is still there.
| Command | What it does |
|---|---|
| `/goal create` | Define a persistent objective |
| `/goal pause` | Suspend the goal, preserve progress |
| `/goal resume` | Pick up where you left off |
| `/goal clear` | Mark done or abandon |
Under the hood, goal state is managed through app-server APIs with runtime continuation. When you /goal resume, Codex restores the execution context -- not just the goal text.
This shifts AI coding from request-response to goal-driven agent: you define the destination, the tool figures out how to get there across as many sessions as it takes.
## /pet: Agent Observability, But Cute
Type /pet and a small animated creature appears in your Codex interface. It reflects what Codex is doing in the background:
- Running a task? The pet is active.
- Tests passed? It celebrates.
- Something stuck? It reacts.
- Idle? It sleeps.
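Strip away the charm and the pet is a status-to-animation mapping over the agent's run state. A minimal sketch of that idea (the state names and mapping here are illustrative assumptions, not Codex's implementation):

```python
from enum import Enum

class AgentState(Enum):
    IDLE = "idle"
    RUNNING = "running"
    SUCCESS = "success"
    STUCK = "stuck"

# Hypothetical mapping: which animation the pet plays for each agent state.
PET_ANIMATIONS = {
    AgentState.IDLE: "sleeping",
    AgentState.RUNNING: "active",
    AgentState.SUCCESS: "celebrating",
    AgentState.STUCK: "reacting",
}

def pet_frame(state: AgentState) -> str:
    """Return the animation the pet should show for the agent's current state."""
    return PET_ANIMATIONS[state]
```

The point of the design is that the user gets the status signal passively, from a glance at the animation, instead of polling logs.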
9to5Mac called them "little Dynamic Island-ish messengers." Sam Altman said: "This isn't the most important thing we've done, but it's more useful than it looks."
You can also /hatch a custom pet -- Codex generates one based on your project context.
Silly? Sure. But agent observability during long-running tasks is a real problem, and this solves it without requiring you to tail logs.
## What This Signals
When Cursor, Claude Code, and Codex generate roughly similar code, what differentiates them?
| Dimension | Old | New |
|---|---|---|
| Task scope | Single-turn | Multi-session goal tracking |
| Agent visibility | Terminal output | Ambient status indicators |
| Session model | Stateless | Stateful across restarts |
Once core functionality reaches parity, experience becomes the differentiator.
## v0.128.0 Quick Reference
| Feature | Command | Description |
|---|---|---|
| Virtual pet | `/pet` | Animated agent status companion |
| Custom pet | `/hatch` | AI-generated project-specific pet |
| Goal system | `/goal` | Persistent cross-session objectives |
| Self-update | `codex update` | Update from the terminal |
| Side chat | `/side` | Parallel conversation panel |
| Plugin marketplace | `marketplace` | One-click plugin install |
## Practical Notes
- Use `/goal` for multi-day refactors, coverage targets, and migration checklists. Not for one-off fixes.
- Use `/pet` as ambient monitoring during long agent runs.
- If you are juggling multiple AI tools (Codex, Claude Code, Gemini), the fragmentation tax is real. EvoLink unifies 30+ models behind one API gateway with smart routing.