Claude Launches 'Dream' Feature, Letting Agents Truly 'Dream'
At its developer conference, Anthropic introduced a new 'Dream' feature for Claude, which lets users select, in the Platform dashboard, memories retained by the Agent for asynchronous 'dreaming'.
Rather than simply summarizing, the feature mimics the memory pruning and consolidation that occur during human sleep: it reflects on, reorganizes, and refines existing memories to surface patterns that were never explicitly recorded.
Dario Amodei has said in several podcasts that continuous learning does not necessarily require updating model weights; it can be achieved through good engineering architecture and large context windows. 'Dream' is a realization of this idea as an independent, offline curation loop.
Source: Public Information
ABAB AI Insight
Anthropic's low-key launch of the Dream feature is a direct engineering implementation of Tononi and Cirelli's synaptic homeostasis hypothesis and of Walker's sleep research. Dario has stressed that truly advanced intelligence requires a 'forgetting' mechanism rather than infinite accumulation of memories; Dream is the first productization of this biological insight as an offline processing module for Agents.
On the capital side, Anthropic is shifting engineering resources from merely expanding context windows toward 'memory governance'. Through Dream it achieves efficient long-term memory compression and schema construction, significantly reducing inference costs while providing core infrastructure for Claude Agents in long-sequence professional settings such as law and finance, forming a dual-loop architecture of 'real-time interaction + offline dreaming'.
Like the emphasis on long-sequence legal Agents in the Harvey LAB benchmark and OpenAI's chain-of-thought optimization in the o series, frontier labs are at a critical transition from 'expanding memory capacity' to 'active memory governance'.
Essentially, this is a technological substitution: the Dream feature replaces simple context accumulation with asynchronous offline curation, shifting investment from infinite token-window expansion to efficient memory pruning and pattern extraction. Mechanically, it mimics the global synaptic downscaling and weak-connection pruning of human sleep, letting Agents achieve genuine long-term continuous learning and driving AI from 'remembering everything' toward 'remembering what matters most, and continuously evolving'.
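Anthropic has not published how Dream works internally, but the mechanism described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (all names and thresholds are assumptions, not Anthropic's API): one offline 'dream' pass that globally downscales memory strengths, prunes the weakest entries, and groups survivors into schemas.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    strength: float                     # retention weight, decays over time
    tags: set = field(default_factory=set)

def dream_pass(memories, downscale=0.8, prune_below=0.2):
    """Hypothetical offline curation pass, loosely mimicking synaptic
    homeostasis: globally downscale all strengths, prune weak memories,
    and extract schemas by grouping survivors on shared tags."""
    for m in memories:
        m.strength *= downscale         # global downscaling
    kept = [m for m in memories if m.strength >= prune_below]  # prune weak connections
    schemas = {}                        # pattern extraction: tag -> related memories
    for m in kept:
        for tag in m.tags:
            schemas.setdefault(tag, []).append(m.text)
    return kept, schemas

mems = [
    Memory("contract clause X", 1.0, {"law"}),
    Memory("small talk", 0.2, {"chat"}),
    Memory("precedent Y", 0.9, {"law"}),
]
kept, schemas = dream_pass(mems)
# the weak "small talk" memory is pruned; the two legal memories
# are consolidated under a shared "law" schema
```

In a dual-loop design, the real-time interaction loop would only append memories; a pass like this would run asynchronously, so curation cost never sits on the inference path.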
ABAB News · Cognitive Law
The best memory system is not the one that remembers the most, but the one that knows when to forget and reorganize. A truly continuously learning Agent is not always awake; it knows how to 'dream' periodically. When AI begins to actively organize its own thoughts, it is no longer just a tool — it begins to possess a mind.