OpenClaw, the fast‑growing open‑source framework for building AI agents, released a major update on March 7, 2026 (v2026.3.7) that tackles one of the field’s most stubborn usability problems: long‑context forgetting. The release replaces a hardcoded, sliding‑window context manager with a pluggable “Context Engine”, and ships a new Lossless‑Claw mode that preserves access to old conversation material rather than discarding it.
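To make the architectural shift concrete, a pluggable context manager might look something like the following sketch. The `ContextEngine` protocol, the `fit` method, and the `SlidingWindow` class are illustrative assumptions, not OpenClaw’s actual API:

```python
# Hypothetical sketch of a pluggable context-manager interface.
# None of these names come from OpenClaw's documentation; they only
# illustrate how swapping strategies becomes possible once the context
# manager is no longer hardcoded into the core.

from typing import Protocol


class ContextEngine(Protocol):
    def fit(self, turns: list[str], budget: int) -> list[str]:
        """Return the turns (possibly transformed) that go into the prompt."""
        ...


class SlidingWindow:
    """The old hardcoded behaviour: keep only the most recent turns."""

    def fit(self, turns: list[str], budget: int) -> list[str]:
        return turns[-budget:]  # crude stand-in for real token accounting


def build_prompt(engine: ContextEngine, turns: list[str], budget: int) -> str:
    # The core only depends on the protocol, so any strategy can plug in.
    return "\n".join(engine.fit(turns, budget))
```

Under this kind of design, a Lossless‑Claw engine could be dropped in wherever `SlidingWindow` was used, without touching the core.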
The practical pain point is familiar to developers: large language models operate within a finite context window, and many systems cope by evicting older turns to make room for new ones. That blunt strategy reduces token costs but breaks continuity in long tasks — code projects, extended research threads and multi‑session creative work — because the model literally loses prior details. OpenClaw’s maintainers had long argued that context management was baked too deeply into the core; the new plugin architecture lifts that constraint.
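The failure mode is easy to demonstrate. The function below is a minimal sketch of naive sliding‑window eviction (names and the whitespace token heuristic are illustrative, not from OpenClaw): once the budget is exceeded, the oldest turns are simply dropped, and any detail they held is gone.

```python
# Minimal illustration of sliding-window eviction: when the token budget
# is exceeded, older turns are discarded outright. The function name and
# token-counting heuristic are assumptions for the example.

def evict_sliding_window(turns, max_tokens, count_tokens=lambda t: len(t.split())):
    """Keep only the most recent turns that fit within max_tokens."""
    kept, total = [], 0
    for turn in reversed(turns):      # walk from newest to oldest
        cost = count_tokens(turn)
        if total + cost > max_tokens:
            break                     # everything older is discarded
        kept.append(turn)
        total += cost
    return list(reversed(kept))


history = [
    "user: the db password env var is DB_PASS",   # crucial early detail
    "assistant: noted",
    "user: write the deploy script",
    "assistant: here is deploy.sh ...",
]
window = evict_sliding_window(history, max_tokens=10)
# The earliest turn, holding the detail the task depends on, has fallen
# out of the window entirely.
```

This is exactly the continuity break described above: the model is not forgetting in any soft sense; the information is no longer in its input at all.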
At the heart of v2026.3.7 is Lossless‑Claw, an approach that treats compressed context as an indexed archive rather than disposable text. When history threatens to overflow the active context, the system creates compact summaries, tags them with bidirectional links to the original records, and expands the underlying material on demand. The result is a lightweight working memory that can rehydrate precise antecedents in seconds, restoring the kind of continuity human collaborators take for granted.
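The archive‑and‑rehydrate idea can be sketched in a few lines. The class and method names below are assumptions for illustration, not Lossless‑Claw’s real interface; the point is that compression produces a summary carrying a link back to the untouched originals:

```python
# Hedged sketch of the "indexed archive" idea: overflowing turns are
# summarised, and the summary is tagged with a reference back to the
# archived originals so they can be rehydrated on demand.
# ContextArchive and its methods are illustrative, not OpenClaw's API.

import uuid


class ContextArchive:
    def __init__(self):
        self.records = {}  # archive_id -> list of original turns

    def compress(self, turns, summarise):
        """Archive the turns; return a compact stub linking back to them."""
        archive_id = str(uuid.uuid4())
        self.records[archive_id] = list(turns)  # originals kept, not discarded
        return {"summary": summarise(turns), "ref": archive_id}

    def expand(self, ref):
        """Rehydrate the full original turns behind a summary stub."""
        return self.records[ref]


archive = ContextArchive()
old_turns = ["user: the db password env var is DB_PASS", "assistant: noted"]
stub = archive.compress(old_turns, summarise=lambda ts: f"{len(ts)} archived turns")
# The active context now holds only the small stub; later, when the agent
# needs the precise antecedent, it follows the reference:
originals = archive.expand(stub["ref"])
```

The working memory stays small because only stubs occupy the live context, while the link in each stub is what makes the compression lossless in practice.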
The design change produced measurable gains. In the OOLONG benchmark — a recognised test for coding tasks under very long contexts — OpenClaw in Lossless‑Claw mode scored 74.8 against Claude Code’s 70.3, using the same underlying model for both tools. The margin widened as context lengths increased, suggesting the advantage derives from architecture rather than raw model size or parameter tuning.
The update is broader than memory alone. OpenClaw added native support for top‑tier models including GPT‑5.4 and Gemini‑3.1 Flash‑Lite, persistent ACP channel bindings that survive restarts, per‑topic routing for Telegram and Discord agents, a slimmer Docker bookworm‑slim image for low‑resource hosts, a SecretRef mechanism for safer API key storage, and HEIF image support. The release notes also hint at an imminent Apple App Store submission, signalling a push to carry lossless memory from desktops to mobile devices.
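Of the smaller features, SecretRef is the most concrete for developers. The release notes do not spell out its format, so the `secretref:` scheme and resolver below are purely an assumption about how such indirection typically works: configuration stores a reference, and the real key is resolved from the environment at runtime.

```python
# Hypothetical illustration of SecretRef-style indirection. The
# "secretref:" prefix and resolver function are assumptions, not
# OpenClaw's documented format; the pattern is what matters: config
# files never contain the raw secret.

import os


def resolve_secret(value: str) -> str:
    """Resolve 'secretref:NAME' from environment variables; pass plain values through."""
    if value.startswith("secretref:"):
        return os.environ[value.removeprefix("secretref:")]
    return value


os.environ["OPENAI_KEY"] = "sk-demo"        # stand-in secret for the example
config = {"api_key": "secretref:OPENAI_KEY"}  # safe to commit or share
key = resolve_secret(config["api_key"])       # raw key exists only at runtime
```

The design benefit is that configuration files can be shared or versioned without leaking credentials.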
That combination of technical improvement and pragmatic polish explains the frenzy around OpenClaw: more than 196 contributors are credited in the changelog, and community interest has spilled into tooling, hardware projects and commercial integrations. For teams building long‑running agents or multi‑session personal assistants, the update removes a major engineering headache and opens new product design space — persistent, cross‑device AI that remembers and revisits past decisions rather than reconstructing them each session.
Adoption is not automatic. The same ecosystems that benefit from Lossless‑Claw — intensive agents, enterprise automation and personal AI — can be sensitive to deployment complexity, cost of large context indexing, and new attack surfaces created by persistent archives. Organisations will need to weigh token and storage costs, secure the archive indexes, and integrate context plugins thoughtfully. Nevertheless, the release makes a compelling case that system architecture, not just model scale, can deliver outsized UX gains.
