ELMUR: External Layer Memory with Update/Rewrite for Long-Horizon RL

Egor Cherepanov, Alexey K. Kovalev, Aleksandr I. Panov

2025-10-13

Summary

This paper introduces a new way for robots to make decisions over long stretches of time, even when the information a decision depends on appeared far in the past.

What's the problem?

Robots often need to remember things that happened a long time ago to make good choices now, but current AI models struggle with this. Models like transformers have a limited context window: they can only attend to a recent slice of history. Naive attempts to extend that window scale poorly and become inefficient, especially when the important cues are sparse and spread out over long stretches of time.

What's the solution?

The researchers created a new architecture called ELMUR, which stands for External Layer Memory with Update/Rewrite. Think of it like giving each part of the robot's 'brain' a small notebook to write down important things it observes. Each layer of the model keeps its own set of these notes, looks back at them when making decisions, and updates them with a least-recently-used (LRU) rule: when the notebook is full, the stalest note is either overwritten outright or blended with the new information. This allows the robot to effectively remember things from much further back in time than traditional methods.
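The LRU update described above can be sketched in a few lines. This is a simplified, hypothetical illustration, not the paper's implementation: the class name `LRUMemory`, the blending weight `alpha`, and the plain-list slot representation are all assumptions made for clarity.

```python
# Hypothetical sketch of an LRU memory module with replacement or
# convex blending. Slot vectors are plain Python lists; in the actual
# architecture they would be learned embeddings inside each layer.
class LRUMemory:
    def __init__(self, num_slots, dim, alpha=0.5):
        self.slots = [[0.0] * dim for _ in range(num_slots)]
        self.last_used = [0] * num_slots  # step at which each slot was last written
        self.step = 0
        self.alpha = alpha  # weight on the incoming vector when blending

    def _lru_index(self):
        # pick the least recently used slot as the write target
        return min(range(len(self.slots)), key=lambda i: self.last_used[i])

    def write(self, new_vec, blend=True):
        self.step += 1
        i = self._lru_index()
        if blend:
            # convex blending: mix new content with the old slot content
            self.slots[i] = [self.alpha * n + (1 - self.alpha) * o
                             for n, o in zip(new_vec, self.slots[i])]
        else:
            # hard replacement: overwrite the stale slot entirely
            self.slots[i] = list(new_vec)
        self.last_used[i] = self.step
        return i
```

With two slots, successive writes cycle through the stalest slot first, and a blended write keeps half of the old content when `alpha=0.5`.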

Why it matters?

This is important because it allows robots to perform complex tasks in the real world, where things don't happen instantly and robots need to plan ahead. The experiments showed ELMUR significantly improved performance on tasks requiring long-term memory, like navigating mazes and manipulating objects, bringing us closer to robots that can reliably operate in complex environments.

Abstract

Real-world robotic agents must act under partial observability and long horizons, where key cues may appear long before they affect decision making. However, most modern approaches rely solely on instantaneous information, without incorporating insights from the past. Standard recurrent or transformer models struggle with retaining and leveraging long-term dependencies: context windows truncate history, while naive memory extensions fail under scale and sparsity. We propose ELMUR (External Layer Memory with Update/Rewrite), a transformer architecture with structured external memory. Each layer maintains memory embeddings, interacts with them via bidirectional cross-attention, and updates them through a Least Recently Used (LRU) memory module using replacement or convex blending. ELMUR extends effective horizons up to 100,000 times beyond the attention window and achieves a 100% success rate on a synthetic T-Maze task with corridors up to one million steps. In POPGym, it outperforms baselines on more than half of the tasks. On MIKASA-Robo sparse-reward manipulation tasks with visual observations, it nearly doubles the performance of strong baselines. These results demonstrate that structured, layer-local external memory offers a simple and scalable approach to decision making under partial observability.
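The "bidirectional cross-attention" in the abstract means information flows both ways: token representations read from the memory slots, and the memory slots read from the tokens. A minimal single-head, pure-Python sketch of that two-way exchange is below; the function names and the dot-product attention form are illustrative assumptions, not the paper's exact formulation (which uses learned multi-head attention inside each layer).

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attend(queries, keys, values):
    # single-head scaled dot-product cross-attention:
    # each query vector reads a weighted mix of the value vectors
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

def bidirectional_step(tokens, memory):
    # tokens read from memory, then memory reads from tokens
    tokens_out = cross_attend(tokens, memory, memory)
    memory_out = cross_attend(memory, tokens, tokens)
    return tokens_out, memory_out
```

With a single token, every memory slot attends to it with weight 1, so the memory update is just that token; with multiple memory slots, the token's output is a convex combination of the slots, weighted toward the slot whose key best matches it.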