Hierarchical Reasoning Model

August 2025

Authors:

  • Guan Wang
  • Jin Li
  • Yuhao Sun
  • Xing Chen
  • Changling Liu
  • Yue Wu
  • Meng Lu
  • Sen Song
  • Yasin Abbasi Yadkori

Abstract

Reasoning, the process of devising and executing complex goal-oriented action sequences, remains a critical challenge in AI. Current large language models (LLMs) primarily employ Chain-of-Thought (CoT) techniques, which suffer from brittle task decomposition, extensive data requirements, and high latency. Inspired by the hierarchical and multi-timescale processing in the human brain, we propose the Hierarchical Reasoning Model (HRM), a novel recurrent architecture that attains significant computational depth while maintaining both training stability and efficiency. HRM executes sequential reasoning tasks in a single forward pass without explicit supervision of the intermediate process, through two interdependent recurrent modules: a high-level module responsible for slow, abstract planning, and a low-level module handling rapid, detailed computations.

Introduction

Current large language models struggle with complex reasoning tasks, relying heavily on Chain-of-Thought (CoT) techniques that require extensive training data and suffer from brittle task decomposition. The Hierarchical Reasoning Model (HRM) addresses these limitations by drawing inspiration from the human brain's hierarchical, multi-timescale processing. With only 27 million parameters and just 1,000 training samples, HRM achieves exceptional performance on complex reasoning tasks, without pre-training or CoT supervision.

Architecture Overview

HRM employs a novel recurrent architecture with two interdependent modules that work together to achieve significant computational depth while maintaining training stability.

High-Level Module

Responsible for slow, abstract planning and strategic decision-making. This module operates at a higher level of abstraction, focusing on long-term goals and overall task structure.

Low-Level Module

Handles rapid, detailed computations and immediate task execution. This module processes fine-grained details and implements the high-level plans through specific actions.

Single Forward Pass

Unlike CoT approaches, which externalize reasoning as a long sequence of generated tokens, HRM executes sequential reasoning tasks in a single forward pass, without explicit supervision of the intermediate process.
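
To make the two-module design concrete, here is a minimal PyTorch sketch of the nested recurrence. It is an illustration under stated assumptions, not the authors' implementation: GRU cells stand in for the paper's recurrent blocks, and the dimensions and cycle counts (n_cycles, t_steps) are hypothetical.

    import torch
    import torch.nn as nn

    class HRMSketch(nn.Module):
        """Toy two-timescale recurrent model (not the official HRM):
        a slow high-level state guides a fast low-level state."""

        def __init__(self, d_in, d_hidden, d_out, n_cycles=2, t_steps=4):
            super().__init__()
            self.n_cycles = n_cycles  # slow, high-level cycles (assumed value)
            self.t_steps = t_steps    # fast, low-level steps per cycle (assumed value)
            self.low = nn.GRUCell(d_in + d_hidden, d_hidden)  # sees input + high-level state
            self.high = nn.GRUCell(d_hidden, d_hidden)        # sees settled low-level state
            self.readout = nn.Linear(d_hidden, d_out)

        def forward(self, x):
            z_low = x.new_zeros(x.shape[0], self.low.hidden_size)
            z_high = x.new_zeros(x.shape[0], self.high.hidden_size)
            for _ in range(self.n_cycles):         # slow timescale
                for _ in range(self.t_steps):      # fast timescale
                    z_low = self.low(torch.cat([x, z_high], dim=-1), z_low)
                z_high = self.high(z_low, z_high)  # one high-level update per cycle
            return self.readout(z_high)            # answer produced in one call

    model = HRMSketch(d_in=64, d_hidden=128, d_out=10)
    y = model(torch.randn(8, 64))   # one forward pass -> (8, 10) output

The structural point the sketch captures is that the high-level state changes only once per cycle while the low-level state is refined several times in between, and the answer is read out after the loops finish rather than decoded token by token.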

Key Innovations

The Hierarchical Reasoning Model introduces several key innovations that distinguish it from existing approaches to reasoning in AI systems.

Multi-Timescale Processing

Inspired by the human brain's hierarchical organization, HRM processes information at different timescales, allowing for both rapid, detailed computations and slower, more abstract reasoning.
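
As a toy view of the timescale separation (the ratio of four fast steps per slow step is an assumption for illustration, not the paper's setting), the fast module updates at every step while the slow module fires only at cycle boundaries:

    T = 4                    # assumed fast-to-slow update ratio
    for step in range(1, 9):
        updates = "low-level update"
        if step % T == 0:
            updates += " + high-level update"  # slow module fires every T steps
        print(f"step {step}: {updates}")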

Recurrent Computation

The model leverages recurrent connections to achieve significant computational depth, enabling complex reasoning without the need for explicit decomposition or intermediate supervision.
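
The depth gained from recurrence is easy to quantify. Using the assumed cycle counts from the sketch above (both hypothetical), one unrolled forward pass applies the low-level block n_cycles * t_steps times and the high-level block n_cycles times, all with shared weights:

    def effective_depth(n_cycles, t_steps):
        """Sequential sub-steps in one unrolled forward pass (weights shared)."""
        return n_cycles * t_steps + n_cycles  # low-level + high-level updates

    print(effective_depth(2, 4))   # 10 sequential updates from two small modules
    print(effective_depth(8, 8))   # 72: depth grows with no new parameters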

Parameter Efficiency

With only 27 million parameters, HRM is significantly smaller than most LLMs while achieving superior performance on complex reasoning tasks, demonstrating remarkable parameter efficiency.
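
A back-of-the-envelope count shows how a model in this size class could be laid out. The configuration below (hidden width 512, four transformer-style blocks per module) is a hypothetical illustration, not the paper's reported breakdown:

    def block_params(d):
        """Rough count for one transformer-style block, ignoring biases/norms:
        4*d*d for Q/K/V/output projections + 8*d*d for a 4x-wide MLP."""
        return 4 * d * d + 8 * d * d

    d, blocks = 512, 8              # e.g. 4 blocks per module x 2 modules (assumed)
    core = blocks * block_params(d)
    print(f"core: {core / 1e6:.1f}M parameters")  # ~25.2M; embeddings and the
                                                  # output head account for the rest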

Experimental Results

The Hierarchical Reasoning Model demonstrates exceptional performance across a range of complex reasoning tasks, outperforming much larger models with significantly less training data.

Benchmark Performance

HRM achieves strong results on demanding reasoning benchmarks. The paper reports near-perfect accuracy on complex Sudoku puzzles and optimal pathfinding in large mazes, and HRM outperforms much larger models with far longer context windows on the Abstraction and Reasoning Corpus (ARC).

Sample Efficiency

The model reaches this performance with only 1,000 training examples per task, versus the millions of examples typically required by CoT-based training pipelines.

Computational Efficiency

By executing reasoning tasks in a single forward pass, HRM significantly reduces inference latency compared to iterative approaches like Chain-of-Thought prompting.
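
As a rough cost model (hypothetical numbers, not measurements from the paper), the latency gap comes from the number of network invocations per problem: an autoregressive CoT model must call the network once per generated reasoning token, while a single-pass model calls it once, with the internal recurrence happening inside that one call:

    chain_tokens = 500         # assumed length of a written-out reasoning chain
    cot_calls = chain_tokens   # one network call per decoded token
    hrm_calls = 1              # reasoning stays in the hidden states
    print(f"CoT: {cot_calls} network calls; single-pass: {hrm_calls}")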

Applications

The Hierarchical Reasoning Model has potential applications across various domains that require complex reasoning capabilities.

  • Scientific research and discovery
  • Complex decision-making systems
  • Automated programming and algorithm design
  • Educational tools for teaching reasoning skills
  • Planning and optimization in resource-constrained environments

Limitations and Future Work

Despite its impressive performance, HRM has several limitations that present opportunities for future research.

  • Limited ability to handle very long reasoning chains
  • Challenges in scaling to extremely complex tasks
  • Need for task-specific fine-tuning for optimal performance
  • Limited interpretability of internal reasoning processes
  • Current focus on text-based reasoning rather than multimodal reasoning

Conclusion

The Hierarchical Reasoning Model represents a significant advance in AI reasoning capabilities, demonstrating that brain-inspired hierarchical architectures can achieve remarkable performance with minimal parameters and training data. By processing information at multiple timescales and leveraging recurrent computation, HRM offers a promising alternative to current approaches that rely on explicit decomposition and extensive training. Future work will focus on scaling the model to handle more complex tasks, improving interpretability, and extending the approach to multimodal reasoning.

For more details, see the original paper: Hierarchical Reasoning Model (arXiv:2506.21734)