Scientists Develop AI Modeled On The Human Brain


Summary:

Scientists have developed a new artificial intelligence (AI) model called the Hierarchical Reasoning Model (HRM) that is inspired by the human brain's processing methods and outperforms large language models (LLMs) like ChatGPT on reasoning tasks.

Here are the key points:

1. Brain-Inspired Design: The HRM mimics the hierarchical, multi-timescale processing of the human brain, where different regions integrate information over varying durations. It consists of two modules: a high-level module for slow, abstract planning and a low-level module for rapid, detailed computations.

2. Efficiency and Performance: The HRM has only 27 million parameters and requires just 1,000 training samples, compared to the billions or trillions of parameters in advanced LLMs like GPT-5. Despite its smaller size, it achieved superior results on the challenging ARC-AGI benchmark, scoring 40.3% on ARC-AGI-1 (outperforming OpenAI's o3-mini-high at 34.5%) and 5% on the tougher ARC-AGI-2 test.

3. Advantages Over LLMs: Unlike LLMs that use chain-of-thought (CoT) reasoning, which can be brittle and data-intensive, the HRM uses iterative refinement in a single forward pass, allowing it to handle complex tasks like Sudoku puzzles and maze pathfinding with near-perfect accuracy.

4. Open-Source and Validation: The model has been open-sourced on GitHub, and independent researchers reproduced the results, though they noted that the refinement process during training played a significant role in its performance.

5. Potential Impact: The HRM represents a shift away from scaling parameters alone and toward more efficient, brain-inspired architectures, potentially advancing the pursuit of artificial general intelligence (AGI).
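To make the two-timescale design concrete, here is a minimal, purely illustrative sketch of the control flow described in points 1 and 3: a fast low-level module that refines a detailed state over several inner steps, and a slow high-level module that updates an abstract plan once per outer cycle, all within a single forward pass. The function names, dimensions, and linear update rules are hypothetical simplifications, not the released HRM code.

```python
# Hypothetical sketch of an HRM-style nested update loop.
# All names and update rules are illustrative assumptions.

def low_level_step(z_low, z_high, x):
    # Fast module: refine the detailed state using the current plan and input.
    return [0.5 * zl + 0.3 * zh + 0.2 * xi
            for zl, zh, xi in zip(z_low, z_high, x)]

def high_level_step(z_high, z_low):
    # Slow module: fold the low-level result into the abstract plan.
    return [0.9 * zh + 0.1 * zl for zh, zl in zip(z_high, z_low)]

def hrm_forward(x, n_cycles=4, t_low=8):
    dim = len(x)
    z_high = [0.0] * dim  # abstract planning state (slow timescale)
    z_low = [0.0] * dim   # detailed computation state (fast timescale)
    for _ in range(n_cycles):           # slow outer cycles
        for _ in range(t_low):          # many fast inner steps per cycle
            z_low = low_level_step(z_low, z_high, x)
        z_high = high_level_step(z_high, z_low)  # one slow update per cycle
    return z_high

out = hrm_forward([1.0, -1.0, 0.5])
```

The key structural idea this sketch captures is that refinement happens iteratively inside one forward call, rather than by generating an external chain-of-thought token by token.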

For more details, you can read the full article here.


Link:
