This repository contains the official implementation for our ICLR 2025 paper, Intelligence at the Edge of Chaos. In this work, we investigate how the complexity of rule-based systems, specifically elementary cellular automata (ECA), affects the intelligence of large language models (LLMs) trained on sequences generated by these systems. We train LLMs on a spectrum of ECA patterns, ranging from uniform and periodic to highly chaotic, and evaluate their performance on downstream tasks including logical reasoning and chess move prediction. Our experiments show that LLMs exposed to intermediate, structured complexity significantly outperform those trained on overly simple or excessively chaotic patterns. These results point to a key insight: structured complexity acts as a catalyst for developing advanced artificial cognition. Our findings suggest that intelligence emerges most effectively at the boundary between order and chaos.
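To make the setup concrete, the sketch below generates the kind of ECA sequences described above. It is illustrative only and not the repository's actual data pipeline: the function names (`eca_step`, `generate_sequence`) and the periodic-boundary, random-initial-state choices are assumptions for this example. The Wolfram rule number's binary expansion defines the output for each of the eight possible three-cell neighborhoods.

```python
import numpy as np

def eca_step(state, rule):
    """Advance an elementary cellular automaton by one step.

    state: 1-D array of 0/1 cells (periodic boundary assumed).
    rule:  Wolfram rule number, 0-255.
    """
    # Neighborhood code for each cell: 4*left + 2*center + right (0..7)
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    codes = 4 * left + 2 * state + right
    # Bit k of the rule number gives the output cell for neighborhood code k
    table = (rule >> np.arange(8)) & 1
    return table[codes]

def generate_sequence(rule, width=64, steps=100, seed=0):
    """Roll out `steps` updates from a random initial row.

    Returns an array of shape (steps + 1, width) whose rows are
    successive automaton states; flattening it yields a 0/1 token
    sequence of the kind an LLM could be pretrained on.
    """
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, size=width)
    rows = [state]
    for _ in range(steps):
        state = eca_step(state, rule)
        rows.append(state)
    return np.stack(rows)
```

Varying `rule` moves the generated data along the complexity spectrum: e.g. Rule 0 is uniform, Rule 90 is periodic/fractal, and Rule 30 is chaotic.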
The code in this repository lets you pretrain LLMs on ECA sequences of varying complexity and evaluate them on downstream tasks, so you can investigate whether and how increased pretraining complexity leads to better results.
Citation
If you find this work useful, please cite:
@article{zhang2024intelligence,
title={Intelligence at the Edge of Chaos},
author={Zhang, Shiyang and Patel, Aakash and Rizvi, Syed A and Liu, Nianchen and He, Sizhuang and Karbasi, Amin and Zappala, Emanuele and van Dijk, David},
journal={arXiv preprint arXiv:2410.02536},
year={2024}
}
License
This project is licensed under the GPL-3.0 License. See the LICENSE file for details.