Reasoning Gym
🧠 About
Reasoning Gym is a community-created Python library of procedural dataset generators and algorithmically verifiable reasoning environments for training reasoning models with reinforcement learning (RL). The goal is to generate virtually infinite training data with adjustable complexity.
It currently provides more than 100 tasks spanning many domains, including but not limited to algebra, arithmetic, computation, cognition, geometry, graph theory, logic, and many common games.
Some tasks have a single correct answer, while others, such as Rubik's Cube and Countdown, admit many correct solutions. To support both cases, we provide a standard interface for procedurally verifying solutions.
🖼️ Dataset Gallery
In GALLERY.md, you can find example outputs of all datasets available in reasoning-gym.
⬇️ Installation
The reasoning-gym package requires Python >= 3.10.
Install the latest published package from PyPI via pip:
pip install reasoning-gym
Note that this project is currently under active development, and the version published on PyPI may be a few days behind main.
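If you need changes that have not yet been released, installing directly from the repository should also work (a minimal sketch, assuming the source lives at github.com/open-thought/reasoning-gym):
pip install git+https://github.com/open-thought/reasoning-gym.git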
✨ Quickstart
Starting to generate tasks using Reasoning Gym is straightforward:
import reasoning_gym

data = reasoning_gym.create_dataset('leg_counting', size=10, seed=42)
for i, x in enumerate(data):
    print(f'{i}: q="{x["question"]}", a="{x["answer"]}"')
    print('metadata:', x['metadata'])
    # use the dataset's `score_answer` method for algorithmic verification
    assert data.score_answer(answer=x['answer'], entry=x) == 1.0
Output:
0: q="How many legs are there in total if you have 1 sea slug, 1 deer?", a="4"
metadata: {'animals': {'sea slug': 1, 'deer': 1}, 'total_legs': 4}
1: q="How many legs are there in total if you have 2 sheeps, 2 dogs?", a="16"
metadata: {'animals': {'sheep': 2, 'dog': 2}, 'total_legs': 16}
2: q="How many legs are there in total if you have 1 crab, 2 lobsters, 1 human, 1 cow, 1 bee?", a="42"
...
Use keyword arguments to pass task-specific configuration values:
reasoning_gym.create_dataset('leg_counting', size=10, seed=42, max_animals=20)
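Generation is deterministic for a given seed and configuration, so repeated calls reproduce the same entries. A quick sanity check (a minimal sketch, assuming the seed fully determines generation):

import reasoning_gym

a = reasoning_gym.create_dataset('leg_counting', size=5, seed=42, max_animals=20)
b = reasoning_gym.create_dataset('leg_counting', size=5, seed=42, max_animals=20)
# identical seed and config should yield identical questions
assert [x['question'] for x in a] == [x['question'] for x in b]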
Create a composite dataset containing multiple task types, with optional relative task weightings:
from reasoning_gym.composite import DatasetSpec
specs = [
    # here, leg_counting tasks will make up two thirds of tasks
    DatasetSpec(name='leg_counting', weight=2, config={}),  # default config
    DatasetSpec(name='figlet_font', weight=1, config={"min_word_len": 4, "max_word_len": 6}),  # specify config
]
reasoning_gym.create_dataset('composite', size=10, seed=42, datasets=specs)
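Entries from a composite dataset can be iterated and scored like those from any single task. A minimal sketch, assuming (based on the scoring example later in this README) that each entry's metadata records the originating task under 'source_dataset':

data = reasoning_gym.create_dataset('composite', size=10, seed=42, datasets=specs)
for x in data:
    # print which underlying task produced each entry
    print(x['metadata']['source_dataset'], '->', x['question'])
    assert data.score_answer(answer=x['answer'], entry=x) == 1.0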
For the simplest way to get started training models with Reasoning Gym, we recommend using the verifiers library, which directly supports RG tasks. See examples/verifiers for details. However, RG data can be used with any major RL training framework.
🔍 Evaluation
Instructions for running the evaluation scripts are provided in eval/README.md.
Evaluation results of different reasoning models will be tracked in the reasoning-gym-eval repo.
🤓 Training
The training/ directory has full details of the training runs we carried out with RG for the paper. In our experiments, we utilise custom Dataset code to dynamically create RG samples at runtime, and to access the RG scoring function for use as a training reward. See training/README.md to reproduce our runs.
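To illustrate the runtime-generation pattern (a simplified sketch only, not our actual training code; RGStream is a hypothetical helper), one can wrap create_dataset in an endless iterator:

import reasoning_gym

class RGStream:
    """Endlessly yields freshly generated RG entries (hypothetical helper, for illustration)."""

    def __init__(self, name, chunk_size=64, seed=0, **config):
        self.name = name
        self.chunk_size = chunk_size
        self.seed = seed
        self.config = config

    def __iter__(self):
        offset = 0
        while True:
            # generate a fresh chunk with a new seed on each pass
            data = reasoning_gym.create_dataset(
                self.name, size=self.chunk_size, seed=self.seed + offset, **self.config
            )
            yield from data
            offset += 1

During training, the reward for a model response can then be obtained from the corresponding dataset's score_answer method, as shown in the Quickstart.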
For a more plug-and-play experience, it may be easier to build a dataset ahead of time. See scripts/hf_dataset/ for a simple script that generates RG data and converts it to a HuggingFace dataset. To use the script, define your dataset configurations in the YAML config file; a list of tasks and configurable parameters can be found in the dataset gallery. Then run save_hf_dataset.py with the desired arguments.
The script saves each dataset entry as a row with question, answer, and metadata columns. The RG scoring functions take the entry object from each row together with the model response and return a reward value. Calling a scoring function is therefore simple:
from reasoning_gym import get_score_answer_fn

for entry in dataset:
    model_response = generate_response(entry["question"])
    # look up the scoring function for the task that generated this entry
    rg_score_fn = get_score_answer_fn(entry["metadata"]["source_dataset"])
    score = rg_score_fn(model_response, entry)
    # do something with the score...
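Since all entries from the same source task share one scoring function, a small optional refinement (hypothetical, not part of the script) is to memoize the lookup:

from functools import lru_cache

from reasoning_gym import get_score_answer_fn

# resolve each task's scoring function only once rather than per entry
cached_score_fn = lru_cache(maxsize=None)(get_score_answer_fn)

for entry in dataset:
    model_response = generate_response(entry["question"])
    score = cached_score_fn(entry["metadata"]["source_dataset"])(model_response, entry)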
👷 Contributing
Please see CONTRIBUTING.md.
If you have ideas for dataset generators, please create an issue here or contact us in the #reasoning-gym channel of the GPU-Mode Discord server.
🚀 Projects Using Reasoning Gym
Below is a list of awesome projects building on top of Reasoning Gym:
- Verifiers: Reinforcement Learning with LLMs in Verifiable Environments
- (NVIDIA) ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models
- (Nous Research) Atropos - an LLM RL Gym
- (PrimeIntellect) SYNTHETIC-2: a massive open-source reasoning dataset
- (Gensyn) RL Swarm: a framework for planetary-scale collaborative RL
- (Axon RL) GEM: a comprehensive framework for RL environments
- (FAIR at Meta) OptimalThinkingBench: Evaluating Over and Underthinking in LLMs
- (Gensyn) Sharing is Caring: Efficient LM Post-Training with Collective RL Experience Sharing
- (MILA) Self-Evolving Curriculum for LLM Reasoning
- (MILA) Recursive Self-Aggregation Unlocks Deep Thinking in Large Language Models
- (NVIDIA) BroRL: Scaling Reinforcement Learning via Broadened Exploration
📝 Citation
If you use this library in your research, please cite the paper:
@misc{stojanovski2025reasoninggymreasoningenvironments,
      title={REASONING GYM: Reasoning Environments for Reinforcement Learning with Verifiable Rewards},
      author={Zafir Stojanovski and Oliver Stanley and Joe Sharratt and Richard Jones and Abdulhakeem Adefioye and Jean Kaddour and Andreas Köpf},
      year={2025},
      eprint={2505.24760},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2505.24760},
}
