NeMo Gym

Requirements • Quick Start • Available Environments • Documentation & Resources • Community & Support • Citations

NeMo Gym is a library for building reinforcement learning (RL) training environments for large language models (LLMs). It provides infrastructure to develop environments, scale rollout collection, and integrate seamlessly with your preferred training framework.

🏆 Why NeMo Gym?

  • Scaffolding and patterns to accelerate environment development: multi-step, multi-turn, and user modeling scenarios
  • Contribute environments without expert knowledge of the entire RL training loop
  • Test environments and throughput end-to-end, independent of the RL training loop
  • Interoperable with existing environments, systems, and RL training frameworks
  • Growing collection of training environments and datasets for Reinforcement Learning from Verifiable Reward (RLVR)

Important

NeMo Gym is currently in early development. Expect evolving APIs, incomplete documentation, and occasional bugs. We welcome contributions and feedback; for any changes, please open an issue first to kick off discussion!

🔗 Ecosystem

NeMo Gym is part of NVIDIA NeMo, NVIDIA's GPU-accelerated platform for building and training generative AI models. NeMo Gym integrates with a growing number of RL training frameworks and environment libraries; see the Ecosystem page for full details and tutorials.

Training Frameworks: NeMo RL • OpenRLHF • TRL • Unsloth • more →

Environment Libraries: Reasoning Gym • Aviary • more →

📋 Requirements

NeMo Gym is designed to run on standard development machines:

Hardware Requirements

  • GPU: Not required for NeMo Gym library operation (a GPU may be needed for specific resource servers or model inference; see individual server documentation)
  • CPU: Any modern x86_64 or ARM64 processor (e.g., Intel, AMD, Apple Silicon)
  • RAM: Minimum 8 GB (16 GB+ recommended for larger environments)
  • Storage: Minimum 5 GB free disk space for installation and basic usage

Software Requirements

  • Operating System: Linux (Ubuntu 20.04+ or equivalent), macOS (11.0+ for x86_64, 12.0+ for Apple Silicon), or Windows (via WSL2)
  • Python: 3.12 or higher
  • Git: For cloning the repository
  • Internet Connection: Required for downloading dependencies and API access

Additional Requirements

  • API Keys: OpenAI API key with available credits (for the quickstart examples)
    • Other model providers supported (Azure OpenAI, self-hosted models via vLLM)
  • Ray: Automatically installed as a dependency (no separate setup required)

🚀 Quick Start

Install NeMo Gym, start the servers, and collect your first verified rollouts for RL training.

Setup

# Clone the repository
git clone git@github.com:NVIDIA-NeMo/Gym.git
cd Gym

# Install UV (Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.local/bin/env

# Create virtual environment
uv venv --python 3.12
source .venv/bin/activate

# Install NeMo Gym
uv sync --extra dev --group docs

Configure Your API Key

Create an env.yaml file containing your OpenAI API key and the policy model you want to use. Replace your-openai-api-key with your actual key. This file keeps your secrets out of version control while still making them available to NeMo Gym.

echo "policy_base_url: https://api.openai.com/v1
policy_api_key: your-openai-api-key
policy_model_name: gpt-4.1-2025-04-14" > env.yaml

Note

We use GPT-4.1 in this quickstart because it provides low latency (no reasoning step) and works reliably out-of-the-box. NeMo Gym is not limited to OpenAI models—you can use self-hosted models via vLLM or any OpenAI-compatible inference server. See the documentation for details.
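For a self-hosted setup, the same three env.yaml keys point at any OpenAI-compatible endpoint instead. The sketch below assumes a vLLM server running locally on port 8000; the URL, key, and model name are placeholders to substitute with your own:

```yaml
# env.yaml — hypothetical self-hosted configuration (placeholder values)
# vLLM's OpenAI-compatible server typically listens at /v1 and
# accepts any string as the API key unless one is configured.
policy_base_url: http://localhost:8000/v1
policy_api_key: EMPTY
policy_model_name: Qwen/Qwen2.5-7B-Instruct
```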

Start Servers

Terminal 1 (start servers):

# Start servers (this will keep running)
config_paths="resources_servers/example_single_tool_call/configs/example_single_tool_call.yaml,\
responses_api_models/openai_model/configs/openai_model.yaml"
ng_run "+config_paths=[${config_paths}]"

Terminal 2 (interact with agent):

# In a NEW terminal, activate environment
source .venv/bin/activate

# Interact with your agent
python responses_api_agents/simple_agent/client.py

Collect Rollouts

Terminal 2 (keep servers running in Terminal 1):

# Create a simple dataset with one query
echo '{"responses_create_params":{"input":[{"role":"developer","content":"You are a helpful assistant."},{"role":"user","content":"What is the weather in Seattle?"}]}}' > weather_query.jsonl

# Collect verified rollouts
ng_collect_rollouts \
    +agent_name=example_single_tool_call_simple_agent \
    +input_jsonl_fpath=weather_query.jsonl \
    +output_jsonl_fpath=weather_rollouts.jsonl

# View the result
cat weather_rollouts.jsonl | python -m json.tool

This generates training data with verification scores!
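The single-line echo above works for one query, but real datasets have many. Following the same responses_create_params schema shown in the example (the queries, filename, and helper name below are illustrative), a short script can generate a larger input file:

```python
import json

# Example queries to turn into NeMo Gym input records (illustrative).
queries = [
    "What is the weather in Seattle?",
    "What is the weather in Tokyo?",
]

def make_record(user_content: str) -> dict:
    """Build one input record using the schema from the echo example above."""
    return {
        "responses_create_params": {
            "input": [
                {"role": "developer", "content": "You are a helpful assistant."},
                {"role": "user", "content": user_content},
            ]
        }
    }

# Write one JSON object per line (JSONL), as ng_collect_rollouts expects.
with open("weather_queries.jsonl", "w") as f:
    for q in queries:
        f.write(json.dumps(make_record(q)) + "\n")
```

Pass the resulting file as +input_jsonl_fpath the same way as in the single-query example.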

Clean Up Servers

In Terminal 1 (where the servers are running), press Ctrl+C to stop the ng_run process.

Next Steps

Now that you can generate rollouts, choose your path:

  • Start training — Train models using NeMo Gym with your preferred RL framework. See the Training Tutorials.

  • Use an existing environment — Browse the Available Environments below to find an environment that matches your goals.

  • Build a custom environment — Implement or integrate existing tools and define task verification logic. Get started with the Creating a Training Environment tutorial.

📦 Available Environments

NeMo Gym includes a curated collection of environments for training and evaluation across multiple domains:

Example Environment Patterns

Purpose: Demonstrate NeMo Gym patterns and concepts.

Name Demonstrates Config README
Multi Step Multi-step tool calling example_multi_step.yaml README
Session State Mgmt Session state management (in-memory) example_session_state_mgmt.yaml README
Single Tool Call Basic single-step tool calling example_single_tool_call.yaml README

Environments for Training & Evaluation

Purpose: Training-ready environments with curated datasets.

Tip

Each resource server includes example data, configuration files, and tests. See each server's README for details.

Resource Server Config Domain Dataset Description Value Train Validation License
Arc Agi arc_agi.yaml knowledge - - - - -
Aviary aviary.yaml math - - - Apache 2.0
Aviary bixbench_aviary.yaml coding - - - - - -
Aviary gsm8k_aviary.yaml math - - - Apache 2.0
Aviary hotpotqa_aviary.yaml agent - - - Apache 2.0
Calendar calendar.yaml agent Nemotron-RL-agent-calendar_scheduling - - Apache 2.0
Code Gen code_gen.yaml coding nemotron-RL-coding-competitive_coding - - Apache 2.0
Equivalence Llm Judge equivalence_llm_judge.yaml knowledge - Short answer questions with LLM-as-a-judge Improve knowledge-related benchmarks like GPQA / HLE - - -
Equivalence Llm Judge lc.yaml knowledge - - - - - -
Equivalence Llm Judge lc_judge.yaml knowledge - - - - - -
Equivalence Llm Judge nl2bash-equivalency.yaml agent - Short bash command generation questions with LLM-as-a-judge Improve foundational bash and IF capabilities GNU General Public License v3.0
Google Search google_search.yaml agent Nemotron-RL-knowledge-web_search-mcqa Multi-choice question answering problems with search tools integrated Improve knowledge-related benchmarks with search tools - Apache 2.0
Instruction Following instruction_following.yaml instruction_following Nemotron-RL-instruction_following Instruction following datasets targeting IFEval and IFBench style instruction following capabilities Improve IFEval and IFBench - Apache 2.0
Math Advanced Calculations math_advanced_calculations.yaml agent Nemotron-RL-math-advanced_calculations An instruction following math environment with counter-intuitive calculators Improve instruction following capabilities in specific math environments - Apache 2.0
Math Formal Lean math_formal_lean.yaml math - Lean4 formal proof verification environment Improve formal theorem proving capabilities - MIT
Math Formal Lean math_formal_lean_multi_turn.yaml math - Lean4 formal proof verification environment with multi-turn self-correction Improve formal theorem proving capabilities - MIT
Math Formal Lean nemotron_clean_easy.yaml math - Lean4 formal proof verification environment Improve formal theorem proving capabilities - Apache 2.0
Math Formal Lean nemotron_first_try_hard.yaml math - Lean4 formal proof verification environment Improve formal theorem proving capabilities - Apache 2.0
Math Formal Lean nemotron_medium_500.yaml math - Lean4 formal proof verification environment Improve formal theorem proving capabilities - Apache 2.0
Math Formal Lean nemotron_very_easy.yaml math - Lean4 formal proof verification environment Improve formal theorem proving capabilities - Apache 2.0
Math With Code math_with_code.yaml math - - - - Apache 2.0
Math With Judge bytedtsinghua_dapo17k.yaml math - - - Apache 2.0
Math With Judge dapo17k.yaml math - - - Apache 2.0
Math With Judge dapo17k_filtered_qwen330ba3binstruct.yaml math - - - Apache 2.0
Math With Judge dapo17k_trajectory_collection.yaml math - - - - -
Math With Judge math_stack_overflow.yaml math Nemotron-RL-math-stack_overflow - - Creative Commons Attribution-ShareAlike 4.0 International
Math With Judge math_with_judge.yaml math Nemotron-RL-math-OpenMathReasoning Math dataset with math-verify and LLM-as-a-judge Improve math capabilities including AIME 24 / 25 Creative Commons Attribution 4.0 International
Math With Judge math_with_local_judge.yaml math - - - - - -
Mcqa mcqa.yaml knowledge Nemotron-RL-knowledge-mcqa Multi-choice question answering problems Improve benchmarks like MMLU / GPQA / HLE - Apache 2.0
Mini Swe Agent mini_swe_agent.yaml coding SWE-Gym A software development environment with mini-swe-agent orchestration Improve software development capabilities, like SWE-bench MIT
Multichallenge multichallenge.yaml knowledge - MultiChallenge benchmark evaluation with LLM judge - - TBD
Multichallenge multichallenge_nrl.yaml knowledge - MultiChallenge benchmark evaluation with LLM judge - - TBD
Newton Bench newton_bench.yaml math - - - - Apache 2.0
Ns Tools ns_tools.yaml agent - NeMo Skills tool execution with math verification - - - -
Reasoning Gym reasoning_gym.yaml knowledge - - - - Apache 2.0
Reasoning Gym resources_only.yaml knowledge - - - - - -
Structured Outputs structured_outputs_json.yaml instruction_following Nemotron-RL-instruction_following-structured_outputs Check if responses are following structured output requirements in prompts Improve instruction following capabilities Apache 2.0
Swerl Gen swerl_gen.yaml coding - Running sandboxed evaluation for SWE-style tasks (either patch generation or reproduction test generation) Improve SWE capabilities useful for benchmarks like SWE-bench Apache 2.0
Swerl Llm Judge swerl_llm_judge.yaml coding - SWE-style multiple-choice LLM-judge tasks scored via ... choice. Improve SWE capabilities useful for benchmarks like SWE-bench MIT
Terminus Judge terminus_judge.yaml agent - single-step terminal based task Improve on terminal-style tasks Apache 2.0
Text To Sql text_to_sql.yaml coding - Text-to-SQL generation with LLM-as-a-judge equivalence checking Improve text-to-SQL capabilities across multiple dialects - - -
Workplace Assistant workplace_assistant.yaml agent Nemotron-RL-agent-workplace_assistant Workplace assistant multi-step tool-using environment Improve multi-step tool use capability Apache 2.0
Xlam Fc xlam_fc.yaml agent - - - Apache 2.0

📖 Documentation & Resources

🤝 Community & Support

We'd love your contributions! Here's how to get involved:

📚 Citations

If you use NeMo Gym in your research, please cite it using the following BibTeX entry:

@misc{nemo-gym,
  title = {NeMo Gym: An Open Source Library for Scaling Reinforcement Learning Environments for LLM},
  howpublished = {\url{https://github.com/NVIDIA-NeMo/Gym}},
  author = {NVIDIA},
  year = {2025},
  note = {GitHub repository},
}