
Adds RSL-RL symmetry example for cartpole and ANYmal locomotion #3057

Merged
merged 16 commits on Aug 16, 2025
4 changes: 3 additions & 1 deletion docs/source/api/lab_rl/isaaclab_rl.rst
@@ -1,4 +1,6 @@
isaaclab_rl
.. _api-isaaclab-rl:

isaaclab_rl
===========

.. automodule:: isaaclab_rl
13 changes: 11 additions & 2 deletions docs/source/how-to/add_own_library.rst
@@ -68,7 +68,7 @@ Isaac Lab, you will first need to make a wrapper for the library, as explained i

The following steps can be followed to integrate a new library with Isaac Lab:

1. Add your library as an extra-dependency in the ``setup.py`` for the extension ``isaaclab_tasks``.
1. Add your library as an extra-dependency in the ``setup.py`` for the extension ``isaaclab_rl``.
This will ensure that the library is installed when you install Isaac Lab or it will complain if the library is not
installed or available.
2. Install your library in the Python environment used by Isaac Lab. You can do this by following the steps mentioned
@@ -86,6 +86,15 @@ works as expected and can guide users on how to use the wrapper.
* Add some tests to ensure that the wrapper works as expected and remains compatible with the library.
These tests can be added to the ``source/isaaclab_rl/test`` directory.
* Add some documentation for the wrapper. You can add the API documentation to the
``docs/source/api/lab_tasks/isaaclab_rl.rst`` file.
:ref:`API documentation<api-isaaclab-rl>` for the ``isaaclab_rl`` module.


Configuring an RL Agent
-----------------------

Once you have integrated a new library with Isaac Lab, you can configure the example environment to use the new library.
You can check the :ref:`tutorial-configure-rl-training` for an example of how to configure the training process to use a
different library.
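
For example, assuming the new library's agent configuration is registered under a
``<library_name>_cfg_entry_point`` key (the naming convention used by the bundled scripts) and that a
corresponding ``train.py`` script has been added for it, training could be launched as follows. The
script path and entry-point name below are placeholders, not existing files:

.. code-block:: bash

    # placeholder paths: substitute the actual library name used in your integration
    ./isaaclab.sh -p scripts/reinforcement_learning/<library_name>/train.py --task Isaac-Cartpole-v0 \
        --agent <library_name>_cfg_entry_point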


.. _rsl-rl: https://github.com/leggedrobotics/rsl_rl
12 changes: 6 additions & 6 deletions docs/source/refs/release_notes.rst
@@ -154,7 +154,7 @@ Improvements
------------

Core API
^^^^^^^^
~~~~~~~~

* **Actuator Interfaces**
* Fixes implicit actuator limits configs for assets by @ooctipus
@@ -198,7 +198,7 @@ Core API
* Allows slicing from list values in dicts by @LinghengMeng @kellyguo11

Tasks API
^^^^^^^^^
~~~~~~~~~

* Adds support for ``module:task`` and gymnasium >=1.0 by @kellyguo11
* Adds RL library error hints by @Toni-SM
@@ -212,7 +212,7 @@ Tasks API
* Pre-processes SB3 env image obs-space for CNN pipeline by @ooctipus

Infrastructure
^^^^^^^^^^^^^^^
~~~~~~~~~~~~~~

* **Dependencies**
* Updates torch to 2.7.0 with CUDA 12.8 by @kellyguo11
@@ -239,7 +239,7 @@ Bug Fixes
---------

Core API
^^^^^^^^
~~~~~~~~

* **Actuator Interfaces**
* Fixes DCMotor clipping for negative power by @jtigue-bdai
@@ -267,12 +267,12 @@ Core API
* Fixes ``quat_inv()`` implementation by @ozhanozen

Tasks API
^^^^^^^^^
~~~~~~~~~

* Fixes LSTM to ONNX export by @jtigue-bdai

Example Tasks
^^^^^^^^^^^^^
~~~~~~~~~~~~~

* Removes contact termination redundancy by @louislelay
* Fixes memory leak in SDF by @leondavi
2 changes: 1 addition & 1 deletion docs/source/setup/walkthrough/project_setup.rst
@@ -69,7 +69,7 @@ used as the default output directories for tasks run by this project.


Project Structure
------------------------------
-----------------

There are four nested structures you need to be aware of when working in the direct workflow with an Isaac Lab template
project: the **Project**, the **Extension**, the **Modules**, and the **Task**.
140 changes: 140 additions & 0 deletions docs/source/tutorials/03_envs/configuring_rl_training.rst
@@ -0,0 +1,140 @@
.. _tutorial-configure-rl-training:

Configuring an RL Agent
=======================

.. currentmodule:: isaaclab

In the previous tutorial, we saw how to train an RL agent to solve the cartpole balancing task
using the `Stable-Baselines3`_ library. In this tutorial, we will see how to configure the
training process to use a different RL library or training algorithm.

The directory ``scripts/reinforcement_learning`` contains the scripts for the different RL libraries,
organized into subdirectories named after each library. Each subdirectory contains the training and
playing scripts for that library.

To use a learning library with a specific task, you need to create a configuration for the learning
agent. This configuration is used to create an instance of the agent and to control the training
process. Similar to the environment registration shown in the :ref:`tutorial-register-rl-env-gym`
tutorial, the agent configuration is registered alongside the environment through the
``gymnasium.register`` method.

The Code
--------

As an example, we will look at the configuration included for the task ``Isaac-Cartpole-v0``
in the ``isaaclab_tasks`` package. This is the same task that we used in the
:ref:`tutorial-run-rl-training` tutorial.

.. literalinclude:: ../../../../source/isaaclab_tasks/isaaclab_tasks/manager_based/classic/cartpole/__init__.py
:language: python
:lines: 18-29
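
For reference, the registration in that file follows the pattern sketched below. The exact entry-point
values are illustrative assumptions; consult the file included above for the authoritative names:

.. code-block:: python

    import gymnasium as gym

    from . import agents

    gym.register(
        id="Isaac-Cartpole-v0",
        entry_point="isaaclab.envs:ManagerBasedRLEnv",
        disable_env_checker=True,
        kwargs={
            # environment configuration class
            "env_cfg_entry_point": f"{__name__}.cartpole_env_cfg:CartpoleEnvCfg",
            # YAML-based agent configurations (string entry points)
            "rl_games_cfg_entry_point": f"{agents.__name__}:rl_games_ppo_cfg.yaml",
            "skrl_cfg_entry_point": f"{agents.__name__}:skrl_ppo_cfg.yaml",
            "sb3_cfg_entry_point": f"{agents.__name__}:sb3_ppo_cfg.yaml",
            # class-based agent configurations (module:class entry points)
            "rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:CartpolePPORunnerCfg",
            # hypothetical class name for the symmetry-augmented PPO runner
            "rsl_rl_with_symmetry_cfg_entry_point": (
                f"{agents.__name__}.rsl_rl_ppo_cfg:CartpolePPORunnerWithSymmetryCfg"
            ),
        },
    )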

The Code Explained
------------------

Under the ``kwargs`` argument, we can see the configuration entries for the different learning libraries.
Each key names a library's configuration entry point and each value specifies the configuration instance.
The value can be a string pointing to the configuration, a configuration class, or an instance of that
class. For example, the value of the key ``"rl_games_cfg_entry_point"`` is a string that points to the
configuration YAML file for the RL-Games library, while the value of the key
``"rsl_rl_cfg_entry_point"`` points to the configuration class for the RSL-RL library.

The pattern for specifying an agent configuration entry point closely follows the one used for
specifying the environment configuration entry point. This means that the following two registrations
are equivalent:


.. dropdown:: Specifying the configuration entry point as a string
:icon: code

.. code-block:: python

from . import agents

gym.register(
id="Isaac-Cartpole-v0",
entry_point="isaaclab.envs:ManagerBasedRLEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": f"{__name__}.cartpole_env_cfg:CartpoleEnvCfg",
"rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:CartpolePPORunnerCfg",
},
)

.. dropdown:: Specifying the configuration entry point as a class
:icon: code

.. code-block:: python

from . import agents

gym.register(
id="Isaac-Cartpole-v0",
entry_point="isaaclab.envs:ManagerBasedRLEnv",
disable_env_checker=True,
kwargs={
"env_cfg_entry_point": f"{__name__}.cartpole_env_cfg:CartpoleEnvCfg",
"rsl_rl_cfg_entry_point": agents.rsl_rl_ppo_cfg.CartpolePPORunnerCfg,
},
)

The first code block is the preferred way to specify the configuration entry point.
The second is functionally equivalent, but it imports the configuration class at registration time,
which slows down the import of the module. This is why we recommend using strings for the
configuration entry point.

All the scripts in the ``scripts/reinforcement_learning`` directory are configured by default to read the
``<library_name>_cfg_entry_point`` from the ``kwargs`` dictionary to retrieve the configuration instance.

For instance, the following code block shows how the ``train.py`` script reads the configuration
instance for the Stable-Baselines3 library:

.. dropdown:: Code for train.py with SB3
:icon: code

.. literalinclude:: ../../../../scripts/reinforcement_learning/sb3/train.py
:language: python
:emphasize-lines: 26-28, 102-103
:linenos:

The ``--agent`` argument specifies which configuration entry point to read from the ``kwargs``
dictionary, and therefore which configuration instance is retrieved. Passing a different value for
``--agent`` selects an alternate configuration instance.
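
The same lookup can be reproduced outside of the provided scripts through the Gymnasium registry.
The snippet below is a minimal sketch; in practice it runs inside an Isaac Lab script after the
simulation application has been launched, since importing the task packages requires it:

.. code-block:: python

    import gymnasium as gym

    import isaaclab_tasks  # noqa: F401  -- importing the package registers the bundled tasks

    # the agent configuration entry point is stored in the registration kwargs
    spec = gym.spec("Isaac-Cartpole-v0")
    agent_cfg_entry_point = spec.kwargs["rsl_rl_cfg_entry_point"]

    # the value is either a "module:attribute" string or the configuration object itself
    print(agent_cfg_entry_point)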

The Code Execution
------------------

Since the RSL-RL library offers two configuration instances for the cartpole balancing task,
we can use the ``--agent`` argument to select which one to use.

* Training with the standard PPO configuration:

.. code-block:: bash

# standard PPO training
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --headless \
--run_name ppo

* Training with the PPO configuration with symmetry augmentation:

.. code-block:: bash

# PPO training with symmetry augmentation
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --headless \
--agent rsl_rl_with_symmetry_cfg_entry_point \
--run_name ppo_with_symmetry_data_augmentation

# you can use hydra to disable symmetry augmentation but enable mirror loss computation
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --headless \
--agent rsl_rl_with_symmetry_cfg_entry_point \
--run_name ppo_without_symmetry_data_augmentation \
agent.algorithm.symmetry_cfg.use_data_augmentation=false

The ``--run_name`` argument specifies the name of the run, which is used to create a per-run
directory under ``logs/rsl_rl/cartpole``.
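
With the commands above, each run gets its own timestamped folder named after ``--run_name``. The
layout below is a sketch of the expected structure; the exact timestamp format depends on the RSL-RL
runner defaults:

.. code-block:: text

    logs/rsl_rl/cartpole/
    ├── <timestamp>_ppo/
    ├── <timestamp>_ppo_with_symmetry_data_augmentation/
    └── <timestamp>_ppo_without_symmetry_data_augmentation/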

.. _Stable-Baselines3: https://stable-baselines3.readthedocs.io/en/master/
.. _RL-Games: https://github.com/Denys88/rl_games
.. _RSL-RL: https://github.com/leggedrobotics/rsl_rl
.. _SKRL: https://skrl.readthedocs.io
1 change: 1 addition & 0 deletions docs/source/tutorials/index.rst
@@ -79,6 +79,7 @@ different aspects of the framework to create a simulation environment for agent
03_envs/create_direct_rl_env
03_envs/register_rl_env_gym
03_envs/run_rl_training
03_envs/configuring_rl_training
03_envs/modify_direct_rl_env
03_envs/policy_inference_in_usd

5 changes: 4 additions & 1 deletion scripts/reinforcement_learning/rl_games/play.py
@@ -21,6 +21,9 @@
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
"--agent", type=str, default="rl_games_cfg_entry_point", help="Name of the RL agent configuration entry point."
)
parser.add_argument("--checkpoint", type=str, default=None, help="Path to model checkpoint.")
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument(
@@ -82,7 +85,7 @@
# PLACEHOLDER: Extension template (do not remove this comment)


@hydra_task_config(args_cli.task, "rl_games_cfg_entry_point")
@hydra_task_config(args_cli.task, args_cli.agent)
def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, agent_cfg: dict):
"""Play with RL-Games agent."""
# grab task name for checkpoint path
5 changes: 4 additions & 1 deletion scripts/reinforcement_learning/rl_games/train.py
@@ -20,6 +20,9 @@
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
"--agent", type=str, default="rl_games_cfg_entry_point", help="Name of the RL agent configuration entry point."
)
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument(
"--distributed", action="store_true", default=False, help="Run training with multiple GPUs or nodes."
@@ -84,7 +87,7 @@
# PLACEHOLDER: Extension template (do not remove this comment)


@hydra_task_config(args_cli.task, "rl_games_cfg_entry_point")
@hydra_task_config(args_cli.task, args_cli.agent)
def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, agent_cfg: dict):
"""Train with RL-Games agent."""
# override configurations with non-hydra CLI arguments
5 changes: 4 additions & 1 deletion scripts/reinforcement_learning/rsl_rl/play.py
@@ -24,6 +24,9 @@
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
"--agent", type=str, default="rsl_rl_cfg_entry_point", help="Name of the RL agent configuration entry point."
)
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument(
"--use_pretrained_checkpoint",
@@ -77,7 +80,7 @@
# PLACEHOLDER: Extension template (do not remove this comment)


@hydra_task_config(args_cli.task, "rsl_rl_cfg_entry_point")
@hydra_task_config(args_cli.task, args_cli.agent)
def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, agent_cfg: RslRlOnPolicyRunnerCfg):
"""Play with RSL-RL agent."""
# grab task name for checkpoint path
5 changes: 4 additions & 1 deletion scripts/reinforcement_learning/rsl_rl/train.py
@@ -23,6 +23,9 @@
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
"--agent", type=str, default="rsl_rl_cfg_entry_point", help="Name of the RL agent configuration entry point."
)
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument("--max_iterations", type=int, default=None, help="RL Policy training iterations.")
parser.add_argument(
@@ -100,7 +103,7 @@
torch.backends.cudnn.benchmark = False


@hydra_task_config(args_cli.task, "rsl_rl_cfg_entry_point")
@hydra_task_config(args_cli.task, args_cli.agent)
def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, agent_cfg: RslRlOnPolicyRunnerCfg):
"""Train with RSL-RL agent."""
# override configurations with non-hydra CLI arguments
5 changes: 4 additions & 1 deletion scripts/reinforcement_learning/sb3/play.py
@@ -22,6 +22,9 @@
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
"--agent", type=str, default="sb3_cfg_entry_point", help="Name of the RL agent configuration entry point."
)
parser.add_argument("--checkpoint", type=str, default=None, help="Path to model checkpoint.")
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument(
@@ -86,7 +89,7 @@
# PLACEHOLDER: Extension template (do not remove this comment)


@hydra_task_config(args_cli.task, "sb3_cfg_entry_point")
@hydra_task_config(args_cli.task, args_cli.agent)
def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, agent_cfg: dict):
"""Play with stable-baselines agent."""
# grab task name for checkpoint path
5 changes: 4 additions & 1 deletion scripts/reinforcement_learning/sb3/train.py
@@ -23,6 +23,9 @@
parser.add_argument("--video_interval", type=int, default=2000, help="Interval between video recordings (in steps).")
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
"--agent", type=str, default="sb3_cfg_entry_point", help="Name of the RL agent configuration entry point."
)
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument("--log_interval", type=int, default=100_000, help="Log data every n timesteps.")
parser.add_argument("--checkpoint", type=str, default=None, help="Continue the training from checkpoint.")
@@ -96,7 +99,7 @@ def cleanup_pbar(*args):
# PLACEHOLDER: Extension template (do not remove this comment)


@hydra_task_config(args_cli.task, "sb3_cfg_entry_point")
@hydra_task_config(args_cli.task, args_cli.agent)
def main(env_cfg: ManagerBasedRLEnvCfg | DirectRLEnvCfg | DirectMARLEnvCfg, agent_cfg: dict):
"""Train with stable-baselines agent."""
# randomly sample a seed if seed = -1
16 changes: 14 additions & 2 deletions scripts/reinforcement_learning/skrl/play.py
@@ -26,6 +26,15 @@
)
parser.add_argument("--num_envs", type=int, default=None, help="Number of environments to simulate.")
parser.add_argument("--task", type=str, default=None, help="Name of the task.")
parser.add_argument(
"--agent",
type=str,
default=None,
help=(
"Name of the RL agent configuration entry point. Defaults to None, in which case the argument "
"--algorithm is used to determine the default agent configuration entry point."
),
)
parser.add_argument("--checkpoint", type=str, default=None, help="Path to model checkpoint.")
parser.add_argument("--seed", type=int, default=None, help="Seed used for the environment")
parser.add_argument(
@@ -107,8 +116,11 @@
# PLACEHOLDER: Extension template (do not remove this comment)

# config shortcuts
algorithm = args_cli.algorithm.lower()
agent_cfg_entry_point = "skrl_cfg_entry_point" if algorithm in ["ppo"] else f"skrl_{algorithm}_cfg_entry_point"
if args_cli.agent is None:
algorithm = args_cli.algorithm.lower()
agent_cfg_entry_point = "skrl_cfg_entry_point" if algorithm in ["ppo"] else f"skrl_{algorithm}_cfg_entry_point"
else:
agent_cfg_entry_point = args_cli.agent


@hydra_task_config(args_cli.task, agent_cfg_entry_point)