GRID supports the training and evaluation of reinforcement learning agents in Isaac Sim for the supported quadruped, biped, arm, and humanoid robots.

Training

GRID supports training reinforcement learning agents using the RSL-RL library. Agents can be trained by modifying the agent_cfg.yaml file as follows:

- rsl_rl: 
    train: true 
    video: false
    resume: false
    seed: 0
    max_iterations: 1000
    run_name: go2_rough_train_rlagent
    experiment_name: go2_rough
    load_run: .*
    load_checkpoint: model_.*.pth
    max_episode_length: 100
    logger: tensorboard
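
Since GRID also supports evaluation of trained agents, the same block can be repurposed for playback. The snippet below is a sketch that only reuses the fields shown above; the run_name value is illustrative, the commented semantics are assumptions, and load_run / load_checkpoint are left as wildcards so the most recent run and checkpoint are picked up. Verify the exact flag behavior against your GRID version.

- rsl_rl:
    train: false          # do not start a new training run
    video: true           # record rollout videos during evaluation (assumed)
    resume: true          # load weights from an existing run (assumed)
    seed: 0
    max_iterations: 1000
    run_name: go2_rough_eval_rlagent
    experiment_name: go2_rough
    load_run: .*          # wildcard: latest matching run
    load_checkpoint: model_.*.pth   # wildcard: latest saved checkpoint
    max_episode_length: 100
    logger: tensorboard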

The training environment name specifying the task, along with the number of parallel environments, also needs to be specified in custom_cfg.yaml:

num_envs: 100
task: Isaac-Velocity-Rough-Unitree-Go2-v0
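
For a quick debugging run, the same file can simply point at fewer parallel environments. The flat-terrain task ID below follows the Isaac Lab naming convention and is an assumption; check it against the tasks available in your installation:

num_envs: 16
task: Isaac-Velocity-Flat-Unitree-Go2-v0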

The mdp_cfg.yaml file should also be filled with the relevant values for the MDP components. A sample is provided below:

commands:
  - base_velocity:
      type: VelocityCommand
      config:
        asset_name: robot

actions:
  - base:
      type: JointPosition
      config:
        asset_name: robot
        joint_names: [".*"]

observations:
  - policy:
      - base_lin_vel: 
          type: base_lin_vel
      - base_ang_vel: 
          type: base_ang_vel
      - projected_gravity:
          type: projected_gravity
      - velocity_commands:
          type: velocity_commands
      - joint_pos:
          type: joint_pos
      - joint_vel:
          type: joint_vel
      - actions:
          type: actions
          config:
            params:
              action_name: base
      - height_scan:
          type: height_scan

terminations: null

events: null

rewards: null

curriculum: null
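
When the terminations, rewards, and other remaining sections are used, they follow the same name/type/config pattern as the observations above. The sketch below is an illustrative assumption: the term names and parameters mirror common Isaac Lab MDP functions (time_out, track_lin_vel_xy_exp) rather than values taken from this page, and the exact keys GRID expects may differ.

terminations:
  - time_out:
      type: time_out

rewards:
  - track_lin_vel_xy_exp:
      type: track_lin_vel_xy_exp
      config:
        weight: 1.0                    # assumed: per-term reward weight
        params:
          std: 0.5                     # assumed: tracking reward temperature
          command_name: base_velocity  # matches the command defined above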
