Validation

The trained RL policy can be evaluated by setting the train parameter to false in agent_cfg.yaml. The agent then runs in the training environment using the trained policy.
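
For example, a validation run can keep the rest of the training configuration and only disable training. A minimal sketch, assuming the same rsl_rl block structure as the inference example shown further below:

- rsl_rl:
    train: false               # evaluate the trained policy instead of continuing training
    experiment_name: go2_rough # experiment whose trained checkpoints should be evaluated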

Inference

Once the trained policy has been validated in the training environment, it can be deployed in any of the supported environments as well as in custom ones. Setting the task to GRID-CustomEnv-v0 and specifying the environment in scene_cfg.yaml allows the trained policy to be used in diverse environments; an illustrative scene_cfg.yaml sketch is given after the agent configuration. A sample agent_cfg.yaml file for inference is shown below:

- rsl_rl:
    train: false                  # run the trained policy instead of training
    experiment_name: go2_rough    # experiment whose checkpoints are loaded
    load_run: .*                  # regex for the run directory; the most recent matching run is used
    load_checkpoint: model_.*.pt  # regex for the checkpoint file; the most recent matching checkpoint is used
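
The exact schema of scene_cfg.yaml depends on the GRID release in use; the sketch below is purely illustrative, and its field names are assumptions rather than documented keys. It only conveys the intent of pointing the GRID-CustomEnv-v0 task at the environment in which the trained policy should run.

# Hypothetical sketch -- keys are illustrative assumptions, not the documented scene_cfg.yaml schema
- scene:
    env_name: my_custom_environment   # assumed key: the custom environment to deploy the policy in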
