# Quick Start Guide
This guide will help you get started with RLRoverLab quickly. Follow these steps to set up the environment and run your first training or evaluation.
## Prerequisites
Before starting, ensure you have:
- NVIDIA GPU with at least 8GB VRAM
- Ubuntu 20.04 or 22.04
- Docker and NVIDIA Container Toolkit installed (see Docker Installation)
## Quick Setup with Docker

1. Clone the repository:

   ```bash
   git clone https://github.com/abmoRobotics/RLRoverLab
   cd RLRoverLab
   ```

2. Download terrain assets:

   ```bash
   pip3 install gdown
   python3 download_usd.py
   ```

3. Start the Docker container:

   ```bash
   cd docker
   ./run.sh
   docker exec -it rover-lab-base bash
   ```
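Before running the steps above, it can be worth confirming that the required host tools are on your `PATH`. A minimal sketch of such a check; this is an illustrative helper, not part of RLRoverLab:

```python
"""Check that host tools needed for the setup steps are installed.

Illustrative helper only; not part of RLRoverLab."""
import shutil


def missing_tools(tools=("git", "docker", "python3")):
    """Return the subset of `tools` not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]


if __name__ == "__main__":
    missing = missing_tools()
    if missing:
        raise SystemExit("Install before continuing: " + ", ".join(missing))
    print("All prerequisites found.")
```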
## Running Your First Example

### 1. Train a Simple Agent
Train a PPO agent on the simple AAU rover environment:
```bash
cd examples/02_training
/workspace/isaac_lab/isaaclab.sh -p train.py --task="AAURoverEnvSimple-v0" --num_envs=128
```
### 2. Evaluate a Pre-trained Model
If you have a trained model, evaluate it:
```bash
cd examples/03_inference
/workspace/isaac_lab/isaaclab.sh -p eval.py --task="AAURoverEnvSimple-v0" --num_envs=32 --checkpoint=path/to/your/model.pt
```
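If you script these runs, for example to sweep over tasks or checkpoints, building the argument list in one place keeps the flags consistent. A small sketch; the script names and flags simply mirror the commands above, so adjust them if your checkout differs:

```python
"""Build the train/eval invocations used in this guide.

Paths and flag names mirror the commands above; adjust to your setup."""

ISAACLAB = "/workspace/isaac_lab/isaaclab.sh"


def rover_command(script, task, num_envs, checkpoint=None):
    """Return an argv list for train.py/eval.py, suitable for subprocess.run."""
    argv = [ISAACLAB, "-p", script, f"--task={task}", f"--num_envs={num_envs}"]
    if checkpoint is not None:
        argv.append(f"--checkpoint={checkpoint}")
    return argv


print(" ".join(rover_command("train.py", "AAURoverEnvSimple-v0", 128)))
```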
### 3. Demo with Zero Agent
Run a basic demo to see the environment:
```bash
cd examples/01_demos
/workspace/isaac_lab/isaaclab.sh -p 01_zero_agent.py
```
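Conceptually, a zero agent just steps the environment with all-zero actions, which is a quick way to watch the simulation without training a policy. A minimal stand-in sketch; `StubEnv` is a placeholder with the usual reset/step shape, not the actual RLRoverLab environment API:

```python
"""What a "zero agent" does, shown on a toy stand-in environment."""


class StubEnv:
    """Placeholder with a reset/step interface; not the real rover env."""

    def __init__(self, num_envs=4, action_dim=2):
        self.num_envs = num_envs
        self.action_dim = action_dim

    def reset(self):
        return [[0.0] * 3 for _ in range(self.num_envs)]  # fake observations

    def step(self, actions):
        obs = [[0.0] * 3 for _ in range(self.num_envs)]
        rewards = [0.0] * self.num_envs
        return obs, rewards


def run_zero_agent(env, steps=10):
    """Step the env with all-zero actions and return the last rewards."""
    env.reset()
    rewards = [0.0] * env.num_envs
    for _ in range(steps):
        zero_actions = [[0.0] * env.action_dim for _ in range(env.num_envs)]
        _, rewards = env.step(zero_actions)
    return rewards
```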
## Available Environments
The suite provides several pre-configured environments:
| Environment ID | Robot | Description |
|---|---|---|
| `AAURoverEnvSimple-v0` | AAU Rover (Simple) | Simplified rover with basic sensors |
| `AAURoverEnv-v0` | AAU Rover | Full rover with advanced sensors |
| `ExomyEnv-v0` | ExoMy | ESA's ExoMy rover |
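When scripting across several tasks, keeping the environment IDs from the table in one place avoids typos. A small sketch; only the IDs come from the table, while the short names are our own:

```python
# Environment IDs from the table above, keyed by a short name of our choosing.
ROVER_TASKS = {
    "simple": "AAURoverEnvSimple-v0",
    "full": "AAURoverEnv-v0",
    "exomy": "ExomyEnv-v0",
}


def task_id(name):
    """Look up an environment ID, listing the options on a bad name."""
    try:
        return ROVER_TASKS[name]
    except KeyError:
        raise KeyError(f"unknown task {name!r}; choose from {sorted(ROVER_TASKS)}")
```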
## What's Next?
- Explore more examples
- Learn about available environments
- Understand the training process
- Add your own robot
## Troubleshooting

### Common Issues
- **GPU Memory Issues**: Reduce the `--num_envs` parameter
- **Docker Permission Issues**: Ensure your user is in the `docker` group
- **Display Issues**: Run `xhost +local:docker` before starting the container
For more detailed troubleshooting, see the Installation Guide.