RL in Practice: Tips and Tricks and Practical Session With Stable-Baselines3

Abstract

The aim of this session is to help you run reinforcement learning experiments. The first part covers general advice about RL, tips and tricks, and three examples where RL was applied to real robots. The second part is a practical session using the Stable-Baselines3 library.

Speaker

Antonin Raffin

Pre-requisites

Python programming, RL basics (recommended: a Google account for the practical session, in order to use Google Colab).

Additional material

Stable Baselines 3 website
Stable Baselines 3 documentation
Presentation slides
Hands-on session: presentation, repo and notebook on Colab

Outline

  1. Part I: RL Tips and Tricks / The Challenges of Applying RL to Real Robots

    1. Introduction (3 minutes)

    2. RL Tips and tricks (45 minutes)

      1. General Nuts and Bolts of RL experimentation (10 minutes)
      2. RL in practice on a custom task (custom environment) (30 minutes)
      3. Questions? (5 minutes)
    3. The Challenges of Applying RL to Real Robots (45 minutes)

      1. Learning to control an elastic robot - DLR David Neck Example (15 minutes)
      2. Learning to drive in minutes and learning to race in hours - Virtual and real racing car (15 minutes)
      3. Learning to walk with an elastic quadruped robot - DLR bert example (10 minutes)
      4. Questions? (5 minutes+)
  2. Part II: Practical Session with Stable-Baselines3

    1. Stable-Baselines3 Overview (20 minutes)
    2. Questions? (5 minutes)
    3. Practical Session - Code-along (1h+)

Class material

Stable Baselines 3
Stable Baselines 3 Documentation