Deep RL Course documentation
The “Deep” in Reinforcement Learning
What we've talked about so far is Reinforcement Learning. But where does the "Deep" come into play?
Deep Reinforcement Learning introduces deep neural networks to solve Reinforcement Learning problems — hence the name “deep”.
For instance, in the next unit, we’ll learn about two value-based algorithms: Q-Learning (classic Reinforcement Learning) and then Deep Q-Learning.
You’ll see the difference is that, in the first approach, we use a traditional algorithm to build a Q-table that tells us which action to take in each state.
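As a minimal sketch of the tabular idea (using a hypothetical toy corridor environment, not one from the course), a Q-table is just a lookup of Q(state, action) values updated with the Q-learning rule:

```python
n_states, n_actions = 5, 2   # toy corridor: states 0..4, actions 0 = left, 1 = right
alpha, gamma = 0.5, 0.9      # learning rate and discount factor

# The Q-table: one row per state, one column per action, initialized to zero.
q_table = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    """Toy dynamics: moving right from state 3 reaches the goal (state 4, reward 1)."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

def update(state, action):
    next_state, reward = step(state, action)
    # Q-learning update: bootstrap from the best action available in the next state.
    best_next = max(q_table[next_state])
    q_table[state][action] += alpha * (reward + gamma * best_next - q_table[state][action])
    return next_state

# Repeatedly walk right toward the goal; values propagate back through the table.
for _ in range(50):
    s = 0
    while s != n_states - 1:
        s = update(s, action=1)

# Acting greedily means picking the action with the highest Q-value in each state.
policy = [max(range(n_actions), key=lambda a: q_table[s][a]) for s in range(n_states - 1)]
```

After training, the table's entries approach the discounted returns (for example, the value of moving right from the state next to the goal approaches 1), and the greedy policy reads the best action straight out of the table.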
In the second approach, we will use a neural network to approximate the Q-values.
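To see what "approximating the Q-value" means, here is a minimal standalone sketch (again with a hypothetical toy corridor environment). A tiny linear model stands in for the deep neural network: instead of storing one value per table cell, Q(s, a) is computed from learned weights, and those weights are trained by gradient descent on the TD error — the same training loop a Deep Q-Network uses, just with stacked nonlinear layers in place of the linear map:

```python
import random

random.seed(0)

n_states, n_actions = 5, 2   # toy corridor: states 0..4, actions 0 = left, 1 = right
gamma, lr = 0.9, 0.1         # discount factor and learning rate

def features(state):
    # One-hot encoding of the state: the model's input vector.
    return [1.0 if i == state else 0.0 for i in range(n_states)]

# One weight vector per action: Q(s, a) = w_a · features(s).
# A deep Q-network replaces this linear map with nonlinear layers.
weights = [[0.0] * n_states for _ in range(n_actions)]

def q_value(state, action):
    return sum(w * x for w, x in zip(weights[action], features(state)))

def step(state, action):
    # Moving right from state 3 reaches the goal (state 4) and pays reward 1.
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for _ in range(200):
    state, done = 0, False
    while not done:
        action = random.randrange(n_actions)   # explore with a random behavior policy
        next_state, reward, done = step(state, action)
        # TD target: reward plus the discounted best estimate at the next state.
        target = reward if done else reward + gamma * max(
            q_value(next_state, b) for b in range(n_actions))
        # Gradient step on the TD error; for a linear model the gradient
        # with respect to w_a is just the feature vector.
        error = target - q_value(state, action)
        for i, x in enumerate(features(state)):
            weights[action][i] += lr * error * x
        state = next_state

# Acting greedily with the learned function: move right in every non-terminal state.
greedy = [max(range(n_actions), key=lambda a: q_value(s, a)) for s in range(n_states - 1)]
```

The payoff of the approximation approach is generalization: with a table, every state must be visited to get a value, while a parameterized function can produce sensible Q-values for states it has never seen — which is what makes Deep Q-Learning workable for huge state spaces like Atari screens.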
If you are not familiar with Deep Learning, you should watch the FastAI Practical Deep Learning for Coders course (free).