~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/003 Create the robotics task.mp4 74MB
~Get Your Files Here !/13 - Hindsight Experience Replay/004 Implement Hindsight Experience Replay (HER) - Part 3.mp4 74MB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/007 Implement the Soft Actor-Critic algorithm - Part 2.mp4 67MB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/011 Define the training step.mp4 58MB
~Get Your Files Here !/06 - PyTorch Lightning/008 Define the class for the Deep Q-Learning algorithm.mp4 55MB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/005 Create the gradient policy.mp4 54MB
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/006 Stochastic Gradient Descent.mp4 50MB
~Get Your Files Here !/06 - PyTorch Lightning/011 Define the train_step() method.mp4 50MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/010 Creating the (NAF) Deep Q-Network 4.mp4 48MB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/006 Create the gradient policy.mp4 43MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/015 Create the (NAF) Deep Q-Learning algorithm.mp4 43MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/007 Creating the (NAF) Deep Q-Network 1.mp4 41MB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/006 Implement the Soft Actor-Critic algorithm - Part 1.mp4 40MB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/008 Create the DDPG class.mp4 39MB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/002 Elements common to all control tasks.mp4 39MB
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/005 How to represent a Neural Network.mp4 38MB
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/002 Function approximators.mp4 36MB
~Get Your Files Here !/06 - PyTorch Lightning/014 Train the Deep Q-Learning algorithm.mp4 35MB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/012 Launch the training process.mp4 34MB
~Get Your Files Here !/13 - Hindsight Experience Replay/002 Implement Hindsight Experience Replay (HER) - Part 1.mp4 34MB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/001 Twin Delayed DDPG (TD3).mp4 34MB
~Get Your Files Here !/07 - Hyperparameter tuning with Optuna/003 Log average return.mp4 34MB
~Get Your Files Here !/07 - Hyperparameter tuning with Optuna/001 Hyperparameter tuning with Optuna.mp4 32MB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/002 Deep Deterministic Policy Gradient (DDPG).mp4 32MB
~Get Your Files Here !/06 - PyTorch Lightning/007 Create the environment.mp4 32MB
~Get Your Files Here !/06 - PyTorch Lightning/012 Define the train_epoch_end() method.mp4 32MB
~Get Your Files Here !/06 - PyTorch Lightning/001 PyTorch Lightning.mp4 32MB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/005 Deep Deterministic Policy Gradient (DDPG).mp4 32MB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/005 Clipped double Q-Learning.mp4 32MB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/008 Check the resulting agent.mp4 31MB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/007 Target policy smoothing.mp4 31MB
~Get Your Files Here !/06 - PyTorch Lightning/003 Introduction to PyTorch Lightning.mp4 31MB
~Get Your Files Here !/06 - PyTorch Lightning/010 Prepare the data loader and the optimizer.mp4 30MB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/013 Check the resulting agent.mp4 30MB
~Get Your Files Here !/07 - Hyperparameter tuning with Optuna/004 Define the objective function.mp4 30MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/001 Continuous action spaces.mp4 30MB
~Get Your Files Here !/06 - PyTorch Lightning/009 Define the play_episode() function.mp4 29MB
~Get Your Files Here !/09 - Refresher Policy gradient methods/003 Representing policies using neural networks.mp4 28MB
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/004 Artificial Neurons.mp4 26MB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/003 The Markov decision process (MDP).mp4 25MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/011 Creating the policy.mp4 25MB
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/003 Artificial Neural Networks.mp4 24MB
~Get Your Files Here !/01 - Introduction/001 Introduction.mp4 24MB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/001 Soft Actor-Critic (SAC).mp4 24MB
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/007 Neural Network optimization.mp4 23MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/004 Normalized Advantage Function pseudocode.mp4 23MB
~Get Your Files Here !/09 - Refresher Policy gradient methods/005 Entropy Regularization.mp4 23MB
~Get Your Files Here !/06 - PyTorch Lightning/006 Create the replay buffer.mp4 23MB
~Get Your Files Here !/06 - PyTorch Lightning/004 Create the Deep Q-Network.mp4 23MB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/007 Create the Deep Q-Network.mp4 23MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/012 Create the environment.mp4 23MB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/010 Setup the optimizers and dataloader.mp4 22MB
~Get Your Files Here !/13 - Hindsight Experience Replay/003 Implement Hindsight Experience Replay (HER) - Part 2.mp4 22MB
~Get Your Files Here !/09 - Refresher Policy gradient methods/001 Policy gradient methods.mp4 22MB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/003 DDPG pseudocode.mp4 21MB
~Get Your Files Here !/06 - PyTorch Lightning/015 Explore the resulting agent.mp4 20MB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/001 The Brax Physics engine.mp4 20MB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/002 TD3 pseudocode.mp4 20MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/018 Debugging and launching the algorithm.mp4 20MB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/004 Twin Delayed DDPG (TD3).mp4 20MB
~Get Your Files Here !/07 - Hyperparameter tuning with Optuna/006 Explore the best trial.mp4 19MB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/004 Create the Deep Q-Network.mp4 19MB
~Get Your Files Here !/07 - Hyperparameter tuning with Optuna/005 Create and launch the hyperparameter tuning job.mp4 19MB
~Get Your Files Here !/06 - PyTorch Lightning/005 Create the policy.mp4 18MB
~Get Your Files Here !/14 - Final steps/001 Next steps.mp4 17MB
~Get Your Files Here !/13 - Hindsight Experience Replay/001 Hindsight Experience Replay (HER).mp4 17MB
~Get Your Files Here !/05 - Refresher Deep Q-Learning/004 Target Network.mp4 17MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/019 Checking the resulting agent.mp4 16MB
~Get Your Files Here !/05 - Refresher Deep Q-Learning/002 Deep Q-Learning.mp4 16MB
~Get Your Files Here !/09 - Refresher Policy gradient methods/004 The policy gradient theorem.mp4 16MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/008 Creating the (NAF) Deep Q-Network 2.mp4 15MB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/007 Discount factor.mp4 15MB
~Get Your Files Here !/03 - Refresher Q-Learning/003 Solving control tasks with temporal difference methods.mp4 15MB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/011 Solving a Markov decision process.mp4 14MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/002 The advantage function.mp4 13MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/016 Implement the training step.mp4 13MB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/009 Define the play method.mp4 13MB
~Get Your Files Here !/03 - Refresher Q-Learning/002 Temporal difference methods.mp4 13MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/017 Implement the end-of-epoch logic.mp4 12MB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/010 Bellman equations.mp4 12MB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/008 Check the results.mp4 12MB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/006 Delayed policy updates.mp4 12MB
~Get Your Files Here !/03 - Refresher Q-Learning/004 Q-Learning.mp4 11MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/014 Implementing Polyak averaging.mp4 10MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/003 Normalized Advantage Function (NAF).mp4 10MB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/002 SAC pseudocode.mp4 10MB
~Get Your Files Here !/05 - Refresher Deep Q-Learning/003 Experience Replay.mp4 9MB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/004 Types of Markov decision process.mp4 9MB
~Get Your Files Here !/09 - Refresher Policy gradient methods/002 Policy performance.mp4 9MB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/008 Policy.mp4 7MB
~Get Your Files Here !/13 - Hindsight Experience Replay/005 Check the results.mp4 7MB
~Get Your Files Here !/01 - Introduction/003 Google Colab.mp4 6MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/009 Creating the (NAF) Deep Q-Network 3.mp4 5MB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/006 Reward vs Return.mp4 5MB
~Get Your Files Here !/01 - Introduction/004 Where to begin.mp4 5MB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/005 Trajectory vs episode.mp4 5MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/013 Polyak averaging.mp4 5MB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/006 Hyperbolic tangent.mp4 5MB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/009 State values v(s) and action values q(s,a).mp4 4MB
~Get Your Files Here !/03 - Refresher Q-Learning/005 Advantages of temporal difference methods.mp4 4MB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/001 Module Overview.mp4 3MB
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/001 Module overview.mp4 2MB
~Get Your Files Here !/03 - Refresher Q-Learning/001 Module overview.mp4 1MB
~Get Your Files Here !/05 - Refresher Deep Q-Learning/001 Module overview.mp4 1MB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/005 Create the gradient policy_en.vtt 13KB
~Get Your Files Here !/06 - PyTorch Lightning/008 Define the class for the Deep Q-Learning algorithm_en.vtt 12KB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/003 Create the robotics task_en.vtt 11KB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/001 Twin Delayed DDPG (TD3)_en.vtt 11KB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/002 Deep Deterministic Policy Gradient (DDPG)_en.vtt 10KB
~Get Your Files Here !/13 - Hindsight Experience Replay/004 Implement Hindsight Experience Replay (HER) - Part 3_en.vtt 10KB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/011 Define the training step_en.vtt 10KB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/006 Create the gradient policy_en.vtt 10KB
~Get Your Files Here !/07 - Hyperparameter tuning with Optuna/001 Hyperparameter tuning with Optuna_en.vtt 10KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/010 Creating the (NAF) Deep Q-Network 4_en.vtt 9KB
~Get Your Files Here !/06 - PyTorch Lightning/011 Define the train_step() method_en.vtt 9KB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/007 Implement the Soft Actor-Critic algorithm - Part 2_en.vtt 9KB
~Get Your Files Here !/06 - PyTorch Lightning/001 PyTorch Lightning_en.vtt 9KB
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/002 Function approximators_en.vtt 8KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/015 Create the (NAF) Deep Q-Learning algorithm_en.vtt 8KB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/001 Soft Actor-Critic (SAC)_en.vtt 7KB
~Get Your Files Here !/06 - PyTorch Lightning/007 Create the environment_en.vtt 7KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/007 Creating the (NAF) Deep Q-Network 1_en.vtt 7KB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/008 Create the DDPG class_en.vtt 7KB
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/005 How to represent a Neural Network_en.vtt 7KB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/006 Implement the Soft Actor-Critic algorithm - Part 1_en.vtt 7KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/001 Continuous action spaces_en.vtt 7KB
~Get Your Files Here !/09 - Refresher Policy gradient methods/005 Entropy Regularization_en.vtt 7KB
~Get Your Files Here !/06 - PyTorch Lightning/014 Train the Deep Q-Learning algorithm_en.vtt 6KB
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/006 Stochastic Gradient Descent_en.vtt 6KB
~Get Your Files Here !/06 - PyTorch Lightning/003 Introduction to PyTorch Lightning_en.vtt 6KB
~Get Your Files Here !/01 - Introduction/001 Introduction_en.vtt 6KB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/002 Elements common to all control tasks_en.vtt 6KB
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/004 Artificial Neurons_en.vtt 6KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/004 Normalized Advantage Function pseudocode_en.vtt 6KB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/005 Deep Deterministic Policy Gradient (DDPG)_en.vtt 6KB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/003 The Markov decision process (MDP)_en.vtt 6KB
~Get Your Files Here !/06 - PyTorch Lightning/006 Create the replay buffer_en.vtt 6KB
~Get Your Files Here !/09 - Refresher Policy gradient methods/003 Representing policies using neural networks_en.vtt 5KB
~Get Your Files Here !/07 - Hyperparameter tuning with Optuna/004 Define the objective function_en.vtt 5KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/011 Creating the policy_en.vtt 5KB
~Get Your Files Here !/13 - Hindsight Experience Replay/002 Implement Hindsight Experience Replay (HER) - Part 1_en.vtt 5KB
~Get Your Files Here !/06 - PyTorch Lightning/004 Create the Deep Q-Network_en.vtt 5KB
~Get Your Files Here !/06 - PyTorch Lightning/005 Create the policy_en.vtt 5KB
~Get Your Files Here !/06 - PyTorch Lightning/009 Define the play_episode() function_en.vtt 5KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/002 The advantage function_en.vtt 5KB
~Get Your Files Here !/07 - Hyperparameter tuning with Optuna/003 Log average return_en.vtt 5KB
~Get Your Files Here !/09 - Refresher Policy gradient methods/001 Policy gradient methods_en.vtt 5KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/012 Create the environment_en.vtt 5KB
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/007 Neural Network optimization_en.vtt 4KB
~Get Your Files Here !/13 - Hindsight Experience Replay/001 Hindsight Experience Replay (HER)_en.vtt 4KB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/007 Create the Deep Q-Network_en.vtt 4KB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/002 TD3 pseudocode_en.vtt 4KB
~Get Your Files Here !/06 - PyTorch Lightning/010 Prepare the data loader and the optimizer_en.vtt 4KB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/007 Target policy smoothing_en.vtt 4KB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/007 Discount factor_en.vtt 4KB
~Get Your Files Here !/06 - PyTorch Lightning/012 Define the train_epoch_end() method_en.vtt 4KB
~Get Your Files Here !/05 - Refresher Deep Q-Learning/004 Target Network_en.vtt 4KB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/003 DDPG pseudocode_en.vtt 4KB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/005 Clipped double Q-Learning_en.vtt 4KB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/012 Launch the training process_en.vtt 4KB
~Get Your Files Here !/09 - Refresher Policy gradient methods/004 The policy gradient theorem_en.vtt 4KB
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/003 Artificial Neural Networks_en.vtt 4KB
~Get Your Files Here !/03 - Refresher Q-Learning/003 Solving control tasks with temporal difference methods_en.vtt 4KB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/004 Create the Deep Q-Network_en.vtt 4KB
~Get Your Files Here !/03 - Refresher Q-Learning/002 Temporal difference methods_en.vtt 3KB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/001 The Brax Physics engine_en.vtt 3KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/003 Normalized Advantage Function (NAF)_en.vtt 3KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/008 Creating the (NAF) Deep Q-Network 2_en.vtt 3KB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/004 Twin Delayed DDPG (TD3)_en.vtt 3KB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/010 Setup the optimizers and dataloader_en.vtt 3KB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/011 Solving a Markov decision process_en.vtt 3KB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/010 Bellman equations_en.vtt 3KB
~Get Your Files Here !/13 - Hindsight Experience Replay/003 Implement Hindsight Experience Replay (HER) - Part 2_en.vtt 3KB
~Get Your Files Here !/05 - Refresher Deep Q-Learning/002 Deep Q-Learning_en.vtt 3KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/018 Debugging and launching the algorithm_en.vtt 3KB
~Get Your Files Here !/06 - PyTorch Lightning/015 Explore the resulting agent_en.vtt 3KB
~Get Your Files Here !/07 - Hyperparameter tuning with Optuna/005 Create and launch the hyperparameter tuning job_en.vtt 3KB
~Get Your Files Here !/07 - Hyperparameter tuning with Optuna/006 Explore the best trial_en.vtt 3KB
~Get Your Files Here !/09 - Refresher Policy gradient methods/002 Policy performance_en.vtt 3KB
~Get Your Files Here !/03 - Refresher Q-Learning/004 Q-Learning_en.vtt 2KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/016 Implement the training step_en.vtt 2KB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/008 Check the resulting agent_en.vtt 2KB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/004 Types of Markov decision process_en.vtt 2KB
~Get Your Files Here !/05 - Refresher Deep Q-Learning/003 Experience Replay_en.vtt 2KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/017 Implement the end-of-epoch logic_en.vtt 2KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/014 Implementing Polyak averaging_en.vtt 2KB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/008 Policy_en.vtt 2KB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/009 Define the play method_en.vtt 2KB
~Get Your Files Here !/14 - Final steps/001 Next steps_en.vtt 2KB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/008 Check the results_en.vtt 2KB
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/006 Delayed policy updates_en.vtt 2KB
~Get Your Files Here !/12 - Soft Actor-Critic (SAC)/002 SAC pseudocode_en.vtt 2KB
~Get Your Files Here !/01 - Introduction/004 Where to begin_en.vtt 2KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/019 Checking the resulting agent_en.vtt 2KB
~Get Your Files Here !/01 - Introduction/003 Google Colab_en.vtt 2KB
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/013 Check the resulting agent_en.vtt 2KB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/006 Reward vs Return_en.vtt 2KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/006 Hyperbolic tangent_en.vtt 2KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/013 Polyak averaging_en.vtt 1KB
~Get Your Files Here !/03 - Refresher Q-Learning/005 Advantages of temporal difference methods_en.vtt 1KB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/009 State values v(s) and action values q(s,a)_en.vtt 1KB
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/009 Creating the (NAF) Deep Q-Network 3_en.vtt 1KB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/005 Trajectory vs episode_en.vtt 1KB
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/001 Module Overview_en.vtt 1KB
~Get Your Files Here !/13 - Hindsight Experience Replay/005 Check the results_en.vtt 1003B
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/001 Module overview_en.vtt 739B
~Get Your Files Here !/03 - Refresher Q-Learning/001 Module overview_en.vtt 720B
~Get Your Files Here !/06 - PyTorch Lightning/013 [Important] Lecture correction.html 613B
~Get Your Files Here !/05 - Refresher Deep Q-Learning/001 Module overview_en.vtt 551B
~Get Your Files Here !/01 - Introduction/002 Reinforcement Learning series.html 491B
~Get Your Files Here !/14 - Final steps/002 Next steps.html 480B
~Get Your Files Here !/06 - PyTorch Lightning/002 Link to the code notebook.html 280B
~Get Your Files Here !/07 - Hyperparameter tuning with Optuna/002 Link to the code notebook.html 280B
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/005 Link to the code notebook.html 280B
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/004 Link to the code notebook.html 280B
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/003 Link to code notebook.html 280B
~Get Your Files Here !/10 - Deep Deterministic Policy Gradient (DDPG)/external-assets-links.txt 153B
~Get Your Files Here !/08 - Deep Q-Learning for continuous action spaces (Normalized Advantage Function)/external-assets-links.txt 148B
~Get Your Files Here !/01 - Introduction/external-assets-links.txt 144B
~Get Your Files Here !/02 - Refresher The Markov Decision Process (MDP)/external-assets-links.txt 144B
~Get Your Files Here !/03 - Refresher Q-Learning/external-assets-links.txt 144B
~Get Your Files Here !/04 - Refresher Brief introduction to Neural Networks/external-assets-links.txt 144B
~Get Your Files Here !/05 - Refresher Deep Q-Learning/external-assets-links.txt 144B
~Get Your Files Here !/06 - PyTorch Lightning/external-assets-links.txt 140B
~Get Your Files Here !/07 - Hyperparameter tuning with Optuna/external-assets-links.txt 140B
~Get Your Files Here !/11 - Twin Delayed DDPG (TD3)/external-assets-links.txt 136B