
Udemy - Advanced Reinforcement Learning in Python - cutting-edge DQNs

  • Indexed: 2022-04-09 17:19:23
  • File size: 2GB
  • Downloads: 1
  • Last downloaded: 2022-04-09 17:19:23
  • Magnet link:

File list

  1. ~Get Your Files Here !/10. Prioritized Experience Replay/3. DQN for visual inputs.mp4 69MB
  2. ~Get Your Files Here !/10. Prioritized Experience Replay/4. Prioritized Experience Repay Buffer.mp4 64MB
  3. ~Get Your Files Here !/10. Prioritized Experience Replay/6. Implement the Deep Q-Learning algorithm with Prioritized Experience Replay.mp4 63MB
  4. ~Get Your Files Here !/10. Prioritized Experience Replay/5. Create the environment.mp4 63MB
  5. ~Get Your Files Here !/6. PyTorch Lightning/8. Define the class for the Deep Q-Learning algorithm.mp4 55MB
  6. ~Get Your Files Here !/9. Dueling Deep Q-Networks/3. Create the dueling DQN.mp4 54MB
  7. ~Get Your Files Here !/8. Double Deep Q-Learning/3. Create the Double Deep Q-Learning algorithm.mp4 50MB
  8. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/6. Stochastic Gradient Descent.mp4 50MB
  9. ~Get Your Files Here !/6. PyTorch Lightning/11. Define the train_step() method.mp4 50MB
  10. ~Get Your Files Here !/10. Prioritized Experience Replay/7. Launch the training process.mp4 43MB
  11. ~Get Your Files Here !/9. Dueling Deep Q-Networks/4. Create the environment - Part 1.mp4 41MB
  12. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/2. Elements common to all control tasks.mp4 39MB
  13. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/5. How to represent a Neural Network.mp4 38MB
  14. ~Get Your Files Here !/9. Dueling Deep Q-Networks/5. Create the environment - Part 2.mp4 37MB
  15. ~Get Your Files Here !/9. Dueling Deep Q-Networks/6. Implement Deep Q-Learning.mp4 36MB
  16. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/2. Function approximators.mp4 36MB
  17. ~Get Your Files Here !/6. PyTorch Lightning/13. Train the Deep Q-Learning algorithm.mp4 35MB
  18. ~Get Your Files Here !/7. Hyperparameter tuning with Optuna/3. Log average return.mp4 34MB
  19. ~Get Your Files Here !/7. Hyperparameter tuning with Optuna/1. Hyperparameter tuning with Optuna.mp4 32MB
  20. ~Get Your Files Here !/1. Introduction/1. Introduction.mp4 32MB
  21. ~Get Your Files Here !/6. PyTorch Lightning/7. Create the environment.mp4 32MB
  22. ~Get Your Files Here !/6. PyTorch Lightning/12. Define the train_epoch_end() method.mp4 32MB
  23. ~Get Your Files Here !/6. PyTorch Lightning/1. PyTorch Lightning.mp4 32MB
  24. ~Get Your Files Here !/6. PyTorch Lightning/3. Introduction to PyTorch Lightning.mp4 31MB
  25. ~Get Your Files Here !/6. PyTorch Lightning/10. Prepare the data loader and the optimizer.mp4 30MB
  26. ~Get Your Files Here !/7. Hyperparameter tuning with Optuna/4. Define the objective function.mp4 30MB
  27. ~Get Your Files Here !/6. PyTorch Lightning/9. Define the play_episode() function.mp4 29MB
  28. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/4. Artificial Neurons.mp4 26MB
  29. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/3. The Markov decision process (MDP).mp4 25MB
  30. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/3. Artificial Neural Networks.mp4 24MB
  31. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/7. Neural Network optimization.mp4 23MB
  32. ~Get Your Files Here !/6. PyTorch Lightning/6. Create the replay buffer.mp4 23MB
  33. ~Get Your Files Here !/6. PyTorch Lightning/4. Create the Deep Q-Network.mp4 23MB
  34. ~Get Your Files Here !/9. Dueling Deep Q-Networks/7. Check the resulting agent.mp4 21MB
  35. ~Get Your Files Here !/6. PyTorch Lightning/14. Explore the resulting agent.mp4 20MB
  36. ~Get Your Files Here !/7. Hyperparameter tuning with Optuna/6. Explore the best trial.mp4 19MB
  37. ~Get Your Files Here !/7. Hyperparameter tuning with Optuna/5. Create and launch the hyperparameter tuning job.mp4 19MB
  38. ~Get Your Files Here !/6. PyTorch Lightning/5. Create the policy.mp4 18MB
  39. ~Get Your Files Here !/10. Prioritized Experience Replay/8. Check the resulting agent.mp4 17MB
  40. ~Get Your Files Here !/5. Refresher Deep Q-Learning/4. Target Network.mp4 17MB
  41. ~Get Your Files Here !/5. Refresher Deep Q-Learning/2. Deep Q-Learning.mp4 16MB
  42. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/7. Discount factor.mp4 15MB
  43. ~Get Your Files Here !/3. Refresher Q-Learning/3. Solving control tasks with temporal difference method.mp4 15MB
  44. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/11. Solving a Markov decision process.mp4 14MB
  45. ~Get Your Files Here !/8. Double Deep Q-Learning/1. Maximization bias and Double Deep Q-Learning.mp4 14MB
  46. ~Get Your Files Here !/3. Refresher Q-Learning/2. Temporal difference methods.mp4 13MB
  47. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/10. Bellman equations.mp4 12MB
  48. ~Get Your Files Here !/3. Refresher Q-Learning/4. Q-Learning.mp4 11MB
  49. ~Get Your Files Here !/8. Double Deep Q-Learning/4. Check the resulting agent.mp4 9MB
  50. ~Get Your Files Here !/5. Refresher Deep Q-Learning/3. Experience replay.mp4 9MB
  51. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/4. Types of Markov decision process.mp4 9MB
  52. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/8. Policy.mp4 7MB
  53. ~Get Your Files Here !/1. Introduction/3. Google Colab.mp4 6MB
  54. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/6. Reward vs Return.mp4 5MB
  55. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/5. Trajectory vs episode.mp4 5MB
  56. ~Get Your Files Here !/1. Introduction/4. Where to begin.mp4 5MB
  57. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/9. State values v(s) and action values q(s,a).mp4 4MB
  58. ~Get Your Files Here !/3. Refresher Q-Learning/5. Advantages of temporal difference methods.mp4 4MB
  59. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/1. Module overview.mp4 3MB
  60. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/1. Module overview.mp4 2MB
  61. ~Get Your Files Here !/3. Refresher Q-Learning/1. Module overview.mp4 1MB
  62. ~Get Your Files Here !/5. Refresher Deep Q-Learning/1. Module overview.mp4 1MB
  63. ~Get Your Files Here !/1. Introduction/1. Introduction.mp4.jpg 175KB
  64. ~Get Your Files Here !/10. Prioritized Experience Replay/3. DQN for visual inputs.srt 15KB
  65. ~Get Your Files Here !/10. Prioritized Experience Replay/4. Prioritized Experience Repay Buffer.srt 15KB
  66. ~Get Your Files Here !/10. Prioritized Experience Replay/5. Create the environment.srt 14KB
  67. ~Get Your Files Here !/6. PyTorch Lightning/8. Define the class for the Deep Q-Learning algorithm.srt 14KB
  68. ~Get Your Files Here !/10. Prioritized Experience Replay/6. Implement the Deep Q-Learning algorithm with Prioritized Experience Replay.srt 13KB
  69. ~Get Your Files Here !/9. Dueling Deep Q-Networks/3. Create the dueling DQN.srt 12KB
  70. ~Get Your Files Here !/7. Hyperparameter tuning with Optuna/1. Hyperparameter tuning with Optuna.srt 11KB
  71. ~Get Your Files Here !/6. PyTorch Lightning/11. Define the train_step() method.srt 11KB
  72. ~Get Your Files Here !/6. PyTorch Lightning/1. PyTorch Lightning.srt 10KB
  73. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/2. Function approximators.srt 10KB
  74. ~Get Your Files Here !/9. Dueling Deep Q-Networks/4. Create the environment - Part 1.srt 9KB
  75. ~Get Your Files Here !/6. PyTorch Lightning/7. Create the environment.srt 9KB
  76. ~Get Your Files Here !/8. Double Deep Q-Learning/3. Create the Double Deep Q-Learning algorithm.srt 8KB
  77. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/5. How to represent a Neural Network.srt 8KB
  78. ~Get Your Files Here !/6. PyTorch Lightning/13. Train the Deep Q-Learning algorithm.srt 8KB
  79. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/6. Stochastic Gradient Descent.srt 7KB
  80. ~Get Your Files Here !/6. PyTorch Lightning/3. Introduction to PyTorch Lightning.srt 7KB
  81. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/2. Elements common to all control tasks.srt 7KB
  82. ~Get Your Files Here !/9. Dueling Deep Q-Networks/6. Implement Deep Q-Learning.srt 7KB
  83. ~Get Your Files Here !/9. Dueling Deep Q-Networks/5. Create the environment - Part 2.srt 7KB
  84. ~Get Your Files Here !/6. PyTorch Lightning/6. Create the replay buffer.srt 7KB
  85. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/4. Artificial Neurons.srt 7KB
  86. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/3. The Markov decision process (MDP).srt 6KB
  87. ~Get Your Files Here !/7. Hyperparameter tuning with Optuna/4. Define the objective function.srt 6KB
  88. ~Get Your Files Here !/6. PyTorch Lightning/4. Create the Deep Q-Network.srt 6KB
  89. ~Get Your Files Here !/10. Prioritized Experience Replay/7. Launch the training process.srt 6KB
  90. ~Get Your Files Here !/6. PyTorch Lightning/5. Create the policy.srt 6KB
  91. ~Get Your Files Here !/7. Hyperparameter tuning with Optuna/3. Log average return.srt 6KB
  92. ~Get Your Files Here !/6. PyTorch Lightning/9. Define the play_episode() function.srt 5KB
  93. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/7. Neural Network optimization.srt 5KB
  94. ~Get Your Files Here !/6. PyTorch Lightning/10. Prepare the data loader and the optimizer.srt 5KB
  95. ~Get Your Files Here !/6. PyTorch Lightning/12. Define the train_epoch_end() method.srt 5KB
  96. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/7. Discount factor.srt 5KB
  97. ~Get Your Files Here !/5. Refresher Deep Q-Learning/4. Target Network.srt 5KB
  98. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/3. Artificial Neural Networks.srt 4KB
  99. ~Get Your Files Here !/3. Refresher Q-Learning/2. Temporal difference methods.srt 4KB
  100. ~Get Your Files Here !/3. Refresher Q-Learning/3. Solving control tasks with temporal difference method.srt 4KB
  101. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/11. Solving a Markov decision process.srt 4KB
  102. ~Get Your Files Here !/6. PyTorch Lightning/14. Explore the resulting agent.srt 4KB
  103. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/10. Bellman equations.srt 3KB
  104. ~Get Your Files Here !/5. Refresher Deep Q-Learning/2. Deep Q-Learning.srt 3KB
  105. ~Get Your Files Here !/7. Hyperparameter tuning with Optuna/5. Create and launch the hyperparameter tuning job.srt 3KB
  106. ~Get Your Files Here !/7. Hyperparameter tuning with Optuna/6. Explore the best trial.srt 3KB
  107. ~Get Your Files Here !/3. Refresher Q-Learning/4. Q-Learning.srt 3KB
  108. ~Get Your Files Here !/9. Dueling Deep Q-Networks/7. Check the resulting agent.srt 3KB
  109. ~Get Your Files Here !/5. Refresher Deep Q-Learning/3. Experience replay.srt 3KB
  110. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/4. Types of Markov decision process.srt 2KB
  111. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/8. Policy.srt 2KB
  112. ~Get Your Files Here !/1. Introduction/4. Where to begin.srt 2KB
  113. ~Get Your Files Here !/1. Introduction/3. Google Colab.srt 2KB
  114. ~Get Your Files Here !/10. Prioritized Experience Replay/8. Check the resulting agent.srt 2KB
  115. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/6. Reward vs Return.srt 2KB
  116. ~Get Your Files Here !/8. Double Deep Q-Learning/4. Check the resulting agent.srt 2KB
  117. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/9. State values v(s) and action values q(s,a).srt 1KB
  118. ~Get Your Files Here !/3. Refresher Q-Learning/5. Advantages of temporal difference methods.srt 1KB
  119. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/5. Trajectory vs episode.srt 1KB
  120. ~Get Your Files Here !/2. Refresher The Markov Decision Process (MDP)/1. Module overview.srt 1KB
  121. ~Get Your Files Here !/4. Refresher Brief introduction to Neural Networks/1. Module overview.srt 850B
  122. ~Get Your Files Here !/3. Refresher Q-Learning/1. Module overview.srt 798B
  123. ~Get Your Files Here !/5. Refresher Deep Q-Learning/1. Module overview.srt 602B
  124. ~Get Your Files Here !/Bonus Resources.txt 386B
  125. ~Get Your Files Here !/1. Introduction/2. Reinforcement Learning series.html 377B
  126. Get Bonus Downloads Here.url 182B
  127. ~Get Your Files Here !/6. PyTorch Lightning/2.1 Google colab.html 176B
  128. ~Get Your Files Here !/7. Hyperparameter tuning with Optuna/2.1 Google colab.html 176B
  129. ~Get Your Files Here !/6. PyTorch Lightning/2. Link to the code notebook.html 169B
  130. ~Get Your Files Here !/7. Hyperparameter tuning with Optuna/2. Link to the code notebook.html 169B
  131. ~Get Your Files Here !/8. Double Deep Q-Learning/2. Link to the code notebook.html 169B
  132. ~Get Your Files Here !/9. Dueling Deep Q-Networks/2.1 Google colab.html 166B
  133. ~Get Your Files Here !/8. Double Deep Q-Learning/2.1 Google colab.html 165B
  134. ~Get Your Files Here !/9. Dueling Deep Q-Networks/2. Link to the code notebook.html 159B
  135. ~Get Your Files Here !/1. Introduction/1.1 Advanced Reinforcement Learning in Python from DQN to SAC.html 147B
  136. ~Get Your Files Here !/1. Introduction/1.2 Reinforcement Learning beginner to master.html 145B
  137. ~Get Your Files Here !/10. Prioritized Experience Replay/1. Prioritized Experience Replay.html 79B
  138. ~Get Your Files Here !/10. Prioritized Experience Replay/2. Link to the code notebook.html 79B
  139. ~Get Your Files Here !/11. Noisy Deep Q-Networks/1. Noisy Deep Q-Networks.html 79B
  140. ~Get Your Files Here !/12. N-step Deep Q-Learning/1. N-step Deep Q-Learning.html 79B
  141. ~Get Your Files Here !/13. Distributional Deep Q-Networks/1. Distributional Deep Q-Networks.html 79B
  142. ~Get Your Files Here !/9. Dueling Deep Q-Networks/1. Dueling Deep Q-Networks.html 79B