
Udemy - Artificial Intelligence Reinforcement Learning in Python

  • Indexed: 2021-07-26 08:28:35
  • Size: 3GB
  • Downloads: 1
  • Last downloaded: 2021-07-26 08:28:35
  • Magnet link:

File list

  1. 10/1. Windows-Focused Environment Setup 2018.mp4 186MB
  2. 4. Markov Decision Proccesses/11. Bellman Examples.mp4 87MB
  3. 11/3. Proof that using Jupyter Notebook is the same as not using it.mp4 78MB
  4. 2. Return of the Multi-Armed Bandit/16. Bayesian Bandits Thompson Sampling Theory (pt 2).mp4 75MB
  5. 5. Dynamic Programming/4. Iterative Policy Evaluation in Code.mp4 68MB
  6. 9. Stock Trading Project with Reinforcement Learning/6. Code pt 2.mp4 65MB
  7. 1. Welcome/5. Warmup.mp4 63MB
  8. 4. Markov Decision Proccesses/5. Markov Decision Processes (MDPs).mp4 62MB
  9. 5. Dynamic Programming/9. Policy Iteration in Code.mp4 56MB
  10. 4. Markov Decision Proccesses/12. Optimal Policy and Optimal Value Function (pt 1).mp4 56MB
  11. 2. Return of the Multi-Armed Bandit/15. Bayesian Bandits Thompson Sampling Theory (pt 1).mp4 56MB
  12. 2. Return of the Multi-Armed Bandit/12. UCB1 Theory.mp4 56MB
  13. 3. High Level Overview of Reinforcement Learning/1. What is Reinforcement Learning.mp4 55MB
  14. 4. Markov Decision Proccesses/2. Gridworld.mp4 54MB
  15. 9. Stock Trading Project with Reinforcement Learning/2. Data and Environment.mp4 52MB
  16. 2. Return of the Multi-Armed Bandit/1. Section Introduction The Explore-Exploit Dilemma.mp4 52MB
  17. 5. Dynamic Programming/10. Policy Iteration in Windy Gridworld.mp4 51MB
  18. 2. Return of the Multi-Armed Bandit/2. Applications of the Explore-Exploit Dilemma.mp4 51MB
  19. 2. Return of the Multi-Armed Bandit/24. (Optional) Alternative Bandit Designs.mp4 50MB
  20. 9. Stock Trading Project with Reinforcement Learning/5. Code pt 1.mp4 50MB
  21. 9. Stock Trading Project with Reinforcement Learning/8. Code pt 4.mp4 49MB
  22. 2. Return of the Multi-Armed Bandit/19. Thompson Sampling With Gaussian Reward Theory.mp4 49MB
  23. 5. Dynamic Programming/6. Iterative Policy Evaluation for Windy Gridworld in Code.mp4 47MB
  24. 5. Dynamic Programming/3. Gridworld in Code.mp4 47MB
  25. 5. Dynamic Programming/12. Value Iteration in Code.mp4 46MB
  26. 9. Stock Trading Project with Reinforcement Learning/3. How to Model Q for Q-Learning.mp4 45MB
  27. 10/2. How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow.mp4 44MB
  28. 2. Return of the Multi-Armed Bandit/8. Comparing Different Epsilons.mp4 44MB
  29. 2. Return of the Multi-Armed Bandit/20. Thompson Sampling With Gaussian Reward Code.mp4 43MB
  30. 5. Dynamic Programming/5. Windy Gridworld in Code.mp4 41MB
  31. 2. Return of the Multi-Armed Bandit/7. Epsilon-Greedy in Code.mp4 41MB
  32. 3. High Level Overview of Reinforcement Learning/3. From Bandits to Full Reinforcement Learning.mp4 41MB
  33. 1. Welcome/2. Course Outline and Big Picture.mp4 40MB
  34. 4. Markov Decision Proccesses/6. Future Rewards.mp4 40MB
  35. 12/2. Is this for Beginners or Experts Academic or Practical Fast or slow-paced.srt 39MB
  36. 12/2. Is this for Beginners or Experts Academic or Practical Fast or slow-paced.mp4 39MB
  37. 13. Appendix FAQ/2. BONUS Where to get discount coupons and FREE deep learning material.mp4 38MB
  38. 13. Appendix FAQ Finale/2. BONUS Where to get discount coupons and FREE deep learning material.mp4 38MB
  39. 12/4. Machine Learning and AI Prerequisite Roadmap (pt 2).mp4 38MB
  40. 4. Markov Decision Proccesses/1. MDP Section Introduction.mp4 37MB
  41. 3. High Level Overview of Reinforcement Learning/2. On Unusual or Unexpected Strategies of RL.mp4 37MB
  42. 2. Return of the Multi-Armed Bandit/23. Bandit Summary, Real Data, and Online Learning.mp4 35MB
  43. 1. Welcome/1. Introduction.mp4 34MB
  44. 9. Stock Trading Project with Reinforcement Learning/7. Code pt 3.mp4 34MB
  45. 2. Return of the Multi-Armed Bandit/18. Thompson Sampling Code.mp4 33MB
  46. 4. Markov Decision Proccesses/3. Choosing Rewards.mp4 32MB
  47. 2. Return of the Multi-Armed Bandit/22. Nonstationary Bandits.mp4 31MB
  48. 12/3. Machine Learning and AI Prerequisite Roadmap (pt 1).mp4 29MB
  49. 2. Return of the Multi-Armed Bandit/5. Epsilon-Greedy Beginner's Exercise Prompt.mp4 29MB
  50. 2. Return of the Multi-Armed Bandit/3. Epsilon-Greedy Theory.mp4 28MB
  51. 4. Markov Decision Proccesses/8. The Bellman Equation (pt 1).mp4 28MB
  52. 2. Return of the Multi-Armed Bandit/21. Why don't we just use a library.mp4 27MB
  53. 9. Stock Trading Project with Reinforcement Learning/1. Stock Trading Project Section Introduction.mp4 27MB
  54. 4. Markov Decision Proccesses/9. The Bellman Equation (pt 2).mp4 27MB
  55. 4. Markov Decision Proccesses/10. The Bellman Equation (pt 3).mp4 25MB
  56. 2. Return of the Multi-Armed Bandit/11. Optimistic Initial Values Code.mp4 25MB
  57. 11/1. How to Code by Yourself (part 1).mp4 25MB
  58. 2. Return of the Multi-Armed Bandit/6. Designing Your Bandit Program.mp4 25MB
  59. 2. Return of the Multi-Armed Bandit/9. Optimistic Initial Values Theory.mp4 24MB
  60. 9. Stock Trading Project with Reinforcement Learning/4. Design of the Program.mp4 23MB
  61. 2. Return of the Multi-Armed Bandit/4. Calculating a Sample Mean (pt 1).mp4 23MB
  62. 1. Welcome/3. Where to get the Code.mp4 23MB
  63. 5. Dynamic Programming/2. Designing Your RL Program.mp4 22MB
  64. 4. Markov Decision Proccesses/4. The Markov Property.mp4 22MB
  65. 2. Return of the Multi-Armed Bandit/14. UCB1 Code.mp4 21MB
  66. 4. Markov Decision Proccesses/7. Value Functions.srt 19MB
  67. 4. Markov Decision Proccesses/7. Value Functions.mp4 19MB
  68. 12/1. How to Succeed in this Course (Long Version).mp4 18MB
  69. 2. Return of the Multi-Armed Bandit/17. Thompson Sampling Beginner's Exercise Prompt.mp4 18MB
  70. 2. Return of the Multi-Armed Bandit/25. Suggestion Box.mp4 16MB
  71. 9. Stock Trading Project with Reinforcement Learning/9. Stock Trading Project Discussion.mp4 16MB
  72. 4. Markov Decision Proccesses/13. Optimal Policy and Optimal Value Function (pt 2).mp4 16MB
  73. 1. Welcome/4. How to Succeed in this Course.mp4 16MB
  74. 11/2. How to Code by Yourself (part 2).mp4 15MB
  75. 4. Markov Decision Proccesses/14. MDP Summary.mp4 14MB
  76. 2. Return of the Multi-Armed Bandit/10. Optimistic Initial Values Beginner's Exercise Prompt.mp4 14MB
  77. 8. Approximation Methods/9. Course Summary and Next Steps.mp4 13MB
  78. 2. Return of the Multi-Armed Bandit/13. UCB1 Beginner's Exercise Prompt.mp4 13MB
  79. 8. Approximation Methods/8. Semi-Gradient SARSA in Code.mp4 11MB
  80. 6. Monte Carlo/6. Monte Carlo Control in Code.mp4 10MB
  81. 6. Monte Carlo/5. Monte Carlo Control.mp4 9MB
  82. 7. Temporal Difference Learning/5. SARSA in Code.mp4 9MB
  83. 6. Monte Carlo/2. Monte Carlo Policy Evaluation.mp4 9MB
  84. 8. Approximation Methods/6. TD(0) Semi-Gradient Prediction.mp4 8MB
  85. 5. Dynamic Programming/13. Dynamic Programming Summary.mp4 8MB
  86. 7. Temporal Difference Learning/4. SARSA.mp4 8MB
  87. 6. Monte Carlo/8. Monte Carlo Control without Exploring Starts in Code.mp4 8MB
  88. 6. Monte Carlo/3. Monte Carlo Policy Evaluation in Code.mp4 8MB
  89. 11/4. Python 2 vs Python 3.mp4 8MB
  90. 6. Monte Carlo/4. Policy Evaluation in Windy Gridworld.mp4 8MB
  91. 8. Approximation Methods/5. Monte Carlo Prediction with Approximation in Code.mp4 7MB
  92. 8. Approximation Methods/2. Linear Models for Reinforcement Learning.mp4 6MB
  93. 8. Approximation Methods/1. Approximation Intro.mp4 6MB
  94. 8. Approximation Methods/3. Features.mp4 6MB
  95. 5. Dynamic Programming/11. Value Iteration.mp4 6MB
  96. 7. Temporal Difference Learning/2. TD(0) Prediction.mp4 6MB
  97. 6. Monte Carlo/9. Monte Carlo Summary.mp4 6MB
  98. 13. Appendix FAQ Finale/1. What is the Appendix.mp4 5MB
  99. 13. Appendix FAQ/1. What is the Appendix.mp4 5MB
  100. 7. Temporal Difference Learning/7. Q Learning in Code.mp4 5MB
  101. 7. Temporal Difference Learning/3. TD(0) Prediction in Code.mp4 5MB
  102. 6. Monte Carlo/1. Monte Carlo Intro.mp4 5MB
  103. 5. Dynamic Programming/1. Intro to Dynamic Programming and Iterative Policy Evaluation.mp4 5MB
  104. 7. Temporal Difference Learning/6. Q Learning.mp4 5MB
  105. 8. Approximation Methods/7. Semi-Gradient SARSA.mp4 5MB
  106. 6. Monte Carlo/7. Monte Carlo Control without Exploring Starts.mp4 5MB
  107. 5. Dynamic Programming/7. Policy Improvement.mp4 5MB
  108. 7. Temporal Difference Learning/8. TD Summary.mp4 4MB
  109. 5. Dynamic Programming/8. Policy Iteration.mp4 3MB
  110. 8. Approximation Methods/4. Monte Carlo Prediction with Approximation.mp4 3MB
  111. 7. Temporal Difference Learning/1. Temporal Difference Intro.mp4 3MB
  112. 11/1. How to Code by Yourself (part 1).srt 30KB
  113. 4. Markov Decision Proccesses/11. Bellman Examples.srt 29KB
  114. 2. Return of the Multi-Armed Bandit/16. Bayesian Bandits Thompson Sampling Theory (pt 2).srt 26KB
  115. 12/4. Machine Learning and AI Prerequisite Roadmap (pt 2).srt 23KB
  116. 2. Return of the Multi-Armed Bandit/12. UCB1 Theory.srt 22KB
  117. 4. Markov Decision Proccesses/5. Markov Decision Processes (MDPs).srt 22KB
  118. 10/1. Windows-Focused Environment Setup 2018.srt 20KB
  119. 1. Welcome/5. Warmup.srt 20KB
  120. 4. Markov Decision Proccesses/2. Gridworld.srt 19KB
  121. 11/2. How to Code by Yourself (part 2).srt 18KB
  122. 2. Return of the Multi-Armed Bandit/15. Bayesian Bandits Thompson Sampling Theory (pt 1).srt 18KB
  123. 10/2. How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow.srt 18KB
  124. 5. Dynamic Programming/4. Iterative Policy Evaluation in Code.srt 18KB
  125. 5. Dynamic Programming/3. Gridworld in Code.srt 18KB
  126. 9. Stock Trading Project with Reinforcement Learning/2. Data and Environment.srt 17KB
  127. 2. Return of the Multi-Armed Bandit/19. Thompson Sampling With Gaussian Reward Theory.srt 17KB
  128. 12/3. Machine Learning and AI Prerequisite Roadmap (pt 1).srt 16KB
  129. 8. Approximation Methods/9. Course Summary and Next Steps.srt 16KB
  130. 2. Return of the Multi-Armed Bandit/24. (Optional) Alternative Bandit Designs.srt 15KB
  131. 2. Return of the Multi-Armed Bandit/1. Section Introduction The Explore-Exploit Dilemma.srt 15KB
  132. 12/1. How to Succeed in this Course (Long Version).srt 15KB
  133. 4. Markov Decision Proccesses/6. Future Rewards.srt 14KB
  134. 11/3. Proof that using Jupyter Notebook is the same as not using it.srt 14KB
  135. 3. High Level Overview of Reinforcement Learning/3. From Bandits to Full Reinforcement Learning.srt 13KB
  136. 9. Stock Trading Project with Reinforcement Learning/3. How to Model Q for Q-Learning.srt 13KB
  137. 9. Stock Trading Project with Reinforcement Learning/6. Code pt 2.srt 13KB
  138. 4. Markov Decision Proccesses/12. Optimal Policy and Optimal Value Function (pt 1).srt 13KB
  139. 5. Dynamic Programming/10. Policy Iteration in Windy Gridworld.srt 12KB
  140. 4. Markov Decision Proccesses/8. The Bellman Equation (pt 1).srt 12KB
  141. 5. Dynamic Programming/9. Policy Iteration in Code.srt 12KB
  142. 3. High Level Overview of Reinforcement Learning/1. What is Reinforcement Learning.srt 12KB
  143. 2. Return of the Multi-Armed Bandit/2. Applications of the Explore-Exploit Dilemma.srt 12KB
  144. 1. Welcome/2. Course Outline and Big Picture.srt 11KB
  145. 5. Dynamic Programming/5. Windy Gridworld in Code.srt 11KB
  146. 5. Dynamic Programming/6. Iterative Policy Evaluation for Windy Gridworld in Code.srt 11KB
  147. 6. Monte Carlo/2. Monte Carlo Policy Evaluation.srt 11KB
  148. 2. Return of the Multi-Armed Bandit/3. Epsilon-Greedy Theory.srt 10KB
  149. 9. Stock Trading Project with Reinforcement Learning/5. Code pt 1.srt 10KB
  150. 6. Monte Carlo/5. Monte Carlo Control.srt 10KB
  151. 2. Return of the Multi-Armed Bandit/22. Nonstationary Bandits.srt 10KB
  152. 2. Return of the Multi-Armed Bandit/23. Bandit Summary, Real Data, and Online Learning.srt 10KB
  153. 5. Dynamic Programming/12. Value Iteration in Code.srt 10KB
  154. 7. Temporal Difference Learning/4. SARSA.srt 10KB
  155. 4. Markov Decision Proccesses/9. The Bellman Equation (pt 2).srt 9KB
  156. 5. Dynamic Programming/13. Dynamic Programming Summary.srt 9KB
  157. 2. Return of the Multi-Armed Bandit/7. Epsilon-Greedy in Code.srt 9KB
  158. 4. Markov Decision Proccesses/1. MDP Section Introduction.srt 9KB
  159. 9. Stock Trading Project with Reinforcement Learning/4. Design of the Program.srt 9KB
  160. 4. Markov Decision Proccesses/4. The Markov Property.srt 9KB
  161. 9. Stock Trading Project with Reinforcement Learning/8. Code pt 4.srt 9KB
  162. 4. Markov Decision Proccesses/10. The Bellman Equation (pt 3).srt 9KB
  163. 3. High Level Overview of Reinforcement Learning/2. On Unusual or Unexpected Strategies of RL.srt 9KB
  164. 2. Return of the Multi-Armed Bandit/4. Calculating a Sample Mean (pt 1).srt 8KB
  165. 2. Return of the Multi-Armed Bandit/21. Why don't we just use a library.srt 8KB
  166. 13. Appendix FAQ/2. BONUS Where to get discount coupons and FREE deep learning material.srt 8KB
  167. 2. Return of the Multi-Armed Bandit/20. Thompson Sampling With Gaussian Reward Code.srt 8KB
  168. 8. Approximation Methods/1. Approximation Intro.srt 8KB
  169. 2. Return of the Multi-Armed Bandit/9. Optimistic Initial Values Theory.srt 8KB
  170. 13. Appendix FAQ Finale/2. BONUS Where to get discount coupons and FREE deep learning material.srt 8KB
  171. 8. Approximation Methods/2. Linear Models for Reinforcement Learning.srt 7KB
  172. 9. Stock Trading Project with Reinforcement Learning/1. Stock Trading Project Section Introduction.srt 7KB
  173. 2. Return of the Multi-Armed Bandit/5. Epsilon-Greedy Beginner's Exercise Prompt.srt 7KB
  174. 6. Monte Carlo/9. Monte Carlo Summary.srt 7KB
  175. 5. Dynamic Programming/2. Designing Your RL Program.srt 7KB
  176. 2. Return of the Multi-Armed Bandit/8. Comparing Different Epsilons.srt 7KB
  177. 5. Dynamic Programming/11. Value Iteration.srt 7KB
  178. 1. Welcome/3. Where to get the Code.srt 7KB
  179. 8. Approximation Methods/3. Features.srt 7KB
  180. 7. Temporal Difference Learning/2. TD(0) Prediction.srt 6KB
  181. 8. Approximation Methods/6. TD(0) Semi-Gradient Prediction.srt 6KB
  182. 2. Return of the Multi-Armed Bandit/18. Thompson Sampling Code.srt 6KB
  183. 6. Monte Carlo/3. Monte Carlo Policy Evaluation in Code.srt 6KB
  184. 11/4. Python 2 vs Python 3.srt 6KB
  185. 2. Return of the Multi-Armed Bandit/6. Designing Your Bandit Program.srt 6KB
  186. 6. Monte Carlo/1. Monte Carlo Intro.srt 6KB
  187. 4. Markov Decision Proccesses/3. Choosing Rewards.srt 6KB
  188. 9. Stock Trading Project with Reinforcement Learning/7. Code pt 3.srt 6KB
  189. 6. Monte Carlo/6. Monte Carlo Control in Code.srt 6KB
  190. 7. Temporal Difference Learning/6. Q Learning.srt 6KB
  191. 2. Return of the Multi-Armed Bandit/11. Optimistic Initial Values Code.srt 6KB
  192. 7. Temporal Difference Learning/5. SARSA in Code.srt 6KB
  193. 6. Monte Carlo/7. Monte Carlo Control without Exploring Starts.srt 6KB
  194. 8. Approximation Methods/7. Semi-Gradient SARSA.srt 5KB
  195. 4. Markov Decision Proccesses/13. Optimal Policy and Optimal Value Function (pt 2).srt 5KB
  196. 8. Approximation Methods/8. Semi-Gradient SARSA in Code.srt 5KB
  197. 5. Dynamic Programming/1. Intro to Dynamic Programming and Iterative Policy Evaluation.srt 5KB
  198. 6. Monte Carlo/4. Policy Evaluation in Windy Gridworld.srt 5KB
  199. 5. Dynamic Programming/7. Policy Improvement.srt 5KB
  200. 2. Return of the Multi-Armed Bandit/25. Suggestion Box.srt 5KB
  201. 7. Temporal Difference Learning/8. TD Summary.srt 5KB
  202. 9. Stock Trading Project with Reinforcement Learning/9. Stock Trading Project Discussion.srt 5KB
  203. 1. Welcome/1. Introduction.srt 4KB
  204. 1. Welcome/4. How to Succeed in this Course.srt 4KB
  205. 2. Return of the Multi-Armed Bandit/14. UCB1 Code.srt 4KB
  206. 8. Approximation Methods/5. Monte Carlo Prediction with Approximation in Code.srt 4KB
  207. 4. Markov Decision Proccesses/14. MDP Summary.srt 4KB
  208. 7. Temporal Difference Learning/3. TD(0) Prediction in Code.srt 4KB
  209. 13. Appendix FAQ/1. What is the Appendix.srt 4KB
  210. 2. Return of the Multi-Armed Bandit/17. Thompson Sampling Beginner's Exercise Prompt.srt 4KB
  211. 13. Appendix FAQ Finale/1. What is the Appendix.srt 4KB
  212. 6. Monte Carlo/8. Monte Carlo Control without Exploring Starts in Code.srt 4KB
  213. 5. Dynamic Programming/8. Policy Iteration.srt 3KB
  214. 7. Temporal Difference Learning/7. Q Learning in Code.srt 3KB
  215. 7. Temporal Difference Learning/1. Temporal Difference Intro.srt 3KB
  216. 2. Return of the Multi-Armed Bandit/10. Optimistic Initial Values Beginner's Exercise Prompt.srt 3KB
  217. 2. Return of the Multi-Armed Bandit/13. UCB1 Beginner's Exercise Prompt.srt 3KB
  218. 8. Approximation Methods/4. Monte Carlo Prediction with Approximation.srt 2KB
  219. 1. Welcome/[Tutorialsplanet.NET].url 128B
  220. 13. Appendix FAQ Finale/[Tutorialsplanet.NET].url 128B
  221. 2. Return of the Multi-Armed Bandit/[Tutorialsplanet.NET].url 128B
  222. 5. Dynamic Programming/[Tutorialsplanet.NET].url 128B
  223. 8. Approximation Methods/[Tutorialsplanet.NET].url 128B
  224. [Tutorialsplanet.NET].url 128B