589689.xyz

Udemy - Artificial Intelligence Reinforcement Learning in Python

  • Indexed: 2019-06-12 05:03:43
  • File size: 2GB
  • Downloads: 84
  • Last downloaded: 2021-01-20 14:13:02
  • Magnet link:

File List

  1. 10. Appendix/2. Windows-Focused Environment Setup 2018.mp4 186MB
  2. 4. Build an Intelligent Tic-Tac-Toe Agent/4. The Value Function and Your First Reinforcement Learning Algorithm.mp4 104MB
  3. 5. Markov Decision Proccesses/7. Bellman Examples.mp4 87MB
  4. 10. Appendix/8. Proof that using Jupyter Notebook is the same as not using it.mp4 78MB
  5. 2. High Level Overview of Reinforcement Learning and Course Outline/1. What is Reinforcement Learning.mp4 55MB
  6. 3. Return of the Multi-Armed Bandit/9. Bayesian Thompson Sampling.mp4 52MB
  7. 3. Return of the Multi-Armed Bandit/2. Applications of the Explore-Exploit Dilemma.mp4 51MB
  8. 10. Appendix/3. How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow.mp4 44MB
  9. 2. High Level Overview of Reinforcement Learning and Course Outline/1. What is Reinforcement Learning.vtt 43MB
  10. 2. High Level Overview of Reinforcement Learning and Course Outline/4. Defining Some Terms.mp4 42MB
  11. 10. Appendix/7. Is this for Beginners or Experts Academic or Practical Fast or slow-paced.mp4 39MB
  12. 10. Appendix/11. What order should I take your courses in (part 2).mp4 38MB
  13. 2. High Level Overview of Reinforcement Learning and Course Outline/2. On Unusual or Unexpected Strategies of RL.mp4 37MB
  14. 1. Welcome/1. Introduction.mp4 34MB
  15. 2. High Level Overview of Reinforcement Learning and Course Outline/3. Course Outline.mp4 31MB
  16. 10. Appendix/10. What order should I take your courses in (part 1).mp4 29MB
  17. 10. Appendix/4. How to Code by Yourself (part 1).mp4 25MB
  18. 3. Return of the Multi-Armed Bandit/5. Designing Your Bandit Program.mp4 25MB
  19. 6. Dynamic Programming/3. Designing Your RL Program.mp4 22MB
  20. 4. Build an Intelligent Tic-Tac-Toe Agent/12. Tic Tac Toe Exercise.mp4 20MB
  21. 5. Markov Decision Proccesses/5. Value Function Introduction.mp4 20MB
  22. 10. Appendix/6. How to Succeed in this Course (Long Version).mp4 18MB
  23. 10. Appendix/5. How to Code by Yourself (part 2).mp4 15MB
  24. 9. Approximation Methods/9. Course Summary and Next Steps.mp4 13MB
  25. 4. Build an Intelligent Tic-Tac-Toe Agent/2. Components of a Reinforcement Learning System.mp4 13MB
  26. 6. Dynamic Programming/4. Iterative Policy Evaluation in Code.mp4 12MB
  27. 6. Dynamic Programming/2. Gridworld in Code.mp4 11MB
  28. 9. Approximation Methods/8. Semi-Gradient SARSA in Code.mp4 11MB
  29. 3. Return of the Multi-Armed Bandit/10. Thompson Sampling vs. Epsilon-Greedy vs. Optimistic Initial Values vs. UCB1.mp4 11MB
  30. 7. Monte Carlo/6. Monte Carlo Control in Code.mp4 10MB
  31. 4. Build an Intelligent Tic-Tac-Toe Agent/8. Tic Tac Toe Code The Environment.mp4 10MB
  32. 4. Build an Intelligent Tic-Tac-Toe Agent/7. Tic Tac Toe Code Enumerating States Recursively.mp4 10MB
  33. 1. Welcome/3. Strategy for Passing the Course.mp4 9MB
  34. 4. Build an Intelligent Tic-Tac-Toe Agent/10. Tic Tac Toe Code Main Loop and Demo.mp4 9MB
  35. 7. Monte Carlo/5. Monte Carlo Control.mp4 9MB
  36. 6. Dynamic Programming/8. Policy Iteration in Windy Gridworld.mp4 9MB
  37. 4. Build an Intelligent Tic-Tac-Toe Agent/9. Tic Tac Toe Code The Agent.mp4 9MB
  38. 8. Temporal Difference Learning/5. SARSA in Code.mp4 9MB
  39. 7. Monte Carlo/2. Monte Carlo Policy Evaluation.mp4 9MB
  40. 9. Approximation Methods/6. TD(0) Semi-Gradient Prediction.mp4 8MB
  41. 6. Dynamic Programming/11. Dynamic Programming Summary.mp4 8MB
  42. 4. Build an Intelligent Tic-Tac-Toe Agent/11. Tic Tac Toe Summary.mp4 8MB
  43. 5. Markov Decision Proccesses/6. Value Functions.mp4 8MB
  44. 3. Return of the Multi-Armed Bandit/8. UCB1.mp4 8MB
  45. 8. Temporal Difference Learning/4. SARSA.mp4 8MB
  46. 7. Monte Carlo/8. Monte Carlo Control without Exploring Starts in Code.mp4 8MB
  47. 3. Return of the Multi-Armed Bandit/6. Comparing Different Epsilons.mp4 8MB
  48. 7. Monte Carlo/3. Monte Carlo Policy Evaluation in Code.mp4 8MB
  49. 10. Appendix/9. Python 2 vs Python 3.mp4 8MB
  50. 7. Monte Carlo/4. Policy Evaluation in Windy Gridworld.mp4 8MB
  51. 6. Dynamic Programming/7. Policy Iteration in Code.mp4 8MB
  52. 3. Return of the Multi-Armed Bandit/11. Nonstationary Bandits.mp4 7MB
  53. 5. Markov Decision Proccesses/2. The Markov Property.mp4 7MB
  54. 5. Markov Decision Proccesses/3. Defining and Formalizing the MDP.mp4 7MB
  55. 9. Approximation Methods/5. Monte Carlo Prediction with Approximation in Code.mp4 7MB
  56. 3. Return of the Multi-Armed Bandit/1. Problem Setup and The Explore-Exploit Dilemma.mp4 6MB
  57. 9. Approximation Methods/2. Linear Models for Reinforcement Learning.mp4 6MB
  58. 9. Approximation Methods/1. Approximation Intro.mp4 6MB
  59. 9. Approximation Methods/3. Features.mp4 6MB
  60. 6. Dynamic Programming/9. Value Iteration.mp4 6MB
  61. 4. Build an Intelligent Tic-Tac-Toe Agent/1. Naive Solution to Tic-Tac-Toe.mp4 6MB
  62. 8. Temporal Difference Learning/2. TD(0) Prediction.mp4 6MB
  63. 7. Monte Carlo/9. Monte Carlo Summary.mp4 6MB
  64. 10. Appendix/1. What is the Appendix.mp4 5MB
  65. 8. Temporal Difference Learning/7. Q Learning in Code.mp4 5MB
  66. 8. Temporal Difference Learning/3. TD(0) Prediction in Code.mp4 5MB
  67. 5. Markov Decision Proccesses/4. Future Rewards.mp4 5MB
  68. 3. Return of the Multi-Armed Bandit/7. Optimistic Initial Values.mp4 5MB
  69. 4. Build an Intelligent Tic-Tac-Toe Agent/5. Tic Tac Toe Code Outline.mp4 5MB
  70. 7. Monte Carlo/1. Monte Carlo Intro.mp4 5MB
  71. 6. Dynamic Programming/10. Value Iteration in Code.mp4 5MB
  72. 8. Temporal Difference Learning/6. Q Learning.mp4 5MB
  73. 6. Dynamic Programming/1. Intro to Dynamic Programming and Iterative Policy Evaluation.mp4 5MB
  74. 9. Approximation Methods/7. Semi-Gradient SARSA.mp4 5MB
  75. 7. Monte Carlo/7. Monte Carlo Control without Exploring Starts.mp4 5MB
  76. 6. Dynamic Programming/5. Policy Improvement.mp4 5MB
  77. 1. Welcome/2. Where to get the Code.mp4 4MB
  78. 4. Build an Intelligent Tic-Tac-Toe Agent/6. Tic Tac Toe Code Representing States.mp4 4MB
  79. 4. Build an Intelligent Tic-Tac-Toe Agent/3. Notes on Assigning Rewards.mp4 4MB
  80. 10. Appendix/12. Where to get discount coupons and FREE deep learning material.mp4 4MB
  81. 8. Temporal Difference Learning/8. TD Summary.mp4 4MB
  82. 5. Markov Decision Proccesses/1. Gridworld.mp4 3MB
  83. 5. Markov Decision Proccesses/8. Optimal Policy and Optimal Value Function.mp4 3MB
  84. 6. Dynamic Programming/6. Policy Iteration.mp4 3MB
  85. 9. Approximation Methods/4. Monte Carlo Prediction with Approximation.mp4 3MB
  86. 3. Return of the Multi-Armed Bandit/3. Epsilon-Greedy.mp4 3MB
  87. 8. Temporal Difference Learning/1. Temporal Difference Intro.mp4 3MB
  88. 5. Markov Decision Proccesses/9. MDP Summary.mp4 2MB
  89. 3. Return of the Multi-Armed Bandit/4. Updating a Sample Mean.mp4 2MB
  90. 10. Appendix/7. Is this for Beginners or Experts Academic or Practical Fast or slow-paced.vtt 30KB
  91. 10. Appendix/4. How to Code by Yourself (part 1).vtt 27KB
  92. 5. Markov Decision Proccesses/7. Bellman Examples.vtt 26KB
  93. 10. Appendix/11. What order should I take your courses in (part 2).vtt 22KB
  94. 4. Build an Intelligent Tic-Tac-Toe Agent/4. The Value Function and Your First Reinforcement Learning Algorithm.vtt 22KB
  95. 10. Appendix/2. Windows-Focused Environment Setup 2018.vtt 19KB
  96. 10. Appendix/5. How to Code by Yourself (part 2).vtt 17KB
  97. 10. Appendix/3. How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow.vtt 17KB
  98. 10. Appendix/10. What order should I take your courses in (part 1).vtt 15KB
  99. 5. Markov Decision Proccesses/5. Value Function Introduction.vtt 14KB
  100. 9. Approximation Methods/9. Course Summary and Next Steps.vtt 14KB
  101. 10. Appendix/6. How to Succeed in this Course (Long Version).vtt 14KB
  102. 4. Build an Intelligent Tic-Tac-Toe Agent/2. Components of a Reinforcement Learning System.vtt 13KB
  103. 10. Appendix/8. Proof that using Jupyter Notebook is the same as not using it.vtt 13KB
  104. 3. Return of the Multi-Armed Bandit/9. Bayesian Thompson Sampling.vtt 11KB
  105. 5. Markov Decision Proccesses/6. Value Functions.vtt 11KB
  106. 4. Build an Intelligent Tic-Tac-Toe Agent/8. Tic Tac Toe Code The Environment.vtt 11KB
  107. 1. Welcome/3. Strategy for Passing the Course.vtt 11KB
  108. 4. Build an Intelligent Tic-Tac-Toe Agent/7. Tic Tac Toe Code Enumerating States Recursively.vtt 10KB
  109. 3. Return of the Multi-Armed Bandit/2. Applications of the Explore-Exploit Dilemma.vtt 10KB
  110. 6. Dynamic Programming/2. Gridworld in Code.vtt 10KB
  111. 4. Build an Intelligent Tic-Tac-Toe Agent/9. Tic Tac Toe Code The Agent.vtt 10KB
  112. 7. Monte Carlo/2. Monte Carlo Policy Evaluation.vtt 10KB
  113. 7. Monte Carlo/5. Monte Carlo Control.vtt 9KB
  114. 4. Build an Intelligent Tic-Tac-Toe Agent/11. Tic Tac Toe Summary.vtt 9KB
  115. 6. Dynamic Programming/4. Iterative Policy Evaluation in Code.vtt 9KB
  116. 8. Temporal Difference Learning/4. SARSA.vtt 9KB
  117. 2. High Level Overview of Reinforcement Learning and Course Outline/4. Defining Some Terms.vtt 9KB
  118. 6. Dynamic Programming/11. Dynamic Programming Summary.vtt 9KB
  119. 4. Build an Intelligent Tic-Tac-Toe Agent/10. Tic Tac Toe Code Main Loop and Demo.vtt 8KB
  120. 5. Markov Decision Proccesses/2. The Markov Property.vtt 8KB
  121. 2. High Level Overview of Reinforcement Learning and Course Outline/2. On Unusual or Unexpected Strategies of RL.vtt 8KB
  122. 6. Dynamic Programming/8. Policy Iteration in Windy Gridworld.vtt 7KB
  123. 3. Return of the Multi-Armed Bandit/8. UCB1.vtt 7KB
  124. 9. Approximation Methods/1. Approximation Intro.vtt 7KB
  125. 5. Markov Decision Proccesses/3. Defining and Formalizing the MDP.vtt 7KB
  126. 3. Return of the Multi-Armed Bandit/1. Problem Setup and The Explore-Exploit Dilemma.vtt 7KB
  127. 3. Return of the Multi-Armed Bandit/11. Nonstationary Bandits.vtt 7KB
  128. 9. Approximation Methods/2. Linear Models for Reinforcement Learning.vtt 7KB
  129. 4. Build an Intelligent Tic-Tac-Toe Agent/1. Naive Solution to Tic-Tac-Toe.vtt 7KB
  130. 7. Monte Carlo/9. Monte Carlo Summary.vtt 6KB
  131. 6. Dynamic Programming/9. Value Iteration.vtt 6KB
  132. 9. Approximation Methods/3. Features.vtt 6KB
  133. 6. Dynamic Programming/3. Designing Your RL Program.vtt 6KB
  134. 2. High Level Overview of Reinforcement Learning and Course Outline/3. Course Outline.vtt 6KB
  135. 10. Appendix/9. Python 2 vs Python 3.vtt 6KB
  136. 4. Build an Intelligent Tic-Tac-Toe Agent/5. Tic Tac Toe Code Outline.vtt 6KB
  137. 8. Temporal Difference Learning/2. TD(0) Prediction.vtt 6KB
  138. 9. Approximation Methods/6. TD(0) Semi-Gradient Prediction.vtt 6KB
  139. 7. Monte Carlo/3. Monte Carlo Policy Evaluation in Code.vtt 6KB
  140. 6. Dynamic Programming/7. Policy Iteration in Code.vtt 6KB
  141. 3. Return of the Multi-Armed Bandit/10. Thompson Sampling vs. Epsilon-Greedy vs. Optimistic Initial Values vs. UCB1.vtt 6KB
  142. 5. Markov Decision Proccesses/4. Future Rewards.vtt 5KB
  143. 7. Monte Carlo/1. Monte Carlo Intro.vtt 5KB
  144. 3. Return of the Multi-Armed Bandit/5. Designing Your Bandit Program.vtt 5KB
  145. 8. Temporal Difference Learning/6. Q Learning.vtt 5KB
  146. 7. Monte Carlo/6. Monte Carlo Control in Code.vtt 5KB
  147. 8. Temporal Difference Learning/5. SARSA in Code.vtt 5KB
  148. 7. Monte Carlo/7. Monte Carlo Control without Exploring Starts.vtt 5KB
  149. 9. Approximation Methods/7. Semi-Gradient SARSA.vtt 5KB
  150. 9. Approximation Methods/8. Semi-Gradient SARSA in Code.vtt 5KB
  151. 1. Welcome/2. Where to get the Code.vtt 5KB
  152. 6. Dynamic Programming/1. Intro to Dynamic Programming and Iterative Policy Evaluation.vtt 5KB
  153. 3. Return of the Multi-Armed Bandit/6. Comparing Different Epsilons.vtt 5KB
  154. 7. Monte Carlo/4. Policy Evaluation in Windy Gridworld.vtt 5KB
  155. 6. Dynamic Programming/5. Policy Improvement.vtt 5KB
  156. 5. Markov Decision Proccesses/8. Optimal Policy and Optimal Value Function.vtt 5KB
  157. 4. Build an Intelligent Tic-Tac-Toe Agent/3. Notes on Assigning Rewards.vtt 5KB
  158. 4. Build an Intelligent Tic-Tac-Toe Agent/6. Tic Tac Toe Code Representing States.vtt 5KB
  159. 8. Temporal Difference Learning/8. TD Summary.vtt 4KB
  160. 4. Build an Intelligent Tic-Tac-Toe Agent/12. Tic Tac Toe Exercise.vtt 4KB
  161. 1. Welcome/1. Introduction.vtt 4KB
  162. 5. Markov Decision Proccesses/1. Gridworld.vtt 4KB
  163. 9. Approximation Methods/5. Monte Carlo Prediction with Approximation in Code.vtt 4KB
  164. 8. Temporal Difference Learning/3. TD(0) Prediction in Code.vtt 4KB
  165. 10. Appendix/1. What is the Appendix.vtt 3KB
  166. 7. Monte Carlo/8. Monte Carlo Control without Exploring Starts in Code.vtt 3KB
  167. 10. Appendix/12. Where to get discount coupons and FREE deep learning material.vtt 3KB
  168. 6. Dynamic Programming/6. Policy Iteration.vtt 3KB
  169. 8. Temporal Difference Learning/7. Q Learning in Code.vtt 3KB
  170. 8. Temporal Difference Learning/1. Temporal Difference Intro.vtt 3KB
  171. 6. Dynamic Programming/10. Value Iteration in Code.vtt 3KB
  172. 3. Return of the Multi-Armed Bandit/7. Optimistic Initial Values.vtt 3KB
  173. 3. Return of the Multi-Armed Bandit/3. Epsilon-Greedy.vtt 3KB
  174. 5. Markov Decision Proccesses/9. MDP Summary.vtt 2KB
  175. 9. Approximation Methods/4. Monte Carlo Prediction with Approximation.vtt 2KB
  176. 3. Return of the Multi-Armed Bandit/4. Updating a Sample Mean.vtt 2KB
  177. [FreeCourseLab.com].url 126B