Udemy - Artificial Intelligence Reinforcement Learning in Python
- Indexed: 2021-07-26 08:28:35
- File size: 3GB
- Downloads: 1
- Last downloaded: 2021-07-26 08:28:35
- Magnet link:
File list
- 10/1. Windows-Focused Environment Setup 2018.mp4 186MB
- 4. Markov Decision Proccesses/11. Bellman Examples.mp4 87MB
- 11/3. Proof that using Jupyter Notebook is the same as not using it.mp4 78MB
- 2. Return of the Multi-Armed Bandit/16. Bayesian Bandits Thompson Sampling Theory (pt 2).mp4 75MB
- 5. Dynamic Programming/4. Iterative Policy Evaluation in Code.mp4 68MB
- 9. Stock Trading Project with Reinforcement Learning/6. Code pt 2.mp4 65MB
- 1. Welcome/5. Warmup.mp4 63MB
- 4. Markov Decision Proccesses/5. Markov Decision Processes (MDPs).mp4 62MB
- 5. Dynamic Programming/9. Policy Iteration in Code.mp4 56MB
- 4. Markov Decision Proccesses/12. Optimal Policy and Optimal Value Function (pt 1).mp4 56MB
- 2. Return of the Multi-Armed Bandit/15. Bayesian Bandits Thompson Sampling Theory (pt 1).mp4 56MB
- 2. Return of the Multi-Armed Bandit/12. UCB1 Theory.mp4 56MB
- 3. High Level Overview of Reinforcement Learning/1. What is Reinforcement Learning.mp4 55MB
- 4. Markov Decision Proccesses/2. Gridworld.mp4 54MB
- 9. Stock Trading Project with Reinforcement Learning/2. Data and Environment.mp4 52MB
- 2. Return of the Multi-Armed Bandit/1. Section Introduction The Explore-Exploit Dilemma.mp4 52MB
- 5. Dynamic Programming/10. Policy Iteration in Windy Gridworld.mp4 51MB
- 2. Return of the Multi-Armed Bandit/2. Applications of the Explore-Exploit Dilemma.mp4 51MB
- 2. Return of the Multi-Armed Bandit/24. (Optional) Alternative Bandit Designs.mp4 50MB
- 9. Stock Trading Project with Reinforcement Learning/5. Code pt 1.mp4 50MB
- 9. Stock Trading Project with Reinforcement Learning/8. Code pt 4.mp4 49MB
- 2. Return of the Multi-Armed Bandit/19. Thompson Sampling With Gaussian Reward Theory.mp4 49MB
- 5. Dynamic Programming/6. Iterative Policy Evaluation for Windy Gridworld in Code.mp4 47MB
- 5. Dynamic Programming/3. Gridworld in Code.mp4 47MB
- 5. Dynamic Programming/12. Value Iteration in Code.mp4 46MB
- 9. Stock Trading Project with Reinforcement Learning/3. How to Model Q for Q-Learning.mp4 45MB
- 10/2. How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow.mp4 44MB
- 2. Return of the Multi-Armed Bandit/8. Comparing Different Epsilons.mp4 44MB
- 2. Return of the Multi-Armed Bandit/20. Thompson Sampling With Gaussian Reward Code.mp4 43MB
- 5. Dynamic Programming/5. Windy Gridworld in Code.mp4 41MB
- 2. Return of the Multi-Armed Bandit/7. Epsilon-Greedy in Code.mp4 41MB
- 3. High Level Overview of Reinforcement Learning/3. From Bandits to Full Reinforcement Learning.mp4 41MB
- 1. Welcome/2. Course Outline and Big Picture.mp4 40MB
- 4. Markov Decision Proccesses/6. Future Rewards.mp4 40MB
- 12/2. Is this for Beginners or Experts Academic or Practical Fast or slow-paced.srt 39MB
- 12/2. Is this for Beginners or Experts Academic or Practical Fast or slow-paced.mp4 39MB
- 13. Appendix FAQ/2. BONUS Where to get discount coupons and FREE deep learning material.mp4 38MB
- 13. Appendix FAQ Finale/2. BONUS Where to get discount coupons and FREE deep learning material.mp4 38MB
- 12/4. Machine Learning and AI Prerequisite Roadmap (pt 2).mp4 38MB
- 4. Markov Decision Proccesses/1. MDP Section Introduction.mp4 37MB
- 3. High Level Overview of Reinforcement Learning/2. On Unusual or Unexpected Strategies of RL.mp4 37MB
- 2. Return of the Multi-Armed Bandit/23. Bandit Summary, Real Data, and Online Learning.mp4 35MB
- 1. Welcome/1. Introduction.mp4 34MB
- 9. Stock Trading Project with Reinforcement Learning/7. Code pt 3.mp4 34MB
- 2. Return of the Multi-Armed Bandit/18. Thompson Sampling Code.mp4 33MB
- 4. Markov Decision Proccesses/3. Choosing Rewards.mp4 32MB
- 2. Return of the Multi-Armed Bandit/22. Nonstationary Bandits.mp4 31MB
- 12/3. Machine Learning and AI Prerequisite Roadmap (pt 1).mp4 29MB
- 2. Return of the Multi-Armed Bandit/5. Epsilon-Greedy Beginner's Exercise Prompt.mp4 29MB
- 2. Return of the Multi-Armed Bandit/3. Epsilon-Greedy Theory.mp4 28MB
- 4. Markov Decision Proccesses/8. The Bellman Equation (pt 1).mp4 28MB
- 2. Return of the Multi-Armed Bandit/21. Why don't we just use a library.mp4 27MB
- 9. Stock Trading Project with Reinforcement Learning/1. Stock Trading Project Section Introduction.mp4 27MB
- 4. Markov Decision Proccesses/9. The Bellman Equation (pt 2).mp4 27MB
- 4. Markov Decision Proccesses/10. The Bellman Equation (pt 3).mp4 25MB
- 2. Return of the Multi-Armed Bandit/11. Optimistic Initial Values Code.mp4 25MB
- 11/1. How to Code by Yourself (part 1).mp4 25MB
- 2. Return of the Multi-Armed Bandit/6. Designing Your Bandit Program.mp4 25MB
- 2. Return of the Multi-Armed Bandit/9. Optimistic Initial Values Theory.mp4 24MB
- 9. Stock Trading Project with Reinforcement Learning/4. Design of the Program.mp4 23MB
- 2. Return of the Multi-Armed Bandit/4. Calculating a Sample Mean (pt 1).mp4 23MB
- 1. Welcome/3. Where to get the Code.mp4 23MB
- 5. Dynamic Programming/2. Designing Your RL Program.mp4 22MB
- 4. Markov Decision Proccesses/4. The Markov Property.mp4 22MB
- 2. Return of the Multi-Armed Bandit/14. UCB1 Code.mp4 21MB
- 4. Markov Decision Proccesses/7. Value Functions.srt 19MB
- 4. Markov Decision Proccesses/7. Value Functions.mp4 19MB
- 12/1. How to Succeed in this Course (Long Version).mp4 18MB
- 2. Return of the Multi-Armed Bandit/17. Thompson Sampling Beginner's Exercise Prompt.mp4 18MB
- 2. Return of the Multi-Armed Bandit/25. Suggestion Box.mp4 16MB
- 9. Stock Trading Project with Reinforcement Learning/9. Stock Trading Project Discussion.mp4 16MB
- 4. Markov Decision Proccesses/13. Optimal Policy and Optimal Value Function (pt 2).mp4 16MB
- 1. Welcome/4. How to Succeed in this Course.mp4 16MB
- 11/2. How to Code by Yourself (part 2).mp4 15MB
- 4. Markov Decision Proccesses/14. MDP Summary.mp4 14MB
- 2. Return of the Multi-Armed Bandit/10. Optimistic Initial Values Beginner's Exercise Prompt.mp4 14MB
- 8. Approximation Methods/9. Course Summary and Next Steps.mp4 13MB
- 2. Return of the Multi-Armed Bandit/13. UCB1 Beginner's Exercise Prompt.mp4 13MB
- 8. Approximation Methods/8. Semi-Gradient SARSA in Code.mp4 11MB
- 6. Monte Carlo/6. Monte Carlo Control in Code.mp4 10MB
- 6. Monte Carlo/5. Monte Carlo Control.mp4 9MB
- 7. Temporal Difference Learning/5. SARSA in Code.mp4 9MB
- 6. Monte Carlo/2. Monte Carlo Policy Evaluation.mp4 9MB
- 8. Approximation Methods/6. TD(0) Semi-Gradient Prediction.mp4 8MB
- 5. Dynamic Programming/13. Dynamic Programming Summary.mp4 8MB
- 7. Temporal Difference Learning/4. SARSA.mp4 8MB
- 6. Monte Carlo/8. Monte Carlo Control without Exploring Starts in Code.mp4 8MB
- 6. Monte Carlo/3. Monte Carlo Policy Evaluation in Code.mp4 8MB
- 11/4. Python 2 vs Python 3.mp4 8MB
- 6. Monte Carlo/4. Policy Evaluation in Windy Gridworld.mp4 8MB
- 8. Approximation Methods/5. Monte Carlo Prediction with Approximation in Code.mp4 7MB
- 8. Approximation Methods/2. Linear Models for Reinforcement Learning.mp4 6MB
- 8. Approximation Methods/1. Approximation Intro.mp4 6MB
- 8. Approximation Methods/3. Features.mp4 6MB
- 5. Dynamic Programming/11. Value Iteration.mp4 6MB
- 7. Temporal Difference Learning/2. TD(0) Prediction.mp4 6MB
- 6. Monte Carlo/9. Monte Carlo Summary.mp4 6MB
- 13. Appendix FAQ Finale/1. What is the Appendix.mp4 5MB
- 13. Appendix FAQ/1. What is the Appendix.mp4 5MB
- 7. Temporal Difference Learning/7. Q Learning in Code.mp4 5MB
- 7. Temporal Difference Learning/3. TD(0) Prediction in Code.mp4 5MB
- 6. Monte Carlo/1. Monte Carlo Intro.mp4 5MB
- 5. Dynamic Programming/1. Intro to Dynamic Programming and Iterative Policy Evaluation.mp4 5MB
- 7. Temporal Difference Learning/6. Q Learning.mp4 5MB
- 8. Approximation Methods/7. Semi-Gradient SARSA.mp4 5MB
- 6. Monte Carlo/7. Monte Carlo Control without Exploring Starts.mp4 5MB
- 5. Dynamic Programming/7. Policy Improvement.mp4 5MB
- 7. Temporal Difference Learning/8. TD Summary.mp4 4MB
- 5. Dynamic Programming/8. Policy Iteration.mp4 3MB
- 8. Approximation Methods/4. Monte Carlo Prediction with Approximation.mp4 3MB
- 7. Temporal Difference Learning/1. Temporal Difference Intro.mp4 3MB
- 11/1. How to Code by Yourself (part 1).srt 30KB
- 4. Markov Decision Proccesses/11. Bellman Examples.srt 29KB
- 2. Return of the Multi-Armed Bandit/16. Bayesian Bandits Thompson Sampling Theory (pt 2).srt 26KB
- 12/4. Machine Learning and AI Prerequisite Roadmap (pt 2).srt 23KB
- 2. Return of the Multi-Armed Bandit/12. UCB1 Theory.srt 22KB
- 4. Markov Decision Proccesses/5. Markov Decision Processes (MDPs).srt 22KB
- 10/1. Windows-Focused Environment Setup 2018.srt 20KB
- 1. Welcome/5. Warmup.srt 20KB
- 4. Markov Decision Proccesses/2. Gridworld.srt 19KB
- 11/2. How to Code by Yourself (part 2).srt 18KB
- 2. Return of the Multi-Armed Bandit/15. Bayesian Bandits Thompson Sampling Theory (pt 1).srt 18KB
- 10/2. How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow.srt 18KB
- 5. Dynamic Programming/4. Iterative Policy Evaluation in Code.srt 18KB
- 5. Dynamic Programming/3. Gridworld in Code.srt 18KB
- 9. Stock Trading Project with Reinforcement Learning/2. Data and Environment.srt 17KB
- 2. Return of the Multi-Armed Bandit/19. Thompson Sampling With Gaussian Reward Theory.srt 17KB
- 12/3. Machine Learning and AI Prerequisite Roadmap (pt 1).srt 16KB
- 8. Approximation Methods/9. Course Summary and Next Steps.srt 16KB
- 2. Return of the Multi-Armed Bandit/24. (Optional) Alternative Bandit Designs.srt 15KB
- 2. Return of the Multi-Armed Bandit/1. Section Introduction The Explore-Exploit Dilemma.srt 15KB
- 12/1. How to Succeed in this Course (Long Version).srt 15KB
- 4. Markov Decision Proccesses/6. Future Rewards.srt 14KB
- 11/3. Proof that using Jupyter Notebook is the same as not using it.srt 14KB
- 3. High Level Overview of Reinforcement Learning/3. From Bandits to Full Reinforcement Learning.srt 13KB
- 9. Stock Trading Project with Reinforcement Learning/3. How to Model Q for Q-Learning.srt 13KB
- 9. Stock Trading Project with Reinforcement Learning/6. Code pt 2.srt 13KB
- 4. Markov Decision Proccesses/12. Optimal Policy and Optimal Value Function (pt 1).srt 13KB
- 5. Dynamic Programming/10. Policy Iteration in Windy Gridworld.srt 12KB
- 4. Markov Decision Proccesses/8. The Bellman Equation (pt 1).srt 12KB
- 5. Dynamic Programming/9. Policy Iteration in Code.srt 12KB
- 3. High Level Overview of Reinforcement Learning/1. What is Reinforcement Learning.srt 12KB
- 2. Return of the Multi-Armed Bandit/2. Applications of the Explore-Exploit Dilemma.srt 12KB
- 1. Welcome/2. Course Outline and Big Picture.srt 11KB
- 5. Dynamic Programming/5. Windy Gridworld in Code.srt 11KB
- 5. Dynamic Programming/6. Iterative Policy Evaluation for Windy Gridworld in Code.srt 11KB
- 6. Monte Carlo/2. Monte Carlo Policy Evaluation.srt 11KB
- 2. Return of the Multi-Armed Bandit/3. Epsilon-Greedy Theory.srt 10KB
- 9. Stock Trading Project with Reinforcement Learning/5. Code pt 1.srt 10KB
- 6. Monte Carlo/5. Monte Carlo Control.srt 10KB
- 2. Return of the Multi-Armed Bandit/22. Nonstationary Bandits.srt 10KB
- 2. Return of the Multi-Armed Bandit/23. Bandit Summary, Real Data, and Online Learning.srt 10KB
- 5. Dynamic Programming/12. Value Iteration in Code.srt 10KB
- 7. Temporal Difference Learning/4. SARSA.srt 10KB
- 4. Markov Decision Proccesses/9. The Bellman Equation (pt 2).srt 9KB
- 5. Dynamic Programming/13. Dynamic Programming Summary.srt 9KB
- 2. Return of the Multi-Armed Bandit/7. Epsilon-Greedy in Code.srt 9KB
- 4. Markov Decision Proccesses/1. MDP Section Introduction.srt 9KB
- 9. Stock Trading Project with Reinforcement Learning/4. Design of the Program.srt 9KB
- 4. Markov Decision Proccesses/4. The Markov Property.srt 9KB
- 9. Stock Trading Project with Reinforcement Learning/8. Code pt 4.srt 9KB
- 4. Markov Decision Proccesses/10. The Bellman Equation (pt 3).srt 9KB
- 3. High Level Overview of Reinforcement Learning/2. On Unusual or Unexpected Strategies of RL.srt 9KB
- 2. Return of the Multi-Armed Bandit/4. Calculating a Sample Mean (pt 1).srt 8KB
- 2. Return of the Multi-Armed Bandit/21. Why don't we just use a library.srt 8KB
- 13. Appendix FAQ/2. BONUS Where to get discount coupons and FREE deep learning material.srt 8KB
- 2. Return of the Multi-Armed Bandit/20. Thompson Sampling With Gaussian Reward Code.srt 8KB
- 8. Approximation Methods/1. Approximation Intro.srt 8KB
- 2. Return of the Multi-Armed Bandit/9. Optimistic Initial Values Theory.srt 8KB
- 13. Appendix FAQ Finale/2. BONUS Where to get discount coupons and FREE deep learning material.srt 8KB
- 8. Approximation Methods/2. Linear Models for Reinforcement Learning.srt 7KB
- 9. Stock Trading Project with Reinforcement Learning/1. Stock Trading Project Section Introduction.srt 7KB
- 2. Return of the Multi-Armed Bandit/5. Epsilon-Greedy Beginner's Exercise Prompt.srt 7KB
- 6. Monte Carlo/9. Monte Carlo Summary.srt 7KB
- 5. Dynamic Programming/2. Designing Your RL Program.srt 7KB
- 2. Return of the Multi-Armed Bandit/8. Comparing Different Epsilons.srt 7KB
- 5. Dynamic Programming/11. Value Iteration.srt 7KB
- 1. Welcome/3. Where to get the Code.srt 7KB
- 8. Approximation Methods/3. Features.srt 7KB
- 7. Temporal Difference Learning/2. TD(0) Prediction.srt 6KB
- 8. Approximation Methods/6. TD(0) Semi-Gradient Prediction.srt 6KB
- 2. Return of the Multi-Armed Bandit/18. Thompson Sampling Code.srt 6KB
- 6. Monte Carlo/3. Monte Carlo Policy Evaluation in Code.srt 6KB
- 11/4. Python 2 vs Python 3.srt 6KB
- 2. Return of the Multi-Armed Bandit/6. Designing Your Bandit Program.srt 6KB
- 6. Monte Carlo/1. Monte Carlo Intro.srt 6KB
- 4. Markov Decision Proccesses/3. Choosing Rewards.srt 6KB
- 9. Stock Trading Project with Reinforcement Learning/7. Code pt 3.srt 6KB
- 6. Monte Carlo/6. Monte Carlo Control in Code.srt 6KB
- 7. Temporal Difference Learning/6. Q Learning.srt 6KB
- 2. Return of the Multi-Armed Bandit/11. Optimistic Initial Values Code.srt 6KB
- 7. Temporal Difference Learning/5. SARSA in Code.srt 6KB
- 6. Monte Carlo/7. Monte Carlo Control without Exploring Starts.srt 6KB
- 8. Approximation Methods/7. Semi-Gradient SARSA.srt 5KB
- 4. Markov Decision Proccesses/13. Optimal Policy and Optimal Value Function (pt 2).srt 5KB
- 8. Approximation Methods/8. Semi-Gradient SARSA in Code.srt 5KB
- 5. Dynamic Programming/1. Intro to Dynamic Programming and Iterative Policy Evaluation.srt 5KB
- 6. Monte Carlo/4. Policy Evaluation in Windy Gridworld.srt 5KB
- 5. Dynamic Programming/7. Policy Improvement.srt 5KB
- 2. Return of the Multi-Armed Bandit/25. Suggestion Box.srt 5KB
- 7. Temporal Difference Learning/8. TD Summary.srt 5KB
- 9. Stock Trading Project with Reinforcement Learning/9. Stock Trading Project Discussion.srt 5KB
- 1. Welcome/1. Introduction.srt 4KB
- 1. Welcome/4. How to Succeed in this Course.srt 4KB
- 2. Return of the Multi-Armed Bandit/14. UCB1 Code.srt 4KB
- 8. Approximation Methods/5. Monte Carlo Prediction with Approximation in Code.srt 4KB
- 4. Markov Decision Proccesses/14. MDP Summary.srt 4KB
- 7. Temporal Difference Learning/3. TD(0) Prediction in Code.srt 4KB
- 13. Appendix FAQ/1. What is the Appendix.srt 4KB
- 2. Return of the Multi-Armed Bandit/17. Thompson Sampling Beginner's Exercise Prompt.srt 4KB
- 13. Appendix FAQ Finale/1. What is the Appendix.srt 4KB
- 6. Monte Carlo/8. Monte Carlo Control without Exploring Starts in Code.srt 4KB
- 5. Dynamic Programming/8. Policy Iteration.srt 3KB
- 7. Temporal Difference Learning/7. Q Learning in Code.srt 3KB
- 7. Temporal Difference Learning/1. Temporal Difference Intro.srt 3KB
- 2. Return of the Multi-Armed Bandit/10. Optimistic Initial Values Beginner's Exercise Prompt.srt 3KB
- 2. Return of the Multi-Armed Bandit/13. UCB1 Beginner's Exercise Prompt.srt 3KB
- 8. Approximation Methods/4. Monte Carlo Prediction with Approximation.srt 2KB