Udemy - Artificial Intelligence Reinforcement Learning in Python
- Date added: 2019-06-12 05:03:43
- File size: 2GB
- Downloads: 84
- Last downloaded: 2021-01-20 14:13:02
- Magnet link:
-
File list
- 10. Appendix/2. Windows-Focused Environment Setup 2018.mp4 186MB
- 4. Build an Intelligent Tic-Tac-Toe Agent/4. The Value Function and Your First Reinforcement Learning Algorithm.mp4 104MB
- 5. Markov Decision Proccesses/7. Bellman Examples.mp4 87MB
- 10. Appendix/8. Proof that using Jupyter Notebook is the same as not using it.mp4 78MB
- 2. High Level Overview of Reinforcement Learning and Course Outline/1. What is Reinforcement Learning.mp4 55MB
- 3. Return of the Multi-Armed Bandit/9. Bayesian Thompson Sampling.mp4 52MB
- 3. Return of the Multi-Armed Bandit/2. Applications of the Explore-Exploit Dilemma.mp4 51MB
- 10. Appendix/3. How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow.mp4 44MB
- 2. High Level Overview of Reinforcement Learning and Course Outline/1. What is Reinforcement Learning.vtt 43MB
- 2. High Level Overview of Reinforcement Learning and Course Outline/4. Defining Some Terms.mp4 42MB
- 10. Appendix/7. Is this for Beginners or Experts Academic or Practical Fast or slow-paced.mp4 39MB
- 10. Appendix/11. What order should I take your courses in (part 2).mp4 38MB
- 2. High Level Overview of Reinforcement Learning and Course Outline/2. On Unusual or Unexpected Strategies of RL.mp4 37MB
- 1. Welcome/1. Introduction.mp4 34MB
- 2. High Level Overview of Reinforcement Learning and Course Outline/3. Course Outline.mp4 31MB
- 10. Appendix/10. What order should I take your courses in (part 1).mp4 29MB
- 10. Appendix/4. How to Code by Yourself (part 1).mp4 25MB
- 3. Return of the Multi-Armed Bandit/5. Designing Your Bandit Program.mp4 25MB
- 6. Dynamic Programming/3. Designing Your RL Program.mp4 22MB
- 4. Build an Intelligent Tic-Tac-Toe Agent/12. Tic Tac Toe Exercise.mp4 20MB
- 5. Markov Decision Proccesses/5. Value Function Introduction.mp4 20MB
- 10. Appendix/6. How to Succeed in this Course (Long Version).mp4 18MB
- 10. Appendix/5. How to Code by Yourself (part 2).mp4 15MB
- 9. Approximation Methods/9. Course Summary and Next Steps.mp4 13MB
- 4. Build an Intelligent Tic-Tac-Toe Agent/2. Components of a Reinforcement Learning System.mp4 13MB
- 6. Dynamic Programming/4. Iterative Policy Evaluation in Code.mp4 12MB
- 6. Dynamic Programming/2. Gridworld in Code.mp4 11MB
- 9. Approximation Methods/8. Semi-Gradient SARSA in Code.mp4 11MB
- 3. Return of the Multi-Armed Bandit/10. Thompson Sampling vs. Epsilon-Greedy vs. Optimistic Initial Values vs. UCB1.mp4 11MB
- 7. Monte Carlo/6. Monte Carlo Control in Code.mp4 10MB
- 4. Build an Intelligent Tic-Tac-Toe Agent/8. Tic Tac Toe Code The Environment.mp4 10MB
- 4. Build an Intelligent Tic-Tac-Toe Agent/7. Tic Tac Toe Code Enumerating States Recursively.mp4 10MB
- 1. Welcome/3. Strategy for Passing the Course.mp4 9MB
- 4. Build an Intelligent Tic-Tac-Toe Agent/10. Tic Tac Toe Code Main Loop and Demo.mp4 9MB
- 7. Monte Carlo/5. Monte Carlo Control.mp4 9MB
- 6. Dynamic Programming/8. Policy Iteration in Windy Gridworld.mp4 9MB
- 4. Build an Intelligent Tic-Tac-Toe Agent/9. Tic Tac Toe Code The Agent.mp4 9MB
- 8. Temporal Difference Learning/5. SARSA in Code.mp4 9MB
- 7. Monte Carlo/2. Monte Carlo Policy Evaluation.mp4 9MB
- 9. Approximation Methods/6. TD(0) Semi-Gradient Prediction.mp4 8MB
- 6. Dynamic Programming/11. Dynamic Programming Summary.mp4 8MB
- 4. Build an Intelligent Tic-Tac-Toe Agent/11. Tic Tac Toe Summary.mp4 8MB
- 5. Markov Decision Proccesses/6. Value Functions.mp4 8MB
- 3. Return of the Multi-Armed Bandit/8. UCB1.mp4 8MB
- 8. Temporal Difference Learning/4. SARSA.mp4 8MB
- 7. Monte Carlo/8. Monte Carlo Control without Exploring Starts in Code.mp4 8MB
- 3. Return of the Multi-Armed Bandit/6. Comparing Different Epsilons.mp4 8MB
- 7. Monte Carlo/3. Monte Carlo Policy Evaluation in Code.mp4 8MB
- 10. Appendix/9. Python 2 vs Python 3.mp4 8MB
- 7. Monte Carlo/4. Policy Evaluation in Windy Gridworld.mp4 8MB
- 6. Dynamic Programming/7. Policy Iteration in Code.mp4 8MB
- 3. Return of the Multi-Armed Bandit/11. Nonstationary Bandits.mp4 7MB
- 5. Markov Decision Proccesses/2. The Markov Property.mp4 7MB
- 5. Markov Decision Proccesses/3. Defining and Formalizing the MDP.mp4 7MB
- 9. Approximation Methods/5. Monte Carlo Prediction with Approximation in Code.mp4 7MB
- 3. Return of the Multi-Armed Bandit/1. Problem Setup and The Explore-Exploit Dilemma.mp4 6MB
- 9. Approximation Methods/2. Linear Models for Reinforcement Learning.mp4 6MB
- 9. Approximation Methods/1. Approximation Intro.mp4 6MB
- 9. Approximation Methods/3. Features.mp4 6MB
- 6. Dynamic Programming/9. Value Iteration.mp4 6MB
- 4. Build an Intelligent Tic-Tac-Toe Agent/1. Naive Solution to Tic-Tac-Toe.mp4 6MB
- 8. Temporal Difference Learning/2. TD(0) Prediction.mp4 6MB
- 7. Monte Carlo/9. Monte Carlo Summary.mp4 6MB
- 10. Appendix/1. What is the Appendix.mp4 5MB
- 8. Temporal Difference Learning/7. Q Learning in Code.mp4 5MB
- 8. Temporal Difference Learning/3. TD(0) Prediction in Code.mp4 5MB
- 5. Markov Decision Proccesses/4. Future Rewards.mp4 5MB
- 3. Return of the Multi-Armed Bandit/7. Optimistic Initial Values.mp4 5MB
- 4. Build an Intelligent Tic-Tac-Toe Agent/5. Tic Tac Toe Code Outline.mp4 5MB
- 7. Monte Carlo/1. Monte Carlo Intro.mp4 5MB
- 6. Dynamic Programming/10. Value Iteration in Code.mp4 5MB
- 8. Temporal Difference Learning/6. Q Learning.mp4 5MB
- 6. Dynamic Programming/1. Intro to Dynamic Programming and Iterative Policy Evaluation.mp4 5MB
- 9. Approximation Methods/7. Semi-Gradient SARSA.mp4 5MB
- 7. Monte Carlo/7. Monte Carlo Control without Exploring Starts.mp4 5MB
- 6. Dynamic Programming/5. Policy Improvement.mp4 5MB
- 1. Welcome/2. Where to get the Code.mp4 4MB
- 4. Build an Intelligent Tic-Tac-Toe Agent/6. Tic Tac Toe Code Representing States.mp4 4MB
- 4. Build an Intelligent Tic-Tac-Toe Agent/3. Notes on Assigning Rewards.mp4 4MB
- 10. Appendix/12. Where to get discount coupons and FREE deep learning material.mp4 4MB
- 8. Temporal Difference Learning/8. TD Summary.mp4 4MB
- 5. Markov Decision Proccesses/1. Gridworld.mp4 3MB
- 5. Markov Decision Proccesses/8. Optimal Policy and Optimal Value Function.mp4 3MB
- 6. Dynamic Programming/6. Policy Iteration.mp4 3MB
- 9. Approximation Methods/4. Monte Carlo Prediction with Approximation.mp4 3MB
- 3. Return of the Multi-Armed Bandit/3. Epsilon-Greedy.mp4 3MB
- 8. Temporal Difference Learning/1. Temporal Difference Intro.mp4 3MB
- 5. Markov Decision Proccesses/9. MDP Summary.mp4 2MB
- 3. Return of the Multi-Armed Bandit/4. Updating a Sample Mean.mp4 2MB
- 10. Appendix/7. Is this for Beginners or Experts Academic or Practical Fast or slow-paced.vtt 30KB
- 10. Appendix/4. How to Code by Yourself (part 1).vtt 27KB
- 5. Markov Decision Proccesses/7. Bellman Examples.vtt 26KB
- 10. Appendix/11. What order should I take your courses in (part 2).vtt 22KB
- 4. Build an Intelligent Tic-Tac-Toe Agent/4. The Value Function and Your First Reinforcement Learning Algorithm.vtt 22KB
- 10. Appendix/2. Windows-Focused Environment Setup 2018.vtt 19KB
- 10. Appendix/5. How to Code by Yourself (part 2).vtt 17KB
- 10. Appendix/3. How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow.vtt 17KB
- 10. Appendix/10. What order should I take your courses in (part 1).vtt 15KB
- 5. Markov Decision Proccesses/5. Value Function Introduction.vtt 14KB
- 9. Approximation Methods/9. Course Summary and Next Steps.vtt 14KB
- 10. Appendix/6. How to Succeed in this Course (Long Version).vtt 14KB
- 4. Build an Intelligent Tic-Tac-Toe Agent/2. Components of a Reinforcement Learning System.vtt 13KB
- 10. Appendix/8. Proof that using Jupyter Notebook is the same as not using it.vtt 13KB
- 3. Return of the Multi-Armed Bandit/9. Bayesian Thompson Sampling.vtt 11KB
- 5. Markov Decision Proccesses/6. Value Functions.vtt 11KB
- 4. Build an Intelligent Tic-Tac-Toe Agent/8. Tic Tac Toe Code The Environment.vtt 11KB
- 1. Welcome/3. Strategy for Passing the Course.vtt 11KB
- 4. Build an Intelligent Tic-Tac-Toe Agent/7. Tic Tac Toe Code Enumerating States Recursively.vtt 10KB
- 3. Return of the Multi-Armed Bandit/2. Applications of the Explore-Exploit Dilemma.vtt 10KB
- 6. Dynamic Programming/2. Gridworld in Code.vtt 10KB
- 4. Build an Intelligent Tic-Tac-Toe Agent/9. Tic Tac Toe Code The Agent.vtt 10KB
- 7. Monte Carlo/2. Monte Carlo Policy Evaluation.vtt 10KB
- 7. Monte Carlo/5. Monte Carlo Control.vtt 9KB
- 4. Build an Intelligent Tic-Tac-Toe Agent/11. Tic Tac Toe Summary.vtt 9KB
- 6. Dynamic Programming/4. Iterative Policy Evaluation in Code.vtt 9KB
- 8. Temporal Difference Learning/4. SARSA.vtt 9KB
- 2. High Level Overview of Reinforcement Learning and Course Outline/4. Defining Some Terms.vtt 9KB
- 6. Dynamic Programming/11. Dynamic Programming Summary.vtt 9KB
- 4. Build an Intelligent Tic-Tac-Toe Agent/10. Tic Tac Toe Code Main Loop and Demo.vtt 8KB
- 5. Markov Decision Proccesses/2. The Markov Property.vtt 8KB
- 2. High Level Overview of Reinforcement Learning and Course Outline/2. On Unusual or Unexpected Strategies of RL.vtt 8KB
- 6. Dynamic Programming/8. Policy Iteration in Windy Gridworld.vtt 7KB
- 3. Return of the Multi-Armed Bandit/8. UCB1.vtt 7KB
- 9. Approximation Methods/1. Approximation Intro.vtt 7KB
- 5. Markov Decision Proccesses/3. Defining and Formalizing the MDP.vtt 7KB
- 3. Return of the Multi-Armed Bandit/1. Problem Setup and The Explore-Exploit Dilemma.vtt 7KB
- 3. Return of the Multi-Armed Bandit/11. Nonstationary Bandits.vtt 7KB
- 9. Approximation Methods/2. Linear Models for Reinforcement Learning.vtt 7KB
- 4. Build an Intelligent Tic-Tac-Toe Agent/1. Naive Solution to Tic-Tac-Toe.vtt 7KB
- 7. Monte Carlo/9. Monte Carlo Summary.vtt 6KB
- 6. Dynamic Programming/9. Value Iteration.vtt 6KB
- 9. Approximation Methods/3. Features.vtt 6KB
- 6. Dynamic Programming/3. Designing Your RL Program.vtt 6KB
- 2. High Level Overview of Reinforcement Learning and Course Outline/3. Course Outline.vtt 6KB
- 10. Appendix/9. Python 2 vs Python 3.vtt 6KB
- 4. Build an Intelligent Tic-Tac-Toe Agent/5. Tic Tac Toe Code Outline.vtt 6KB
- 8. Temporal Difference Learning/2. TD(0) Prediction.vtt 6KB
- 9. Approximation Methods/6. TD(0) Semi-Gradient Prediction.vtt 6KB
- 7. Monte Carlo/3. Monte Carlo Policy Evaluation in Code.vtt 6KB
- 6. Dynamic Programming/7. Policy Iteration in Code.vtt 6KB
- 3. Return of the Multi-Armed Bandit/10. Thompson Sampling vs. Epsilon-Greedy vs. Optimistic Initial Values vs. UCB1.vtt 6KB
- 5. Markov Decision Proccesses/4. Future Rewards.vtt 5KB
- 7. Monte Carlo/1. Monte Carlo Intro.vtt 5KB
- 3. Return of the Multi-Armed Bandit/5. Designing Your Bandit Program.vtt 5KB
- 8. Temporal Difference Learning/6. Q Learning.vtt 5KB
- 7. Monte Carlo/6. Monte Carlo Control in Code.vtt 5KB
- 8. Temporal Difference Learning/5. SARSA in Code.vtt 5KB
- 7. Monte Carlo/7. Monte Carlo Control without Exploring Starts.vtt 5KB
- 9. Approximation Methods/7. Semi-Gradient SARSA.vtt 5KB
- 9. Approximation Methods/8. Semi-Gradient SARSA in Code.vtt 5KB
- 1. Welcome/2. Where to get the Code.vtt 5KB
- 6. Dynamic Programming/1. Intro to Dynamic Programming and Iterative Policy Evaluation.vtt 5KB
- 3. Return of the Multi-Armed Bandit/6. Comparing Different Epsilons.vtt 5KB
- 7. Monte Carlo/4. Policy Evaluation in Windy Gridworld.vtt 5KB
- 6. Dynamic Programming/5. Policy Improvement.vtt 5KB
- 5. Markov Decision Proccesses/8. Optimal Policy and Optimal Value Function.vtt 5KB
- 4. Build an Intelligent Tic-Tac-Toe Agent/3. Notes on Assigning Rewards.vtt 5KB
- 4. Build an Intelligent Tic-Tac-Toe Agent/6. Tic Tac Toe Code Representing States.vtt 5KB
- 8. Temporal Difference Learning/8. TD Summary.vtt 4KB
- 4. Build an Intelligent Tic-Tac-Toe Agent/12. Tic Tac Toe Exercise.vtt 4KB
- 1. Welcome/1. Introduction.vtt 4KB
- 5. Markov Decision Proccesses/1. Gridworld.vtt 4KB
- 9. Approximation Methods/5. Monte Carlo Prediction with Approximation in Code.vtt 4KB
- 8. Temporal Difference Learning/3. TD(0) Prediction in Code.vtt 4KB
- 10. Appendix/1. What is the Appendix.vtt 3KB
- 7. Monte Carlo/8. Monte Carlo Control without Exploring Starts in Code.vtt 3KB
- 10. Appendix/12. Where to get discount coupons and FREE deep learning material.vtt 3KB
- 6. Dynamic Programming/6. Policy Iteration.vtt 3KB
- 8. Temporal Difference Learning/7. Q Learning in Code.vtt 3KB
- 8. Temporal Difference Learning/1. Temporal Difference Intro.vtt 3KB
- 6. Dynamic Programming/10. Value Iteration in Code.vtt 3KB
- 3. Return of the Multi-Armed Bandit/7. Optimistic Initial Values.vtt 3KB
- 3. Return of the Multi-Armed Bandit/3. Epsilon-Greedy.vtt 3KB
- 5. Markov Decision Proccesses/9. MDP Summary.vtt 2KB
- 9. Approximation Methods/4. Monte Carlo Prediction with Approximation.vtt 2KB
- 3. Return of the Multi-Armed Bandit/4. Updating a Sample Mean.vtt 2KB
- [FreeCourseLab.com].url 126B