Coursera - Practical Reinforcement Learning

Indexed: 2018-12-09 01:31:25
Total size: 1GB
Downloads: 151
Last downloaded: 2020-12-29 18:15:15
Magnet link: magnet:?xt=urn:btih:31b47a1285df93a33f1c80a563fd43b322fc434d

File list:

013.Honor/033. Partial observability.mp4  57MB
019.Planning with Monte Carlo Tree Search/053. Introduction to planning.mp4  52MB
011.Limitations of Tabular Methods/025. Supervised & Reinforcement Learning.mp4  51MB
005.Striving for reward/014. Reward design.mp4  50MB
011.Limitations of Tabular Methods/027. Difficulties with Approximate Methods.mp4  47MB
018.Uncertainty-based exploration/052. Bayesian UCB.mp4  41MB
009.On-policy vs off-policy/023. Accounting for exploration. Expected Value SARSA..mp4  38MB
006.Bellman equations/015. State and Action Value Functions.mp4  37MB
003.Black box optimization/006. Crossentropy method.mp4  36MB
014.Policy-based RL vs Value-based RL/034. Intuition.mp4  35MB
013.Honor/032. More DQN tricks.mp4  34MB
011.Limitations of Tabular Methods/026. Loss functions in value based RL.mp4  34MB
001.Welcome/001. Why should you care.mp4  32MB
007.Generalized Policy Iteration/017. Policy evaluation & improvement.mp4  32MB
014.Policy-based RL vs Value-based RL/036. Policy gradient formalism.mp4  32MB
015.REINFORCE/038. REINFORCE.mp4  31MB
019.Planning with Monte Carlo Tree Search/054. Monte Carlo Tree Search.mp4  31MB
008.Model-free learning/020. Monte-Carlo & Temporal Difference; Q-learning.mp4  30MB
012.Case Study Deep Q-Network/029. DQN the internals.mp4  30MB
008.Model-free learning/019. Model-based vs model-free.mp4  29MB
008.Model-free learning/021. Exploration vs Exploitation.mp4  28MB
004.All the cool stuff that isn't in the base track/011. Evolution strategies log-derivative trick.mp4  28MB
012.Case Study Deep Q-Network/028. DQN bird's eye view.mp4  28MB
010.Experience Replay/024. On-policy vs off-policy; Experience replay.mp4  27MB
016.Actor-critic/042. Case study A3C.mp4  26MB
017.Measuting exploration/045. Recap bandits.mp4  25MB
016.Actor-critic/039. Advantage actor-critic.mp4  25MB
007.Generalized Policy Iteration/018. Policy and value iteration.mp4  24MB
016.Actor-critic/044. Combining supervised & reinforcement learning.mp4  24MB
002.Reinforcement Learning/004. Decision process & applications.mp4  23MB
003.Black box optimization/008. More on approximate crossentropy method.mp4  23MB
018.Uncertainty-based exploration/048. Intuitive explanation.mp4  22MB
018.Uncertainty-based exploration/051. UCB-1.mp4  22MB
017.Measuting exploration/046. Regret measuring the quality of exploration.mp4  21MB
004.All the cool stuff that isn't in the base track/012. Evolution strategies duct tape.mp4  21MB
004.All the cool stuff that isn't in the base track/009. Evolution strategies core idea.mp4  21MB
013.Honor/031. Double Q-learning.mp4  20MB
003.Black box optimization/007. Approximate crossentropy method.mp4  19MB
013.Honor/030. DQN statistical issues.mp4  19MB
017.Measuting exploration/047. The message just repeats. 'Regret, Regret, Regret.'.mp4  18MB
006.Bellman equations/016. Measuring Policy Optimality.mp4  18MB
003.Black box optimization/005. Markov Decision Process.mp4  18MB
002.Reinforcement Learning/003. Multi-armed bandit.mp4  18MB
004.All the cool stuff that isn't in the base track/010. Evolution strategies math problems.mp4  18MB
016.Actor-critic/040. Duct tape zone.mp4  18MB
018.Uncertainty-based exploration/049. Thompson Sampling.mp4  17MB
016.Actor-critic/041. Policy-based vs Value-based.mp4  17MB
018.Uncertainty-based exploration/050. Optimism in face of uncertainty.mp4  17MB
014.Policy-based RL vs Value-based RL/035. All Kinds of Policies.mp4  16MB
004.All the cool stuff that isn't in the base track/013. Blackbox optimization drawbacks.mp4  15MB
016.Actor-critic/043. A3C case study (2 2).mp4  15MB
014.Policy-based RL vs Value-based RL/037. The log-derivative trick.mp4  13MB
001.Welcome/002. Reinforcement learning vs all.mp4  11MB
008.Model-free learning/022. Footnote Monte-Carlo vs Temporal Difference.mp4  10MB
013.Honor/033. Partial observability.srt  28KB
019.Planning with Monte Carlo Tree Search/053. Introduction to planning.srt  25KB
011.Limitations of Tabular Methods/025. Supervised & Reinforcement Learning.srt  25KB
005.Striving for reward/014. Reward design.srt  23KB
011.Limitations of Tabular Methods/027. Difficulties with Approximate Methods.srt  22KB
018.Uncertainty-based exploration/052. Bayesian UCB.srt  19KB
006.Bellman equations/015. State and Action Value Functions.srt  18KB
009.On-policy vs off-policy/023. Accounting for exploration. Expected Value SARSA..srt  17KB
013.Honor/032. More DQN tricks.srt  16KB
014.Policy-based RL vs Value-based RL/034. Intuition.srt  16KB
003.Black box optimization/006. Crossentropy method.srt  16KB
001.Welcome/001. Why should you care.srt  15KB
011.Limitations of Tabular Methods/026. Loss functions in value based RL.srt  15KB
019.Planning with Monte Carlo Tree Search/054. Monte Carlo Tree Search.srt  15KB
008.Model-free learning/020. Monte-Carlo & Temporal Difference; Q-learning.srt  15KB
007.Generalized Policy Iteration/017. Policy evaluation & improvement.srt  14KB
008.Model-free learning/019. Model-based vs model-free.srt  14KB
015.REINFORCE/038. REINFORCE.srt  14KB
008.Model-free learning/021. Exploration vs Exploitation.srt  14KB
014.Policy-based RL vs Value-based RL/036. Policy gradient formalism.srt  13KB
004.All the cool stuff that isn't in the base track/011. Evolution strategies log-derivative trick.srt  13KB
012.Case Study Deep Q-Network/029. DQN the internals.srt  12KB
007.Generalized Policy Iteration/018. Policy and value iteration.srt  12KB
017.Measuting exploration/045. Recap bandits.srt  12KB
016.Actor-critic/044. Combining supervised & reinforcement learning.srt  12KB
016.Actor-critic/039. Advantage actor-critic.srt  12KB
012.Case Study Deep Q-Network/028. DQN bird's eye view.srt  11KB
010.Experience Replay/024. On-policy vs off-policy; Experience replay.srt  11KB
016.Actor-critic/042. Case study A3C.srt  11KB
018.Uncertainty-based exploration/048. Intuitive explanation.srt  11KB
003.Black box optimization/008. More on approximate crossentropy method.srt  10KB
018.Uncertainty-based exploration/051. UCB-1.srt  10KB
017.Measuting exploration/046. Regret measuring the quality of exploration.srt  10KB
002.Reinforcement Learning/004. Decision process & applications.srt  10KB
004.All the cool stuff that isn't in the base track/012. Evolution strategies duct tape.srt  10KB
013.Honor/031. Double Q-learning.srt  9KB
013.Honor/030. DQN statistical issues.srt  9KB
017.Measuting exploration/047. The message just repeats. 'Regret, Regret, Regret.'.srt  9KB
004.All the cool stuff that isn't in the base track/010. Evolution strategies math problems.srt  9KB
006.Bellman equations/016. Measuring Policy Optimality.srt  9KB
003.Black box optimization/005. Markov Decision Process.srt  8KB
003.Black box optimization/007. Approximate crossentropy method.srt  8KB
018.Uncertainty-based exploration/049. Thompson Sampling.srt  8KB
018.Uncertainty-based exploration/050. Optimism in face of uncertainty.srt  8KB
016.Actor-critic/040. Duct tape zone.srt  8KB
014.Policy-based RL vs Value-based RL/035. All Kinds of Policies.srt  7KB
004.All the cool stuff that isn't in the base track/009. Evolution strategies core idea.srt  7KB
004.All the cool stuff that isn't in the base track/013. Blackbox optimization drawbacks.srt  7KB
002.Reinforcement Learning/003. Multi-armed bandit.srt  7KB
016.Actor-critic/041. Policy-based vs Value-based.srt  7KB
016.Actor-critic/043. A3C case study (2 2).srt  6KB
014.Policy-based RL vs Value-based RL/037. The log-derivative trick.srt  6KB
001.Welcome/002. Reinforcement learning vs all.srt  5KB
008.Model-free learning/022. Footnote Monte-Carlo vs Temporal Difference.srt  5KB
[FTU Forum].url  252B
[FreeCoursesOnline.Me].url  133B
[FreeTutorials.Us].url  119B