Complete solutions to the exercises in Pattern Recognition and Machine Learning! Unlike the other, incomplete answer sets, this one covers everything. Take a careful look; the file is 1.5 MB!
Exercise 1.3. A fruit is drawn from one of three boxes chosen with probabilities $p(r) = 0.2$, $p(b) = 0.2$, $p(g) = 0.6$. By the sum and product rules, the marginal probability of drawing an apple is

$$p(a) = p(a \mid r)p(r) + p(a \mid b)p(b) + p(a \mid g)p(g) = 0.3 \times 0.2 + 0.5 \times 0.2 + 0.3 \times 0.6 = 0.34.$$

Similarly, the probability of drawing an orange is

$$p(o) = p(o \mid r)p(r) + p(o \mid b)p(b) + p(o \mid g)p(g) = 0.4 \times 0.2 + 0.5 \times 0.2 + 0.3 \times 0.6 = 0.36.$$

Given that the selected fruit is an orange, Bayes' theorem gives the probability that it came from the green box:

$$p(g \mid o) = \frac{p(o \mid g)\,p(g)}{p(o)} = \frac{0.3 \times 0.6}{0.36} = 0.5.$$
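A quick numerical check of these values (a minimal Python sketch, not from the book; the box contents and priors are those stated in the exercise):

```python
# Box priors and per-box fruit proportions from PRML Exercise 1.3:
# red: 3 apples, 4 oranges, 3 limes; blue: 1 apple, 1 orange, 0 limes;
# green: 3 apples, 3 oranges, 4 limes.
p_box = {"r": 0.2, "b": 0.2, "g": 0.6}
p_fruit_given_box = {
    "r": {"apple": 0.3, "orange": 0.4, "lime": 0.3},
    "b": {"apple": 0.5, "orange": 0.5, "lime": 0.0},
    "g": {"apple": 0.3, "orange": 0.3, "lime": 0.4},
}

def p_fruit(fruit):
    # Sum rule: marginalize over which box was chosen.
    return sum(p_fruit_given_box[box][fruit] * p_box[box] for box in p_box)

def p_box_given_fruit(box, fruit):
    # Bayes' theorem: p(box | fruit) = p(fruit | box) p(box) / p(fruit).
    return p_fruit_given_box[box][fruit] * p_box[box] / p_fruit(fruit)

print(p_fruit("apple"))                  # 0.34
print(p_fruit("orange"))                 # 0.36
print(p_box_given_fruit("g", "orange"))  # 0.5
```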
Exercise 1.4. For an ordinary function $f(x)$ with its maximum at $\hat{x}$, we have $f'(\hat{x}) = 0$. Under a change of variable $x = g(y)$ the function transforms as $\tilde{f}(y) = f(g(y))$, so

$$\tilde{f}'(y) = f'(g(y))\,g'(y) = 0.$$

Assuming $g'(y) \neq 0$ at the maximum, this requires $f'(g(\hat{y})) = 0$, hence $g(\hat{y}) = \hat{x}$: the location of the maximum simply maps through $g$.

A probability density $p_x(x)$, by contrast, acquires a Jacobian factor under $x = g(y)$. Writing $g'(y) = s\,|g'(y)|$ with $s \in \{-1, +1\}$,

$$p_y(y) = p_x(g(y))\,|g'(y)| = s\,p_x(g(y))\,g'(y),$$

and differentiating,

$$p_y'(y) = s\,p_x'(g(y))\,\{g'(y)\}^2 + s\,p_x(g(y))\,g''(y).$$

At $y = g^{-1}(\hat{x})$ the first term vanishes but the second does not, unless $g''(y) = 0$, i.e. unless the transformation is linear. So for a nonlinear change of variable the mode of $p_y(y)$ is in general not at $g^{-1}(\hat{x})$. This is illustrated by taking $p_x(x)$ Gaussian and $x = g(y) = \ln(y) - \ln(1-y) + 5$, whose inverse is the logistic sigmoid $y = 1/(1 + \exp(-x + 5))$: a histogram of $N = 50{,}000$ samples transformed through the sigmoid shows the mode of $p_y(y)$ displaced from $g^{-1}(\hat{x})$.
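The mode shift is easy to see by simulation. A sketch (assuming, for illustration, a Gaussian $p_x(x)$ with mean 6 and unit variance; those parameters are my choice, not fixed by the exercise):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw x from an illustrative Gaussian, then push the samples through the
# logistic sigmoid y = 1/(1 + exp(-x + 5)), the inverse of
# g(y) = ln(y) - ln(1 - y) + 5.
N = 50_000
x = rng.normal(loc=6.0, scale=1.0, size=N)
y = 1.0 / (1.0 + np.exp(-x + 5.0))

# Estimate the mode of p_y(y) from a histogram of the transformed samples.
counts, edges = np.histogram(y, bins=100, density=True)
i = np.argmax(counts)
mode_y = 0.5 * (edges[i] + edges[i + 1])

# Naive prediction: push the mode x_hat = 6 through the sigmoid.
naive_y = 1.0 / (1.0 + np.exp(-6.0 + 5.0))

print(mode_y, naive_y)  # they disagree: the Jacobian factor shifts the mode
```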
Exercise 1.5. Expanding the square inside the expectation,

$$\mathbb{E}\big[(f(x) - \mathbb{E}[f(x)])^2\big] = \mathbb{E}\big[f(x)^2 - 2 f(x)\,\mathbb{E}[f(x)] + \mathbb{E}[f(x)]^2\big] = \mathbb{E}[f(x)^2] - 2\,\mathbb{E}[f(x)]^2 + \mathbb{E}[f(x)]^2 = \mathbb{E}[f(x)^2] - \mathbb{E}[f(x)]^2,$$

which is the variance $\operatorname{var}[f(x)]$.

Exercise 1.6. The covariance is $\operatorname{cov}[x, y] = \mathbb{E}[xy] - \mathbb{E}[x]\,\mathbb{E}[y]$. If $x$ and $y$ are independent, then $p(x, y) = p(x)\,p(y)$, so

$$\mathbb{E}[xy] = \sum_x \sum_y p(x)\,p(y)\,x y = \Big(\sum_x p(x)\,x\Big)\Big(\sum_y p(y)\,y\Big) = \mathbb{E}[x]\,\mathbb{E}[y],$$

and hence $\operatorname{cov}[x, y] = 0$.
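Both identities are cheap to verify by Monte Carlo (a sketch; the distributions below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=1_000_000)  # any distribution will do
y = rng.normal(size=1_000_000)       # drawn independently of x

# Exercise 1.5: var[f] = E[f^2] - E[f]^2, checked here with f(x) = x.
print(np.mean((x - x.mean()) ** 2))     # definition of the variance
print(np.mean(x ** 2) - x.mean() ** 2)  # rearranged form; same value

# Exercise 1.6: independence gives E[xy] = E[x] E[y], so the covariance
# E[xy] - E[x] E[y] vanishes (up to sampling noise here).
print(np.mean(x * y) - x.mean() * y.mean())
```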
Exercise 1.7. Write $I = \int_{-\infty}^{\infty} \exp\!\big(-x^2/2\sigma^2\big)\,dx$, so that

$$I^2 = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \exp\!\Big(-\frac{x^2 + y^2}{2\sigma^2}\Big)\,dx\,dy.$$

Changing to polar coordinates $x = r\cos\theta$, $y = r\sin\theta$, the Jacobian is

$$\frac{\partial(x, y)}{\partial(r, \theta)} = \begin{vmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{vmatrix} = r\cos^2\theta + r\sin^2\theta = r,$$

so

$$I^2 = \int_0^{2\pi}\!\int_0^{\infty} \exp\!\Big(-\frac{r^2}{2\sigma^2}\Big)\,r\,dr\,d\theta = 2\pi\Big[-\sigma^2 \exp\!\Big(-\frac{r^2}{2\sigma^2}\Big)\Big]_0^{\infty} = 2\pi\sigma^2,$$

giving $I = (2\pi\sigma^2)^{1/2}$. Finally, shifting the integration variable $x \to x - \mu$ leaves the integral unchanged, so

$$\int_{-\infty}^{\infty} \mathcal{N}(x \mid \mu, \sigma^2)\,dx = \frac{1}{(2\pi\sigma^2)^{1/2}}\,I = 1,$$

and the Gaussian is correctly normalized.
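Numerical quadrature confirms the result (a sketch using scipy; the values of mu and sigma are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = 0.7, 1.3  # arbitrary illustrative parameters

# I = integral over the real line of exp(-x^2 / (2 sigma^2)).
I, _ = quad(lambda x: np.exp(-x**2 / (2 * sigma**2)), -np.inf, np.inf)
print(I, np.sqrt(2 * np.pi * sigma**2))  # both equal (2 pi sigma^2)^{1/2}

# Hence the Gaussian density N(x | mu, sigma^2) integrates to one.
def gauss(x):
    return np.exp(-(x - mu) ** 2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

total, _ = quad(gauss, -np.inf, np.inf)
print(total)  # 1.0
```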
Reinforcement Learning: An Introduction, the authoritative textbook on reinforcement learning by Richard Sutton and Andrew Barto.
Contents
Preface
Series Foreword
Summary of Notation
I. The Problem
1. Introduction
1.1 Reinforcement Learning
1.2 Examples
1.3 Elements of Reinforcement Learning
1.4 An Extended Example: Tic-Tac-Toe
1.5 Summary
1.6 History of Reinforcement Learning
1.7 Bibliographical Remarks
2. Evaluative Feedback
2.1 An n-Armed Bandit Problem
2.2 Action-Value Methods
2.3 Softmax Action Selection
2.4 Evaluation Versus Instruction
2.5 Incremental Implementation
2.6 Tracking a Nonstationary Problem
2.7 Optimistic Initial Values
2.8 Reinforcement Comparison
2.9 Pursuit Methods
2.10 Associative Search
2.11 Conclusions
2.12 Bibliographical and Historical Remarks
3. The Reinforcement Learning Problem
3.1 The Agent-Environment Interface
3.2 Goals and Rewards
3.3 Returns
3.4 Unified Notation for Episodic and Continuing Tasks
3.5 The Markov Property
3.6 Markov Decision Processes
3.7 Value Functions
3.8 Optimal Value Functions
3.9 Optimality and Approximation
3.10 Summary
3.11 Bibliographical and Historical Remarks
II. Elementary Solution Methods
4. Dynamic Programming
4.1 Policy Evaluation
4.2 Policy Improvement
4.3 Policy Iteration
4.4 Value Iteration
4.5 Asynchronous Dynamic Programming
4.6 Generalized Policy Iteration
4.7 Efficiency of Dynamic Programming
4.8 Summary
4.9 Bibliographical and Historical Remarks
5. Monte Carlo Methods
5.1 Monte Carlo Policy Evaluation
5.2 Monte Carlo Estimation of Action Values
5.3 Monte Carlo Control
5.4 On-Policy Monte Carlo Control
5.5 Evaluating One Policy While Following Another
5.6 Off-Policy Monte Carlo Control
5.7 Incremental Implementation
5.8 Summary
5.9 Bibliographical and Historical Remarks
6. Temporal-Difference Learning
6.1 TD Prediction
6.2 Advantages of TD Prediction Methods
6.3 Optimality of TD(0)
6.4 Sarsa: On-Policy TD Control
6.5 Q-Learning: Off-Policy TD Control
6.6 Actor-Critic Methods
6.7 R-Learning for Undiscounted Continuing Tasks
6.8 Games, Afterstates, and Other Special Cases