TD Algorithms Explained in Detail, with Implementations (Sarsa, n-step Sarsa, Q-learning): One Article to See It All Clearly!
Disclaimer: the content of this series comes from Professor Shiyu Zhao's Mathematical Foundations of Reinforcement Learning course. This post shares personal study notes for non-commercial purposes; if there is any infringement, the post will be removed. Original link: https://github.com/MathFoundationRL/Book-Mathmatical-Foundation-of-Reinforcement-Learning
In the previous section we saw that the Robbins-Monro (RM) algorithm solves the expectation-estimation problem below. In this section we make the problem slightly more complicated.
Consider the computation of the following expectation:
Suppose we can obtain samples of the random variables $R, X$; then we can define the following function:
In essence, this simply turns the single random variable from before into a function of several random variables. As we show below, this example is already very close in form to the TD algorithm.
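A hedged reconstruction of the setup being described, following the book's standard formulation (the exact symbols here are my own notation, not necessarily the author's): we want to estimate $w = \mathbb{E}[R + \gamma v(X)]$ from samples, where $v(\cdot)$ is a known function and $\gamma$ is a constant. Defining $g(w)$ and applying the RM update gives

$$ g(w) = w - \mathbb{E}\big[R + \gamma v(X)\big], \qquad w_{k+1} = w_k - \alpha_k \Big[\, w_k - \big(r_k + \gamma v(x_k)\big) \Big], $$

where $(r_k, x_k)$ is the $k$-th sample of $(R, X)$.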
TD learning of state values
In this section, the TD algorithm refers specifically to the classic TD algorithm for estimating state values.
Algorithm
This expectation form of the state value is sometimes called the Bellman expectation equation. It is simply another way of writing the Bellman equation, and it is an important tool for designing and analyzing TD algorithms. How do we solve this equation? This is where the ideas from the previous sections come in: the RM algorithm and the spirit of stochastic gradient descent. Note that each update changes the value of only one state at a time.
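A hedged reconstruction of the Bellman expectation equation being discussed (standard form from the book; the notation is my assumption):

$$ v_\pi(s) = \mathbb{E}\big[ R_{t+1} + \gamma\, v_\pi(S_{t+1}) \,\big|\, S_t = s \big], \qquad s \in \mathcal{S}. $$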
Derivation
By applying the RM algorithm to solve the Bellman equation, we can derive the TD algorithm.
Given an understanding of the RM algorithm, the derivation above is not hard to follow. In addition, the RM solution for the state value, Equation (7.5), relies on two assumptions worth examining:
Clearly, neither assumption 1) nor 2) can be satisfied in practice. For 1), real data come as episodes, and $s$ is merely one of the states that happen to be sampled; we cannot guarantee that every episode starts from $s$. For 2), how could we know the values of the other states in advance? After all, the value of every state is itself being estimated by the algorithm.
How can we remove these two assumptions from the RM algorithm?
That is, the RM algorithm uses data that start from $s$, taking the $k$-th sample of $s^\prime$, namely $s^\prime_k$, as the next state. This does not make full use of all the states for value estimation: the data we actually have are whole episodes, and using only the beginning of each episode is not workable.
The following point clarifies the difference between $k$ and $t$:
Therefore, we modify the algorithm to use sequential sample data of the form $\{(s_t, r_{t+1}, s_{t+1})\}$.
The other modification, concerning the state values, hardly needs explaining: the state values maintained during the updates are not yet the policy's state-value function; one is a work in progress, the other is the finished quantity.
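To make the update concrete, here is a minimal sketch of TD(0) state-value estimation on samples $(s_t, r_{t+1}, s_{t+1})$ (my own illustration, not the book's code; the `episodes` format and the names `alpha` and `gamma` are assumptions):

```python
import numpy as np

def td0_state_values(episodes, num_states, alpha=0.1, gamma=0.9):
    """TD(0) estimation of state values from sampled episodes.

    Each episode is assumed to be a list of (s_t, r_{t+1}, s_{t+1}) tuples
    with integer state indices.
    """
    v = np.zeros(num_states)                    # initial guess of the state values
    for episode in episodes:
        for s, r, s_next in episode:
            td_target = r + gamma * v[s_next]   # r_{t+1} + gamma * v(s_{t+1})
            td_error = v[s] - td_target         # deficiency between estimate and target
            v[s] -= alpha * td_error            # update only the visited state s_t
    return v
```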
Properties
First, let's take a closer look at the expression of the TD algorithm:
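A hedged reconstruction of that expression in its standard form (the notation is my assumption):

$$ v_{t+1}(s_t) = v_t(s_t) - \alpha_t(s_t)\Big[\, v_t(s_t) - \underbrace{\big(r_{t+1} + \gamma\, v_t(s_{t+1})\big)}_{\text{TD target}} \Big], $$

where the bracketed difference $v_t(s_t) - \big(r_{t+1} + \gamma v_t(s_{t+1})\big)$ is the TD error, and every state other than $s_t$ keeps its current value.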
Second, the TD algorithm has its limitations: it can only estimate state values. To find an optimal policy, we still need to go on to compute action values and then perform policy improvement. (Of course, when the system model is unknown, we can estimate action values directly.)
Third, both TD and MC are model-free. What are the advantages and disadvantages of TD compared with MC? The table below shows them:
(PS: personally I think this table is very important and worth revisiting repeatedly!)
- It distinguishes online learning from offline learning
- It distinguishes continuing tasks from episodic tasks
- It distinguishes bootstrapping from non-bootstrapping methods
- It compares the variance of MC and TD and explains where the difference comes from. Hats off to Professor Shiyu Zhao!
Convergence analysis
Omitted here.
TD learning of action values: Sarsa
Algorithm
The Sarsa algorithm is used to estimate action values.
At step $t$, only the value of the current state-action pair is updated.
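A hedged reconstruction of the standard Sarsa update being referred to (the notation is my assumption):

$$ q_{t+1}(s_t, a_t) = q_t(s_t, a_t) - \alpha_t(s_t, a_t)\Big[\, q_t(s_t, a_t) - \big(r_{t+1} + \gamma\, q_t(s_{t+1}, a_{t+1})\big) \Big], $$

while the q-values of all other state-action pairs remain unchanged.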
- Where does the name Sarsa come from?
  - Sarsa is the abbreviation of state-action-reward-state-action.
- What is the relationship between Sarsa and TD?
  - We can obtain Sarsa by replacing the state value estimate v(s) in the TD algorithm with the action value estimate q(s,a). As a result, Sarsa is nothing but an action-value version of the TD algorithm.
- Sarsa is a stochastic approximation algorithm solving the following equation:
  - The equation Sarsa solves is the Bellman equation expressed in terms of action values.
- The convergence of Sarsa places a requirement on the learning rate $\alpha$:
Implementation
Let's first sort out the idea. The goal of an RL algorithm is to find an optimal policy, while Sarsa on its own only estimates the action values of a given policy. Inspired by model-based policy iteration, we can bring policy iteration into Sarsa and search for an optimal policy in two steps: the first is to update the q-value of the visited state-action pair; the second is to update the policy to an ε-greedy one (see the sketch after these notes). Two points are worth noting:
First: value iteration and the model-based policy iteration methods seen earlier update the values of all states in every iteration, whereas Sarsa updates the value of only a single state-action pair at each step, and then immediately updates the policy at state $s_t$, even though this value estimate may not yet be accurate.
Second: the policy update is ε-greedy, which ensures that all state-action pairs can be visited.
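A hedged sketch of this procedure (my own illustration, not the book's reference implementation; the `env.reset()`/`env.step()` interface and the parameter names `alpha`, `gamma`, `epsilon` are assumptions):

```python
import numpy as np

def sarsa(env, num_states, num_actions, num_episodes=500,
          alpha=0.1, gamma=0.9, epsilon=0.1):
    """Sarsa with epsilon-greedy policy improvement.

    Assumes env.reset() -> s and env.step(a) -> (s_next, r, done),
    with integer state and action indices.
    """
    q = np.zeros((num_states, num_actions))

    def eps_greedy(s):
        # the policy is epsilon-greedy with respect to the current q-values
        if np.random.rand() < epsilon:
            return np.random.randint(num_actions)
        return int(np.argmax(q[s]))

    for _ in range(num_episodes):
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = eps_greedy(s_next)                       # sample a_{t+1} before updating
            td_target = r if done else r + gamma * q[s_next, a_next]
            q[s, a] -= alpha * (q[s, a] - td_target)          # update only (s_t, a_t)
            s, a = s_next, a_next                             # policy at s_t is refreshed implicitly
    return q
```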
Let's look at an example; seeing is believing!
The experimental setup is as follows:
The figure below shows the policy that Sarsa finally computes. Note that this policy can successfully reach the target state from the starting state. However, the policies for other states may not be optimal. That is because the other states are not of interest and every episode ends when the target state is reached. If we need to find the optimal policies for all states, we should ensure all the states are visited sufficiently many times, for example, by starting episodes from different states. (In plain words: only the designated starting state matters here, and when other states are taken as starting states the resulting policy is not necessarily optimal. To find optimal policies for all states, we would need to start episodes from every state and collect sufficiently many state-action pairs as training data.)
The upper-right figure shows that the total reward of each episode gradually increases, and the lower-right figure shows that the episode length gradually decreases.
TD learning of action values: Expected Sarsa
Principle
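A hedged reconstruction of the standard Expected Sarsa update discussed in this subsection (the notation is my assumption):

$$ q_{t+1}(s_t, a_t) = q_t(s_t, a_t) - \alpha_t(s_t, a_t)\Big[\, q_t(s_t, a_t) - \big(r_{t+1} + \gamma\, \mathbb{E}_{a \sim \pi_t(\cdot \mid s_{t+1})}\big[q_t(s_{t+1}, a)\big]\big) \Big], $$

that is, the TD target averages the next q-values under the current policy instead of using a sampled next action.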
Implementation
In terms of implementation, apart from the different q-value update, everything else is essentially the same as Sarsa.
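A hedged sketch of that single differing step (my own illustration; the ε-greedy policy form and the parameter names are assumptions):

```python
import numpy as np

def expected_sarsa_update(q, s, a, r, s_next, done,
                          alpha=0.1, gamma=0.9, epsilon=0.1):
    """One Expected Sarsa update: the target takes the expectation of q(s', .)
    under the epsilon-greedy policy at s_next instead of a sampled next action."""
    num_actions = q.shape[1]
    probs = np.full(num_actions, epsilon / num_actions)   # epsilon-greedy action probabilities
    probs[int(np.argmax(q[s_next]))] += 1.0 - epsilon
    expected_q = float(np.dot(probs, q[s_next]))          # E_{a ~ pi(.|s')}[ q(s', a) ]
    td_target = r if done else r + gamma * expected_q
    q[s, a] -= alpha * (q[s, a] - td_target)
    return q
```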
Experiment
TD learning of action values: n-step Sarsa
The main difference from Sarsa is how far the discounted return is expanded (the different decomposition structure):
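A hedged reconstruction of the standard n-step target (the notation is my assumption):

$$ G_t^{(n)} = r_{t+1} + \gamma r_{t+2} + \cdots + \gamma^{n-1} r_{t+n} + \gamma^{n} q(s_{t+n}, a_{t+n}), $$

which reduces to the Sarsa target when $n = 1$ and approaches the MC return as $n \to \infty$.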
Implementation
Key points:
However, we can wait until time $t+n$ to update the q-value of $(s_t, a_t)$. To that end, (7.14) can be modified to the n-step form sketched below.
The difference is that, by waiting $n$ steps before computing the q-value, we make the estimate a bit more accurate.
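A hedged sketch of this delayed n-step update (my own illustration; the helper names and data format are assumptions):

```python
import numpy as np

def n_step_sarsa_update(q, s_t, a_t, rewards, s_tn, a_tn, alpha=0.1, gamma=0.9):
    """Update q(s_t, a_t) at time t+n using the n-step Sarsa target
    r_{t+1} + g*r_{t+2} + ... + g^{n-1}*r_{t+n} + g^n * q(s_{t+n}, a_{t+n}).

    `rewards` is the list [r_{t+1}, ..., r_{t+n}] collected while waiting n steps,
    and (s_tn, a_tn) is the state-action pair reached at time t+n.
    """
    target = sum((gamma ** i) * r for i, r in enumerate(rewards))   # discounted n-step reward sum
    target += (gamma ** len(rewards)) * q[s_tn, a_tn]               # bootstrap from q(s_{t+n}, a_{t+n})
    q[s_t, a_t] -= alpha * (q[s_t, a_t] - target)
    return q
```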
Summary
- Specifically, if n is selected to be a large number, its performance is close to MC learning and hence has a large variance but a small bias. If n is selected to be small, its performance is close to Sarsa and hence has a relatively large bias due to the initial guess and relatively low variance.
- Finally, n-step Sarsa is also for policy evaluation. It can be combined with the policy improvement step to search for optimal policies.
TD learning of optimal action values: Q-learning (the big gun!)
Sarsa can only estimate the action values of each state-action pair and has to rely on the policy-iteration scheme to obtain optimal action values. By contrast, Q-learning can directly estimate optimal action values.
Algorithm
Q: What data does the Q-learning update require?
A: In any case, it does not need $a_{t+1}$.
Q-learning is in fact a stochastic approximation method for solving the following equation:
It is easy to show that this equation is exactly the Bellman optimality equation:
Expanding the expression above gives:
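As a hedged reconstruction following the book's standard formulation (the notation is my assumption), the equation Q-learning solves is

$$ q(s, a) = \mathbb{E}\Big[ R_{t+1} + \gamma \max_{a^\prime} q(S_{t+1}, a^\prime) \,\Big|\, S_t = s,\, A_t = a \Big], $$

which is the Bellman optimality equation expressed in terms of action values.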
Implementation
Although Q-learning is off-policy (this is determined by the algorithm itself), it can work either online or offline, which is a matter of implementation.
The on-policy version is the same as Sarsa except for the TD target in the q-value update step. In this case, the behavior policy is the same as the target policy, which is an ε-greedy policy.
In the off-policy version, the episode is generated by the behavior policy πb, which can be any policy. If we would like to sample experiences for every state-action pair, πb should be selected to be exploratory (that is, a policy with sufficient exploration, such as an ε-greedy policy with a large ε).
Here, the target policy πT is greedy rather than ε-greedy. That is because the target policy is not used to generate episodes and hence is not required to be exploratory.
Moreover, the off-policy version of Q-learning presented here is offline. That is because all the experience samples are collected first and then processed to estimate an optimal target policy. Of course, it can be modified to become online. Once a sample is collected, the sample can be used to update the q-value and target policy immediately. Nevertheless, the updated target policy is not used to generate new samples. (In the online version of Q-learning described here, the updated policy is still not used to generate new samples for further iterations; if it were, the algorithm would become on-policy.)
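A hedged sketch of the off-policy (offline) version described above (my own illustration, not the book's code; the episode format and names are assumptions):

```python
import numpy as np

def q_learning_offline(episode, num_states, num_actions, alpha=0.1, gamma=0.9):
    """Off-policy Q-learning from a single episode generated by any behavior policy.

    `episode` is assumed to be a list of (s_t, a_t, r_{t+1}, s_{t+1}) tuples.
    Returns the estimated optimal q-values and the greedy target policy.
    """
    q = np.zeros((num_states, num_actions))
    for s, a, r, s_next in episode:
        td_target = r + gamma * np.max(q[s_next])   # max over a', no sampled a_{t+1} needed
        q[s, a] -= alpha * (q[s, a] - td_target)    # update only the visited (s, a)
    target_policy = np.argmax(q, axis=1)            # greedy (not epsilon-greedy) target policy
    return q, target_policy
```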
Examples (illustrated)
Figure 7.2: Examples to demonstrate off-policy learning by Q-learning. The optimal policy and optimal state values are shown in (a) and (b), respectively. The behavior policy and the generated single episode are shown in (c) and (d), respectively. The estimated policy and the estimation error evolution are shown in (e) and (f), respectively. The cases with different initial q-values are shown in (g) and (h), respectively.
Because there are multiple optimal policies, the policy learned by Q-learning in (e) is not identical to the optimal policy shown in (a).
Because Q-learning bootstraps, the choice of initial q-values is closely tied to the convergence speed, as the difference between (g) and (h) shows.
Figure 7.3: Performance of Q-learning given different non-exploratory behavior policies. The figures in the left column show the behavior policies. The figures in the middle column show the generated episodes following the corresponding behavior policies. The episode in every example has 100,000 steps. The figures in the right column show the evolution of the root-mean-squared error of the estimated state values, which are calculated from the estimated action values. (In these figures, the middle column shows the trajectories generated by following the behavior policies in the left column.)
We can see that when the behavior policy is not exploratory (in Figure 7.3 it is fixed), the learning performance degrades considerably; for example, the state-value error is very sensitive to how much $\epsilon$ is shrunk.
A unified viewpoint
It is worth mentioning that there are some methods to convert an on-policy learning algorithm to off-policy. Importance sampling is a widely used method [1].
Moreover, there are various extensions of the TD algorithms introduced in this chapter. For example, the TD(λ) method provides a more general and unified framework for TD learning. The state-value TD algorithm introduced earlier in this chapter is simply TD(0), a special case of TD(λ). Details of TD(λ) can be found in [2].
**Note: Q-learning is off-policy!!!**
Q&A
- Q: What does "temporal difference" mean in TD learning?
  - Every TD algorithm has a TD error, which represents the deficiency between the new sample and the current estimates. This deficiency is between two different time steps and is hence called a temporal difference.
- Q: TD learning algorithms only estimate action values; how can they be used to find an optimal policy?
  - To obtain an optimal policy, the value estimation process should interact with the policy update process. That is, after a value is updated, the corresponding policy should be updated. Then, a new sample generated by the updated policy is used to update values again. This is the idea of generalized policy iteration.
- Q: The convergence of TD learning requires the learning rate to decrease gradually to zero; why is the learning rate set to a constant in practice?
  - The fundamental reason is that the policy to be evaluated is nonstationary. A TD learning algorithm like Sarsa aims to estimate the action values of a given fixed policy; in other words, it aims to evaluate the given policy. If the policy is stationary, the learning rate can decrease to zero gradually to ensure convergence with probability 1, as stated in Theorem 7.1. (In the policy-evaluation setting, the policy is assumed to be fixed.)
  - However, when we put Sarsa in the context of optimal policy searching, the value estimation process keeps interacting with the policy update process. Therefore, the policy that Sarsa aims to evaluate keeps changing. In this case, we need to use a constant learning rate because, every time we have a new policy to evaluate, the learning rate cannot be too small. The drawback of a constant learning rate is that the value estimate may still fluctuate eventually. However, as long as the constant learning rate is sufficiently small, the fluctuation will not jeopardize convergence. (In the optimal-policy-search setting the policy keeps being updated, and for each new policy the learning rate cannot be too small. The downside is that the value estimates may keep fluctuating; this is handled by choosing a sufficiently small constant learning rate.)
- Q: Should we estimate the optimal policies for all states or only for a subset of states?
  - It depends on the task. One may have noticed that some tasks in the illustrative examples in this chapter are not meant to find the optimal policies for all states. Instead, they only need to find a good path from a specific starting state to the target state. Such a task is not data demanding because we do not need to visit every state-action pair sufficiently many times. As shown in the examples in Figure 7.1, even if some states are not visited, we can still obtain a good path.
  - It must be noted, however, that the path is only good but not guaranteed to be optimal. That is because not all state-action pairs have been explored and there may be better paths left unexplored. Nevertheless, given sufficient data, there is a high probability of finding a good or locally optimal path. By locally optimal, we mean the path is optimal within a neighborhood of the path. (The concept of "locally optimal" is worth understanding carefully!)
- Q: Why is Q-learning off-policy while the other TD algorithms are on-policy? What is the fundamental reason?
  - Q-learning aims to solve the Bellman optimality equation. The other TD algorithms are on-policy because they aim to solve the Bellman equation of a given policy.
  - Since the Bellman optimality equation does not depend on any specific policy, it can use experience samples generated by any other policy to estimate the optimal policy. However, the Bellman equation depends on a given policy, so solving it of course requires experience samples generated by that policy. (So the underlying reason is the difference in mathematical form between the Bellman optimality equation and the Bellman equation!)
1. T. C. Hesterberg, Advances in importance sampling. PhD Thesis, Stanford University, 1988.
2. R. S. Sutton, "Learning to predict by the methods of temporal differences," Machine Learning, vol. 3, no. 1, pp. 9–44, 1988.