
Stable-Baselines 3 Source Code Walkthrough, Part 3: ppo.py

This post walks through part of the Stable-Baselines 3 source code: ppo.py. I hope it is useful; if you spot mistakes or things I have overlooked, corrections are very welcome.

Stable-Baselines 3 Source Code Walkthrough: ./ppo/ppo.py

Preface

Reading the PPO-related source code to see how the library builds the PPO algorithm and its various tricks, as a reference for my own re-implementation.

Jumping through the definitions in PyCharm, you can see that the PPO class is the last link in the inheritance chain from the base classes, and this .py file is where it is defined.

So this is where the source reading starts. :)

Imported packages

import warnings
from typing import Any, Dict, Optional, Type, TypeVar, Union

import numpy as np
import torch as th
from gym import spaces
from torch.nn import functional as F

from stable_baselines3.common.on_policy_algorithm import OnPolicyAlgorithm
from stable_baselines3.common.policies import ActorCriticCnnPolicy, ActorCriticPolicy, BasePolicy, MultiInputActorCriticPolicy
from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule
from stable_baselines3.common.utils import explained_variance, get_schedule_fn

The PPO class

This is the user-facing, top-level PPO, i.e. the class you instantiate and call directly.

In the docstring the authors cite the PPO paper, the codebases this implementation borrows from, and an introductory article on PPO.

policy, env and learning_rate are the same as in the base class (base_class.py).

n_steps is the number of steps to run in each environment before every update. The docstring points out that the rollout buffer size is n_steps * n_envs: if several copies of the environment are run in parallel for training, the buffer collects n_steps transitions from each of the n_envs environments.

batch_size is the mini-batch size used when iterating over the collected rollout buffer.

gamma, gae_lambda, clip_range and clip_range_vf all have default values; they are, respectively, the discount factor, the GAE factor that trades off bias and variance, the clipping range for the policy ratio in the surrogate objective, and the clipping range for the value-function update.

normalize_advantage is a flag indicating whether the advantages should be normalized.

ent_coef and vf_coef are the entropy and value-function coefficients in the loss computation.

max_grad_norm is the maximum gradient norm used for gradient clipping.

use_sde and sde_sample_freq control generalized State Dependent Exploration (gSDE), which only applies to continuous action spaces; they are the same as in the base class (base_class.py).

target_kl limits how much the KL divergence can grow between updates, since the clipping alone cannot prevent excessively large updates.

The remaining parameters are the same as in the base class (base_class.py).
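
Before going through the class line by line, here is a minimal usage sketch showing where the constructor arguments described above end up. The values are just the defaults listed above, and "CartPole-v1" is only a placeholder environment.

from stable_baselines3 import PPO

# Build the user-facing PPO class from a string policy alias and
# (a subset of) the hyperparameters described above
model = PPO(
    "MlpPolicy",
    "CartPole-v1",        # any registered Gym env id, or an env instance
    learning_rate=3e-4,
    n_steps=2048,         # rollout length per environment copy
    batch_size=64,        # mini-batch size for the surrogate updates
    n_epochs=10,          # passes over the rollout buffer per update
    gamma=0.99,
    gae_lambda=0.95,
    clip_range=0.2,
    ent_coef=0.0,
    vf_coef=0.5,
    max_grad_norm=0.5,
    verbose=1,
)

# learn() (defined at the bottom of ppo.py) runs the collect/train loop
model.learn(total_timesteps=50_000)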

# Type variable used for the return type of learn(); in the real source file
# it is defined right after the imports, and it is needed for the annotations below
SelfPPO = TypeVar("SelfPPO", bound="PPO")


class PPO(OnPolicyAlgorithm):
    """
    Proximal Policy Optimization algorithm (PPO) (clip version)

    Paper: https://arxiv.org/abs/1707.06347
    Code: This implementation borrows code from OpenAI Spinning Up (https://github.com/openai/spinningup/)
    https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail and
    Stable Baselines (PPO2 from https://github.com/hill-a/stable-baselines)

    Introduction to PPO: https://spinningup.openai.com/en/latest/algorithms/ppo.html

    :param policy: The policy model to use (MlpPolicy, CnnPolicy, ...)
    :param env: The environment to learn from (if registered in Gym, can be str)
    :param learning_rate: The learning rate, it can be a function
        of the current progress remaining (from 1 to 0)
    :param n_steps: The number of steps to run for each environment per update
        (i.e. rollout buffer size is n_steps * n_envs where n_envs is number of environment copies running in parallel)
        NOTE: n_steps * n_envs must be greater than 1 (because of the advantage normalization)
        See https://github.com/pytorch/pytorch/issues/29372
    :param batch_size: Minibatch size
    :param n_epochs: Number of epoch when optimizing the surrogate loss
    :param gamma: Discount factor
    :param gae_lambda: Factor for trade-off of bias vs variance for Generalized Advantage Estimator
    :param clip_range: Clipping parameter, it can be a function of the current progress
        remaining (from 1 to 0).
    :param clip_range_vf: Clipping parameter for the value function,
        it can be a function of the current progress remaining (from 1 to 0).
        This is a parameter specific to the OpenAI implementation. If None is passed (default),
        no clipping will be done on the value function.
        IMPORTANT: this clipping depends on the reward scaling.
    :param normalize_advantage: Whether to normalize or not the advantage
    :param ent_coef: Entropy coefficient for the loss calculation
    :param vf_coef: Value function coefficient for the loss calculation
    :param max_grad_norm: The maximum value for the gradient clipping
    :param use_sde: Whether to use generalized State Dependent Exploration (gSDE)
        instead of action noise exploration (default: False)
    :param sde_sample_freq: Sample a new noise matrix every n steps when using gSDE
        Default: -1 (only sample at the beginning of the rollout)
    :param target_kl: Limit the KL divergence between updates,
        because the clipping is not enough to prevent large update
        see issue #213 (cf https://github.com/hill-a/stable-baselines/issues/213)
        By default, there is no limit on the kl div.
    :param tensorboard_log: the log location for tensorboard (if None, no logging)
    :param policy_kwargs: additional arguments to be passed to the policy on creation
    :param verbose: Verbosity level: 0 for no output, 1 for info messages (such as device or wrappers used), 2 for
        debug messages
    :param seed: Seed for the pseudo random generators
    :param device: Device (cpu, cuda, ...) on which the code should be run.
        Setting it to auto, the code will be run on the GPU if possible.
    :param _init_setup_model: Whether or not to build the network at the creation of the instance
    """

    # The three policy classes that can be selected by string alias for PPO
    # "MultiInputPolicy" is the one to use for Dict observation spaces
    policy_aliases: Dict[str, Type[BasePolicy]] = {
        "MlpPolicy": ActorCriticPolicy,
        "CnnPolicy": ActorCriticCnnPolicy,
        "MultiInputPolicy": MultiInputActorCriticPolicy,
    }
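
As a quick aside, you can verify what these aliases resolve to; a small sketch, assuming gym's "CartPole-v1" is available:

from stable_baselines3 import PPO
from stable_baselines3.common.policies import ActorCriticPolicy

# Passing the string alias "MlpPolicy" makes PPO build an ActorCriticPolicy
model = PPO("MlpPolicy", "CartPole-v1")
assert isinstance(model.policy, ActorCriticPolicy)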

    # The default values below are a handy reference when tuning hyperparameters
    def __init__(
        self,
        policy: Union[str, Type[ActorCriticPolicy]],
        env: Union[GymEnv, str],
        learning_rate: Union[float, Schedule] = 3e-4,
        n_steps: int = 2048,
        batch_size: int = 64,
        n_epochs: int = 10,
        gamma: float = 0.99,
        gae_lambda: float = 0.95,
        clip_range: Union[float, Schedule] = 0.2,
        clip_range_vf: Union[None, float, Schedule] = None,
        normalize_advantage: bool = True,
        ent_coef: float = 0.0,
        vf_coef: float = 0.5,
        max_grad_norm: float = 0.5,
        use_sde: bool = False,
        sde_sample_freq: int = -1,
        target_kl: Optional[float] = None,
        tensorboard_log: Optional[str] = None,
        policy_kwargs: Optional[Dict[str, Any]] = None,
        verbose: int = 0,
        seed: Optional[int] = None,
        device: Union[th.device, str] = "auto",
        _init_setup_model: bool = True,
    ):
        super().__init__(
            policy,
            env,
            learning_rate=learning_rate,
            n_steps=n_steps,
            gamma=gamma,
            gae_lambda=gae_lambda,
            ent_coef=ent_coef,
            vf_coef=vf_coef,
            max_grad_norm=max_grad_norm,
            use_sde=use_sde,
            sde_sample_freq=sde_sample_freq,
            tensorboard_log=tensorboard_log,
            policy_kwargs=policy_kwargs,
            verbose=verbose,
            device=device,
            seed=seed,
            _init_setup_model=False,
            supported_action_spaces=(
                spaces.Box,
                spaces.Discrete,
                spaces.MultiDiscrete,
                spaces.MultiBinary,
            ),
        )

        # Sanity check: when advantage normalization is enabled, batch_size must be greater than 1
        # Sanity check, otherwise it will lead to noisy gradient and NaN
        # because of the advantage normalization
        if normalize_advantage:
            assert (
                batch_size > 1
            ), "`batch_size` must be greater than 1. See https://github.com/DLR-RM/stable-baselines3/issues/440"

        if self.env is not None:
            # Check that `n_steps * n_envs > 1` to avoid NaN
            # when doing advantage normalization
            buffer_size = self.env.num_envs * self.n_steps
            # If the buffer size is 1 while normalize_advantage is requested,
            # raise an error reporting the current n_steps and the number of environments
            assert buffer_size > 1 or (
                not normalize_advantage
            ), f"`n_steps * n_envs` must be greater than 1. Currently n_steps={self.n_steps} and n_envs={self.env.num_envs}"
            # The rollout buffer size should be a multiple of the mini-batch size
            # (i.e. evenly divisible), so it can be consumed in equal-sized chunks (my understanding)
            # Check that the rollout buffer size is a multiple of the mini-batch size
            untruncated_batches = buffer_size // batch_size
            # If it does not divide evenly, emit a warning
            if buffer_size % batch_size > 0:
                warnings.warn(
                    f"You have specified a mini-batch size of {batch_size},"
                    f" but because the `RolloutBuffer` is of size `n_steps * n_envs = {buffer_size}`,"
                    f" after every {untruncated_batches} untruncated mini-batches,"
                    f" there will be a truncated mini-batch of size {buffer_size % batch_size}\n"
                    f"We recommend using a `batch_size` that is a factor of `n_steps * n_envs`.\n"
                    f"Info: (n_steps={self.n_steps} and n_envs={self.env.num_envs})"
                )
        self.batch_size = batch_size
        self.n_epochs = n_epochs
        self.clip_range = clip_range
        self.clip_range_vf = clip_range_vf
        self.normalize_advantage = normalize_advantage
        self.target_kl = target_kl

        if _init_setup_model:
            self._setup_model()
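
Two of the sanity checks in __init__ are easy to reproduce in isolation. A small sketch with made-up numbers: normalizing a single-element tensor yields NaN (hence the batch_size > 1 and n_steps * n_envs > 1 requirements), and a batch_size that does not divide n_steps * n_envs leaves a truncated mini-batch at the end of every pass.

import torch as th

# Why batch_size > 1 is required: the (unbiased) std of a single element is NaN,
# so the normalized advantage would be NaN as well
adv = th.tensor([1.0])
print((adv - adv.mean()) / (adv.std() + 1e-8))  # tensor([nan])

# Why batch_size should divide n_steps * n_envs: otherwise the last
# mini-batch of each pass is truncated (here 8192 = 81 * 100 + 92)
n_steps, n_envs, batch_size = 2048, 4, 100
buffer_size = n_steps * n_envs
print(buffer_size // batch_size, buffer_size % batch_size)  # 81 92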

    def _setup_model(self) -> None:
        super()._setup_model()

        # Transform (if needed) learning rate and clip range (for PPO) to callable.
        # Turn the clipping ranges passed in into callables (schedules)
        # Initialize schedules for policy/value clipping
        self.clip_range = get_schedule_fn(self.clip_range)
        # Check that self.clip_range_vf has a valid type and is positive
        if self.clip_range_vf is not None:
            if isinstance(self.clip_range_vf, (float, int)):
                assert self.clip_range_vf > 0, "`clip_range_vf` must be positive, " "pass `None` to deactivate vf clipping"

            self.clip_range_vf = get_schedule_fn(self.clip_range_vf)
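
A note on these schedules: get_schedule_fn turns a constant into a function of the remaining progress (which goes from 1 down to 0 over training) and leaves an already-callable schedule untouched. A small sketch of that behaviour, plus how a user could pass a decaying clip_range (the linear decay is just an illustration, not something this file defines):

from stable_baselines3 import PPO
from stable_baselines3.common.utils import get_schedule_fn

# A constant becomes a schedule that ignores the progress argument
clip_fn = get_schedule_fn(0.2)
print(clip_fn(1.0), clip_fn(0.5))  # 0.2 0.2

# clip_range (like learning_rate) can also be given directly as a callable
# of the remaining progress, e.g. a linear decay from 0.2 towards 0
model = PPO("MlpPolicy", "CartPole-v1",
            clip_range=lambda progress_remaining: 0.2 * progress_remaining)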

    def train(self) -> None:
        """
        Update policy using the currently gathered rollout buffer.
        """
        # Put the policy into training mode; this matters for layers like batch norm and dropout
        # Switch to train mode (this affects batch norm / dropout)
        self.policy.set_training_mode(True)
        # Update the learning rate, in case it is a schedule of the current progress
        # Update optimizer learning rate
        self._update_learning_rate(self.policy.optimizer)
        # Evaluate the clipping schedules at the current remaining progress;
        # both clip_range (policy) and clip_range_vf (value function) may change over training
        # Compute current clip range
        clip_range = self.clip_range(self._current_progress_remaining)
        # Optional: clip range for the value function
        if self.clip_range_vf is not None:
            clip_range_vf = self.clip_range_vf(self._current_progress_remaining)

        # Initialize the logging lists:
        # entropy loss, policy gradient loss, value loss and clip fraction
        entropy_losses = []
        pg_losses, value_losses = [], []
        clip_fractions = []

        # continue_training stays True while training should keep going (used for early stopping)
        continue_training = True
        # train for n_epochs epochs
        # self.n_epochs is the number of passes over the rollout buffer
        for epoch in range(self.n_epochs):
            # Record the approximate KL divergence values
            approx_kl_divs = []
            # Do a complete pass on the rollout buffer
            # Split the rollout buffer into mini-batches of size batch_size and loop over them
            for rollout_data in self.rollout_buffer.get(self.batch_size):
                # Get the actions of this mini-batch
                actions = rollout_data.actions
                # For discrete action spaces, convert the actions from float to long and flatten them
                if isinstance(self.action_space, spaces.Discrete):
                    # Convert discrete action from float to long
                    actions = rollout_data.actions.long().flatten()

                # If generalized state-dependent exploration (gSDE) is used,
                # re-sample the exploration noise
                # Re-sample the noise matrix because the log_std has changed
                if self.use_sde:
                    self.policy.reset_noise(self.batch_size)

                # Evaluate the current policy on the stored observations and actions to get values, log-probabilities and entropy
                values, log_prob, entropy = self.policy.evaluate_actions(rollout_data.observations, actions)
                # Flatten the values
                values = values.flatten()
                # Normalize the advantages from the rollout buffer:
                # subtract their mean and divide by their standard deviation (plus a small epsilon)
                # (in your own implementation you could also use a utility from another library)
                # Normalize advantage
                advantages = rollout_data.advantages
                # Normalization does not make sense if mini batchsize == 1, see GH issue #325
                if self.normalize_advantage and len(advantages) > 1:
                    advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)

                # Ratio between the new and old action probabilities
                # ratio between old and new policy, should be one at the first iteration
                ratio = th.exp(log_prob - rollout_data.old_log_prob)

                # The policy loss is the negative mean of the minimum of (advantage * ratio)
                # and (advantage * clipped ratio): the clipped surrogate objective from the paper
                # clipped surrogate loss
                policy_loss_1 = advantages * ratio
                policy_loss_2 = advantages * th.clamp(ratio, 1 - clip_range, 1 + clip_range)
                policy_loss = -th.min(policy_loss_1, policy_loss_2).mean()

                # Record the values in the logging lists initialized above
                # Logging
                pg_losses.append(policy_loss.item())
                clip_fraction = th.mean((th.abs(ratio - 1) > clip_range).float()).item()
                clip_fractions.append(clip_fraction)

                # Without value clipping, use the predicted values directly;
                # with clipping, add the clamped difference th.clamp(...) to the old values
                if self.clip_range_vf is None:
                    # No clipping
                    values_pred = values
                else:
                    # Clip the difference between old and new value
                    # NOTE: this depends on the reward scaling
                    values_pred = rollout_data.old_values + th.clamp(
                        values - rollout_data.old_values, -clip_range_vf, clip_range_vf
                    )
                # Build the value loss and record it
                # Value loss using the TD(gae_lambda) target
                value_loss = F.mse_loss(rollout_data.returns, values_pred)
                value_losses.append(value_loss.item())

                # If no analytical entropy is available, approximate it from the negative log-probability; otherwise take the negative mean of the entropy as the loss
                # Entropy loss favor exploration
                if entropy is None:
                    # Approximate entropy when no analytical form
                    entropy_loss = -th.mean(-log_prob)
                else:
                    entropy_loss = -th.mean(entropy)

                entropy_losses.append(entropy_loss.item())

                # The total loss is the policy loss plus the weighted entropy and value-function losses
                loss = policy_loss + self.ent_coef * entropy_loss + self.vf_coef * value_loss

                # Calculate approximate form of reverse KL Divergence for early stopping
                # see issue #417: https://github.com/DLR-RM/stable-baselines3/issues/417
                # and discussion in PR #419: https://github.com/DLR-RM/stable-baselines3/pull/419
                # and Schulman blog: http://joschu.net/blog/kl-approx.html
                # Compute the approximate KL divergence and store it in the list
                with th.no_grad():
                    log_ratio = log_prob - rollout_data.old_log_prob
                    approx_kl_div = th.mean((th.exp(log_ratio) - 1) - log_ratio).cpu().numpy()
                    approx_kl_divs.append(approx_kl_div)

                # If target_kl is set and the approximate KL divergence gets too large (the update is too big), stop the update early and print a message
                if self.target_kl is not None and approx_kl_div > 1.5 * self.target_kl:
                    continue_training = False
                    if self.verbose >= 1:
                        print(f"Early stopping at step {epoch} due to reaching max kl: {approx_kl_div:.2f}")
                    break

                # Optimization step on the loss
                # Optimization step
                self.policy.optimizer.zero_grad()
                loss.backward()
                # Clip the gradient norm to avoid overly large updates
                # Clip grad norm
                th.nn.utils.clip_grad_norm_(self.policy.parameters(), self.max_grad_norm)
                self.policy.optimizer.step()

            self._n_updates += 1
            if not continue_training:
                break

        explained_var = explained_variance(self.rollout_buffer.values.flatten(), self.rollout_buffer.returns.flatten())

        # Logs
        self.logger.record("train/entropy_loss", np.mean(entropy_losses))
        self.logger.record("train/policy_gradient_loss", np.mean(pg_losses))
        self.logger.record("train/value_loss", np.mean(value_losses))
        self.logger.record("train/approx_kl", np.mean(approx_kl_divs))
        self.logger.record("train/clip_fraction", np.mean(clip_fractions))
        self.logger.record("train/loss", loss.item())
        self.logger.record("train/explained_variance", explained_var)
        if hasattr(self.policy, "log_std"):
            self.logger.record("train/std", th.exp(self.policy.log_std).mean().item())

        self.logger.record("train/n_updates", self._n_updates, exclude="tensorboard")
        self.logger.record("train/clip_range", clip_range)
        if self.clip_range_vf is not None:
            self.logger.record("train/clip_range_vf", clip_range_vf)

    def learn(
        self: SelfPPO,
        total_timesteps: int,
        callback: MaybeCallback = None,
        log_interval: int = 1,
        tb_log_name: str = "PPO",
        reset_num_timesteps: bool = True,
        progress_bar: bool = False,
    ) -> SelfPPO:
        # This is the user-facing entry point: after constructing PPO, simply call it with these arguments
        # total_timesteps is the total number of environment steps to train for
        return super().learn(
            total_timesteps=total_timesteps,
            callback=callback,
            log_interval=log_interval,
            tb_log_name=tb_log_name,
            reset_num_timesteps=reset_num_timesteps,
            progress_bar=progress_bar,
        )

That concludes this walkthrough of Stable-Baselines 3's ppo.py.
