
Deep Deterministic Policy Gradient (DDPG)

Author: amifunny
Date created: 2020/06/04
Last modified: 2024/03/23
Description: Implementing the DDPG algorithm on the Inverted Pendulum problem.

ⓘ This example uses Keras 3



Introduction

Deep Deterministic Policy Gradient (DDPG) is a model-free, off-policy algorithm for learning continuous actions.

It combines ideas from DPG (Deterministic Policy Gradient) and DQN (Deep Q-Network). It uses Experience Replay and slow-learning target networks from DQN, and it is based on DPG, which can operate over continuous action spaces.

This tutorial closely follows the paper Continuous control with deep reinforcement learning.


Problem

We are trying to solve the classic Inverted Pendulum control problem. In this setting, we can take only two actions: swing left or swing right.

What makes this problem challenging for Q-learning algorithms is that the actions are continuous instead of discrete. That is, instead of using two discrete actions like -1 or +1, we have to select from an infinite number of actions ranging from -2 to +2.
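
We can see this directly from the environment's action space. Below is a quick check (assuming Gymnasium and the Pendulum-v1 environment used later in this example are installed; check_env is just a throwaway name):

import gymnasium as gym

# Pendulum-v1 exposes a continuous Box action space rather than a Discrete one.
check_env = gym.make("Pendulum-v1")
print(check_env.action_space)           # e.g. Box(-2.0, 2.0, (1,), float32)
print(check_env.action_space.sample())  # a random value anywhere in [-2, 2]
check_env.close()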


Quick theory

Just like the Actor-Critic method, we have two networks:

  1. Actor - It proposes an action given a state.
  2. Critic - It predicts whether the action is good (positive value) or bad (negative value) given a state and an action.

DDPG uses two more techniques not present in the original DQN:

First, it uses two target networks.

Why? Because they add stability to training. In short, we are learning from estimated targets, and the target networks are updated slowly, hence keeping our estimated targets stable.

Conceptually, this is like saying, "I have an idea of how to play this well, and I'm going to try it out for a bit until I find something better," as opposed to saying, "I'm going to re-learn how to play this entire game after every move." See this StackOverflow answer.

Second, it uses Experience Replay.

We store a list of tuples (state, action, reward, next_state), and instead of learning only from recent experience, we learn from a sample of all of the experience accumulated so far.
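
As a minimal illustration of the idea (the names below are illustrative; the Buffer class used later in this example stores each tuple element in a separate NumPy array instead), a replay buffer can be as simple as a bounded deque sampled uniformly at random:

import random
from collections import deque

replay_memory = deque(maxlen=100_000)  # oldest experiences are dropped automatically

def remember(state, action, reward, next_state):
    replay_memory.append((state, action, reward, next_state))

def sample_batch(batch_size=64):
    # Learn from a random sample of *all* stored experience, not just the latest step.
    return random.sample(replay_memory, min(batch_size, len(replay_memory)))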

Now, let's see how it is implemented.

import os

os.environ["KERAS_BACKEND"] = "tensorflow"

import keras
from keras import layers

import tensorflow as tf
import gymnasium as gym
import numpy as np
import matplotlib.pyplot as plt

We use Gymnasium to create the environment. We will use the upper_bound parameter to scale our actions later.

# Specify the `render_mode` parameter to show the attempts of the agent in a pop up window.
env = gym.make("Pendulum-v1", render_mode="human")

num_states = env.observation_space.shape[0]
print("Size of State Space ->  {}".format(num_states))
num_actions = env.action_space.shape[0]
print("Size of Action Space ->  {}".format(num_actions))

upper_bound = env.action_space.high[0]
lower_bound = env.action_space.low[0]

print("Max Value of Action ->  {}".format(upper_bound))
print("Min Value of Action ->  {}".format(lower_bound))
Size of State Space ->  3
Size of Action Space ->  1
Max Value of Action ->  2.0
Min Value of Action ->  -2.0

To implement better exploration by the Actor network, we use noisy perturbations, specifically an Ornstein-Uhlenbeck process, as described in the paper, to generate the noise. It samples noise from a correlated normal distribution.

class OUActionNoise:
    def __init__(self, mean, std_deviation, theta=0.15, dt=1e-2, x_initial=None):
        self.theta = theta
        self.mean = mean
        self.std_dev = std_deviation
        self.dt = dt
        self.x_initial = x_initial
        self.reset()

    def __call__(self):
        # Formula taken from https://www.wikipedia.org/wiki/Ornstein-Uhlenbeck_process
        x = (
            self.x_prev
            + self.theta * (self.mean - self.x_prev) * self.dt
            + self.std_dev * np.sqrt(self.dt) * np.random.normal(size=self.mean.shape)
        )
        # Store x into x_prev
        # Makes next noise dependent on current one
        self.x_prev = x
        return x

    def reset(self):
        if self.x_initial is not None:
            self.x_prev = self.x_initial
        else:
            self.x_prev = np.zeros_like(self.mean)
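
As a quick usage check (not part of the original example), drawing a few consecutive samples shows that each value is correlated with the previous one rather than being independent:

# Each call nudges the previous sample towards the mean and adds fresh Gaussian noise.
demo_noise = OUActionNoise(mean=np.zeros(1), std_deviation=0.2 * np.ones(1))
print([demo_noise().item() for _ in range(5)])  # consecutive values drift smoothly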

The Buffer class implements Experience Replay.


Algorithm

Critic loss - the mean squared error of y - Q(s, a), where y is the expected return as seen by the target network, and Q(s, a) is the action value predicted by the Critic network. y is a moving target that the Critic model tries to achieve; we keep this target stable by updating the target model slowly.

Actor loss - this is computed as the mean of the values given by the Critic network for the actions taken by the Actor network. We seek to maximize this quantity.

Hence we update the Actor network so that, for a given state, it produces the action that gets the maximum predicted value as seen by the Critic.
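
In symbols (a restatement of the two losses above, averaged over a sampled batch of size N, with primes denoting the target networks and mu the Actor's deterministic policy):

y_i = r_i + \gamma \, Q'\big(s'_i, \mu'(s'_i)\big)

L_{\text{critic}} = \frac{1}{N} \sum_i \big(y_i - Q(s_i, a_i)\big)^2

L_{\text{actor}} = -\frac{1}{N} \sum_i Q\big(s_i, \mu(s_i)\big)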

class Buffer:
    def __init__(self, buffer_capacity=100000, batch_size=64):
        # Number of "experiences" to store at max
        self.buffer_capacity = buffer_capacity
        # Num of tuples to train on.
        self.batch_size = batch_size

        # It tells us the number of times record() has been called.
        self.buffer_counter = 0

        # Instead of a list of tuples, as in the experience replay concept,
        # we use separate np.arrays for each tuple element
        self.state_buffer = np.zeros((self.buffer_capacity, num_states))
        self.action_buffer = np.zeros((self.buffer_capacity, num_actions))
        self.reward_buffer = np.zeros((self.buffer_capacity, 1))
        self.next_state_buffer = np.zeros((self.buffer_capacity, num_states))

    # Takes (s,a,r,s') observation tuple as input
    def record(self, obs_tuple):
        # Set index to zero if buffer_capacity is exceeded,
        # replacing old records
        index = self.buffer_counter % self.buffer_capacity

        self.state_buffer[index] = obs_tuple[0]
        self.action_buffer[index] = obs_tuple[1]
        self.reward_buffer[index] = obs_tuple[2]
        self.next_state_buffer[index] = obs_tuple[3]

        self.buffer_counter += 1

    # Eager execution is turned on by default in TensorFlow 2. Decorating with tf.function allows
    # TensorFlow to build a static graph out of the logic and computations in our function.
    # This provides a large speed up for blocks of code that contain many small TensorFlow operations such as this one.
    @tf.function
    def update(
        self,
        state_batch,
        action_batch,
        reward_batch,
        next_state_batch,
    ):
        # Training and updating Actor & Critic networks.
        # See Pseudo Code.
        with tf.GradientTape() as tape:
            target_actions = target_actor(next_state_batch, training=True)
            y = reward_batch + gamma * target_critic(
                [next_state_batch, target_actions], training=True
            )
            critic_value = critic_model([state_batch, action_batch], training=True)
            critic_loss = keras.ops.mean(keras.ops.square(y - critic_value))

        critic_grad = tape.gradient(critic_loss, critic_model.trainable_variables)
        critic_optimizer.apply_gradients(
            zip(critic_grad, critic_model.trainable_variables)
        )

        with tf.GradientTape() as tape:
            actions = actor_model(state_batch, training=True)
            critic_value = critic_model([state_batch, actions], training=True)
            # Used `-value` as we want to maximize the value given
            # by the critic for our actions
            actor_loss = -keras.ops.mean(critic_value)

        actor_grad = tape.gradient(actor_loss, actor_model.trainable_variables)
        actor_optimizer.apply_gradients(
            zip(actor_grad, actor_model.trainable_variables)
        )

    # We compute the loss and update parameters
    def learn(self):
        # Get sampling range
        record_range = min(self.buffer_counter, self.buffer_capacity)
        # Randomly sample indices
        batch_indices = np.random.choice(record_range, self.batch_size)

        # Convert to tensors
        state_batch = keras.ops.convert_to_tensor(self.state_buffer[batch_indices])
        action_batch = keras.ops.convert_to_tensor(self.action_buffer[batch_indices])
        reward_batch = keras.ops.convert_to_tensor(self.reward_buffer[batch_indices])
        reward_batch = keras.ops.cast(reward_batch, dtype="float32")
        next_state_batch = keras.ops.convert_to_tensor(
            self.next_state_buffer[batch_indices]
        )

        self.update(state_batch, action_batch, reward_batch, next_state_batch)


# This slowly updates the target network parameters,
# based on the rate `tau`, which is much less than one.
def update_target(target, original, tau):
    target_weights = target.get_weights()
    original_weights = original.get_weights()

    for i in range(len(target_weights)):
        target_weights[i] = original_weights[i] * tau + target_weights[i] * (1 - tau)

    target.set_weights(target_weights)

Here we define the Actor and Critic networks. These are basic Dense models with ReLU activation.

Note: we need the initialization of the Actor's last layer to be between -0.003 and 0.003, as this prevents us from getting output values of 1 or -1 in the initial stages, which would squash our gradients to zero, since we use the tanh activation.

def get_actor():
    # Initialize weights between -3e-3 and 3e-3
    last_init = keras.initializers.RandomUniform(minval=-0.003, maxval=0.003)

    inputs = layers.Input(shape=(num_states,))
    out = layers.Dense(256, activation="relu")(inputs)
    out = layers.Dense(256, activation="relu")(out)
    outputs = layers.Dense(1, activation="tanh", kernel_initializer=last_init)(out)

    # Our upper bound is 2.0 for Pendulum.
    outputs = outputs * upper_bound
    model = keras.Model(inputs, outputs)
    return model


def get_critic():
    # State as input
    state_input = layers.Input(shape=(num_states,))
    state_out = layers.Dense(16, activation="relu")(state_input)
    state_out = layers.Dense(32, activation="relu")(state_out)

    # Action as input
    action_input = layers.Input(shape=(num_actions,))
    action_out = layers.Dense(32, activation="relu")(action_input)

    # Both are passed through separate layers before concatenating
    concat = layers.Concatenate()([state_out, action_out])

    out = layers.Dense(256, activation="relu")(concat)
    out = layers.Dense(256, activation="relu")(out)
    outputs = layers.Dense(1)(out)

    # Outputs a single value for a given state-action pair
    model = keras.Model([state_input, action_input], outputs)

    return model

policy() returns an action sampled from our Actor network, plus some noise for exploration.

def policy(state, noise_object):
    sampled_actions = keras.ops.squeeze(actor_model(state))
    noise = noise_object()
    # Adding noise to action
    sampled_actions = sampled_actions.numpy() + noise

    # We make sure action is within bounds
    legal_action = np.clip(sampled_actions, lower_bound, upper_bound)

    return [np.squeeze(legal_action)]

Training hyperparameters

std_dev = 0.2
ou_noise = OUActionNoise(mean=np.zeros(1), std_deviation=float(std_dev) * np.ones(1))

actor_model = get_actor()
critic_model = get_critic()

target_actor = get_actor()
target_critic = get_critic()

# Making the weights equal initially
target_actor.set_weights(actor_model.get_weights())
target_critic.set_weights(critic_model.get_weights())

# Learning rate for actor-critic models
critic_lr = 0.002
actor_lr = 0.001

critic_optimizer = keras.optimizers.Adam(critic_lr)
actor_optimizer = keras.optimizers.Adam(actor_lr)

total_episodes = 100
# Discount factor for future rewards
gamma = 0.99
# Used to update target networks
tau = 0.005

buffer = Buffer(50000, 64)

Now we implement our main training loop and iterate over episodes. We sample actions using policy() and train with learn() at each time step, along with updating the target networks at a rate tau.

# To store reward history of each episode
ep_reward_list = []
# To store average reward history of last few episodes
avg_reward_list = []

# Takes about 4 min to train
for ep in range(total_episodes):
    prev_state, _ = env.reset()
    episodic_reward = 0

    while True:
        tf_prev_state = keras.ops.expand_dims(
            keras.ops.convert_to_tensor(prev_state), 0
        )

        action = policy(tf_prev_state, ou_noise)
        # Receive state and reward from environment.
        state, reward, done, truncated, _ = env.step(action)

        buffer.record((prev_state, action, reward, state))
        episodic_reward += reward

        buffer.learn()

        update_target(target_actor, actor_model, tau)
        update_target(target_critic, critic_model, tau)

        # End this episode when `done` or `truncated` is True
        if done or truncated:
            break

        prev_state = state

    ep_reward_list.append(episodic_reward)

    # Mean of last 40 episodes
    avg_reward = np.mean(ep_reward_list[-40:])
    print("Episode * {} * Avg Reward is ==> {}".format(ep, avg_reward))
    avg_reward_list.append(avg_reward)

# Plotting graph
# Episodes versus Avg. Rewards
plt.plot(avg_reward_list)
plt.xlabel("Episode")
plt.ylabel("Avg. Episodic Reward")
plt.show()
Episode * 0 * Avg Reward is ==> -1020.8244931732263

Episode * 1 * Avg Reward is ==> -1338.2811167733332

Episode * 2 * Avg Reward is ==> -1450.0427316158366

Episode * 3 * Avg Reward is ==> -1529.0751774957375

Episode * 4 * Avg Reward is ==> -1560.3468658090717

Episode * 5 * Avg Reward is ==> -1525.6201906715812

Episode * 6 * Avg Reward is ==> -1522.0047531836371

Episode * 7 * Avg Reward is ==> -1507.4391205141226

Episode * 8 * Avg Reward is ==> -1443.4147334537984

Episode * 9 * Avg Reward is ==> -1452.0432974943765

Episode * 10 * Avg Reward is ==> -1344.1960761302823

Episode * 11 * Avg Reward is ==> -1327.0472948059835

Episode * 12 * Avg Reward is ==> -1332.4638031402194

Episode * 13 * Avg Reward is ==> -1287.4884456842617

Episode * 14 * Avg Reward is ==> -1257.3643575644046

Episode * 15 * Avg Reward is ==> -1210.9679762262906

Episode * 16 * Avg Reward is ==> -1165.8684037899104

Episode * 17 * Avg Reward is ==> -1107.6228192573426

Episode * 18 * Avg Reward is ==> -1049.4192654959388

Episode * 19 * Avg Reward is ==> -1003.3255480245641

Episode * 20 * Avg Reward is ==> -961.6386918013155

Episode * 21 * Avg Reward is ==> -929.1847739440876

Episode * 22 * Avg Reward is ==> -894.356849609832

Episode * 23 * Avg Reward is ==> -872.3450419603026

Episode * 24 * Avg Reward is ==> -842.5992147531034

Episode * 25 * Avg Reward is ==> -818.8730806655396

Episode * 26 * Avg Reward is ==> -793.3147256249664

Episode * 27 * Avg Reward is ==> -769.6124209263007

Episode * 28 * Avg Reward is ==> -747.5122117563488

Episode * 29 * Avg Reward is ==> -726.8111953151997

Episode * 30 * Avg Reward is ==> -707.3781885286952

Episode * 31 * Avg Reward is ==> -688.9993520703357

Episode * 32 * Avg Reward is ==> -672.0164054875188

Episode * 33 * Avg Reward is ==> -652.3297236089893

Episode * 34 * Avg Reward is ==> -633.7305579653394

Episode * 35 * Avg Reward is ==> -622.6444438529929

Episode * 36 * Avg Reward is ==> -612.2391199605028

Episode * 37 * Avg Reward is ==> -599.2441039477458

Episode * 38 * Avg Reward is ==> -593.713500114108

Episode * 39 * Avg Reward is ==> -582.062487157142

Episode * 40 * Avg Reward is ==> -556.559275313473

Episode * 41 * Avg Reward is ==> -518.053376711216

Episode * 42 * Avg Reward is ==> -482.2191305356082

Episode * 43 * Avg Reward is ==> -441.1561293090619

Episode * 44 * Avg Reward is ==> -402.0403515001418

Episode * 45 * Avg Reward is ==> -371.3376110030464

Episode * 46 * Avg Reward is ==> -336.8145387714556

Episode * 47 * Avg Reward is ==> -301.7732070717081

Episode * 48 * Avg Reward is ==> -281.4823965447058

Episode * 49 * Avg Reward is ==> -243.2750024568545

Episode * 50 * Avg Reward is ==> -236.6512197943394

Episode * 51 * Avg Reward is ==> -211.20860968588096

Episode * 52 * Avg Reward is ==> -176.31339260650844

Episode * 53 * Avg Reward is ==> -158.77021134671222

Episode * 54 * Avg Reward is ==> -146.76749516161257

Episode * 55 * Avg Reward is ==> -133.93793525539664

Episode * 56 * Avg Reward is ==> -129.24881351771964

Episode * 57 * Avg Reward is ==> -129.49219614666802

Episode * 58 * Avg Reward is ==> -132.53205721511375

Episode * 59 * Avg Reward is ==> -132.60389802731262

Episode * 60 * Avg Reward is ==> -132.62344822194035

Episode * 61 * Avg Reward is ==> -133.2372468795715

Episode * 62 * Avg Reward is ==> -133.1046546040286

Episode * 63 * Avg Reward is ==> -127.17488349564069

Episode * 64 * Avg Reward is ==> -130.02349725294775

Episode * 65 * Avg Reward is ==> -127.32475296620544

Episode * 66 * Avg Reward is ==> -126.99528350924034

Episode * 67 * Avg Reward is ==> -126.65903554713267

Episode * 68 * Avg Reward is ==> -126.63950221408372

Episode * 69 * Avg Reward is ==> -129.4066259498526

Episode * 70 * Avg Reward is ==> -129.34372109952105

Episode * 71 * Avg Reward is ==> -132.29705860930432

Episode * 72 * Avg Reward is ==> -132.00732697620566

Episode * 73 * Avg Reward is ==> -138.01483877165032

Episode * 74 * Avg Reward is ==> -145.33430273020608

Episode * 75 * Avg Reward is ==> -145.32777005464345

Episode * 76 * Avg Reward is ==> -142.4835146046417

Episode * 77 * Avg Reward is ==> -139.59338840338395

Episode * 78 * Avg Reward is ==> -133.04552232142163

Episode * 79 * Avg Reward is ==> -132.93288588036899

Episode * 80 * Avg Reward is ==> -136.16012471382237

Episode * 81 * Avg Reward is ==> -139.21305348031393

Episode * 82 * Avg Reward is ==> -133.23691621529298

Episode * 83 * Avg Reward is ==> -135.92990594024982

Episode * 84 * Avg Reward is ==> -136.03027429930435

Episode * 85 * Avg Reward is ==> -135.97360824863455

Episode * 86 * Avg Reward is ==> -136.10527880830494

Episode * 87 * Avg Reward is ==> -139.05391439010512

Episode * 88 * Avg Reward is ==> -142.56133171606365

Episode * 89 * Avg Reward is ==> -161.33989090345662

Episode * 90 * Avg Reward is ==> -170.82788477632195

Episode * 91 * Avg Reward is ==> -170.8558841498521

Episode * 92 * Avg Reward is ==> -173.9910213401168

Episode * 93 * Avg Reward is ==> -176.87631595893498

Episode * 94 * Avg Reward is ==> -170.97863292694336

Episode * 95 * Avg Reward is ==> -173.88549953443538

Episode * 96 * Avg Reward is ==> -170.7028462286189

Episode * 97 * Avg Reward is ==> -173.47564018610032

Episode * 98 * Avg Reward is ==> -173.42104867150212

Episode * 99 * Avg Reward is ==> -173.2394285933109

[Plot: average episodic reward vs. episode]

If training proceeds correctly, the average episodic reward will increase over time.

Feel free to try different learning rates, tau values, and architectures for the Actor and Critic networks.

The Inverted Pendulum problem has low complexity, but DDPG works well on many other problems.

Another great environment to try this on is the continuous version of LunarLander-v2, but it will take more episodes to obtain good results.
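
For reference, a continuous-action LunarLander environment can be created as sketched below. This assumes the Box2D extra for Gymnasium is installed, and depending on your Gymnasium version the environment id may be LunarLander-v2 or LunarLander-v3:

# Requires the Box2D extra: pip install "gymnasium[box2d]"
lunar_env = gym.make("LunarLander-v2", continuous=True)
print(lunar_env.observation_space.shape)  # 8-dimensional state
print(lunar_env.action_space)             # 2-dimensional continuous Box
lunar_env.close()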

# Save the weights
actor_model.save_weights("pendulum_actor.weights.h5")
critic_model.save_weights("pendulum_critic.weights.h5")

target_actor.save_weights("pendulum_target_actor.weights.h5")
target_critic.save_weights("pendulum_target_critic.weights.h5")
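
To reuse the trained policy later, you can rebuild the network and load the saved weights back into it. The sketch below (eval_actor and eval_env are illustrative names) runs one greedy, noise-free evaluation episode, reusing the helpers defined above:

# Rebuild the Actor architecture and restore the trained weights.
eval_actor = get_actor()
eval_actor.load_weights("pendulum_actor.weights.h5")

# One evaluation episode without exploration noise.
eval_env = gym.make("Pendulum-v1", render_mode="human")
prev_state, _ = eval_env.reset()
done = truncated = False
while not (done or truncated):
    tf_prev_state = keras.ops.expand_dims(keras.ops.convert_to_tensor(prev_state), 0)
    greedy_action = keras.ops.squeeze(eval_actor(tf_prev_state)).numpy()
    greedy_action = np.clip(greedy_action, lower_bound, upper_bound)
    prev_state, reward, done, truncated, _ = eval_env.step([greedy_action])
eval_env.close()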

Before training

[Animation: pendulum behavior before training]

After 100 episodes

[Animation: pendulum behavior after 100 episodes]