
Semi-supervised learning and domain adaptation with AdaMatch

Author: Sayak Paul
Date created: 2021/06/19
Last modified: 2021/06/19
Description: Unifying semi-supervised learning and unsupervised domain adaptation with AdaMatch.

ⓘ This example uses Keras 2



Introduction

In this example, we will implement the AdaMatch algorithm, proposed in AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation by Berthelot et al. It sets a new state of the art in unsupervised domain adaptation (as of June 2021). AdaMatch is particularly interesting because it unifies semi-supervised learning (SSL) and unsupervised domain adaptation (UDA) under a single framework. It thereby provides a way to perform semi-supervised domain adaptation (SSDA) as well.

This example requires TensorFlow 2.5 or higher, as well as TensorFlow Models and KerasCV (for the RandAugment layer), which can be installed using the following commands:

!pip install -q tf-models-official==2.9.2
!pip install -q keras-cv

Before we proceed, let's review a few preliminary concepts underlying this example.


Preliminaries

In semi-supervised learning (SSL), we use a small amount of labeled data to train models on a bigger unlabeled dataset. Popular semi-supervised learning methods for computer vision include FixMatch, MixMatch, Noisy Student Training, etc. You can refer to this example to get an idea of what a standard SSL workflow looks like.

In unsupervised domain adaptation, we have access to a source labeled dataset and a target unlabeled dataset. The task is then to learn a model that can generalize well to the target dataset. The source and target datasets vary in terms of distribution. The following figure illustrates this idea. In this example, we use the MNIST dataset as the source dataset, while the target dataset is SVHN, which consists of images of house numbers. Both datasets have various differing factors in terms of texture, viewpoint, appearance, etc.: their domains, or distributions, are different from one another.

Popular domain adaptation algorithms in deep learning include Deep CORAL, Moment Matching, etc.


Setup

import tensorflow as tf

tf.random.set_seed(42)

import numpy as np

from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from keras_cv.layers import RandAugment

import tensorflow_datasets as tfds

tfds.disable_progress_bar()

Prepare the data

# MNIST
(
    (mnist_x_train, mnist_y_train),
    (mnist_x_test, mnist_y_test),
) = keras.datasets.mnist.load_data()

# Add a channel dimension
mnist_x_train = tf.expand_dims(mnist_x_train, -1)
mnist_x_test = tf.expand_dims(mnist_x_test, -1)

# Convert the labels to one-hot encoded vectors
mnist_y_train = tf.one_hot(mnist_y_train, 10).numpy()

# SVHN
svhn_train, svhn_test = tfds.load(
    "svhn_cropped", split=["train", "test"], as_supervised=True
)

Define constants and hyperparameters

RESIZE_TO = 32

SOURCE_BATCH_SIZE = 64
TARGET_BATCH_SIZE = 3 * SOURCE_BATCH_SIZE  # Reference: Section 3.2
EPOCHS = 10
STEPS_PER_EPOCH = len(mnist_x_train) // SOURCE_BATCH_SIZE
TOTAL_STEPS = EPOCHS * STEPS_PER_EPOCH

AUTO = tf.data.AUTOTUNE
LEARNING_RATE = 0.03

WEIGHT_DECAY = 0.0005
INIT = "he_normal"
DEPTH = 28
WIDTH_MULT = 2

Data augmentation utilities

A standard element of SSL algorithms is to feed weakly and strongly augmented versions of the same images to the learning model and make its predictions consistent. For strong augmentation, RandAugment is a standard choice. For weak augmentation, we will use horizontal flipping and random cropping.

# Initialize `RandAugment` object with 2 layers of
# augmentation transforms and a magnitude of 0.5.
augmenter = RandAugment(value_range=(0, 255), augmentations_per_image=2, magnitude=0.5)


def weak_augment(image, source=True):
    if image.dtype != tf.float32:
        image = tf.cast(image, tf.float32)

    # MNIST images are grayscale, which is why we first convert them to
    # RGB images.
    if source:
        image = tf.image.resize_with_pad(image, RESIZE_TO, RESIZE_TO)
        image = tf.tile(image, [1, 1, 3])
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_crop(image, (RESIZE_TO, RESIZE_TO, 3))
    return image


def strong_augment(image, source=True):
    if image.dtype != tf.float32:
        image = tf.cast(image, tf.float32)

    if source:
        image = tf.image.resize_with_pad(image, RESIZE_TO, RESIZE_TO)
        image = tf.tile(image, [1, 1, 3])
    image = augmenter(image)
    return image

Data loading utilities

def create_individual_ds(ds, aug_func, source=True):
    if source:
        batch_size = SOURCE_BATCH_SIZE
    else:
        # During training 3x more target unlabeled samples are shown
        # to the model in AdaMatch (Section 3.2 of the paper).
        batch_size = TARGET_BATCH_SIZE
    ds = ds.shuffle(batch_size * 10, seed=42)

    if source:
        ds = ds.map(lambda x, y: (aug_func(x), y), num_parallel_calls=AUTO)
    else:
        ds = ds.map(lambda x, y: (aug_func(x, False), y), num_parallel_calls=AUTO)

    ds = ds.batch(batch_size).prefetch(AUTO)
    return ds

Here, the `_w` and `_s` suffixes denote weak and strong, respectively.

source_ds = tf.data.Dataset.from_tensor_slices((mnist_x_train, mnist_y_train))
source_ds_w = create_individual_ds(source_ds, weak_augment)
source_ds_s = create_individual_ds(source_ds, strong_augment)
final_source_ds = tf.data.Dataset.zip((source_ds_w, source_ds_s))

target_ds_w = create_individual_ds(svhn_train, weak_augment, source=False)
target_ds_s = create_individual_ds(svhn_train, strong_augment, source=False)
final_target_ds = tf.data.Dataset.zip((target_ds_w, target_ds_s))

Here's what a single image batch looks like:
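A minimal way to visualize such a batch is sketched below (it assumes matplotlib is available; the unpacking mirrors how final_source_ds was built above):

import matplotlib.pyplot as plt

# Grab one batch of weakly and strongly augmented source images.
(source_w_images, _), (source_s_images, _) = next(iter(final_source_ds))

plt.figure(figsize=(16, 4))
for i in range(8):
    # Top row: weak augmentations, bottom row: strong augmentations.
    plt.subplot(2, 8, i + 1)
    plt.imshow(tf.cast(source_w_images[i], tf.uint8))
    plt.axis("off")
    plt.subplot(2, 8, i + 9)
    plt.imshow(tf.cast(source_s_images[i], tf.uint8))
    plt.axis("off")
plt.show()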


Loss computation utilities

def compute_loss_source(source_labels, logits_source_w, logits_source_s):
    loss_func = keras.losses.CategoricalCrossentropy(from_logits=True)
    # First compute the losses between original source labels and
    # predictions made on the weakly and strongly augmented versions
    # of the same images.
    w_loss = loss_func(source_labels, logits_source_w)
    s_loss = loss_func(source_labels, logits_source_s)
    return w_loss + s_loss


def compute_loss_target(target_pseudo_labels_w, logits_target_s, mask):
    loss_func = keras.losses.CategoricalCrossentropy(from_logits=True, reduction="none")
    target_pseudo_labels_w = tf.stop_gradient(target_pseudo_labels_w)
    # For calculating loss for the target samples, we treat the pseudo labels
    # as the ground-truth. These are not considered during backpropagation
    # which is a standard SSL practice.
    target_loss = loss_func(target_pseudo_labels_w, logits_target_s)

    # More on `mask` later.
    mask = tf.cast(mask, target_loss.dtype)
    target_loss *= mask
    return tf.reduce_mean(target_loss, 0)

Subclassed model for AdaMatch training

The figure below presents the overall workflow of AdaMatch (taken from the original paper):

Here's a brief step-by-step breakdown of the workflow:

  1. We first retrieve the weakly and strongly augmented pairs of images from the source and target datasets.
  2. We prepare two concatenated copies: i. one where both pairs are concatenated, and ii. one where only the source data image pairs are concatenated.
  3. We perform two forward passes through the model: i. the first forward pass uses the concatenated copy obtained from 2.i; in this pass, the Batch Normalization statistics are updated. ii. in the second forward pass, we only use the concatenated copy obtained from 2.ii, and the Batch Normalization layers are run in inference mode.
  4. The respective logits are computed for both forward passes.
  5. The logits go through a series of transformations introduced in the paper (which we will discuss shortly).
  6. We compute the loss and update the gradients of the underlying model.

class AdaMatch(keras.Model):
    def __init__(self, model, total_steps, tau=0.9):
        super().__init__()
        self.model = model
        self.tau = tau  # Denotes the confidence threshold
        self.loss_tracker = tf.keras.metrics.Mean(name="loss")
        self.total_steps = total_steps
        self.current_step = tf.Variable(0, dtype="int64")

    @property
    def metrics(self):
        return [self.loss_tracker]

    # This is a warmup schedule to update the weight of the
    # loss contributed by the target unlabeled samples. More
    # on this in the text.
    def compute_mu(self):
        pi = tf.constant(np.pi, dtype="float32")
        step = tf.cast(self.current_step, dtype="float32")
        return 0.5 - tf.cos(tf.math.minimum(pi, (2 * pi * step) / self.total_steps)) / 2

    def train_step(self, data):
        ## Unpack and organize the data ##
        source_ds, target_ds = data
        (source_w, source_labels), (source_s, _) = source_ds
        (
            (target_w, _),
            (target_s, _),
        ) = target_ds  # Notice that we are NOT using any labels here.

        combined_images = tf.concat([source_w, source_s, target_w, target_s], 0)
        combined_source = tf.concat([source_w, source_s], 0)

        total_source = tf.shape(combined_source)[0]
        total_target = tf.shape(tf.concat([target_w, target_s], 0))[0]

        with tf.GradientTape() as tape:
            ## Forward passes ##
            combined_logits = self.model(combined_images, training=True)
            z_d_prime_source = self.model(
                combined_source, training=False
            )  # No BatchNorm update.
            z_prime_source = combined_logits[:total_source]

            ## 1. Random logit interpolation for the source images ##
            lambd = tf.random.uniform((total_source, 10), 0, 1)
            final_source_logits = (lambd * z_prime_source) + (
                (1 - lambd) * z_d_prime_source
            )

            ## 2. Distribution alignment (only consider weakly augmented images) ##
            # Compute softmax for logits of the WEAKLY augmented SOURCE images.
            y_hat_source_w = tf.nn.softmax(final_source_logits[: tf.shape(source_w)[0]])

            # Extract logits for the WEAKLY augmented TARGET images and compute softmax.
            logits_target = combined_logits[total_source:]
            logits_target_w = logits_target[: tf.shape(target_w)[0]]
            y_hat_target_w = tf.nn.softmax(logits_target_w)

            # Align the target label distribution to that of the source.
            expectation_ratio = tf.reduce_mean(y_hat_source_w) / tf.reduce_mean(
                y_hat_target_w
            )
            y_tilde_target_w = tf.math.l2_normalize(
                y_hat_target_w * expectation_ratio, 1
            )

            ## 3. Relative confidence thresholding ##
            row_wise_max = tf.reduce_max(y_hat_source_w, axis=-1)
            final_sum = tf.reduce_mean(row_wise_max, 0)
            c_tau = self.tau * final_sum
            mask = tf.reduce_max(y_tilde_target_w, axis=-1) >= c_tau

            ## Compute losses (pay attention to the indexing) ##
            source_loss = compute_loss_source(
                source_labels,
                final_source_logits[: tf.shape(source_w)[0]],
                final_source_logits[tf.shape(source_w)[0] :],
            )
            target_loss = compute_loss_target(
                y_tilde_target_w, logits_target[tf.shape(target_w)[0] :], mask
            )

            t = self.compute_mu()  # Compute weight for the target loss
            total_loss = source_loss + (t * target_loss)
            self.current_step.assign_add(
                1
            )  # Update current training step for the scheduler

        gradients = tape.gradient(total_loss, self.model.trainable_variables)
        self.optimizer.apply_gradients(zip(gradients, self.model.trainable_variables))

        self.loss_tracker.update_state(total_loss)
        return {"loss": self.loss_tracker.result()}

The authors introduce three improvements in the paper:

  • In AdaMatch, we perform two forward passes, and only one of them is responsible for updating the Batch Normalization statistics. This is done to account for the distribution shift in the target dataset. In the other forward pass, we only use the source samples, and the Batch Normalization layers are run in inference mode. Because of how the Batch Normalization layers are run, the logits for the source samples (weakly and strongly augmented versions) differ slightly between the two passes. The final logits for the source samples are computed by linearly interpolating between these two different sets of logits. This induces a form of consistency regularization. This step is referred to as random logit interpolation.
  • Distribution alignment is used to align the source and target label distributions. This further helps the underlying model learn domain-invariant representations. In the case of unsupervised domain adaptation, we don't have access to any labels of the target dataset, which is why pseudo labels are generated from the underlying model.
  • The underlying model generates pseudo labels for the target samples. It's likely that the model will make faulty predictions. Those can propagate back as training progresses and hurt the overall performance. To compensate for that, we only retain the high-confidence predictions based on a threshold (hence the use of mask inside compute_loss_target()). In AdaMatch, this threshold is adjusted relatively, which is why it is called relative confidence thresholding (see the short sketch after the next paragraph).

For more details on these methods, and to understand how each of them contributes, please refer to the paper.
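As a standalone illustration of relative confidence thresholding, here is a minimal sketch on made-up logits (the tensors and shapes below are purely illustrative and independent of the training loop above):

# Dummy logits for 4 weakly augmented SOURCE and 12 weakly augmented TARGET images.
dummy_source_logits = tf.random.normal((4, 10))
dummy_target_logits = tf.random.normal((12, 10))
tau = 0.9

y_hat_source_w = tf.nn.softmax(dummy_source_logits)
y_hat_target_w = tf.nn.softmax(dummy_target_logits)

# The threshold is relative: it is derived from the mean of the maximum
# source confidences instead of being a fixed constant.
c_tau = tau * tf.reduce_mean(tf.reduce_max(y_hat_source_w, axis=-1))

# Only target pseudo-labels whose confidence clears the relative threshold
# contribute to the unlabeled loss (this is the `mask` in `train_step()`).
mask = tf.reduce_max(y_hat_target_w, axis=-1) >= c_tau
print("Relative threshold:", float(c_tau.numpy()))
print("Retained target samples:", int(tf.reduce_sum(tf.cast(mask, tf.int32)).numpy()))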

About compute_mu():

Rather than using a fixed scalar quantity, AdaMatch uses a varying scalar. It denotes the weight of the loss contributed by the target samples. Visually, the weight scheduler looks like so:

This scheduler increases the weight of the target domain loss from 0 to 1 during the first half of the training, and then keeps that weight at 1 for the second half of the training.
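To see the shape of this schedule, here is a minimal sketch that evaluates the same formula as compute_mu() over all training steps (it assumes matplotlib is available and reuses the TOTAL_STEPS constant defined earlier):

import matplotlib.pyplot as plt

steps = np.arange(TOTAL_STEPS, dtype="float32")
# Same formula as `compute_mu()`: ramps from 0 to 1 during the first half
# of training and stays at 1 afterwards.
mu = 0.5 - np.cos(np.minimum(np.pi, (2 * np.pi * steps) / TOTAL_STEPS)) / 2

plt.plot(steps, mu)
plt.xlabel("Training step")
plt.ylabel("Weight of the target loss (mu)")
plt.show()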


Instantiate a Wide-ResNet-28-2

The authors use a WideResNet-28-2 for the dataset pairs we are using in this example. Most of the following code has been referred from this script. Note that the following model has a scaling layer inside it that scales the pixel values to [0, 1].

def wide_basic(x, n_input_plane, n_output_plane, stride):
    conv_params = [[3, 3, stride, "same"], [3, 3, (1, 1), "same"]]

    n_bottleneck_plane = n_output_plane

    # Residual block
    for i, v in enumerate(conv_params):
        if i == 0:
            if n_input_plane != n_output_plane:
                x = layers.BatchNormalization()(x)
                x = layers.Activation("relu")(x)
                convs = x
            else:
                convs = layers.BatchNormalization()(x)
                convs = layers.Activation("relu")(convs)
            convs = layers.Conv2D(
                n_bottleneck_plane,
                (v[0], v[1]),
                strides=v[2],
                padding=v[3],
                kernel_initializer=INIT,
                kernel_regularizer=regularizers.l2(WEIGHT_DECAY),
                use_bias=False,
            )(convs)
        else:
            convs = layers.BatchNormalization()(convs)
            convs = layers.Activation("relu")(convs)
            convs = layers.Conv2D(
                n_bottleneck_plane,
                (v[0], v[1]),
                strides=v[2],
                padding=v[3],
                kernel_initializer=INIT,
                kernel_regularizer=regularizers.l2(WEIGHT_DECAY),
                use_bias=False,
            )(convs)

    # Shortcut connection: identity function or 1x1 convolutional
    # (depends on the difference between the input & output shapes; this
    # corresponds to whether we are using the first block in each group;
    # see `block_series()`).
    if n_input_plane != n_output_plane:
        shortcut = layers.Conv2D(
            n_output_plane,
            (1, 1),
            strides=stride,
            padding="same",
            kernel_initializer=INIT,
            kernel_regularizer=regularizers.l2(WEIGHT_DECAY),
            use_bias=False,
        )(x)
    else:
        shortcut = x

    return layers.Add()([convs, shortcut])


# Stacking residual units on the same stage
def block_series(x, n_input_plane, n_output_plane, count, stride):
    x = wide_basic(x, n_input_plane, n_output_plane, stride)
    for i in range(2, int(count + 1)):
        x = wide_basic(x, n_output_plane, n_output_plane, stride=1)
    return x


def get_network(image_size=32, num_classes=10):
    n = (DEPTH - 4) / 6
    n_stages = [16, 16 * WIDTH_MULT, 32 * WIDTH_MULT, 64 * WIDTH_MULT]

    inputs = keras.Input(shape=(image_size, image_size, 3))
    x = layers.Rescaling(scale=1.0 / 255)(inputs)

    conv1 = layers.Conv2D(
        n_stages[0],
        (3, 3),
        strides=1,
        padding="same",
        kernel_initializer=INIT,
        kernel_regularizer=regularizers.l2(WEIGHT_DECAY),
        use_bias=False,
    )(x)

    ## Add wide residual blocks ##

    conv2 = block_series(
        conv1,
        n_input_plane=n_stages[0],
        n_output_plane=n_stages[1],
        count=n,
        stride=(1, 1),
    )  # Stage 1

    conv3 = block_series(
        conv2,
        n_input_plane=n_stages[1],
        n_output_plane=n_stages[2],
        count=n,
        stride=(2, 2),
    )  # Stage 2

    conv4 = block_series(
        conv3,
        n_input_plane=n_stages[2],
        n_output_plane=n_stages[3],
        count=n,
        stride=(2, 2),
    )  # Stage 3

    batch_norm = layers.BatchNormalization()(conv4)
    relu = layers.Activation("relu")(batch_norm)

    # Classifier
    trunk_outputs = layers.GlobalAveragePooling2D()(relu)
    outputs = layers.Dense(
        num_classes, kernel_regularizer=regularizers.l2(WEIGHT_DECAY)
    )(trunk_outputs)

    return keras.Model(inputs, outputs)

We can now instantiate a Wide ResNet model like so. Note that the purpose of using a Wide ResNet here is to keep the implementation as close to the original one as possible.

wrn_model = get_network()
print(f"Model has {wrn_model.count_params()/1e6} Million parameters.")
Model has 1.471226 Million parameters.

Instantiate the AdaMatch model and compile it

reduce_lr = keras.optimizers.schedules.CosineDecay(LEARNING_RATE, TOTAL_STEPS, 0.25)
optimizer = keras.optimizers.Adam(reduce_lr)

adamatch_trainer = AdaMatch(model=wrn_model, total_steps=TOTAL_STEPS)
adamatch_trainer.compile(optimizer=optimizer)

Model training

total_ds = tf.data.Dataset.zip((final_source_ds, final_target_ds))
adamatch_trainer.fit(total_ds, epochs=EPOCHS)
Epoch 1/10
382/382 [==============================] - 155s 392ms/step - loss: 149259583488.0000
Epoch 2/10
382/382 [==============================] - 145s 379ms/step - loss: 2.0935
Epoch 3/10
382/382 [==============================] - 145s 380ms/step - loss: 1.7237
Epoch 4/10
382/382 [==============================] - 142s 370ms/step - loss: 1.9182
Epoch 5/10
382/382 [==============================] - 141s 367ms/step - loss: 2.9698
Epoch 6/10
382/382 [==============================] - 141s 368ms/step - loss: 3.2622
Epoch 7/10
382/382 [==============================] - 141s 367ms/step - loss: 2.9034
Epoch 8/10
382/382 [==============================] - 141s 368ms/step - loss: 3.2735
Epoch 9/10
382/382 [==============================] - 141s 369ms/step - loss: 3.9449
Epoch 10/10
382/382 [==============================] - 141s 369ms/step - loss: 3.5918

<keras.callbacks.History at 0x7f16eb261e20>

Evaluation on the target and source test sets

# Compile the AdaMatch model to yield accuracy.
adamatch_trained_model = adamatch_trainer.model
adamatch_trained_model.compile(metrics=keras.metrics.SparseCategoricalAccuracy())

# Score on the target test set.
svhn_test = svhn_test.batch(TARGET_BATCH_SIZE).prefetch(AUTO)
_, accuracy = adamatch_trained_model.evaluate(svhn_test)
print(f"Accuracy on target test set: {accuracy * 100:.2f}%")
136/136 [==============================] - 4s 24ms/step - loss: 508.2073 - sparse_categorical_accuracy: 0.2408
Accuracy on target test set: 24.08%

With more training, this score improves. When the same network is trained with a standard classification objective, it yields an accuracy of 7.20%, which is significantly lower than what we got with AdaMatch. You can check out this notebook to learn more about the hyperparameters and other experimental details.
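For reference, a source-only baseline along the following lines could be trained for such a comparison (this is only a sketch; the optimizer settings and number of epochs are assumptions, not the exact configuration behind the 7.20% figure):

# Train a purely supervised baseline on the labeled source data only.
baseline_model = get_network()
baseline_model.compile(
    optimizer=keras.optimizers.Adam(LEARNING_RATE),
    loss=keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=[keras.metrics.CategoricalAccuracy()],
)
baseline_model.fit(source_ds_w, epochs=EPOCHS)

# Re-compile with a sparse metric (SVHN labels are integers) and score on
# the target test set to see how poorly a source-only model transfers.
baseline_model.compile(metrics=[keras.metrics.SparseCategoricalAccuracy()])
_, baseline_accuracy = baseline_model.evaluate(svhn_test)
print(f"Source-only baseline accuracy on target test set: {baseline_accuracy * 100:.2f}%")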

# Utility function for preprocessing the source test set.
def prepare_test_ds_source(image, label):
    image = tf.image.resize_with_pad(image, RESIZE_TO, RESIZE_TO)
    image = tf.tile(image, [1, 1, 3])
    return image, label


source_test_ds = tf.data.Dataset.from_tensor_slices((mnist_x_test, mnist_y_test))
source_test_ds = (
    source_test_ds.map(prepare_test_ds_source, num_parallel_calls=AUTO)
    .batch(TARGET_BATCH_SIZE)
    .prefetch(AUTO)
)

# Evaluation on the source test set.
_, accuracy = adamatch_trained_model.evaluate(source_test_ds)
print(f"Accuracy on source test set: {accuracy * 100:.2f}%")
53/53 [==============================] - 2s 24ms/step - loss: 508.2072 - sparse_categorical_accuracy: 0.9736
Accuracy on source test set: 97.36%

You can reproduce the results by using these model weights.

Example available on HuggingFace:

| Trained Model | Demo |
| :--: | :--: |
| Generic badge | Generic badge |