Hands-On Machine Learning, Lesson 15: A Roundup of 4 Ways to Ensemble Neural Networks

施威銘研究室
Jan 21, 2022


Ensemble learning is an essential technique in machine learning: look at the first-place solutions of Kaggle competitions and you will find that they very often rely on ensemble methods.

Drawing on the book 「集成式學習:Python 實踐!整合全部技術,打造最強模型」 published by 旗標, and on the paper by Chen et al. (2017), this article covers 4 methods for ensembling neural networks in one go.

1. Random Initialization Ensembles (RIE)

Even for networks with an identical architecture, different initial parameters can lead to different training results. We can exploit this property: train several networks with the same architecture, then ensemble the predictions of all of them.
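As a quick sanity check (my own illustration, not from the book): using the create_model() function from the complete program at the end of this article, two freshly built models really do start from different random weights.

m1, m2 = create_model(), create_model()

# Compare the first convolutional kernel of the two models.
w1, w2 = m1.get_weights()[0], m2.get_weights()[0]
print(np.allclose(w1, w2))  # False: each model gets its own random initialization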

The Python implementation is straightforward: call the model-training function k times in a row (k = 5 in this article), then apply soft voting to the predictions. Besides soft voting, you can also aggregate the networks' outputs with weighted soft voting, hard voting, or stacking; a sketch of these variants follows the code block below. For more on output aggregation, see 「集成式學習:Python 實踐!整合全部技術,打造最強模型」 from 旗標.

model, history, weights_dict, pred0 = train_one_model(x_train, x_test,
                                                      y_train, y_test)
model, history, weights_dict, pred1 = train_one_model(x_train, x_test,
                                                      y_train, y_test)
model, history, weights_dict, pred2 = train_one_model(x_train, x_test,
                                                      y_train, y_test)
model, history, weights_dict, pred3 = train_one_model(x_train, x_test,
                                                      y_train, y_test)
model, history, weights_dict, pred4 = train_one_model(x_train, x_test,
                                                      y_train, y_test)

RIE_loss = metrics.log_loss(y_test,
                            (pred0 + pred1 + pred2 + pred3 + pred4) / 5)
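As promised, here is a minimal sketch of the other combiners mentioned above, applied to the same five prediction arrays. The weights in the weighted soft vote are made-up values for illustration only.

import numpy as np

preds = np.stack([pred0, pred1, pred2, pred3, pred4])  # (5, n_samples, 10)

# Soft voting: average the class probabilities (same as above)
soft_vote = preds.mean(axis=0)

# Weighted soft voting: give better models a larger say
w = np.array([1.9, 2.1, 2.0, 1.8, 2.2])  # illustrative weights
weighted_vote = np.tensordot(w / w.sum(), preds, axes=1)

# Hard voting: each model casts one vote for its argmax class
votes = preds.argmax(axis=2)  # (5, n_samples)
hard_vote = np.apply_along_axis(
    lambda v: np.bincount(v, minlength=10).argmax(), 0, votes)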

2. Checkpoint Ensemble (CE)

The RIE method above has one major drawback: it requires training many neural networks. Training even a single network can take a long time, so training a whole batch of them clearly burns a lot of compute. Chen et al. (2017) therefore proposed the Checkpoint Ensemble: while training a single network, record the model parameters and the loss at every epoch, then pick the k best sets of parameters and ensemble the predictions of those k models.

The key observation: all k models in the ensemble are produced during the training of one network, so we never have to train multiple networks, saving a great deal of time.

Recording the parameters and the loss of every epoch requires a callback function. Keras ships with built-in callbacks that need only minor adaptation.

# Record the model weights at the end of every epoch
weights_dict = {}
weight_callback = LambdaCallback(
    on_epoch_end=lambda epoch, logs:
        weights_dict.update({epoch: model.get_weights()}))

history = model.fit(datagen.flow(x_train, y_train,
                                 batch_size=batch_size),
                    steps_per_epoch=x_train.shape[0] // batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test),
                    callbacks=[weight_callback, early_stop])
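Keeping every epoch's weights in memory can get expensive for large models. As an alternative (not used in this article), Keras's built-in ModelCheckpoint callback writes each epoch's weights to disk instead; a minimal sketch, with an illustrative filename pattern:

from tensorflow.keras.callbacks import ModelCheckpoint

# Keras substitutes {epoch} and {val_loss} into each checkpoint's filename
checkpoint = ModelCheckpoint(
    filepath='ckpt_{epoch:03d}_{val_loss:.4f}.weights.h5',
    save_weights_only=True,
    save_freq='epoch')

# Later, restore any checkpoint with model.load_weights(path)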

We can use Python's heapq.nsmallest to pick out the k best sets of parameters from the training run. Then we load those k parameter sets into the model one by one, average the resulting predictions, and the ensemble is complete.
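To make the return value concrete, here is a toy run with made-up loss values. Note that heapq.nsmallest returns the epoch indices, not the losses themselves:

import heapq

toy_losses = [0.9, 0.5, 0.7, 0.4, 0.6, 0.8]
best = heapq.nsmallest(3, range(len(toy_losses)),
                       key=toy_losses.__getitem__)
print(best)  # [3, 1, 4]: the epochs with the 3 smallest losses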

# Indices of the 5 epochs with the smallest validation loss
best_epoch = heapq.nsmallest(5,
                             range(len(loss_history)),
                             key=loss_history.__getitem__)

# Load each checkpoint's weights and collect its predictions
model.set_weights(weights_dict[best_epoch[0]])
pred0 = model.predict(x_test)
model.set_weights(weights_dict[best_epoch[1]])
pred1 = model.predict(x_test)
model.set_weights(weights_dict[best_epoch[2]])
pred2 = model.predict(x_test)
model.set_weights(weights_dict[best_epoch[3]])
pred3 = model.predict(x_test)
model.set_weights(weights_dict[best_epoch[4]])
pred4 = model.predict(x_test)

CE_loss = metrics.log_loss(y_test,
                           (pred0 + pred1 + pred2 + pred3 + pred4) / 5)

3. Checkpoint Smoothers (CS)

Besides ensembling a network's outputs, we can also ensemble its parameters. Checkpoint Smoothers record the model parameters and the loss at every epoch while training a single network, pick the k best sets of parameters, average those parameters into a single model, and then use that model for prediction.

The difference between Checkpoint Smoothers and Checkpoint Ensemble: the former ensembles the model parameters, the latter ensembles the model predictions.

Again we use heapq.nsmallest to grab the k best sets of parameters, average the parameters, load the average back into the model, and predict to complete the ensemble.

best_epoch = heapq.nsmallest(5,
                             range(len(loss_history)),
                             key=loss_history.__getitem__)

# Average the 5 best checkpoints layer by layer. (Summing np.array() objects
# built from ragged weight lists breaks on recent NumPy, so we zip instead.)
cs_weights = [weights_dict[e] for e in best_epoch]
final_weights = [sum(layers) / 5.0 for layers in zip(*cs_weights)]

model.set_weights(final_weights)
CS_loss = metrics.log_loss(y_test, model.predict(x_test))
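The layer-wise averaging above generalizes to any number of checkpoints. Wrapped in a small helper (my own refactoring, not from the article):

def average_weights(weight_sets):
    """Layer-wise average of several model.get_weights() lists."""
    return [np.mean(layer_group, axis=0) for layer_group in zip(*weight_sets)]

model.set_weights(average_weights([weights_dict[e] for e in best_epoch]))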

4. Last K Smoothers (LKS)

You might wonder: what if two of the parameter sets that Checkpoint Smoothers averages sit at two distant points on the error surface? What kind of model would averaging those two even produce?

That concern is legitimate, and it is exactly why Last K Smoothers exist. This method takes the best set of parameters plus the parameters from the k - 1 epochs immediately before the best epoch, averages them, loads the average back into the model, and predicts to complete the ensemble.

best_epoch = np.argmin(loss_history)

# Average the best epoch's weights with those of the 4 epochs right
# before it, layer by layer (assumes best_epoch >= 4)
lks_weights = [weights_dict[best_epoch - i] for i in range(5)]
final_weights = [sum(layers) / 5.0 for layers in zip(*lks_weights)]

model.set_weights(final_weights)
LKS_loss = metrics.log_loss(y_test, model.predict(x_test))
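One caveat worth flagging (my own note, not from the article): with early stopping, training can end before epoch 4, in which case weights_dict[best_epoch - 4] would raise a KeyError. A small guard avoids this:

k = 5
start = max(int(best_epoch) - (k - 1), 0)  # clamp at epoch 0
lks_epochs = range(start, int(best_epoch) + 1)
lks_weights = [weights_dict[e] for e in lks_epochs]
final_weights = [sum(layers) / len(lks_weights) for layers in zip(*lks_weights)]
model.set_weights(final_weights)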

5. Comparison

Following the Chapter 6 and Chapter 8 code of 「自學機器學習 — 上Kaggle接軌世界,成為資料科學家」 from 旗標, we test the 4 ensemble methods in this article with a convolutional neural network on the CIFAR-10 dataset, using data augmentation and early stopping. The losses on the validation data (lower is better) are:

Method          Validation loss
Best Single     0.5155988335609436
RIE Ensemble    0.4396208110750885
CE Ensemble     0.4710198092700225
CS Ensemble     0.47761235939136787
LKS Ensemble    0.49694786799151325

All 4 ensemble methods beat the single model. Although CE, CS, and LKS come in slightly behind RIE, they do not require training a pile of networks, which remains a major advantage.

References

Chen, H., Lundberg, S., and Lee, S. I. (2017). Checkpoint Ensembles: Ensemble Methods from a Single Training Process. arXiv:1710.03282.

About the Author

Chia-Hao Li received the M.S. degree in computer science from Durham University, United Kingdom. His interests include computer algorithms, machine learning, and hardware/software co-design. He was formerly a senior engineer at MediaTek, Taiwan. His current research topic is the application of machine learning techniques to fault detection in high-performance computing systems.

Complete Program

import heapq
import numpy as np
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras import optimizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import LambdaCallback, EarlyStopping
from sklearn import metrics
def create_model():
    model = Sequential()
    model.add(Conv2D(filters=64, kernel_size=3, padding='same',
                     activation='relu', input_shape=(32, 32, 3)))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=128, kernel_size=3, padding='same',
                     activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=256, kernel_size=3, padding='same',
                     activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Flatten())
    model.add(Dropout(0.4))
    model.add(Dense(512, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    # 'learning_rate' replaces the deprecated 'lr' argument
    model.compile(loss="categorical_crossentropy",
                  optimizer=optimizers.Adam(learning_rate=0.001),
                  metrics=["accuracy"])
    return model
def train_one_model(x_train, x_test, y_train, y_test):
    batch_size = 128
    epochs = 100

    model = create_model()

    early_stop = EarlyStopping(monitor='val_loss',
                               min_delta=0,
                               patience=5)

    # Record the model weights at the end of every epoch
    weights_dict = {}
    weight_callback = LambdaCallback(
        on_epoch_end=lambda epoch, logs:
            weights_dict.update({epoch: model.get_weights()}))

    datagen = ImageDataGenerator(width_shift_range=0.1,
                                 height_shift_range=0.1,
                                 rotation_range=10,
                                 zoom_range=0.1,
                                 horizontal_flip=True)
    history = model.fit(datagen.flow(x_train, y_train,
                                     batch_size=batch_size),
                        steps_per_epoch=x_train.shape[0] // batch_size,
                        epochs=epochs,
                        verbose=1,
                        validation_data=(x_test, y_test),
                        callbacks=[weight_callback, early_stop])

    pred = model.predict(x_test)
    return model, history, weights_dict, pred

# Import Data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train, x_test = x_train/255.0, x_test/255.0
y_train, y_test = to_categorical(y_train), to_categorical(y_test)
# Build Model
model, history, weights_dict, pred0 = train_one_model(x_train, x_test,
                                                      y_train, y_test)
model, history, weights_dict, pred1 = train_one_model(x_train, x_test,
                                                      y_train, y_test)
model, history, weights_dict, pred2 = train_one_model(x_train, x_test,
                                                      y_train, y_test)
model, history, weights_dict, pred3 = train_one_model(x_train, x_test,
                                                      y_train, y_test)
model, history, weights_dict, pred4 = train_one_model(x_train, x_test,
                                                      y_train, y_test)
# Ensemble
loss_history = history.history['val_loss']
# Single Best Model
print("Best Single")
print(min(loss_history))
# RIE
print('RIE Ensemble')
print(metrics.log_loss(y_test,
                       (pred0 + pred1 + pred2 + pred3 + pred4) / 5))
# LKS: average the best epoch's weights with the 4 epochs before it,
# layer by layer (assumes best_epoch >= 4)
best_epoch = np.argmin(loss_history)
lks_weights = [weights_dict[best_epoch - i] for i in range(5)]
final_weights = [sum(layers) / 5.0 for layers in zip(*lks_weights)]
model.set_weights(final_weights)
print('LKS Ensemble')
print(metrics.log_loss(y_test, model.predict(x_test)))
# CS: average the weights of the 5 best epochs, layer by layer
best_epoch = heapq.nsmallest(5,
                             range(len(loss_history)),
                             key=loss_history.__getitem__)
cs_weights = [weights_dict[e] for e in best_epoch]
final_weights = [sum(layers) / 5.0 for layers in zip(*cs_weights)]
model.set_weights(final_weights)
print('CS Ensemble')
print(metrics.log_loss(y_test, model.predict(x_test)))
# CE: ensemble the predictions of the 5 best epochs
best_epoch = heapq.nsmallest(5,
                             range(len(loss_history)),
                             key=loss_history.__getitem__)
model.set_weights(weights_dict[best_epoch[0]])
pred0 = model.predict(x_test)
model.set_weights(weights_dict[best_epoch[1]])
pred1 = model.predict(x_test)
model.set_weights(weights_dict[best_epoch[2]])
pred2 = model.predict(x_test)
model.set_weights(weights_dict[best_epoch[3]])
pred3 = model.predict(x_test)
model.set_weights(weights_dict[best_epoch[4]])
pred4 = model.predict(x_test)
print('CE Ensemble')
print(metrics.log_loss(y_test,
                       (pred0 + pred1 + pred2 + pred3 + pred4) / 5))

