DL — AlexNet: Cat-vs-Dog Classification with an AlexNet-Style Convolutional Neural Network (Image Data Augmentation → Saving an h5 Model)


Design Approach

Training Process and Results

Found 17500 images belonging to 2 classes.
Found 7500 images belonging to 2 classes.

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 150, 150, 3)       0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 148, 148, 64)      1792
_________________________________________________________________
batch_normalization_1 (Batch (None, 148, 148, 64)      256
_________________________________________________________________
activation_1 (Activation)    (None, 148, 148, 64)      0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 74, 74, 64)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 72, 72, 64)        36928
_________________________________________________________________
batch_normalization_2 (Batch (None, 72, 72, 64)        256
_________________________________________________________________
activation_2 (Activation)    (None, 72, 72, 64)        0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 36, 36, 64)        0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 34, 34, 128)       73856
_________________________________________________________________
batch_normalization_3 (Batch (None, 34, 34, 128)       512
_________________________________________________________________
activation_3 (Activation)    (None, 34, 34, 128)       0
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 17, 17, 128)       0
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 15, 15, 128)       147584
_________________________________________________________________
batch_normalization_4 (Batch (None, 15, 15, 128)       512
_________________________________________________________________
activation_4 (Activation)    (None, 15, 15, 128)       0
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 7, 7, 128)         0
_________________________________________________________________
flatten_1 (Flatten)          (None, 6272)              0
_________________________________________________________________
dense_1 (Dense)              (None, 64)                401472
_________________________________________________________________
batch_normalization_5 (Batch (None, 64)                256
_________________________________________________________________
activation_5 (Activation)    (None, 64)                0
_________________________________________________________________
dropout_1 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 65
_________________________________________________________________
activation_6 (Activation)    (None, 1)                 0
=================================================================
Total params: 663,489
Trainable params: 662,593
Non-trainable params: 896
_________________________________________________________________
None
Epoch 1/10
 - 837s - loss: 0.8109 - binary_accuracy: 0.5731 - val_loss: 0.7552 - val_binary_accuracy: 0.6275
Epoch 2/10
 - 972s - loss: 0.6892 - binary_accuracy: 0.6184 - val_loss: 0.6323 - val_binary_accuracy: 0.6538
Epoch 3/10
 - 888s - loss: 0.6773 - binary_accuracy: 0.6275 - val_loss: 0.6702 - val_binary_accuracy: 0.6475
Epoch 4/10
 - 827s - loss: 0.6503 - binary_accuracy: 0.6522 - val_loss: 1.4757 - val_binary_accuracy: 0.5437
Epoch 5/10
 - 775s - loss: 0.6024 - binary_accuracy: 0.6749 - val_loss: 0.5872 - val_binary_accuracy: 0.6975
Epoch 6/10
 - 775s - loss: 0.5855 - binary_accuracy: 0.6935 - val_loss: 1.6343 - val_binary_accuracy: 0.5075
Epoch 7/10
 - 781s - loss: 0.5725 - binary_accuracy: 0.7117 - val_loss: 1.0417 - val_binary_accuracy: 0.5850
Epoch 8/10
 - 770s - loss: 0.5594 - binary_accuracy: 0.7268 - val_loss: 0.6793 - val_binary_accuracy: 0.6150
Epoch 9/10
 - 774s - loss: 0.5619 - binary_accuracy: 0.7239 - val_loss: 0.7271 - val_binary_accuracy: 0.5737
Epoch 10/10
 - 772s - loss: 0.5206 - binary_accuracy: 0.7485 - val_loss: 1.2269 - val_binary_accuracy: 0.5564
train_history.history {'val_loss': [0.7552271389961243, 0.6323019933700561, 0.6702361726760864, 1.4756725096702576, 0.5872411811351776, 1.6343200182914734, 1.0417238283157348, 0.679338448047638, 0.7270535206794739, 1.2268943945566813], 'val_binary_accuracy': [0.6275, 0.65375, 0.6475, 0.54375, 0.6975, 0.5075, 0.585, 0.615, 0.57375, 0.5564102564102564], 'loss': [0.8109277236846185, 0.6891729639422509, 0.6772915293132106, 0.6502932430275025, 0.6023876513204267, 0.5855168705025027, 0.5725259766463311, 0.5594036031153894, 0.561434359863551, 0.5205760602989504], 'binary_accuracy': [0.5730846774193549, 0.6184475806451613, 0.6275201612903226, 0.6522177419354839, 0.6748991935483871, 0.6935483870967742, 0.7116935483870968, 0.7268145161290323, 0.7242424240015974, 0.7484879032258065]}
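
For reference, the train_history.history dictionary printed above can be turned into per-epoch loss and accuracy curves. This is a minimal plotting sketch, assuming matplotlib is available; the helper name show_train_history is illustrative, and train_history is the object returned when the model is trained (see the training sketch at the end of this post).

import matplotlib.pyplot as plt

def show_train_history(history, metric, val_metric):
    # Plot a training metric and its validation counterpart against the epoch index.
    plt.plot(history[metric])
    plt.plot(history[val_metric])
    plt.title('Train History')
    plt.xlabel('Epoch')
    plt.ylabel(metric)
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()

show_train_history(train_history.history, 'binary_accuracy', 'val_binary_accuracy')
show_train_history(train_history.history, 'loss', 'val_loss')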

Data Augmentation with ImageDataGenerator

Data augmentation enlarges the effective size of the dataset and strengthens the model's ability to generalize, for example through rotation, deformation, and normalization of pixel values.

  • Expanding the amount of data: apply simple preprocessing to the images (such as rescaling or changing the range of pixel values);
    randomly shuffle the image order and loop over the image set indefinitely, so the data never "runs out";
    add random perturbations to the images, which greatly increases the effective amount of data and helps avoid the overfitting caused by feeding the network the same training images over and over.
  • Improving training efficiency: training a neural network usually requires splitting the data into small batches (for example, feeding the network 16 images per batch); with ImageDataGenerator this only takes a single argument, batch_size = 16. A minimal usage sketch follows this list.
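
Below is a minimal sketch of how such generators might be set up for this task; it is what would produce the "Found 17500 images belonging to 2 classes" / "Found 7500 images belonging to 2 classes" lines in the log above. The directory paths (data/train, data/validation) and the specific augmentation parameters are illustrative assumptions; only batch_size = 16, the two-class binary mode, and the 150×150 input size come from the text and the model summary.

from keras.preprocessing.image import ImageDataGenerator

# Training generator: rescale pixel values and add random perturbations.
train_datagen = ImageDataGenerator(
    rescale=1. / 255,       # map pixel values from [0, 255] to [0, 1]
    rotation_range=20,      # random rotations
    shear_range=0.2,        # random shearing
    zoom_range=0.2,         # random zooming
    horizontal_flip=True)   # random horizontal flips

# Validation generator: only rescale, never perturb.
val_datagen = ImageDataGenerator(rescale=1. / 255)

# flow_from_directory infers the two classes (cat / dog) from the sub-folder names,
# shuffles the images, and yields batches of 16 indefinitely.
train_generator = train_datagen.flow_from_directory(
    'data/train',           # hypothetical path; adjust to the actual dataset layout
    target_size=(150, 150),
    batch_size=16,
    class_mode='binary')

validation_generator = val_datagen.flow_from_directory(
    'data/validation',      # hypothetical path
    target_size=(150, 150),
    batch_size=16,
    class_mode='binary')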

AlexNet-Style Model Code

from keras.layers import Input, Conv2D, BatchNormalization, Activation, MaxPooling2D, Flatten, Dense, Dropout

image_size = (150, 150)          # input resolution; matches the (None, 150, 150, 3) input in the summary above
n_channels = 3
input_shape = (*image_size, n_channels)
input_layer = Input(input_shape)
z = input_layer

# Block 1: 3x3 convolution -> batch norm -> ReLU -> 2x2 max pooling
z = Conv2D(64, (3, 3))(z)
z = BatchNormalization()(z)
z = Activation('relu')(z)
z = MaxPooling2D(pool_size=(2, 2))(z)

# Block 2
z = Conv2D(64, (3, 3))(z)
z = BatchNormalization()(z)
z = Activation('relu')(z)
z = MaxPooling2D(pool_size=(2, 2))(z)

# Block 3
z = Conv2D(128, (3, 3))(z)
z = BatchNormalization()(z)
z = Activation('relu')(z)
z = MaxPooling2D(pool_size=(2, 2))(z)

# Block 4
z = Conv2D(128, (3, 3))(z)
z = BatchNormalization()(z)
z = Activation('relu')(z)
z = MaxPooling2D(pool_size=(2, 2))(z)

# Classifier head
z = Flatten()(z)                 # flatten the feature maps into a 1-D vector
z = Dense(64)(z)
z = BatchNormalization()(z)
z = Activation('relu')(z)
z = Dropout(0.5)(z)
z = Dense(1)(z)
z = Activation('sigmoid')(z)     # single sigmoid output for binary (cat/dog) classification
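
The snippet above stops at the model definition. To complete the pipeline described in the title (compile, train on the augmented generators, and save the model as an h5 file), a minimal sketch follows. The loss, metric, epoch count, and verbose level are read off the training log above; the optimizer choice, the h5 file name, and the generator names (from the augmentation sketch earlier) are assumptions.

from keras.models import Model

model = Model(input_layer, z)
print(model.summary())              # prints the layer summary shown above

# binary_crossentropy / binary_accuracy match the metrics in the training log;
# the optimizer is assumed (the original does not say which one was used).
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['binary_accuracy'])

# Train for 10 epochs on the augmented generators (verbose=2 gives the one-line-per-epoch log above).
train_history = model.fit_generator(train_generator,
                                    epochs=10,
                                    validation_data=validation_generator,
                                    verbose=2)
print('train_history.history', train_history.history)

# Save the architecture and weights to an HDF5 file (file name is illustrative).
model.save('alexnet_cat_dog.h5')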
