Common Ways to Use Pretrained Models

  • Generate feature maps
  • Add GlobalAveragePooling
  • Compare outputs

Generating feature maps

See the previous article; the reference link below can also be consulted.
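
As a quick reminder, here is a minimal feature-map extraction sketch. It assumes the same Xception convolutional base used later in this post; the random batch x is only a placeholder for real preprocessed images.

import numpy as np
from keras.applications import Xception
from keras.applications.xception import preprocess_input

# Convolutional base only (no classifier), as in the next section
conv_base = Xception(weights='imagenet', include_top=False, input_shape=(299, 299, 3))

x = np.random.randint(0, 256, size=(4, 299, 299, 3)).astype('float32')  # placeholder image batch
features = conv_base.predict(preprocess_input(x))
print(features.shape)  # (4, 10, 10, 2048): one 10x10x2048 feature map per image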

Direct use

# Import the Xception convolutional base
from keras.applications import Xception

conv_base = Xception(
    weights='imagenet',         # weight checkpoint used to initialize the model
    include_top=False,          # whether to include the densely connected classifier on top
    input_shape=(299, 299, 3)   # shape of the input tensor fed to the network
)
conv_base.summary()

# Alternative: unfreeze everything, then only fine-tune layers from 'block13_pool' onward
# conv_base.trainable = True
# set_trainable = False
# for layer in conv_base.layers:
#     if layer.name == 'block13_pool':
#         set_trainable = True
#     if set_trainable:
#         layer.trainable = True
#     else:
#         layer.trainable = False

# Freeze the first FREEZE_LAYERS layers and fine-tune the rest
FREEZE_LAYERS = 2
for layer in conv_base.layers[:FREEZE_LAYERS]:
    layer.trainable = False
for layer in conv_base.layers[FREEZE_LAYERS:]:
    layer.trainable = True

# Build a classifier on top of the convolutional base
from keras import models
from keras import layers

model = models.Sequential()
model.add(conv_base)
model.add(layers.GlobalMaxPooling2D())
model.add(layers.Dropout(0.3))
model.add(layers.Dense(12, activation='softmax'))  # use 'softmax' for categorical (multi-class) labels
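
For completeness, a minimal training sketch follows. The directory path, optimizer settings, batch size, and epoch count are all assumptions; substitute your own dataset and hyperparameters.

from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import Adam

# Hypothetical data pipeline: one subfolder per class under data/train
train_gen = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    'data/train',
    target_size=(299, 299),
    batch_size=32,
    class_mode='categorical')

model.compile(optimizer=Adam(lr=1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit_generator(train_gen,
                    steps_per_epoch=train_gen.samples // 32,
                    epochs=10)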

Comparing outputs / feature-vector generation, method ②

My explanation:

  • Here the pretrained models are loaded with their final fully connected layers included (include_top=True).
  • However, each model is truncated at the 'avg_pool' layer to obtain a feature vector, and a new Dense (fully connected) layer is then added on top.
  • Four prediction outputs are produced: the Xception prediction, the InceptionV3 prediction, a prediction from the concatenated features, and the element-wise maximum of the first two.
  • The predictions can then be compared (see the sketch after the sample code).

Reference link

Sample code

from keras.applications import Xception, InceptionV3
from keras.layers import Input, Dense, Dropout, concatenate, maximum
from keras.models import Model

# Load both pretrained networks with their classifiers, then truncate each at its
# 'avg_pool' layer so that it outputs a feature vector instead of class probabilities
base_model1 = Xception(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
base_model1 = Model(inputs=[base_model1.input], outputs=[base_model1.get_layer('avg_pool').output],
                    name='xception')

base_model2 = InceptionV3(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
base_model2 = Model(inputs=[base_model2.input], outputs=[base_model2.get_layer('avg_pool').output],
                    name='inceptionv3')

# A single input image is fed through both backbones
img1 = Input(shape=(299, 299, 3), name='img_1')

feature1 = base_model1(img1)
feature2 = base_model2(img1)

# Add a fully connected classifier on top of each feature vector
category_predict1 = Dense(100, activation='softmax', name='ctg_out_1')(
    Dropout(0.5)(feature1)
)

category_predict2 = Dense(100, activation='softmax', name='ctg_out_2')(
    Dropout(0.5)(feature2)
)

# A third classifier on the concatenated features, plus the element-wise maximum
# of the two individual predictions
category_predict = Dense(100, activation='softmax', name='ctg_out')(
    concatenate([feature1, feature2])
)
max_category_predict = maximum([category_predict1, category_predict2])

model = Model(inputs=[img1],
              outputs=[category_predict1, category_predict2, category_predict, max_category_predict])
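
To round this off, here is a minimal sketch of compiling the four-output model and comparing its predictions. The equal loss weights and the variable x (a preprocessed image batch) are assumptions, not part of the original code.

import numpy as np

# One categorical-crossentropy loss per output, weighted equally (assumption)
model.compile(optimizer='adam',
              loss=['categorical_crossentropy'] * 4,
              loss_weights=[1.0, 1.0, 1.0, 1.0],
              metrics=['accuracy'])

# Compare the four predictions for a (hypothetical) preprocessed batch x
# p1, p2, p_concat, p_max = model.predict(x)
# print(np.argmax(p1, axis=1), np.argmax(p2, axis=1),
#       np.argmax(p_concat, axis=1), np.argmax(p_max, axis=1))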