ResNeXt is a typical hybrid model, combining ideas from Inception and ResNet; its essence is grouped convolution (the groups parameter). The core innovation is to replace ResNet's original three-layer convolution block with a parallel stack of blocks that all share the same topology, which raises accuracy without a noticeable increase in the number of parameters. Because the topology is identical across branches, there are also fewer hyperparameters to tune, which makes the model easier to port.
For a more detailed reading of the paper, see my previous note: Classic Neural Network Papers Explained in Detail (8) — ResNeXt Study Notes (Translation + Close Reading + Code Reproduction).
Next, let's reproduce the code.
1. ResNeXt Block Structure
1.1 Basic Structure
ResNeXt is an improved version of ResNet. The changes are small: the original residual structure is replaced by a new block built around the idea of grouped convolution. The figure below shows a basic ResNeXt block.
The left side of the figure is the basic structure, inspired by ResNet's bottleneck (for a detailed walk-through of the ResNet code, see my earlier post: ResNet Code Reproduction + Detailed Comments (PyTorch)). Inspired by Inception, the paper splits the residual branch into several paths; the number of paths is exactly what cardinality means (for a detailed walk-through of the Inception code, see: GoogLeNet InceptionV1 Code Reproduction + Detailed Comments (PyTorch)).
The right side of the figure shows the grouped-convolution view proposed in ResNeXt: the 256-channel input is compressed by 1×1 convolutions into 32 groups of width 4 (128 channels in total); after the 3×3 convolutions, 1×1 convolutions expand the 32 groups back to 256 channels each, and the 32 results are summed position-wise into a single 256-channel output.
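To make the channel bookkeeping of the figure concrete, here is a minimal shape walk-through of the grouped-convolution form; the layer names reduce, group3x3 and expand are only illustrative and are not part of the code below.
import torch
import torch.nn as nn

# Shape walk-through of the grouped-convolution form of the block (illustrative sketch)
x = torch.randn(1, 256, 56, 56)                                    # 256-channel input
reduce = nn.Conv2d(256, 128, kernel_size=1, bias=False)            # 32 groups x 4 channels = 128
group3x3 = nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=32, bias=False)
expand = nn.Conv2d(128, 256, kernel_size=1, bias=False)            # back to 256 channels
out = expand(group3x3(reduce(x))) + x                              # plus the shortcut connection
print(out.shape)                                                   # torch.Size([1, 256, 56, 56])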
1.2 Three Equivalent Optimized Structures
(a) Split first, convolve each path and compute its output separately, then sum the outputs: the split-transform-merge three-stage form.
(b) Split first, convolve each path separately, then concatenate before computing the output: the last 1×1 convolution of every branch is merged into a single convolution.
(c) Plain grouped convolution: the first 1×1 convolution of every branch is merged into a single convolution, and the 3×3 convolution is implemented as a group convolution whose number of groups equals the cardinality.
The three blocks above are mathematically completely equivalent.
Take (c) as an example: a 1×1 convolution layer reduces the input channels from 256 to 128; a grouped convolution with 3×3 kernels and 32 groups then processes the result; a 1×1 convolution layer raises the dimension back, and the output is added to the input to give the final output.
Looking at (b), the first and second convolution layers are grouped: the first layer (1×1 kernels, each with 256 input channels) is split into 32 groups of 4 kernels, so each group outputs 4 channels; the second layer is split into 32 matching groups, each taking 4 input channels and producing 4 output channels with its 4 kernels; the group outputs are concatenated into 128 channels and passed through a convolution layer with 256 kernels to give the final output.
For (a), the last layer of (b) is split further: the output of each of the 32 groups from the second layer goes through its own convolution layer (1×1 kernels, each with 4 input channels, 256 kernels in total), and the 32 outputs are summed to give the final output.
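As a quick check that the split-and-merge forms and the grouped-convolution form really compute the same thing, here is a small sketch of the 3×3 step; copying the grouped weights into per-branch convolutions is purely for this demonstration and is not part of the network code.
import torch
import torch.nn as nn

cardinality, group_width = 32, 4
channels = cardinality * group_width  # 128

# Form (c): one grouped 3x3 convolution over all 128 channels
grouped = nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=cardinality, bias=False)

x = torch.randn(1, channels, 56, 56)

# Forms (a)/(b): 32 independent 3x3 convolutions on 4-channel slices, then concatenate
outs = []
for g in range(cardinality):
    sl = slice(g * group_width, (g + 1) * group_width)
    branch = nn.Conv2d(group_width, group_width, kernel_size=3, padding=1, bias=False)
    branch.weight.data.copy_(grouped.weight.data[sl])  # reuse the weights of this group
    outs.append(branch(x[:, sl]))
split_merge = torch.cat(outs, dim=1)

print(torch.allclose(grouped(x), split_merge, atol=1e-5))  # True: the two forms are equivalent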
2. ResNeXt Network Structure
The figure below compares ResNet-50 with ResNeXt-50 (32x4d). The overall structures are identical; ResNeXt simply swaps in its own basic block. Here 32 is the number of groups C (the cardinality) of the first ResNeXt block in the network, and 4d is the width, i.e. the number of channels per group (so the grouped convolution of the first block has 32 × 4 = 128 input channels).
Two design principles of the model:
(1) Blocks that produce output feature maps of the same spatial size share the same hyperparameters (width and kernel size).
(2) Whenever the spatial resolution is halved (downsampling), the block width is doubled, so the computational complexity of every block stays roughly the same.
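As a quick sanity check of rule (2) and of the 32x4d naming, the per-stage group width can be computed directly; this small sketch assumes the same width formula used later in the Bottleneck code.
# Per-stage width of the grouped 3x3 convolution in ResNeXt-50 (32x4d)
groups, width_per_group = 32, 4
for stage_out in [64, 128, 256, 512]:              # base channels of conv2_x .. conv5_x
    width = int(stage_out * (width_per_group / 64.)) * groups
    print(stage_out, '->', width)                  # 64->128, 128->256, 256->512, 512->1024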
3. PyTorch Implementation of ResNeXt
3.1 The BasicBlock Module
The basic block, i.e. the BasicBlock used in the 18/34-layer networks. It is implemented exactly as in ResNet, so it is not discussed at length here.
Code
'''-------------1. BasicBlock module-----------------------------'''
# Basic residual block used by ResNet-18 and ResNet-34
class BasicBlock(nn.Module):
    expansion = 1  # BasicBlock does not expand the channel count

    # **kwargs absorbs the groups/width_per_group arguments passed by _make_layer
    def __init__(self, in_channel, out_channel, stride=1, downsample=None, **kwargs):
        super(BasicBlock, self).__init__()
        self.left = nn.Sequential(
            nn.Conv2d(in_channel, out_channel, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channel),
            nn.ReLU(),
            nn.Conv2d(out_channel, out_channel, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(out_channel)
        )
        self.downsample = downsample  # projection shortcut used when shapes differ

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)  # match the identity branch to the residual's shape
        out = self.left(x)   # residual branch
        out += identity      # core of ResNet: add the input back onto the output
        out = F.relu(out)
        return out
3.2 The Bottleneck Module
As the table shows, in ResNeXt the first and second convolution layers of each convX stage have twice as many kernels as in ResNet. In the code this means adding two parameters, groups and width_per_group (the number of groups and the number of kernels per group in the conv2 group convolution), and using them to compute the output width of the first convolution layer (twice that of ResNet).
Code
'''-------------2. Bottleneck module-----------------------------'''
class Bottleneck(nn.Module):
    expansion = 4

    # Compared with ResNet, two parameters are added: groups and width_per_group
    # (the number of groups and the number of kernels per group in the conv2 group convolution).
    # With the default values this is exactly the plain ResNet bottleneck.
    def __init__(self, in_channel, out_channel, stride=1, downsample=None,
                 groups=1, width_per_group=64):
        super(Bottleneck, self).__init__()
        # width is the channel count of the 3x3 convolution; with the defaults it equals out_channel,
        # but with groups=32 and width_per_group=4 it is doubled
        width = int(out_channel * (width_per_group / 64.)) * groups
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=width,
                               kernel_size=1, stride=1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        # -----------------------------------------
        # group convolution; the number of groups is passed in
        self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups,
                               kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel * self.expansion,
                               kernel_size=1, stride=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channel * self.expansion)
        # -----------------------------------------
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)
        out = self.conv3(out)
        out = self.bn3(out)
        out += identity  # residual connection
        out = self.relu(out)
        return out
3.3 Building the ResNeXt Network
(1) Overall network structure
Following block (c): a 1×1 convolution layer first reduces the channel count of the input feature map from 256 to 128; a 3×3 group convolution with 32 groups then processes it; another 1×1 convolution layer raises the channel count from 128 back to 256; finally the main branch and the shortcut are added to give the output.
Code
'''-------------3. Building the ResNeXt network-----------------------------'''
class ResNeXt(nn.Module):
    def __init__(self,
                 block,             # type of block (BasicBlock or Bottleneck)
                 blocks_num,        # number of blocks in each stage
                 num_classes=1000,  # number of classes
                 include_top=True,  # whether to include the classification head (useful for transfer learning)
                 groups=1,          # number of groups of the group convolution
                 width_per_group=64):
        super(ResNeXt, self).__init__()
        self.include_top = include_top
        self.in_channel = 64
        self.groups = groups
        self.width_per_group = width_per_group
        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2,
                               padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, blocks_num[0])             # conv2_x: base 64 (group width 128 for 32x4d)
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2)  # conv3_x: base 128 (group width 256)
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2)  # conv4_x: base 256 (group width 512)
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2)  # conv5_x: base 512 (group width 1024)
        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)

    # Builds the structure of a single stage
    def _make_layer(self, block, channel, block_num, stride=1):
        downsample = None
        if stride != 1 or self.in_channel != channel * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion))
        # Put the first residual block of the stage (the one that may downsample) into the layers list
        layers = []
        layers.append(block(self.in_channel,
                            channel,
                            downsample=downsample,
                            stride=stride,
                            groups=self.groups,
                            width_per_group=self.width_per_group))
        self.in_channel = channel * block.expansion  # output channels of this stage
        # Append the remaining residual blocks of the stage, completing its construction
        for _ in range(1, block_num):
            layers.append(block(self.in_channel,
                                channel,
                                groups=self.groups,
                                width_per_group=self.width_per_group))
        # Return the Conv Block and Identity Blocks together as one stage
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        if self.include_top:
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.fc(x)
        return x
(2) Building the model variants
When building a model, simply pass in the residual block that corresponds to the desired depth. Besides the block type, the number of times each block is repeated also differs, so it is passed as a parameter too. Each variant just calls the ResNeXt class with different arguments.
Code
def ResNet34(num_classes=1000, include_top=True):
    return ResNeXt(BasicBlock, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)

def ResNet50(num_classes=1000, include_top=True):
    return ResNeXt(Bottleneck, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)

def ResNet101(num_classes=1000, include_top=True):
    return ResNeXt(Bottleneck, [3, 4, 23, 3], num_classes=num_classes, include_top=include_top)

# ResNeXt50_32x4d from the paper
def ResNeXt50_32x4d(num_classes=1000, include_top=True):
    groups = 32
    width_per_group = 4
    return ResNeXt(Bottleneck, [3, 4, 6, 3],
                   num_classes=num_classes,
                   include_top=include_top,
                   groups=groups,
                   width_per_group=width_per_group)

def ResNeXt101_32x8d(num_classes=1000, include_top=True):
    groups = 32
    width_per_group = 8
    return ResNeXt(Bottleneck, [3, 4, 23, 3],
                   num_classes=num_classes,
                   include_top=include_top,
                   groups=groups,
                   width_per_group=width_per_group)
3.4 Testing the Network Model
(1) Build and print the ResNeXt50_32x4d from the paper
if __name__ == '__main__':
    model = ResNeXt50_32x4d()
    print(model)
    input = torch.randn(1, 3, 224, 224)
    out = model(input)
    print(out.shape)
    # test()
The printed model is as follows:
ResNeXt(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): Bottleneck(
(conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer2): Sequential(
(0): Bottleneck(
(conv1): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer3): Sequential(
(0): Bottleneck(
(conv1): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(3): Bottleneck(
(conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(4): Bottleneck(
(conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(5): Bottleneck(
(conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(layer4): Sequential(
(0): Bottleneck(
(conv1): Conv2d(1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(downsample): Sequential(
(0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(2048, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
(2): Bottleneck(
(conv1): Conv2d(2048, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(bn2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
)
)
(avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
(fc): Linear(in_features=2048, out_features=1000, bias=True)
)
torch.Size([1, 1000])
Process finished with exit code 0
(2) Use torchsummary to print detailed information about the network model
from torchsummary import summary

if __name__ == '__main__':
    net = ResNeXt50_32x4d().cuda()
    summary(net, (3, 224, 224))
The printed summary is as follows:
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 112, 112] 9,408
BatchNorm2d-2 [-1, 64, 112, 112] 128
ReLU-3 [-1, 64, 112, 112] 0
MaxPool2d-4 [-1, 64, 56, 56] 0
Conv2d-5 [-1, 256, 56, 56] 16,384
BatchNorm2d-6 [-1, 256, 56, 56] 512
Conv2d-7 [-1, 128, 56, 56] 8,192
BatchNorm2d-8 [-1, 128, 56, 56] 256
ReLU-9 [-1, 128, 56, 56] 0
Conv2d-10 [-1, 128, 56, 56] 4,608
BatchNorm2d-11 [-1, 128, 56, 56] 256
ReLU-12 [-1, 128, 56, 56] 0
Conv2d-13 [-1, 256, 56, 56] 32,768
BatchNorm2d-14 [-1, 256, 56, 56] 512
ReLU-15 [-1, 256, 56, 56] 0
Bottleneck-16 [-1, 256, 56, 56] 0
Conv2d-17 [-1, 128, 56, 56] 32,768
BatchNorm2d-18 [-1, 128, 56, 56] 256
ReLU-19 [-1, 128, 56, 56] 0
Conv2d-20 [-1, 128, 56, 56] 4,608
BatchNorm2d-21 [-1, 128, 56, 56] 256
ReLU-22 [-1, 128, 56, 56] 0
Conv2d-23 [-1, 256, 56, 56] 32,768
BatchNorm2d-24 [-1, 256, 56, 56] 512
ReLU-25 [-1, 256, 56, 56] 0
Bottleneck-26 [-1, 256, 56, 56] 0
Conv2d-27 [-1, 128, 56, 56] 32,768
BatchNorm2d-28 [-1, 128, 56, 56] 256
ReLU-29 [-1, 128, 56, 56] 0
Conv2d-30 [-1, 128, 56, 56] 4,608
BatchNorm2d-31 [-1, 128, 56, 56] 256
ReLU-32 [-1, 128, 56, 56] 0
Conv2d-33 [-1, 256, 56, 56] 32,768
BatchNorm2d-34 [-1, 256, 56, 56] 512
ReLU-35 [-1, 256, 56, 56] 0
Bottleneck-36 [-1, 256, 56, 56] 0
Conv2d-37 [-1, 512, 28, 28] 131,072
BatchNorm2d-38 [-1, 512, 28, 28] 1,024
Conv2d-39 [-1, 256, 56, 56] 65,536
BatchNorm2d-40 [-1, 256, 56, 56] 512
ReLU-41 [-1, 256, 56, 56] 0
Conv2d-42 [-1, 256, 28, 28] 18,432
BatchNorm2d-43 [-1, 256, 28, 28] 512
ReLU-44 [-1, 256, 28, 28] 0
Conv2d-45 [-1, 512, 28, 28] 131,072
BatchNorm2d-46 [-1, 512, 28, 28] 1,024
ReLU-47 [-1, 512, 28, 28] 0
Bottleneck-48 [-1, 512, 28, 28] 0
Conv2d-49 [-1, 256, 28, 28] 131,072
BatchNorm2d-50 [-1, 256, 28, 28] 512
ReLU-51 [-1, 256, 28, 28] 0
Conv2d-52 [-1, 256, 28, 28] 18,432
BatchNorm2d-53 [-1, 256, 28, 28] 512
ReLU-54 [-1, 256, 28, 28] 0
Conv2d-55 [-1, 512, 28, 28] 131,072
BatchNorm2d-56 [-1, 512, 28, 28] 1,024
ReLU-57 [-1, 512, 28, 28] 0
Bottleneck-58 [-1, 512, 28, 28] 0
Conv2d-59 [-1, 256, 28, 28] 131,072
BatchNorm2d-60 [-1, 256, 28, 28] 512
ReLU-61 [-1, 256, 28, 28] 0
Conv2d-62 [-1, 256, 28, 28] 18,432
BatchNorm2d-63 [-1, 256, 28, 28] 512
ReLU-64 [-1, 256, 28, 28] 0
Conv2d-65 [-1, 512, 28, 28] 131,072
BatchNorm2d-66 [-1, 512, 28, 28] 1,024
ReLU-67 [-1, 512, 28, 28] 0
Bottleneck-68 [-1, 512, 28, 28] 0
Conv2d-69 [-1, 256, 28, 28] 131,072
BatchNorm2d-70 [-1, 256, 28, 28] 512
ReLU-71 [-1, 256, 28, 28] 0
Conv2d-72 [-1, 256, 28, 28] 18,432
BatchNorm2d-73 [-1, 256, 28, 28] 512
ReLU-74 [-1, 256, 28, 28] 0
Conv2d-75 [-1, 512, 28, 28] 131,072
BatchNorm2d-76 [-1, 512, 28, 28] 1,024
ReLU-77 [-1, 512, 28, 28] 0
Bottleneck-78 [-1, 512, 28, 28] 0
Conv2d-79 [-1, 1024, 14, 14] 524,288
BatchNorm2d-80 [-1, 1024, 14, 14] 2,048
Conv2d-81 [-1, 512, 28, 28] 262,144
BatchNorm2d-82 [-1, 512, 28, 28] 1,024
ReLU-83 [-1, 512, 28, 28] 0
Conv2d-84 [-1, 512, 14, 14] 73,728
BatchNorm2d-85 [-1, 512, 14, 14] 1,024
ReLU-86 [-1, 512, 14, 14] 0
Conv2d-87 [-1, 1024, 14, 14] 524,288
BatchNorm2d-88 [-1, 1024, 14, 14] 2,048
ReLU-89 [-1, 1024, 14, 14] 0
Bottleneck-90 [-1, 1024, 14, 14] 0
Conv2d-91 [-1, 512, 14, 14] 524,288
BatchNorm2d-92 [-1, 512, 14, 14] 1,024
ReLU-93 [-1, 512, 14, 14] 0
Conv2d-94 [-1, 512, 14, 14] 73,728
BatchNorm2d-95 [-1, 512, 14, 14] 1,024
ReLU-96 [-1, 512, 14, 14] 0
Conv2d-97 [-1, 1024, 14, 14] 524,288
BatchNorm2d-98 [-1, 1024, 14, 14] 2,048
ReLU-99 [-1, 1024, 14, 14] 0
Bottleneck-100 [-1, 1024, 14, 14] 0
Conv2d-101 [-1, 512, 14, 14] 524,288
BatchNorm2d-102 [-1, 512, 14, 14] 1,024
ReLU-103 [-1, 512, 14, 14] 0
Conv2d-104 [-1, 512, 14, 14] 73,728
BatchNorm2d-105 [-1, 512, 14, 14] 1,024
ReLU-106 [-1, 512, 14, 14] 0
Conv2d-107 [-1, 1024, 14, 14] 524,288
BatchNorm2d-108 [-1, 1024, 14, 14] 2,048
ReLU-109 [-1, 1024, 14, 14] 0
Bottleneck-110 [-1, 1024, 14, 14] 0
Conv2d-111 [-1, 512, 14, 14] 524,288
BatchNorm2d-112 [-1, 512, 14, 14] 1,024
ReLU-113 [-1, 512, 14, 14] 0
Conv2d-114 [-1, 512, 14, 14] 73,728
BatchNorm2d-115 [-1, 512, 14, 14] 1,024
ReLU-116 [-1, 512, 14, 14] 0
Conv2d-117 [-1, 1024, 14, 14] 524,288
BatchNorm2d-118 [-1, 1024, 14, 14] 2,048
ReLU-119 [-1, 1024, 14, 14] 0
Bottleneck-120 [-1, 1024, 14, 14] 0
Conv2d-121 [-1, 512, 14, 14] 524,288
BatchNorm2d-122 [-1, 512, 14, 14] 1,024
ReLU-123 [-1, 512, 14, 14] 0
Conv2d-124 [-1, 512, 14, 14] 73,728
BatchNorm2d-125 [-1, 512, 14, 14] 1,024
ReLU-126 [-1, 512, 14, 14] 0
Conv2d-127 [-1, 1024, 14, 14] 524,288
BatchNorm2d-128 [-1, 1024, 14, 14] 2,048
ReLU-129 [-1, 1024, 14, 14] 0
Bottleneck-130 [-1, 1024, 14, 14] 0
Conv2d-131 [-1, 512, 14, 14] 524,288
BatchNorm2d-132 [-1, 512, 14, 14] 1,024
ReLU-133 [-1, 512, 14, 14] 0
Conv2d-134 [-1, 512, 14, 14] 73,728
BatchNorm2d-135 [-1, 512, 14, 14] 1,024
ReLU-136 [-1, 512, 14, 14] 0
Conv2d-137 [-1, 1024, 14, 14] 524,288
BatchNorm2d-138 [-1, 1024, 14, 14] 2,048
ReLU-139 [-1, 1024, 14, 14] 0
Bottleneck-140 [-1, 1024, 14, 14] 0
Conv2d-141 [-1, 2048, 7, 7] 2,097,152
BatchNorm2d-142 [-1, 2048, 7, 7] 4,096
Conv2d-143 [-1, 1024, 14, 14] 1,048,576
BatchNorm2d-144 [-1, 1024, 14, 14] 2,048
ReLU-145 [-1, 1024, 14, 14] 0
Conv2d-146 [-1, 1024, 7, 7] 294,912
BatchNorm2d-147 [-1, 1024, 7, 7] 2,048
ReLU-148 [-1, 1024, 7, 7] 0
Conv2d-149 [-1, 2048, 7, 7] 2,097,152
BatchNorm2d-150 [-1, 2048, 7, 7] 4,096
ReLU-151 [-1, 2048, 7, 7] 0
Bottleneck-152 [-1, 2048, 7, 7] 0
Conv2d-153 [-1, 1024, 7, 7] 2,097,152
BatchNorm2d-154 [-1, 1024, 7, 7] 2,048
ReLU-155 [-1, 1024, 7, 7] 0
Conv2d-156 [-1, 1024, 7, 7] 294,912
BatchNorm2d-157 [-1, 1024, 7, 7] 2,048
ReLU-158 [-1, 1024, 7, 7] 0
Conv2d-159 [-1, 2048, 7, 7] 2,097,152
BatchNorm2d-160 [-1, 2048, 7, 7] 4,096
ReLU-161 [-1, 2048, 7, 7] 0
Bottleneck-162 [-1, 2048, 7, 7] 0
Conv2d-163 [-1, 1024, 7, 7] 2,097,152
BatchNorm2d-164 [-1, 1024, 7, 7] 2,048
ReLU-165 [-1, 1024, 7, 7] 0
Conv2d-166 [-1, 1024, 7, 7] 294,912
BatchNorm2d-167 [-1, 1024, 7, 7] 2,048
ReLU-168 [-1, 1024, 7, 7] 0
Conv2d-169 [-1, 2048, 7, 7] 2,097,152
BatchNorm2d-170 [-1, 2048, 7, 7] 4,096
ReLU-171 [-1, 2048, 7, 7] 0
Bottleneck-172 [-1, 2048, 7, 7] 0
AdaptiveAvgPool2d-173 [-1, 2048, 1, 1] 0
Linear-174 [-1, 1000] 2,049,000
================================================================
Total params: 25,028,904
Trainable params: 25,028,904
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 361.78
Params size (MB): 95.48
Estimated Total Size (MB): 457.83
----------------------------------------------------------------
Process finished with exit code 0
3.5 Complete Code
import torch
import torch.nn as nn
import torch.nn.functional as F

'''-------------1. BasicBlock module-----------------------------'''
# Basic residual block used by ResNet-18 and ResNet-34
class BasicBlock(nn.Module):
    expansion = 1  # BasicBlock does not expand the channel count

    # **kwargs absorbs the groups/width_per_group arguments passed by _make_layer
    def __init__(self, in_channel, out_channel, stride=1, downsample=None, **kwargs):
        super(BasicBlock, self).__init__()
        self.left = nn.Sequential(
            nn.Conv2d(in_channel, out_channel, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channel),
            nn.ReLU(),
            nn.Conv2d(out_channel, out_channel, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(out_channel)
        )
        self.downsample = downsample  # projection shortcut used when shapes differ

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)  # match the identity branch to the residual's shape
        out = self.left(x)   # residual branch
        out += identity      # core of ResNet: add the input back onto the output
        out = F.relu(out)
        return out

'''-------------2. Bottleneck module-----------------------------'''
class Bottleneck(nn.Module):
    expansion = 4

    # Compared with ResNet, two parameters are added: groups and width_per_group
    # (the number of groups and the number of kernels per group in the conv2 group convolution).
    # With the default values this is exactly the plain ResNet bottleneck.
    def __init__(self, in_channel, out_channel, stride=1, downsample=None,
                 groups=1, width_per_group=64):
        super(Bottleneck, self).__init__()
        # width is the channel count of the 3x3 convolution; with the defaults it equals out_channel,
        # but with groups=32 and width_per_group=4 it is doubled
        width = int(out_channel * (width_per_group / 64.)) * groups
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=width,
                               kernel_size=1, stride=1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        # -----------------------------------------
        # group convolution; the number of groups is passed in
        self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups,
                               kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel * self.expansion,
                               kernel_size=1, stride=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channel * self.expansion)
        # -----------------------------------------
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)
        out = self.conv3(out)
        out = self.bn3(out)
        out += identity  # residual connection
        out = self.relu(out)
        return out

'''-------------3. Building the ResNeXt network-----------------------------'''
class ResNeXt(nn.Module):
    def __init__(self,
                 block,             # type of block (BasicBlock or Bottleneck)
                 blocks_num,        # number of blocks in each stage
                 num_classes=1000,  # number of classes
                 include_top=True,  # whether to include the classification head (useful for transfer learning)
                 groups=1,          # number of groups of the group convolution
                 width_per_group=64):
        super(ResNeXt, self).__init__()
        self.include_top = include_top
        self.in_channel = 64
        self.groups = groups
        self.width_per_group = width_per_group
        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2,
                               padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, blocks_num[0])             # conv2_x: base 64 (group width 128 for 32x4d)
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2)  # conv3_x: base 128 (group width 256)
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2)  # conv4_x: base 256 (group width 512)
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2)  # conv5_x: base 512 (group width 1024)
        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)

    # Builds the structure of a single stage
    def _make_layer(self, block, channel, block_num, stride=1):
        downsample = None
        if stride != 1 or self.in_channel != channel * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion))
        # Put the first residual block of the stage (the one that may downsample) into the layers list
        layers = []
        layers.append(block(self.in_channel,
                            channel,
                            downsample=downsample,
                            stride=stride,
                            groups=self.groups,
                            width_per_group=self.width_per_group))
        self.in_channel = channel * block.expansion  # output channels of this stage
        # Append the remaining residual blocks of the stage, completing its construction
        for _ in range(1, block_num):
            layers.append(block(self.in_channel,
                                channel,
                                groups=self.groups,
                                width_per_group=self.width_per_group))
        # Return the Conv Block and Identity Blocks together as one stage
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        if self.include_top:
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.fc(x)
        return x

def ResNet34(num_classes=1000, include_top=True):
    return ResNeXt(BasicBlock, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)

def ResNet50(num_classes=1000, include_top=True):
    return ResNeXt(Bottleneck, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)

def ResNet101(num_classes=1000, include_top=True):
    return ResNeXt(Bottleneck, [3, 4, 23, 3], num_classes=num_classes, include_top=include_top)

# ResNeXt50_32x4d from the paper
def ResNeXt50_32x4d(num_classes=1000, include_top=True):
    groups = 32
    width_per_group = 4
    return ResNeXt(Bottleneck, [3, 4, 6, 3],
                   num_classes=num_classes,
                   include_top=include_top,
                   groups=groups,
                   width_per_group=width_per_group)

def ResNeXt101_32x8d(num_classes=1000, include_top=True):
    groups = 32
    width_per_group = 8
    return ResNeXt(Bottleneck, [3, 4, 23, 3],
                   num_classes=num_classes,
                   include_top=include_top,
                   groups=groups,
                   width_per_group=width_per_group)

'''
if __name__ == '__main__':
    model = ResNeXt50_32x4d()
    print(model)
    input = torch.randn(1, 3, 224, 224)
    out = model(input)
    print(out.shape)
    # test()
'''

from torchsummary import summary

if __name__ == '__main__':
    net = ResNeXt50_32x4d().cuda()
    summary(net, (3, 224, 224))