Contents
1. MobileOne: a high-performance mobile backbone with sub-1 ms latency
2. The MobileOne block
3. Source code
1. MobileOne: a high-performance mobile backbone with sub-1 ms latency
Paper title: MobileOne: An Improved One millisecond Mobile Backbone
Paper link: https://arxiv.org/abs/2206.04040
Efficient neural network backbones for mobile devices are often optimized for metrics such as FLOPs or parameter count. However, these metrics may not correlate well with the latency of the network when it is deployed on a mobile device. We therefore perform an extensive analysis of different metrics by deploying several mobile-friendly networks on a mobile device, identify and analyze architectural and optimization bottlenecks in recent efficient neural networks, and provide ways to mitigate these bottlenecks. To this end, we design an efficient backbone, MobileOne, whose variants achieve an inference time under 1 ms on an iPhone 12 with 75.9% top-1 accuracy on ImageNet. We show that MobileOne achieves state-of-the-art performance among efficient architectures while being many times faster on mobile. Our best model obtains performance on ImageNet similar to MobileFormer while being 38x faster, and 2.3% better top-1 accuracy on ImageNet than EfficientNet at similar latency. Furthermore, we show that our model generalizes to multiple tasks (image classification, object detection, and semantic segmentation) with significant improvements in latency and accuracy compared to existing efficient architectures deployed on mobile devices.
MobileOne (≈ MobileNetV1 + RepVGG + training tricks) is an ultra-lightweight architecture proposed by Apple and optimized on the iPhone 12. It reaches 75.9% top-1 accuracy on ImageNet with under 1 ms of inference latency.
2. The MobileOne block
(Block-structure figure omitted.) During training, each MobileOne block is over-parameterized: a depthwise stage with k parallel 3x3 depthwise conv-BN branches, a 1x1 depthwise conv-BN branch, and a BN-only identity branch when input and output shapes match, followed by a pointwise stage with k parallel 1x1 conv-BN branches plus a BN-only identity branch. At inference, all branches are re-parameterized into a single depthwise convolution and a single pointwise convolution, in the spirit of RepVGG.
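For reference, all of the branch fusion in the code below relies on the standard BatchNorm-folding identity: for a convolution with weight W followed by a BatchNorm layer with scale γ, shift β, running mean μ, running variance σ² and stabilizer ε, the fused convolution has

    W' = W · γ / sqrt(σ² + ε),    b' = β − μ · γ / sqrt(σ² + ε)

which is exactly what _fuse_bn_tensor computes in the code.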
3. Source code
import time
import torch.nn as nn
import numpy as np
import torch
import copy
def conv_bn(in_channels, out_channels, kernel_size, stride, padding, groups=1):
result = nn.Sequential()
result.add_module('conv', nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size,
stride=stride, padding=padding, groups=groups, bias=False))
result.add_module('bn', nn.BatchNorm2d(num_features=out_channels))
return result
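# conv_bn builds a Conv2d (bias=False) followed by BatchNorm2d; the BN statistics are
# later folded into the convolution by _fuse_bn_tensor when switching to deploy mode.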
class DepthWiseConv(nn.Module):
def __init__(self, inc, kernel_size, stride=1):
super().__init__()
padding = 1
if kernel_size == 1:
padding = 0
# self.conv = nn.Sequential(
# nn.Conv2d(inc, inc, kernel_size, stride, padding, groups=inc, bias=False,),
# nn.BatchNorm2d(inc),
# )
self.conv = conv_bn(inc, inc, kernel_size, stride, padding, inc)
def forward(self, x):
return self.conv(x)
class PointWiseConv(nn.Module):
def __init__(self, inc, outc):
super().__init__()
# self.conv = nn.Sequential(
# nn.Conv2d(inc, outc, 1, 1, 0, bias=False),
# nn.BatchNorm2d(outc),
# )
self.conv = conv_bn(inc, outc, 1, 1, 0)
def forward(self, x):
return self.conv(x)
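# MobileOneBlock: at training time the block holds k parallel 3x3 depthwise conv-BN
# branches, one 1x1 depthwise conv-BN branch and an optional BN-only identity branch,
# followed by k parallel 1x1 pointwise conv-BN branches plus an optional BN-only
# identity. In deploy mode these collapse into dw_reparam and pw_reparam.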
class MobileOneBlock(nn.Module):
def __init__(self, in_channels, out_channels, k, stride=1, dilation=1, padding_mode='zeros', deploy=False,
use_se=False):
super(MobileOneBlock, self).__init__()
        self.deploy = deploy
        self.in_channels = in_channels
        self.out_channels = out_channels
kernel_size = 3
padding = 1
assert kernel_size == 3
assert padding == 1
self.k = k
padding_11 = padding - kernel_size // 2
self.nonlinearity = nn.ReLU()
        if use_se:
            # SEBlock is not included in this snippet; fail loudly rather than
            # leave self.se undefined.
            # self.se = SEBlock(out_channels, internal_neurons=out_channels // 16)
            raise NotImplementedError('SEBlock is not provided in this snippet')
        else:
            self.se = nn.Identity()
if deploy:
self.dw_reparam = nn.Conv2d(in_channels=in_channels, out_channels=in_channels, kernel_size=kernel_size,
stride=stride, padding=padding, dilation=dilation, groups=in_channels,
bias=True, padding_mode=padding_mode)
self.pw_reparam = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=1,
bias=True)
else:
# self.rbr_identity = nn.BatchNorm2d(num_features=in_channels) if out_channels == in_channels and stride == 1 else None
# self.rbr_dense = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, groups=groups)
# self.rbr_1x1 = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=stride, padding=padding_11, groups=groups)
# print('RepVGG Block, identity = ', self.rbr_identity)
self.dw_bn_layer = nn.BatchNorm2d(in_channels) if out_channels == in_channels and stride == 1 else None
for k_idx in range(k):
setattr(self, f'dw_3x3_{k_idx}', DepthWiseConv(in_channels, 3, stride=stride))
self.dw_1x1 = DepthWiseConv(in_channels, 1, stride=stride)
self.pw_bn_layer = nn.BatchNorm2d(in_channels) if out_channels == in_channels and stride == 1 else None
for k_idx in range(k):
setattr(self, f'pw_1x1_{k_idx}', PointWiseConv(in_channels, out_channels))
def forward(self, inputs):
if self.deploy:
x = self.dw_reparam(inputs)
x = self.nonlinearity(x)
x = self.pw_reparam(x)
x = self.nonlinearity(x)
return x
if self.dw_bn_layer is None:
id_out = 0
else:
id_out = self.dw_bn_layer(inputs)
x_conv_3x3 = []
for k_idx in range(self.k):
x = getattr(self, f'dw_3x3_{k_idx}')(inputs)
# print(x.shape)
x_conv_3x3.append(x)
x_conv_1x1 = self.dw_1x1(inputs)
# print(x_conv_1x1.shape, x_conv_3x3[0].shape)
# print(x_conv_1x1.shape)
# print(id_out)
x = id_out + x_conv_1x1 + sum(x_conv_3x3)
x = self.nonlinearity(self.se(x))
# 1x1 conv
if self.pw_bn_layer is None:
id_out = 0
else:
id_out = self.pw_bn_layer(x)
x_conv_1x1 = []
for k_idx in range(self.k):
x_conv_1x1.append(getattr(self, f'pw_1x1_{k_idx}')(x))
x = id_out + sum(x_conv_1x1)
x = self.nonlinearity(x)
return x
# Optional. This improves the accuracy and facilitates quantization.
# 1. Cancel the original weight decay on rbr_dense.conv.weight and rbr_1x1.conv.weight.
# 2. Use like this.
# loss = criterion(....)
# for every RepVGGBlock blk:
# loss += weight_decay_coefficient * 0.5 * blk.get_cust_L2()
# optimizer.zero_grad()
# loss.backward()
def get_custom_L2(self):
# K3 = self.rbr_dense.conv.weight
# K1 = self.rbr_1x1.conv.weight
# t3 = (self.rbr_dense.bn.weight / ((self.rbr_dense.bn.running_var + self.rbr_dense.bn.eps).sqrt())).reshape(-1, 1, 1, 1).detach()
# t1 = (self.rbr_1x1.bn.weight / ((self.rbr_1x1.bn.running_var + self.rbr_1x1.bn.eps).sqrt())).reshape(-1, 1, 1, 1).detach()
# l2_loss_circle = (K3 ** 2).sum() - (K3[:, :, 1:2, 1:2] ** 2).sum() # The L2 loss of the "circle" of weights in 3x3 kernel. Use regular L2 on them.
# eq_kernel = K3[:, :, 1:2, 1:2] * t3 + K1 * t1 # The equivalent resultant central point of 3x3 kernel.
# l2_loss_eq_kernel = (eq_kernel ** 2 / (t3 ** 2 + t1 ** 2)).sum() # Normalize for an L2 coefficient comparable to regular L2.
# return l2_loss_eq_kernel + l2_loss_circle
...
# This func derives the equivalent kernel and bias in a DIFFERENTIABLE way.
# You can get the equivalent kernel and bias at any time and do whatever you want,
# for example, apply some penalties or constraints during training, just like you do to the other models.
# May be useful for quantization or pruning.
def get_equivalent_kernel_bias(self):
# kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
# kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1)
# kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity)
# return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid
dw_kernel_3x3 = []
dw_bias_3x3 = []
for k_idx in range(self.k):
k3, b3 = self._fuse_bn_tensor(getattr(self, f"dw_3x3_{k_idx}").conv)
# print(k3.shape, b3.shape)
dw_kernel_3x3.append(k3)
dw_bias_3x3.append(b3)
dw_kernel_1x1, dw_bias_1x1 = self._fuse_bn_tensor(self.dw_1x1.conv)
dw_kernel_id, dw_bias_id = self._fuse_bn_tensor(self.dw_bn_layer, self.in_channels)
dw_kernel = sum(dw_kernel_3x3) + self._pad_1x1_to_3x3_tensor(dw_kernel_1x1) + dw_kernel_id
dw_bias = sum(dw_bias_3x3) + dw_bias_1x1 + dw_bias_id
# pw
pw_kernel = []
pw_bias = []
for k_idx in range(self.k):
k1, b1 = self._fuse_bn_tensor(getattr(self, f"pw_1x1_{k_idx}").conv)
# print(k1.shape)
pw_kernel.append(k1)
pw_bias.append(b1)
pw_kernel_id, pw_bias_id = self._fuse_bn_tensor(self.pw_bn_layer, 1)
pw_kernel_1x1 = sum(pw_kernel) + pw_kernel_id
pw_bias_1x1 = sum(pw_bias) + pw_bias_id
return dw_kernel, dw_bias, pw_kernel_1x1, pw_bias_1x1
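    # _pad_1x1_to_3x3_tensor zero-pads a fused 1x1 kernel to a centered 3x3 kernel so
    # it can be summed elementwise with the fused 3x3 depthwise kernels above.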
def _pad_1x1_to_3x3_tensor(self, kernel1x1):
if kernel1x1 is None:
return 0
else:
return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])
def _fuse_bn_tensor(self, branch, groups=None):
if branch is None:
return 0, 0
if isinstance(branch, nn.Sequential):
kernel = branch.conv.weight
bias = branch.conv.bias
running_mean = branch.bn.running_mean
running_var = branch.bn.running_var
gamma = branch.bn.weight
beta = branch.bn.bias
eps = branch.bn.eps
else:
assert isinstance(branch, nn.BatchNorm2d)
# if not hasattr(self, 'id_tensor'):
input_dim = self.in_channels // groups # self.groups
if groups == 1:
ks = 1
else:
ks = 3
kernel_value = np.zeros((self.in_channels, input_dim, ks, ks), dtype=np.float32)
for i in range(self.in_channels):
if ks == 1:
kernel_value[i, i % input_dim, 0, 0] = 1
else:
kernel_value[i, i % input_dim, 1, 1] = 1
self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
kernel = self.id_tensor
running_mean = branch.running_mean
running_var = branch.running_var
gamma = branch.weight
beta = branch.bias
eps = branch.eps
std = (running_var + eps).sqrt()
t = (gamma / std).reshape(-1, 1, 1, 1)
return kernel * t, beta - running_mean * gamma / std
def switch_to_deploy(self):
dw_kernel, dw_bias, pw_kernel, pw_bias = self.get_equivalent_kernel_bias()
self.dw_reparam = nn.Conv2d(in_channels=self.pw_1x1_0.conv.conv.in_channels,
out_channels=self.pw_1x1_0.conv.conv.in_channels, kernel_size=self.dw_3x3_0.conv.conv.kernel_size,
stride=self.dw_3x3_0.conv.conv.stride, padding=self.dw_3x3_0.conv.conv.padding,
groups=self.dw_3x3_0.conv.conv.in_channels, bias=True, )
self.pw_reparam = nn.Conv2d(in_channels=self.pw_1x1_0.conv.conv.in_channels,
out_channels=self.pw_1x1_0.conv.conv.out_channels, kernel_size=1, stride=1, bias=True)
self.dw_reparam.weight.data = dw_kernel
self.dw_reparam.bias.data = dw_bias
self.pw_reparam.weight.data = pw_kernel
self.pw_reparam.bias.data = pw_bias
for para in self.parameters():
para.detach_()
self.__delattr__('dw_1x1')
for k_idx in range(self.k):
self.__delattr__(f'dw_3x3_{k_idx}')
self.__delattr__(f'pw_1x1_{k_idx}')
if hasattr(self, 'dw_bn_layer'):
self.__delattr__('dw_bn_layer')
if hasattr(self, 'pw_bn_layer'):
self.__delattr__('pw_bn_layer')
if hasattr(self, 'id_tensor'):
self.__delattr__('id_tensor')
self.deploy = True
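# A minimal sanity check for the re-parameterization (illustrative; the shapes below
# are arbitrary assumptions, and eval() mode is required so BN uses running stats):
# block = MobileOneBlock(32, 64, k=4, stride=1)
# block.eval()
# x = torch.rand(1, 32, 56, 56)
# y_train = block(x)
# block.switch_to_deploy()
# y_deploy = block(x)
# print((y_train - y_deploy).abs().max())  # expected to be on the order of 1e-6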
class MobileOne(nn.Module):
def __init__(self, blocks, ks, channels, strides, width_muls, num_classes, deploy=False):
super().__init__()
self.stage_num = len(blocks)
# self.stage0 = MobileOneBlock(3, int(channels[0] * width_muls[0]), ks[0], stride=strides[0], deploy=deploy)
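        # Note: the stem below takes a 4-channel input, matching the demo tensor in __main__.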
self.stage0 = nn.Sequential(nn.Conv2d(4, int(channels[0] * width_muls[0]), 3, 2, 1, bias=False),
nn.BatchNorm2d(int(channels[0] * width_muls[0])), nn.ReLU(), )
in_channels = int(channels[0] * width_muls[0])
for idx, block_num in enumerate(blocks[1:]):
idx += 1
module = []
out_channels = int(channels[idx] * width_muls[idx])
for b_idx in range(block_num):
stride = strides[idx] if b_idx == 0 else 1
block = MobileOneBlock(in_channels, out_channels, ks[idx], stride, deploy=deploy)
in_channels = out_channels
module.append(block)
setattr(self, f"stage{idx}", nn.Sequential(*module))
self.avg_pool = nn.AdaptiveAvgPool2d(1)
self.fc1 = nn.Sequential(nn.Linear(out_channels, num_classes, ), )
def forward(self, x):
# for s_idx in range(self.stage_num):
# x = getattr(self, f'stage{s_idx}')(x)
x0 = self.stage0(x)
# print(x0[0,:,0,0])
# return x0
x1 = self.stage1(x0)
x2 = self.stage2(x1)
x3 = self.stage3(x2)
x4 = self.stage4(x3)
x5 = self.stage5(x4)
x = self.avg_pool(x5)
x = torch.flatten(x, start_dim=1) # b, c
x = self.fc1(x)
return x
def make_mobileone_s0(num_classes, deploy=False):
    blocks = [1, 2, 8, 5, 5, 1]
    strides = [2, 2, 2, 2, 1, 2]
    ks = [4, 4, 4, 4, 4, 4] if deploy is False else [1, 1, 1, 1, 1, 1]
    width_muls = [0.75, 0.75, 1, 1, 1, 2]  # 261 M flops
    channels = [64, 64, 128, 256, 256, 512, 512]
    model = MobileOne(blocks, ks, channels, strides, width_muls, num_classes, deploy)
    return model
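# Configuration notes for the s0 variant: blocks[i] is the number of MobileOneBlocks in
# stage i, width_muls scales the per-stage channel counts, and k=4 parallel branches are
# built per block during training (in deploy mode the branches are never built, so k=1).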
def repvgg_model_convert(model: torch.nn.Module, do_copy=True, input=None, output=None):
if do_copy:
model = copy.deepcopy(model)
for module in model.modules():
if hasattr(module, 'switch_to_deploy'):
module.switch_to_deploy()
    print('switch done. Checking...')
    # Build the deploy-mode model with the same number of classes as the trained model.
    deploy_model = make_mobileone_s0(model.fc1[0].out_features, deploy=True)
deploy_model.eval()
deploy_model.load_state_dict(model.state_dict())
if input is not None:
        o = deploy_model(input)
# print(o)
# print(output)
print((output - o).sum())
# if save_path is not None:
# torch.save(model.state_dict(), save_path)
return deploy_model
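# Example conversion (illustrative usage of the function above):
# train_model = make_mobileone_s0(num_classes=4)
# train_model.eval()
# x = torch.rand(1, 4, 128, 128)
# deploy_model = repvgg_model_convert(train_model, input=x, output=train_model(x))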
if __name__ == '__main__':
model = make_mobileone_s0(num_classes=4)#.cuda(0)
model.eval()
data = torch.rand(1, 4, 128, 128)#.cuda(0)
for i in range(10):
start = time.time()
out = model(data)
print('time', time.time() - start, out.size())
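Note that the timing loop above measures PyTorch latency on the host machine (CPU here, or GPU if the .cuda(0) calls are uncommented), so its numbers are not comparable to the paper's sub-1 ms figure, which is measured for the re-parameterized model on an iPhone 12.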