
Vehicle Detection under Traffic Cameras Based on YOLOv8 (Part 1)

This series walks you through YOLOv8, from getting started to building your own improvements, so research becomes easy.

1. Traffic-camera vehicle detection dataset
Dataset source: 极市开发者平台 (the CVMart developer platform, a computer-vision algorithm development and deployment platform by 极市科技). The dataset has a single class, "car", with 5248 / 582 / 291 images in the training / validation / test splits. As the figure below shows, all images are vehicle scenes, and the objects come in different sizes, both large and small.

1.1 Why small objects are hard to detect
"Small objects" here follow the COCO definition: objects whose pixel area is below 32*32 pixels. The main difficulties of small-object detection are:
- By definition, a small object carries very little RGB information and therefore provides few discriminative features.
- Dataset imbalance. For COCO in particular, only 51.82% of the images contain small objects, a severe image-level imbalance (see the statistics in the figure below).

2. YOLOv8 overview
Main changes:
- Backbone: still follows the CSP idea, but YOLOv5's C3 module is replaced by the C2f module for further light-weighting; YOLOv8 keeps the SPPF module used in YOLOv5 and similar architectures.
- PAN-FPN: YOLOv8 still uses the PAN idea, but comparing the YOLOv5 and YOLOv8 structure diagrams shows that YOLOv8 removes the convolution in the PAN-FPN up-sampling stage and replaces the C3 modules with C2f.
- Decoupled head: YOLOv8 switches to a decoupled head.
- Anchor-free: YOLOv8 drops the anchor-based design in favour of anchor-free prediction.
- Losses: YOLOv8 uses VFL Loss as the classification loss and DFL Loss + CIoU Loss as the regression loss.
- Sample assignment: YOLOv8 abandons IoU matching and single-side ratio assignment in favour of the Task-Aligned Assigner.

2.1 The C2f module
C2f is designed with reference to the C3 module and the ELAN idea, so YOLOv8 stays lightweight while obtaining richer gradient-flow information.

Code:
```python
class C2f(nn.Module):
    # CSP Bottleneck with 2 convolutions
    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        self.c = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv((2 + n) * self.c, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.ModuleList(
            Bottleneck(self.c, self.c, shortcut, g, k=((3, 3), (3, 3)), e=1.0) for _ in range(n))

    def forward(self, x):
        y = list(self.cv1(x).split((self.c, self.c), 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.cv2(torch.cat(y, 1))
```

3. Training and visual analysis
```
YOLOv8 summary (fused): 168 layers, 3005843 parameters, 0 gradients, 8.1 GFLOPs
    Class  Images  Instances  Box(P      R      mAP50  mAP50-95): 100%|██████████| 10/10 [00:18<00:00, 1.90s/it]
    all    582     6970       0.816      0.676  0.745  0.385
```
Training results:
- P_curve.png plots precision against confidence (confidence on the x-axis); the figure shows that the higher the confidence, the higher the precision.
- PR_curve.png: P stands for precision and R for recall; the curve shows the precision-recall relationship.
- R_curve.png plots recall against confidence; read it the same way as P_curve.
- results.png: panels (1,1) and (2,1) show the mean CIoU box loss during training and validation respectively; the smaller it is, the more accurate the boxes. Panels (1,2) and (2,2) are presumably the mean detection (objectness) loss; the smaller it is, the more accurate the detections. Panels (2,4) and (2,5) show mAP, i.e. the per-class AP over all images at a given IoU threshold, averaged over classes; mAP_0.5:0.95 averages mAP over IoU thresholds from 0.5 to 0.95 in steps of 0.05.
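The article does not show its training command; for completeness, here is a minimal sketch of how such a run can be launched with the ultralytics Python API. The dataset YAML name (car.yaml), the epoch count and the image size are illustrative assumptions, not the settings behind the numbers above.

```python
from ultralytics import YOLO

# Minimal training/validation sketch (assumed paths and hyper-parameters).
model = YOLO("yolov8n.yaml")            # build YOLOv8n from its config
model.train(data="car.yaml",            # hypothetical dataset yaml: one class "car", train/val/test splits
            epochs=100, imgsz=640)      # assumed schedule, not the article's exact settings
metrics = model.val()                   # reports P, R, mAP50 and mAP50-95, as in the summary above
```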


Vehicle Detection under Traffic Cameras Based on YOLOv8 (Part 2): Multi-Scale Dilated Attention (MSDA) | CAS Tier-1 journal

🚀🚀🚀 This article's improvement: a new attention mechanism, Multi-Scale Dilated Attention (MSDA). MSDA models local and sparse patch interactions within a small neighbourhood. How to use it in YOLOv8: 1) as an attention block at various positions in the network; 2) combined with C2f to replace the original C2f.
🚀🚀🚀 With MSDA's multi-scale property, mAP50 on the traffic-camera vehicle detection project improves from the baseline 0.745 to 0.756.
🚀🚀🚀 YOLOv8 improvement column: http://t.csdnimg.cn/hGhVK
This series walks you through YOLOv8, from getting started to building your own improvements, so research becomes easy.

1. Traffic-camera vehicle detection dataset
Dataset source: 极市开发者平台 (the CVMart developer platform, a computer-vision algorithm development and deployment platform by 极市科技). The dataset has a single class, "car", with 5248 / 582 / 291 images in the training / validation / test splits. As the figure below shows, all images are vehicle scenes, and the objects come in different sizes, both large and small.

1.1 Why small objects are hard to detect
"Small objects" here follow the COCO definition: objects whose pixel area is below 32*32 pixels. The main difficulties of small-object detection are:
- By definition, a small object carries very little RGB information and therefore provides few discriminative features.
- Dataset imbalance. For COCO in particular, only 51.82% of the images contain small objects, a severe image-level imbalance (see the statistics in the figure below).

2. DilateFormer
Paper: 2302.01791.pdf (arxiv.org)
This paper proposes a novel multi-scale dilated Transformer, DilateFormer, for visual recognition. Existing ViT models face a trade-off between computational complexity and receptive-field size: global attention can model long-range dependencies between arbitrary image patches, but a global receptive field comes at quadratic computational cost. Moreover, several studies suggest that directly modelling global dependencies on shallow features can be redundant and therefore unnecessary.
To overcome this, the authors propose a new attention mechanism, Multi-Scale Dilated Attention (MSDA). MSDA models local and sparse patch interactions within a small range. The idea comes from analysing patch interactions in the global attention of shallow ViT layers: the attention matrices there have two key properties, locality and sparsity, meaning that for shallow semantic modelling most patches far from the query are irrelevant, so a global attention module carries a lot of redundancy.
As shown in the figure below, the MSDA module also adopts a multi-head design: the feature-map channels are split into n heads, and each head performs Sliding Window Dilated Attention (SWDA) with a different dilation rate. This aggregates semantic information at several scales within the attended receptive field and effectively reduces the redundancy of self-attention, without complex operations or extra computational cost. A simplified reference sketch of SWDA/MSDA is given after this article.
Overall, by mixing multi-scale dilated attention with multi-head self-attention, DilateFormer handles long-range dependencies while staying computationally efficient and adapting to inputs of different scales and resolutions.

3. Training and visual analysis
mAP50 improves from the baseline 0.745 to 0.756.
```
YOLOv8_DilateBlock summary (fused): 182 layers, 3268755 parameters, 0 gradients, 8.3 GFLOPs
    Class  Images  Instances  Box(P      R      mAP50  mAP50-95): 100%|██████████| 10/10 [00:19<00:00, 1.97s/it]
    all    582     6970       0.814      0.688  0.756  0.395
```
Training results: PR_curve.png – P stands for precision and R for recall; the curve shows the precision-recall relationship.
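To make the SWDA/MSDA description above concrete, here is a small reference sketch written from the paper's description rather than taken from the authors' released code: a simplified single-head sliding-window dilated attention, plus a multi-scale wrapper that gives each channel group its own dilation rate. The class names, dilation choices and the 1x1 fusion convolution are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SWDA(nn.Module):
    """Sliding Window Dilated Attention (single head, sketch): every query position
    attends only to its k x k neighbourhood sampled with the given dilation rate."""
    def __init__(self, dim, kernel_size=3, dilation=1):
        super().__init__()
        self.k, self.d = kernel_size, dilation
        self.scale = dim ** -0.5
        self.qkv = nn.Conv2d(dim, dim * 3, 1)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)
        pad = self.d * (self.k - 1) // 2
        # gather the k*k dilated neighbourhood of every position for keys and values
        k = F.unfold(k, self.k, dilation=self.d, padding=pad).view(b, c, self.k * self.k, h * w)
        v = F.unfold(v, self.k, dilation=self.d, padding=pad).view(b, c, self.k * self.k, h * w)
        q = q.view(b, c, 1, h * w)
        attn = (q * k).sum(1, keepdim=True) * self.scale   # b, 1, k*k, h*w
        attn = attn.softmax(dim=2)
        out = (attn * v).sum(2).view(b, c, h, w)
        return self.proj(out)

class MSDA(nn.Module):
    """Multi-Scale Dilated Attention (sketch): split channels into groups, give each
    group its own dilation rate, then fuse the outputs with a 1x1 convolution."""
    def __init__(self, dim, dilations=(1, 2, 3)):
        super().__init__()
        assert dim % len(dilations) == 0
        self.heads = nn.ModuleList(SWDA(dim // len(dilations), dilation=d) for d in dilations)
        self.fuse = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        chunks = x.chunk(len(self.heads), dim=1)
        return self.fuse(torch.cat([h(c) for h, c in zip(self.heads, chunks)], dim=1))

# shape check: the attention block preserves the feature-map size
y = MSDA(96)(torch.randn(1, 96, 40, 40))
print(y.shape)   # torch.Size([1, 96, 40, 40])
```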


YOLOv8 Optimization: Attention Series | SEAttention, Outperforming CBAM

🚀🚀🚀 This article's improvement: SEAttention, integrated into YOLOv8 in several ways.
🚀🚀🚀 SEAttention is widely used across detection domains.
🚀🚀🚀 YOLOv8 improvement column: http://t.csdnimg.cn/hGhVK

1. SENet
Squeeze-and-Excitation Networks (SENet) is an image-recognition architecture published in 2017 by the autonomous-driving company Momenta. It models the correlations between feature channels and strengthens the important features to improve accuracy. SENet won the 2017 ILSVRC competition with a top-5 error of 2.251%, about 25% lower than the 2016 winner, a large improvement.
SE stands for "Squeeze-and-Excitation", an attention mechanism for enhancing convolutional neural networks (CNNs). The SE block was proposed by Jie Hu et al. in 2018; its core idea is to introduce a global attention mechanism into the CNN so that the importance of each channel is learned adaptively.
SE implements attention in two steps: squeeze and excitation. In the squeeze step, the feature map of each channel is globally pooled into a scalar. In the excitation step, fully connected layers turn the squeezed feature vector into a weight vector used to re-weight each channel's feature map.
With SE blocks, a CNN can adaptively learn per-channel importance and improve its representational power. SE has performed well on many image-classification tasks and is widely applied across vision tasks.

2. Adding SEAttention to YOLOv8
2.1 Add to ultralytics/nn/attention/attention.py
```python
###################### SENet #### start ###############################
import numpy as np
import torch
from torch import nn
from torch.nn import init


class SEAttention(nn.Module):
    def __init__(self, channel=512, reduction=16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            nn.Sigmoid()
        )

    def init_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                init.kaiming_normal_(m.weight, mode='fan_out')
                if m.bias is not None:
                    init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                init.constant_(m.weight, 1)
                init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                init.normal_(m.weight, std=0.001)
                if m.bias is not None:
                    init.constant_(m.bias, 0)

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avg_pool(x).view(b, c)      # squeeze: global average pooling
        y = self.fc(y).view(b, c, 1, 1)      # excitation: per-channel weights
        return x * y.expand_as(x)
###################### SENet #### end ###############################
```

2.2 Modify tasks.py
First register SEAttention:
```python
from ultralytics.nn.attention.attention import *
```
Then modify the function `def parse_model(d, ch, verbose=True):  # model_dict, input_channels(3)`:
```python
        if m in (Classify, Conv, ConvTranspose, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, Focus,
                 BottleneckCSP, C1, C2, C2f, C3, C3TR, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x, RepC3,
                 SEAttention):
            c1, c2 = ch[f], args[0]
```

2.3 YAML configurations
2.3.1 yolov8_SEAttention.yaml – insert after the backbone SPPF
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9
  - [-1, 1, SEAttention, [1024]] # 10

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 13

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 19 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]        # 22 (P5/32-large)

  - [[16, 19, 22], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```

2.3.2 yolov8_SEAttention2.yaml – attach to the three neck C2f blocks that feed Detect
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 12

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 15 (P3/8-small)
  - [-1, 1, SEAttention, [256]] # 16

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 19 (P4/16-medium)
  - [-1, 1, SEAttention, [512]] # 20

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]   # cat head P5
  - [-1, 3, C2f, [1024]]        # 23 (P5/32-large)
  - [-1, 1, SEAttention, [1024]] # 24

  - [[16, 20, 24], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```

2.3.3 yolov8_SEAttention3.yaml – place after every C2f in the neck
```yaml
# Ultralytics YOLO 🚀, GPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 1  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 12
  - [-1, 1, SEAttention, [512]] # 13

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 16 (P3/8-small)
  - [-1, 1, SEAttention, [256]] # 17 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 20 (P4/16-medium)
  - [-1, 1, SEAttention, [512]] # 21 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]   # cat head P5
  - [-1, 3, C2f, [1024]]        # 24 (P5/32-large)
  - [-1, 1, SEAttention, [1024]] # 25 (P5/32-large)

  - [[17, 21, 25], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```
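Before training, it can help to confirm the integration works. The following sketch checks that the SEAttention block preserves feature-map shape and that the modified config parses; it assumes the class is importable from the file created in section 2.1 and that yolov8_SEAttention.yaml sits somewhere ultralytics can resolve.

```python
import torch
from ultralytics import YOLO
from ultralytics.nn.attention.attention import SEAttention  # path as created in section 2.1

# 1) the attention block only re-weights channels, so output shape must equal input shape
x = torch.randn(2, 512, 20, 20)
assert SEAttention(channel=512, reduction=16)(x).shape == x.shape

# 2) the modified config should parse and list SEAttention as layer 10, right after SPPF
model = YOLO("yolov8_SEAttention.yaml")  # assumed to be on a path ultralytics resolves
model.info()
```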


YOLOv8 Improvement: Attention Series | SRM Feature Recalibration, Better than SE and GE with Fewer Parameters

🚀🚀🚀 This article's improvement: the SRM feature-recalibration method, integrated into YOLOv8 in several ways.
🚀🚀🚀 SRM is widely used across detection domains.
🚀🚀🚀 YOLOv8 improvement column: http://t.csdnimg.cn/hGhVK

1. SRM
The overall structure of SRM is shown in Figure 1. It consists of two main components: Style Pooling and Style Integration. The Style Pooling operator extracts style features from each channel by summarising the feature responses across the spatial dimensions. It is followed by the Style Integration operator, which uses the style features in channel-wise operations to produce example-specific style weights.
SRM first extracts style information from each channel of the feature map through "style pooling", then estimates per-channel recalibration weights through channel-independent style integration. By incorporating the relative importance of individual styles into the feature map, SRM effectively enhances the representational power of a CNN.
Figure 3 shows the training and validation curves of ResNet-50 with SRM and other feature-recalibration methods. Throughout training, SRM is markedly more accurate than SE and GE on both the training and the validation curves. This suggests that using styles, as SRM does, is more effective than modelling channel correlations (SE) or gathering global context (GE), and helps both training and generalisation.
Notably, SRM outperforms SE and GE while adding fewer parameters.

2. Adding SRM to YOLOv8
2.1 Add to ultralytics/nn/attention/attention.py
```python
###################### SRM attention #### START ##############################
"""
PyTorch implementation of SRM: A Style-based Recalibration Module for Convolutional Neural Networks
As described in https://arxiv.org/pdf/1903.10829

SRM first extracts the style information from each channel of the feature maps by style pooling,
then estimates per-channel recalibration weight via channel-independent style integration.
By incorporating the relative importance of individual styles into feature maps,
SRM effectively enhances the representational ability of a CNN.
"""
import torch
from torch import nn


class SRM(nn.Module):
    def __init__(self, feature, channel):
        # `feature` (the input-channel count passed by parse_model) is unused; only `channel` matters
        super().__init__()
        self.cfc = nn.Conv1d(channel, channel, kernel_size=2, groups=channel, bias=False)
        self.bn = nn.BatchNorm1d(channel)

    def forward(self, x):
        b, c, h, w = x.shape
        # style pooling: per-channel mean and standard deviation
        mean = x.reshape(b, c, -1).mean(-1).unsqueeze(-1)
        std = x.reshape(b, c, -1).std(-1).unsqueeze(-1)
        u = torch.cat([mean, std], dim=-1)
        # style integration: channel-independent 1-D conv + BN + sigmoid gate
        z = self.cfc(u)
        z = self.bn(z)
        g = torch.sigmoid(z)
        g = g.reshape(b, c, 1, 1)
        return x * g.expand_as(x)
###################### SRM attention #### END ###############################
```

2.2 Modify tasks.py
First register SRM:
```python
from ultralytics.nn.attention.attention import *
```
Then modify the function `def parse_model(d, ch, verbose=True):  # model_dict, input_channels(3)`:
```python
        if m in (Classify, Conv, ConvTranspose, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, Focus,
                 BottleneckCSP, C1, C2, C2f, C3, C3TR, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x, RepC3,
                 SRM):
            c1, c2 = ch[f], args[0]
```

2.3 YAML configurations
2.3.1 yolov8_SRM.yaml – insert after the backbone SPPF
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9
  - [-1, 1, SRM, [1024]]         # 10

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 13

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 19 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]        # 22 (P5/32-large)

  - [[16, 19, 22], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```

2.3.2 yolov8_SRM2.yaml – attach to the three neck C2f blocks that feed Detect
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 12

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 15 (P3/8-small)
  - [-1, 1, SRM, [256]]         # 16

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 19 (P4/16-medium)
  - [-1, 1, SRM, [512]]         # 20

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]   # cat head P5
  - [-1, 3, C2f, [1024]]        # 23 (P5/32-large)
  - [-1, 1, SRM, [1024]]        # 24

  - [[16, 20, 24], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```

2.3.3 yolov8_SRM3.yaml – place after every C2f in the neck
```yaml
# Ultralytics YOLO 🚀, GPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 1  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 12
  - [-1, 1, SRM, [512]]         # 13

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 16 (P3/8-small)
  - [-1, 1, SRM, [256]]         # 17 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 20 (P4/16-medium)
  - [-1, 1, SRM, [512]]         # 21 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]   # cat head P5
  - [-1, 3, C2f, [1024]]        # 24 (P5/32-large)
  - [-1, 1, SRM, [1024]]        # 25 (P5/32-large)

  - [[17, 21, 25], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```
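A quick shape check of the SRM block from section 2.1 can be useful before wiring it into a config. This is a sketch that assumes the module is importable from the file created above; since SRM only gates channels, its output must match the input shape.

```python
import torch
from ultralytics.nn.attention.attention import SRM  # path as created in section 2.1

x = torch.randn(2, 256, 40, 40)
srm = SRM(feature=256, channel=256)   # `feature` is unused by the block; only `channel` matters
y = srm(x)
assert y.shape == x.shape             # per-channel recalibration preserves the feature-map shape
```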


Vehicle Detection under Traffic Cameras Based on YOLOv8 (Part 7): HIC-YOLOv8, Reproducing HIC-YOLOv5 to Help Small-Object Detection

This article's improvement: HIC-YOLOv8, a YOLOv8 reproduction of HIC-YOLOv5 – an extra prediction head for tiny objects, an involution block between the backbone and the neck, and a CBAM attention module at the end of the backbone. In the traffic-camera vehicle detection project, mAP50 improves from the baseline 0.745 to 0.775.
This series walks you through YOLOv8, from getting started to building your own improvements, so research becomes easy.

1. Traffic-camera vehicle detection dataset
Dataset source: 极市开发者平台 (the CVMart developer platform, a computer-vision algorithm development and deployment platform by 极市科技). The dataset has a single class, "car", with 5248 / 582 / 291 images in the training / validation / test splits. As the figure below shows, all images are vehicle scenes, and the objects come in different sizes, both large and small.

1.1 Why small objects are hard to detect
"Small objects" here follow the COCO definition: objects whose pixel area is below 32*32 pixels. The main difficulties of small-object detection are:
- By definition, a small object carries very little RGB information and therefore provides few discriminative features.
- Dataset imbalance. For COCO in particular, only 51.82% of the images contain small objects, a severe image-level imbalance (see the statistics in the figure below).

2. HIC-YOLOv5
Abstract: Small-object detection has long been a challenging problem in object detection. Some works propose improvements for this task, such as adding several attention blocks or changing the overall structure of the feature-fusion network. However, these models are computationally expensive, which makes deploying a real-time detection system infeasible, and there is still room for improvement. To this end, an improved YOLOv5 model, HIC-YOLOv5, is proposed to address the problems above. First, an extra prediction head for small objects is added to provide a higher-resolution feature map for better prediction. Second, an involution block is inserted between the backbone and the neck to increase the channel information of the feature maps. In addition, an attention mechanism named CBAM is applied at the end of the backbone, which not only reduces the computational cost compared with previous work but also emphasises the important information in both the channel and spatial domains. Our results show that HIC-YOLOv5 improves mAP@[.5:.95] by 6.42% and mAP@0.5 by 9.38% on the VisDrone-2019-DET dataset.

2.1 Convolutional Block Attention Module (CBAM)
Paper: https://arxiv.org/pdf/1807.06521.pdf
Abstract: We propose the Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, our module sequentially infers attention maps along two separate dimensions, channel and spatial, and then multiplies the attention maps by the input feature map for adaptive feature refinement. Because CBAM is a lightweight, general module, it can be integrated into any CNN architecture seamlessly with negligible overhead and trained end-to-end together with the base CNN. We validate CBAM through extensive experiments on ImageNet-1K, MS COCO detection and VOC 2007 detection. Our experiments show consistent improvements in classification and detection performance on various models, demonstrating CBAM's wide applicability. Code and models will be released.
As the figure above shows, CBAM contains two sub-modules, CAM (Channel Attention Module) and SAM (Spatial Attention Module), which apply attention along the channel and spatial dimensions respectively. This saves parameters and computation and makes CBAM a plug-and-play module for existing network architectures. (A minimal reference sketch of CBAM is given at the end of this article.)

2.2 Involution
Paper: https://arxiv.org/abs/2103.06255
The authors argue that while the two defining properties of convolution have advantages, they also have drawbacks, and therefore propose involution, whose properties are exactly symmetric to convolution: spatial-specific and channel-agnostic. Like convolution, involution has kernels (involution kernels); they differ across spatial positions but are shared across channels.
Advantages of involution: 1) it can summarise context over a larger spatial extent and thus overcome the difficulty of long-range interaction (an ordinary convolution only aggregates spatial information within a small fixed window such as 3x3); 2) it adaptively allocates weights to different positions, prioritising the most informative visual elements in the spatial domain (an ordinary convolution applies the same kernel, i.e. the same weights, everywhere in space).
The work rethinks the inherent principles of standard convolution for vision tasks, namely spatial-agnostic and channel-specific, and proposes a new atomic operation for deep neural networks by inverting those design principles.

2.3 Extra detection head
Small objects are often missed or poorly detected. YOLOv8 has three detection heads and detects objects at multiple scales, but its ability on tiny objects can still be insufficient; adding a detection head dedicated to tiny objects brings a large and clear mAP gain.
Source code: YOLOv8改进:复现HIC-YOLOv5,助力小目标检测-CSDN博客

3. Training and visual analysis
mAP50 improves from the baseline 0.745 to 0.775.
```
YOLOv8_HIC-YOLOv8 summary (fused): 221 layers, 3004550 parameters, 0 gradients, 12.2 GFLOPs
    Class  Images  Instances  Box(P      R      mAP50  mAP50-95): 100%|██████████| 19/19 [00:12<00:00, 1.49it/s]
    all    582     6970       0.816      0.711  0.775  0.407
```
Training results: PR_curve.png – P stands for precision and R for recall; the curve shows the precision-recall relationship.
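The article links to the full HIC-YOLOv8 source code; as a self-contained illustration of the CBAM block described in section 2.1, here is a minimal reference sketch written from the CBAM paper, not taken from the linked repository. The reduction ratio of 16 and the 7x7 spatial kernel follow the paper's defaults.

```python
import torch
from torch import nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))

    def forward(self, x):
        avg = self.mlp(x.mean((2, 3), keepdim=True))   # global average pooling branch
        mx = self.mlp(x.amax((2, 3), keepdim=True))    # global max pooling branch
        return torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # stack channel-wise mean and max maps, then convolve to a 1-channel attention map
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return torch.sigmoid(self.conv(s))

class CBAM(nn.Module):
    """Channel attention first, then spatial attention, as in the CBAM paper."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)

# shape check: attention refines features without changing the feature-map size
y = CBAM(256)(torch.randn(1, 256, 40, 40))
print(y.shape)   # torch.Size([1, 256, 40, 40])
```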


YOLOv8 Optimization: Attention Series | CoTAttention, Combining the Transformer's Global-Context Capability with the CNN's Local-Context Capability

🚀🚀🚀 This article's improvement: CoTAttention, integrated into YOLOv8 in several ways.
🚀🚀🚀 CoTAttention is widely used across detection domains.
🚀🚀🚀 YOLOv8 improvement column: http://t.csdnimg.cn/hGhVK

1. CoTAttention
The CoTAttention network has been described as a neural-network model for visual question answering (VQA) in multi-modal settings. It improves on the classical attention mechanism and adaptively allocates attention across different visual and language inputs to better solve the VQA task. The "CoT" in that context stands for Cross-modal Transformer: the visual and language inputs are each encoded into sets of feature vectors and then interact and fuse in a cross-modal Transformer module, where a co-attention mechanism computes the interaction attention between visual and language features for better information exchange and integration. This has worked well on VQA tasks that tightly combine computer vision and natural language processing.
Qibin Hou et al. from the National University of Singapore proposed a new attention mechanism designed for lightweight networks that embeds positional information into channel attention, called coordinate attention.
JD AI Research proposed a new backbone, CoTNet, which won the open-domain image recognition competition at CVPR 2021.
JD AI Research's Contextual Transformer Networks combine the Transformer's ability to capture global information with the CNN's ability to capture nearby local information, improving the feature-expression power of the network. Notably, the method is plug-and-play: replacing the 3x3 modules in a ResNet with the CoTNet core module is enough to use it; Res2Net is built on a similar idea.
At the same depth (50 or 101 layers), both the top-1 and top-5 results show that this method performs better than convolutional networks and attention-based networks.

2. Adding CoTAttention to YOLOv8
2.1 Add to ultralytics/nn/attention/attention.py
```python
###################### CoTAttention #### start ###############################
import torch
from torch import flatten, nn
from torch.nn import functional as F


class CoTAttention(nn.Module):
    def __init__(self, dim=512, kernel_size=3):
        super().__init__()
        self.dim = dim
        self.kernel_size = kernel_size

        self.key_embed = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=kernel_size, padding=kernel_size // 2, groups=4, bias=False),
            nn.BatchNorm2d(dim),
            nn.ReLU()
        )
        self.value_embed = nn.Sequential(
            nn.Conv2d(dim, dim, 1, bias=False),
            nn.BatchNorm2d(dim)
        )

        factor = 4
        self.attention_embed = nn.Sequential(
            nn.Conv2d(2 * dim, 2 * dim // factor, 1, bias=False),
            nn.BatchNorm2d(2 * dim // factor),
            nn.ReLU(),
            nn.Conv2d(2 * dim // factor, kernel_size * kernel_size * dim, 1)
        )

    def forward(self, x):
        bs, c, h, w = x.shape
        k1 = self.key_embed(x)                    # bs,c,h,w  (static context)
        v = self.value_embed(x).view(bs, c, -1)   # bs,c,h*w

        y = torch.cat([k1, x], dim=1)             # bs,2c,h,w
        att = self.attention_embed(y)             # bs,c*k*k,h,w
        att = att.reshape(bs, c, self.kernel_size * self.kernel_size, h, w)
        att = att.mean(2, keepdim=False).view(bs, c, -1)  # bs,c,h*w
        k2 = F.softmax(att, dim=-1) * v           # dynamic context
        k2 = k2.view(bs, c, h, w)

        return k1 + k2
###################### CoTAttention #### end ###############################
```

2.2 Modify tasks.py
First register CoTAttention:
```python
from ultralytics.nn.attention.attention import *
```
Then modify the function `def parse_model(d, ch, verbose=True):  # model_dict, input_channels(3)`:
```python
        ##### attention #####
        elif m in (MHSA, ECAAttention, TripletAttention, BAM, CoTAttention):
            c1, c2 = ch[f], args[0]
            if c2 != nc:
                c2 = make_divisible(min(c2, max_channels) * width, 8)
            args = [c1, *args[1:]]
        ##### attention #####
```

2.3 YAML configurations
2.3.1 yolov8_CoTAttention.yaml – insert after the backbone SPPF
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9
  - [-1, 1, CoTAttention, [1024]] # 10

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 13

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 19 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]        # 22 (P5/32-large)

  - [[16, 19, 22], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```

2.3.2 yolov8_CoTAttention2.yaml – attach to the three neck C2f blocks that feed Detect
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 12

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 15 (P3/8-small)
  - [-1, 1, CoTAttention, [256]] # 16

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 19 (P4/16-medium)
  - [-1, 1, CoTAttention, [512]] # 20

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]   # cat head P5
  - [-1, 3, C2f, [1024]]        # 23 (P5/32-large)
  - [-1, 1, CoTAttention, [1024]] # 24

  - [[16, 20, 24], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```

2.3.3 yolov8_CoTAttention3.yaml – place after every C2f in the neck
```yaml
# Ultralytics YOLO 🚀, GPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 1  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 12
  - [-1, 1, CoTAttention, [512]] # 13

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 16 (P3/8-small)
  - [-1, 1, CoTAttention, [256]] # 17 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 20 (P4/16-medium)
  - [-1, 1, CoTAttention, [512]] # 21 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]   # cat head P5
  - [-1, 3, C2f, [1024]]        # 24 (P5/32-large)
  - [-1, 1, CoTAttention, [1024]] # 25 (P5/32-large)

  - [[17, 21, 25], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```
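A quick sanity check of the CoTAttention block from section 2.1 (a sketch, assuming the module is importable from the file created above). Note that `dim` must be divisible by the `groups=4` used in the key embedding.

```python
import torch
from ultralytics.nn.attention.attention import CoTAttention  # path as created in section 2.1

x = torch.randn(2, 512, 20, 20)
cot = CoTAttention(dim=512, kernel_size=3)
assert cot(x).shape == x.shape   # static (k1) + dynamic (k2) context keeps the feature-map shape
```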


Vehicle Detection under Traffic Cameras Based on YOLOv8 (Part 6): SPD-Conv for Harder Tasks such as Low-Resolution Images and Small Objects

This article's improvement: SPD-Conv, which replaces strided convolution and pooling with a space-to-depth layer followed by a non-strided convolution, targeting harder tasks such as low-resolution images and small objects. In the traffic-camera vehicle detection project, mAP50 improves from the baseline 0.745 to 0.802.
This series walks you through YOLOv8, from getting started to building your own improvements, so research becomes easy.

1. Traffic-camera vehicle detection dataset
Dataset source: 极市开发者平台 (the CVMart developer platform, a computer-vision algorithm development and deployment platform by 极市科技). The dataset has a single class, "car", with 5248 / 582 / 291 images in the training / validation / test splits. As the figure below shows, all images are vehicle scenes, and the objects come in different sizes, both large and small.

1.1 Why small objects are hard to detect
"Small objects" here follow the COCO definition: objects whose pixel area is below 32*32 pixels. The main difficulties of small-object detection are:
- By definition, a small object carries very little RGB information and therefore provides few discriminative features.
- Dataset imbalance. For COCO in particular, only 51.82% of the images contain small objects, a severe image-level imbalance (see the statistics in the figure below).

2. Paper overview
Paper: https://arxiv.org/pdf/2208.03641v1.pdf
GitHub: SPD-Conv/YOLOv5-SPD at main · LabSAINT/SPD-Conv · GitHub
Abstract: Convolutional neural networks (CNNs) have achieved remarkable success in computer-vision tasks such as image classification and object detection. However, their performance degrades catastrophically when the image resolution is low or the objects are small. This is due to a flawed yet common design in existing CNN architectures, namely the use of strided convolutions and/or pooling layers, which leads to a loss of fine-grained information and the learning of less effective feature representations. To this end, we propose a new CNN building block called SPD-Conv to replace every strided convolution and every pooling layer (thus eliminating them entirely). SPD-Conv consists of a space-to-depth (SPD) layer followed by a non-strided convolution (Conv) layer and can be applied to most CNN architectures. We explain this new design under two of the most representative computer-vision tasks, object detection and image classification. We then apply SPD-Conv to YOLOv5 and ResNet to create new CNN architectures and empirically show that our method significantly outperforms state-of-the-art deep-learning models, especially on harder tasks with low-resolution images and small objects.

2.1 SPD-Conv
SPD-Conv consists of a space-to-depth (SPD) layer and a non-strided convolution layer. The SPD component generalises a (raw) image transformation technique [29] to down-sample feature maps inside and throughout the CNN. (A minimal reference sketch is given at the end of this article.)

2.2 The YOLOv5-SPD network
YOLOv5-SPD is obtained simply by replacing the stride-2 convolution layers of YOLOv5 with the SPD-Conv building block. There are 7 such replacements: YOLOv5 uses 5 stride-2 convolutions in the backbone (a 2^5 down-sampling of the feature map) and 2 stride-2 convolutions in the neck. In the YOLOv5 neck every strided convolution is followed by a concatenation layer; this does not change the method, the concatenation is simply kept between SPD and Conv. YOLOv5-SPD is provided in several model sizes.
YOLOv5-SPD performance: YOLOv5-SPD-m is compared with YOLOv5m, since the latter is the best-performing baseline in its (medium) class. Figures 5(a)(b) show that YOLOv5-SPD-m detects an occluded giraffe that YOLOv5m misses, and Figures 5(c)(d) show that YOLOv5-SPD-m detects very small objects (a face and two benches) that YOLOv5m misses.
Source code: YOLOv8改进:小目标涨点系列篇 | SPD-Conv,低分辨率图像和小物体等更困难任务涨点明显-CSDN博客

3. Training and visual analysis
mAP50 improves from the baseline 0.745 to 0.802.
```
YOLOv8_SPD summary (fused): 174 layers, 3451283 parameters, 0 gradients, 50.9 GFLOPs
    Class  Images  Instances  Box(P      R      mAP50  mAP50-95): 100%|██████████| 37/37 [00:22<00:00, 1.68it/s]
    all    582     6970       0.828      0.742  0.802  0.416
```
Training results: PR_curve.png – P stands for precision and R for recall; the curve shows the precision-recall relationship.
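The article links to its full SPD-Conv integration; as a self-contained illustration of the idea in section 2.1, here is a minimal sketch of an SPD-Conv block written from the paper's description (not the linked source code): a scale-2 space-to-depth rearrangement followed by a non-strided Conv-BN-SiLU. The channel numbers in the usage line are arbitrary.

```python
import torch
from torch import nn

class SPDConv(nn.Module):
    """Sketch of SPD-Conv: space-to-depth (scale 2) followed by a non-strided convolution.
    It replaces a stride-2 convolution without discarding fine-grained information."""
    def __init__(self, c1, c2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4 * c1, c2, 3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(c2),
            nn.SiLU(inplace=True))

    def forward(self, x):
        # space-to-depth: move every 2x2 spatial block into the channel dimension
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.conv(x)

# halves the spatial resolution without a strided conv or pooling layer
y = SPDConv(64, 128)(torch.randn(1, 64, 80, 80))
print(y.shape)  # torch.Size([1, 128, 40, 40])
```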


YOLOv8 Optimization: Attention Series | Bottleneck Attention Module (BAM), Outperforming CBAM and SE

🚀🚀🚀 This article's improvement: BAM attention, integrated into YOLOv8 in several ways.
🚀🚀🚀 BAM is widely used across detection domains.
🚀🚀🚀 YOLOv8 improvement column: http://t.csdnimg.cn/hGhVK

1. BAM
Abstract: We propose a simple and effective attention module, named the Bottleneck Attention Module (BAM), that can be integrated with any feed-forward convolutional neural network. Our module infers an attention map along two separate pathways, channel and spatial. We place the module at each bottleneck of the model, where the down-sampling of feature maps occurs. The module builds hierarchical attention at the bottlenecks with a number of parameters, and it is trainable end-to-end jointly with any feed-forward model. We validate our BAM through extensive experiments on the CIFAR-100, ImageNet-1K, VOC 2007 and MS COCO benchmarks. Our experiments show consistent improvements in classification and detection performance across various models, demonstrating the wide applicability of BAM.
The authors place BAM between the stages of a ResNet. Interestingly, the visualisations show that the stacked BAMs form a hierarchical attention, somewhat like human perception: between the early stages, BAM suppresses low-level features such as background semantics and then gradually focuses on high-level semantics, i.e. the actual targets.
The proposed attention model, the bottleneck attention module, obtains the attention map through two separate pathways, channel and spatial, which keeps the parameter and computation overhead small.
Experiments: BAM generalises well across various models on large-scale datasets while the parameter and computation overhead is negligible, showing that the proposed module can effectively raise network capacity. Another noteworthy point is that the performance gain comes from placing only three modules in the network.
BAM improves the accuracy of all strong baselines with both backbone networks. The accuracy gains are achieved with negligible parameter overhead, which indicates the improvement is not due to a naive capacity increase but to effective feature refinement.

2. Adding BAM to YOLOv8
2.1 Add to ultralytics/nn/attention/attention.py
```python
###################### BAM attention #### START ###############################
import torch
from torch import nn
import torch.nn.functional as F


class ChannelGate(nn.Module):
    def __init__(self, channel, reduction=16):
        super().__init__()
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Linear(channel, channel // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel)
        )
        self.bn = nn.BatchNorm1d(channel)

    def forward(self, x):
        b, c, h, w = x.shape
        y = self.avgpool(x).view(b, c)
        y = self.mlp(y)
        y = self.bn(y).view(b, c, 1, 1)
        return y.expand_as(x)


class SpatialGate(nn.Module):
    def __init__(self, channel, reduction=16, kernel_size=3, dilation_val=4):
        super().__init__()
        self.conv1 = nn.Conv2d(channel, channel // reduction, kernel_size=1)
        self.conv2 = nn.Sequential(
            nn.Conv2d(channel // reduction, channel // reduction, kernel_size,
                      padding=dilation_val, dilation=dilation_val),
            nn.BatchNorm2d(channel // reduction),
            nn.ReLU(inplace=True),
            nn.Conv2d(channel // reduction, channel // reduction, kernel_size,
                      padding=dilation_val, dilation=dilation_val),
            nn.BatchNorm2d(channel // reduction),
            nn.ReLU(inplace=True)
        )
        self.conv3 = nn.Conv2d(channel // reduction, 1, kernel_size=1)
        self.bn = nn.BatchNorm2d(1)

    def forward(self, x):
        b, c, h, w = x.shape
        y = self.conv1(x)
        y = self.conv2(y)
        y = self.conv3(y)
        y = self.bn(y)
        return y.expand_as(x)


class BAM(nn.Module):
    def __init__(self, channel):
        super(BAM, self).__init__()
        self.channel_attn = ChannelGate(channel)
        self.spatial_attn = SpatialGate(channel)

    def forward(self, x):
        attn = torch.sigmoid(self.channel_attn(x) + self.spatial_attn(x))
        return x + x * attn
###################### BAM attention #### END ###############################
```

2.2 Modify tasks.py
First register BAM:
```python
from ultralytics.nn.attention.attention import *
```
Then modify the function `def parse_model(d, ch, verbose=True):  # model_dict, input_channels(3)`:
```python
        elif m is BAM:
            c1, c2 = ch[f], args[0]
            if c2 != nc:
                c2 = make_divisible(min(c2, max_channels) * width, 8)
            args = [c1, *args[1:]]
```

2.3 YAML configurations
2.3.1 yolov8_BAM.yaml – insert after the backbone SPPF
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9
  - [-1, 1, BAM, [1024]]         # 10

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 13

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 19 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]        # 22 (P5/32-large)

  - [[16, 19, 22], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```

2.3.2 yolov8_BAM2.yaml – attach to the three neck C2f blocks that feed Detect
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 12

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 15 (P3/8-small)
  - [-1, 1, BAM, [256]]         # 16

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 19 (P4/16-medium)
  - [-1, 1, BAM, [512]]         # 20

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]   # cat head P5
  - [-1, 3, C2f, [1024]]        # 23 (P5/32-large)
  - [-1, 1, BAM, [1024]]        # 24

  - [[16, 20, 24], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```

2.3.3 yolov8_BAM3.yaml – place after every C2f in the neck
```yaml
# Ultralytics YOLO 🚀, GPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 1  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 12
  - [-1, 1, BAM, [512]]         # 13

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 16 (P3/8-small)
  - [-1, 1, BAM, [256]]         # 17 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 20 (P4/16-medium)
  - [-1, 1, BAM, [512]]         # 21 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]   # cat head P5
  - [-1, 3, C2f, [1024]]        # 24 (P5/32-large)
  - [-1, 1, BAM, [1024]]        # 25 (P5/32-large)

  - [[17, 21, 25], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```
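A quick sanity check of the BAM block from section 2.1 (a sketch, assuming the module is importable from the file created above): BAM refines features as x + x * attn, so the output shape equals the input shape.

```python
import torch
from ultralytics.nn.attention.attention import BAM  # path as created in section 2.1

x = torch.randn(2, 512, 20, 20)
bam = BAM(channel=512)
assert bam(x).shape == x.shape   # channel gate + spatial gate -> sigmoid -> residual refinement
```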


YOLOv8 Optimization: Attention Series | Linear Context Transform (LCT), Outperforming SE and Other Attention Modules

🚀🚀🚀 This article's improvement: the Linear Context Transform (LCT), integrated into YOLOv8 in several ways.
🚀🚀🚀 LCT is widely used across detection domains.
🚀🚀🚀 YOLOv8 improvement column: http://t.csdnimg.cn/hGhVK

1. LCT (AAAI 2020)
Abstract: In this study we first revisit the SE block and then conduct a detailed empirical study of the relationship between global context and attention distribution, based on which we propose a simple yet effective module called the Linear Context Transform (LCT) block. We divide all channels into different groups and normalise the globally aggregated context features within each channel group, reducing the disturbance from irrelevant channels. By linearly transforming the normalised context features, we model the global context for each channel independently. The LCT block is extremely lightweight and easy to plug into different backbone models, with negligible additional parameters and computation. Extensive experiments show that the LCT block outperforms the SE block on ImageNet image classification and on COCO object detection/segmentation across different backbone models. Moreover, LCT brings consistent performance gains on top of existing state-of-the-art detection architectures, e.g. 1.5∼1.7% APbbox and 1.0∼1.2% APmask on the COCO benchmark regardless of the baseline capacity. We hope our simple yet effective approach sheds some light on future research of attention-based models.
LCT structure: see the figure.
Experiments: in classification tasks, LCT outperforms SE; in detection tasks, AP improves by 1.5∼1.7%.

2. Adding LCT to YOLOv8
2.1 Add to ultralytics/nn/attention/attention.py
```python
###################### LCT attention #### start ###############################
"""
PyTorch implementation of the Linear Context Transform block
As described in https://arxiv.org/pdf/1909.03834v2
"""
import torch
from torch import nn


class LCTattention(nn.Module):
    def __init__(self, channels, groups, eps=1e-5):
        super().__init__()
        assert channels % groups == 0, "Number of channels should be evenly divisible by the number of groups"
        self.groups = groups
        self.channels = channels
        self.eps = eps
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.w = nn.Parameter(torch.ones(channels))
        self.b = nn.Parameter(torch.zeros(channels))
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        batch_size = x.shape[0]
        # global average pooling, then group-wise normalisation of the context
        y = self.avgpool(x).view(batch_size, self.groups, -1)
        mean = y.mean(dim=-1, keepdim=True)
        mean_x2 = (y ** 2).mean(dim=-1, keepdim=True)
        var = mean_x2 - mean ** 2
        y_norm = (y - mean) / torch.sqrt(var + self.eps)
        # per-channel linear transform + sigmoid gate
        y_norm = y_norm.reshape(batch_size, self.channels, 1, 1)
        y_norm = self.w.reshape(1, -1, 1, 1) * y_norm + self.b.reshape(1, -1, 1, 1)
        y_norm = self.sigmoid(y_norm)
        return x * y_norm.expand_as(x)
###################### LCT attention #### END ###############################
```

2.2 Modify tasks.py
First register LCTattention:
```python
from ultralytics.nn.attention.attention import *
```
Then modify the function `def parse_model(d, ch, verbose=True):  # model_dict, input_channels(3)`:
```python
        if m in (Classify, Conv, ConvTranspose, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, Focus,
                 BottleneckCSP, C1, C2, C2f, C3, C3TR, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x, RepC3,
                 LCTattention):
            c1, c2 = ch[f], args[0]
```

2.3 YAML configurations
2.3.1 yolov8_LCTattention.yaml – insert after the backbone SPPF
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9
  - [-1, 1, LCTattention, [1024]] # 10

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 13

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 19 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]        # 22 (P5/32-large)

  - [[16, 19, 22], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```

2.3.2 yolov8_LCTattention2.yaml – attach to the three neck C2f blocks that feed Detect
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 12

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 15 (P3/8-small)
  - [-1, 1, LCTattention, [256]] # 16

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 19 (P4/16-medium)
  - [-1, 1, LCTattention, [512]] # 20

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]   # cat head P5
  - [-1, 3, C2f, [1024]]        # 23 (P5/32-large)
  - [-1, 1, LCTattention, [1024]] # 24

  - [[16, 20, 24], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```

2.3.3 yolov8_LCTattention3.yaml – place after every C2f in the neck
```yaml
# Ultralytics YOLO 🚀, GPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 1  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]   # cat backbone P4
  - [-1, 3, C2f, [512]]         # 12
  - [-1, 1, LCTattention, [512]] # 13

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]   # cat backbone P3
  - [-1, 3, C2f, [256]]         # 16 (P3/8-small)
  - [-1, 1, LCTattention, [256]] # 17 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]         # 20 (P4/16-medium)
  - [-1, 1, LCTattention, [512]] # 21 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]   # cat head P5
  - [-1, 3, C2f, [1024]]        # 24 (P5/32-large)
  - [-1, 1, LCTattention, [1024]] # 25 (P5/32-large)

  - [[17, 21, 25], 1, Detect, [nc]]  # Detect(P3, P4, P5)
```
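A quick sanity check of the LCT block from section 2.1 (a sketch, assuming the module is importable from the file created above). `groups` is a required argument; the value 64 here is an illustrative choice, it only has to divide the channel count.

```python
import torch
from ultralytics.nn.attention.attention import LCTattention  # path as created in section 2.1

x = torch.randn(2, 512, 20, 20)
lct = LCTattention(channels=512, groups=64)   # 512 % 64 == 0, as the assert in __init__ requires
assert lct(x).shape == x.shape                # group-normalised context only gates channels, shape preserved
```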


Small-Object Face Detection Based on YOLOv8

1. The small-face dataset
This article focuses on small objects, so only part A of the dataset is selected: 2000 images in total, randomly split 7:2:1 into training, validation and test sets. As the figure below shows, the faces are all small objects.

1.1 Why small objects are hard to detect
"Small objects" here follow the COCO definition: objects whose pixel area is below 32*32 pixels. The main difficulties of small-object detection are:
- By definition, a small object carries very little RGB information and therefore provides few discriminative features.
- Dataset imbalance. For COCO in particular, only 51.82% of the images contain small objects, a severe image-level imbalance (see the statistics in the figure below).

2. YOLOv8 overview
Main changes:
- Backbone: still follows the CSP idea, but YOLOv5's C3 module is replaced by the C2f module for further light-weighting; YOLOv8 keeps the SPPF module used in YOLOv5 and similar architectures.
- PAN-FPN: YOLOv8 still uses the PAN idea, but comparing the YOLOv5 and YOLOv8 structure diagrams shows that YOLOv8 removes the convolution in the PAN-FPN up-sampling stage and replaces the C3 modules with C2f.
- Decoupled head: YOLOv8 switches to a decoupled head.
- Anchor-free: YOLOv8 drops the anchor-based design in favour of anchor-free prediction.
- Losses: YOLOv8 uses VFL Loss as the classification loss and DFL Loss + CIoU Loss as the regression loss.
- Sample assignment: YOLOv8 abandons IoU matching and single-side ratio assignment in favour of the Task-Aligned Assigner.

2.1 The C2f module
C2f is designed with reference to the C3 module and the ELAN idea, so YOLOv8 stays lightweight while obtaining richer gradient-flow information.

Code:
```python
class C2f(nn.Module):
    # CSP Bottleneck with 2 convolutions
    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        self.c = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv((2 + n) * self.c, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.ModuleList(
            Bottleneck(self.c, self.c, shortcut, g, k=((3, 3), (3, 3)), e=1.0) for _ in range(n))

    def forward(self, x):
        y = list(self.cv1(x).split((self.c, self.c), 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.cv2(torch.cat(y, 1))
```

3. Training and visual analysis
```
YOLOv8 summary (fused): 168 layers, 3005843 parameters, 0 gradients, 8.1 GFLOPs
    Class  Images  Instances  Box(P      R      mAP50  mAP50-95): 100%|██████████| 17/17 [00:37<00:00, 2.21s/it]
    all    540     18086      0.912      0.885  0.929  0.43
```
Training results:
- P_curve.png plots precision against confidence (confidence on the x-axis); the figure shows that the higher the confidence, the higher the precision.
- PR_curve.png: P stands for precision and R for recall; the curve shows the precision-recall relationship.
- R_curve.png plots recall against confidence; read it the same way as P_curve.
- results.png: panels (1,1) and (2,1) show the mean CIoU box loss during training and validation respectively; the smaller it is, the more accurate the boxes. Panels (1,2) and (2,2) are presumably the mean detection (objectness) loss; the smaller it is, the more accurate the detections. Panels (2,4) and (2,5) show mAP, i.e. the per-class AP over all images at a given IoU threshold, averaged over classes; mAP_0.5:0.95 averages mAP over IoU thresholds from 0.5 to 0.95 in steps of 0.05.
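To make the mAP_0.5:0.95 definition above concrete, the snippet below averages per-threshold mAP values over the IoU grid 0.50, 0.55, …, 0.95. The ten AP numbers are made-up placeholders for illustration, not values from this training run.

```python
import numpy as np

iou_thresholds = np.arange(0.5, 1.0, 0.05)                     # 0.50, 0.55, ..., 0.95 -> 10 thresholds
map_per_threshold = np.array([0.93, 0.90, 0.86, 0.80, 0.72,    # placeholder values only
                              0.62, 0.50, 0.36, 0.20, 0.06])
print(len(iou_thresholds))         # 10
print(map_per_threshold.mean())    # this mean is what "mAP_0.5:0.95" reports
```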
