Attention blocks for YOLOv4-tiny: se_block, cbam_block, eca_block

1: I referenced the blog post below; working through the original English material was a bit of a slog, and this post alone gives a decent overview:

"注意力机制BAM和CBAM详细解析(附代码)" ("A detailed walkthrough of the BAM and CBAM attention mechanisms, with code"), by weixin_39932300 on CSDN

Wrapped implementations of all three attention mechanisms are already available here:

https://gitee.com/skming7216/attention/blob/master/nets/attention.py
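
For orientation, a minimal sketch of the se_block wrapper in a file like that is shown below. This follows the standard Squeeze-and-Excitation design and is not necessarily line-for-line identical to the gitee file; cbam_block and eca_block follow the same convention of taking the channel count as the constructor argument.

import torch
import torch.nn as nn

class se_block(nn.Module):
    # Squeeze-and-Excitation: pool each channel to a single statistic,
    # pass it through a small bottleneck MLP, and use the sigmoid output
    # to reweight the channels of the input feature map.
    def __init__(self, channel, ratio=16):
        super(se_block, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // ratio, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // ratio, channel, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avg_pool(x).view(b, c)    # (b, c) per-channel descriptor
        y = self.fc(y).view(b, c, 1, 1)    # (b, c, 1, 1) weights in [0, 1]
        return x * y                       # same shape as x

The key design point is that the block is shape-preserving: it multiplies the input by per-channel weights, so it can be dropped onto any feature map without changing the rest of the network.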

To apply them, define the attention blocks before forward (i.e., in __init__) and then call them inside forward:

import torch
import torch.nn as nn

from nets.attention import cbam_block, eca_block, se_block
# darknet53_tiny, BasicConv, Upsample and yolo_head come from the nets
# package of the repo linked above.
from nets.CSPdarknet53_tiny import darknet53_tiny

attention_block = [se_block, cbam_block, eca_block]

class YoloBody(nn.Module):
    def __init__(self, num_anchors, num_classes, phi=0):
        super(YoloBody, self).__init__()
        if phi >= 4:
            raise AssertionError("Phi must be less than or equal to 3 (0, 1, 2, 3).")

        self.phi            = phi
        self.backbone       = darknet53_tiny(None)

        self.conv_for_P5    = BasicConv(512, 256, 1)
        self.yolo_headP5    = yolo_head([512, num_anchors * (5 + num_classes)], 256)

        self.upsample       = Upsample(256, 128)
        self.yolo_headP4    = yolo_head([256, num_anchors * (5 + num_classes)], 384)

        # phi = 0 disables attention; phi = 1/2/3 picks se/cbam/eca for each feature map.
        if 1 <= self.phi <= 3:
            self.feat1_att      = attention_block[self.phi - 1](256)
            self.feat2_att      = attention_block[self.phi - 1](512)
            self.upsample_att   = attention_block[self.phi - 1](128)

    def forward(self, x):
        #---------------------------------------------------#
        #   CSPDarknet53-tiny backbone
        #   feat1 shape: 26,26,256
        #   feat2 shape: 13,13,512
        #---------------------------------------------------#
        feat1, feat2 = self.backbone(x)
        if 1 <= self.phi <= 3:
            feat1 = self.feat1_att(feat1)
            feat2 = self.feat2_att(feat2)

        # 13,13,512 -> 13,13,256, then the 13x13 detection head
        P5   = self.conv_for_P5(feat2)
        out0 = self.yolo_headP5(P5)

        # 13,13,256 -> 26,26,128; attention on the upsampled branch as well
        P5_Upsample = self.upsample(P5)
        if 1 <= self.phi <= 3:
            P5_Upsample = self.upsample_att(P5_Upsample)

        # concat 26,26,128 with 26,26,256 -> 26,26,384, then the 26x26 head
        P4   = torch.cat([P5_Upsample, feat1], dim=1)
        out1 = self.yolo_headP4(P4)
        return out0, out1

Initialize an attention block for each output branch in __init__, then apply the attention to the corresponding branches in forward; as for how to choose the constructor arguments, it is easy to work out from the example above.
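
As a quick sanity check of the wiring, the snippet below builds the model and pushes a dummy batch through it. The anchor count, class count and the 416x416 input size are illustrative assumptions, and it only runs with the nets package from the repo above on the import path.

import torch

# phi=1 -> se_block, phi=2 -> cbam_block, phi=3 -> eca_block, phi=0 -> no attention
model = YoloBody(num_anchors=3, num_classes=20, phi=1)   # hypothetical anchor/class counts
x = torch.randn(1, 3, 416, 416)                          # dummy 416x416 RGB batch
out0, out1 = model(x)
print(out0.shape)   # expected (1, 3 * (5 + 20), 13, 13): the 13x13 head
print(out1.shape)   # expected (1, 3 * (5 + 20), 26, 26): the 26x26 head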

Copyright notice: this is an original article by CSDN blogger 齐名南, released under the CC 4.0 BY-SA license; please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/qq_51609636/article/details/121821360
