A Hands-On Guide to Plotting the PR Curve and Computing mAP with Mask R-CNN on Win10

Navigation:

1. Environment setup for the Keras version of Mask R-CNN: https://blog.csdn.net/hesongzefairy/article/details/104702119

2. Training the Keras version of Mask R-CNN on your own dataset: https://blog.csdn.net/hesongzefairy/article/details/105738318

3. Understanding precision, recall, and the ROC curve in one article: https://blog.csdn.net/hesongzefairy/article/details/104295431

The previous two articles covered how to use Mask R-CNN in detail, so one task remains: for the metrics used to evaluate a detection model, how do we plot the PR curve and compute AP and mAP (mean Average Precision)?

Conceptually, you should already have the background from link 3 in the navigation above; here is a brief recap of TP, FP, TN, and FN.

T stands for True (the prediction is correct); F stands for False (the prediction is wrong).

P stands for Positive (predicted as a positive sample); N stands for Negative (predicted as a negative sample).

TP (True Positive): a sample predicted as positive that really is positive.

FP (False Positive): a sample predicted as positive that is actually negative.

TN (True Negative): a sample predicted as negative that really is negative.

FN (False Negative): a sample predicted as negative that is actually positive.

Computing mAP only involves TP, FP, and FN.

To compute mAP, we first need to compute precision and recall.

Precision:

Also known as positive predictive value, precision measures what fraction of the samples the model predicts as positive are truly positive; the denominator is determined by the model's predictions.

Precision = correctly predicted positives / all samples predicted as positive (including negatives wrongly predicted as positive):

P = \frac{TP}{TP + FP}

Recall:

Also known as sensitivity, this is the TPR (true positive rate) covered in navigation link 3.

Recall = correctly detected positives / total number of actual positives:

R = \frac{TP}{TP + FN}

Once precision and recall can be computed, we can vary the score threshold to obtain a series of (P, R) points, plot the PR curve, and from it compute AP and mAP.
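As a rough illustration (not part of the original post), the snippet below sweeps a score threshold over a handful of made-up detections to produce (P, R) points; the scores, the match flags, and the ground-truth count are all hypothetical.

import numpy as np

# Hypothetical detection scores and whether each detection matched a ground truth (1 = TP, 0 = FP)
scores  = np.array([0.95, 0.91, 0.88, 0.80, 0.75, 0.60, 0.55, 0.40])
matched = np.array([1, 1, 0, 1, 1, 0, 1, 0])
num_gt  = 10  # total number of ground-truth objects (TP + FN)

precisions, recalls = [], []
for t in np.unique(scores)[::-1]:   # sweep the threshold from high to low
    keep = scores >= t              # detections kept at this threshold
    tp = matched[keep].sum()        # true positives among the kept detections
    fp = keep.sum() - tp            # false positives among the kept detections
    precisions.append(tp / (tp + fp))
    recalls.append(tp / num_gt)

print(list(zip(recalls, precisions)))  # the (R, P) points of the PR curve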

So how is this done in code?

The utils module of Mask R-CNN provides a function, compute_ap(), dedicated to computing AP; its signature and docstring are shown below.

def compute_ap(gt_boxes, gt_class_ids, gt_masks,
               pred_boxes, pred_class_ids, pred_scores, pred_masks,
               iou_threshold=0.5):
    """Compute Average Precision at a set IoU threshold (default 0.5).

    Returns:
    mAP: Mean Average Precision
    precisions: List of precisions at different class score thresholds.
    recalls: List of recall values at different class score thresholds.
    overlaps: [pred_boxes, gt_boxes] IoU overlaps.
    """

The inputs the function needs are:

gt_boxes: bounding boxes generated from the masks read out of the label JSON files, shape (num_objects, 4)

gt_class_ids: object class IDs read from the label JSON files, shape (num_objects,)

gt_masks: masks built from the mask polygon points in the label JSON files, shape (512, 512, num_objects)

pred_boxes: boxes predicted by the model, shape (num_objects, 4)

pred_class_ids: class IDs predicted by the model, shape (num_objects,)

pred_scores: prediction confidence scores

pred_masks: masks predicted by the model, shape (512, 512, num_objects)

The IoU threshold defaults to 0.5.
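To make the expected shapes concrete, here is a minimal sketch (not from the original post) that calls utils.compute_ap() on a single made-up ground-truth object and a single made-up prediction; all boxes and masks below are dummy data.

import numpy as np
from mrcnn import utils

H, W = 512, 512

# One ground-truth object: box in (y1, x1, y2, x2) pixel coordinates plus its binary mask
gt_boxes = np.array([[100, 100, 200, 200]])
gt_class_ids = np.array([1])
gt_masks = np.zeros((H, W, 1), dtype=bool)
gt_masks[100:200, 100:200, 0] = True

# One predicted object that overlaps the ground truth well
pred_boxes = np.array([[110, 105, 205, 195]])
pred_class_ids = np.array([1])
pred_scores = np.array([0.9])
pred_masks = np.zeros((H, W, 1), dtype=bool)
pred_masks[110:205, 105:195, 0] = True

AP, precisions, recalls, overlaps = utils.compute_ap(
    gt_boxes, gt_class_ids, gt_masks,
    pred_boxes, pred_class_ids, pred_scores, pred_masks,
    iou_threshold=0.5)
print("AP for this toy example:", AP)  # 1.0, since the single prediction matches the single ground truth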

Relative to the original test script, three additions are needed:

1. Read in the label JSON files and save gt_boxes, gt_class_ids, and gt_masks for the whole test set.

2. Save the pred_boxes, pred_class_ids, pred_scores, and pred_masks produced by the model.

3. Feed the saved data into compute_ap() to obtain precision, recall, and mAP.

The code is as follows:

Note that the class names in the code must be changed to your own classes, otherwise it will raise errors.

# -*- coding: utf-8 -*-
import os
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf
from mrcnn.config import Config
# import utils
from mrcnn import model as modellib, utils
from mrcnn import visualize
import yaml
from mrcnn.model import log
from PIL import Image

# Root directory of the project
ROOT_DIR = os.getcwd()
MODEL_DIR = os.path.join(ROOT_DIR, "logs")

iter_num = 0

# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)

class ShapesConfig(Config):
    """Configuration for training on the toy shapes dataset.
    Derives from the base Config class and overrides values specific
    to the toy shapes dataset.
    """
    # Give the configuration a recognizable name
    NAME = "shapes"

    # Train on 1 GPU and 1 image per GPU. Batch size is 1 (GPUs * images/GPU).
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

    # Number of classes (including background)
    NUM_CLASSES = 1 + 1  # background + 1 class (defect)

    # Use small images for faster training. Set the limits of the small side
    # the large side, and that determines the image shape.
    IMAGE_MIN_DIM = 256
    IMAGE_MAX_DIM = 512

    # Use smaller anchors because our image and objects are small
    RPN_ANCHOR_SCALES = (8 * 6, 16 * 6, 32 * 6, 64 * 6, 128 * 6)  # anchor side in pixels

    # Reduce training ROIs per image because the images are small and have
    # few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
    TRAIN_ROIS_PER_IMAGE = 50

    # Use a small epoch since the data is simple
    STEPS_PER_EPOCH = 50
    # use small validation steps since the epoch is small
    VALIDATION_STEPS = 20


config = ShapesConfig()
config.display()


class DrugDataset(utils.Dataset):
    # Get the number of object instances in the image (the max label value in the mask image)
    def get_obj_index(self, image):
        n = np.max(image)
        return n

    # Parse the yaml file produced by labelme to get the instance label for each mask layer
    def from_yaml_get_class(self, image_id):
        info = self.image_info[image_id]
        with open(info['yaml_path']) as f:
            temp = yaml.load(f.read(), Loader=yaml.FullLoader)  # an explicit Loader is required by PyYAML >= 5.1
            labels = temp['label_names']
            del labels[0]
        return labels

    # Rewrite draw_mask
    def draw_mask(self, num_obj, mask, image, image_id):
        # print("draw_mask-->",image_id)
        # print("self.image_info",self.image_info)
        info = self.image_info[image_id]
        # print("info-->",info)
        # print("info[width]----->",info['width'],"-info[height]--->",info['height'])
        for index in range(num_obj):
            for i in range(info['width']):
                for j in range(info['height']):
                    # print("image_id-->",image_id,"-i--->",i,"-j--->",j)
                    # print("info[width]----->",info['width'],"-info[height]--->",info['height'])
                    at_pixel = image.getpixel((i, j))
                    if at_pixel == index + 1:
                        mask[j, i, index] = 1
        return mask

    def load_shapes(self, count, img_floder, mask_floder, imglist, dataset_root_path):
        """Generate the requested number of synthetic images.
        count: number of images to generate.
        height, width: the size of the generated images.
        """
        # Add classes
        self.add_class("shapes", 1, "defect")  # 脆性区域

        for i in range(count):
            # Get the image width and height

            filestr = imglist[i].split(".")[0]

            mask_path = mask_floder + "/" + filestr + ".png"
            yaml_path = dataset_root_path + "labelme_json/" + filestr + "_json/info.yaml"
            print(dataset_root_path + "labelme_json/" + filestr + "_json/img.png")
            cv_img = cv2.imread(dataset_root_path + "labelme_json/" + filestr + "_json/img.png")

            self.add_image("shapes", image_id=i, path=img_floder + "/" + imglist[i],
                           width=cv_img.shape[1], height=cv_img.shape[0], mask_path=mask_path, yaml_path=yaml_path)

    # Override load_mask
    def load_mask(self, image_id):
        """Generate instance masks for shapes of the given image ID.
        """
        global iter_num
        print("image_id", image_id)
        info = self.image_info[image_id]
        count = 1  # number of object
        img = Image.open(info['mask_path'])
        num_obj = self.get_obj_index(img)
        mask = np.zeros([info['height'], info['width'], num_obj], dtype=np.uint8)
        mask = self.draw_mask(num_obj, mask, img, image_id)
        occlusion = np.logical_not(mask[:, :, -1]).astype(np.uint8)
        for i in range(count - 2, -1, -1):
            mask[:, :, i] = mask[:, :, i] * occlusion

            occlusion = np.logical_and(occlusion, np.logical_not(mask[:, :, i]))
        labels = []
        labels = self.from_yaml_get_class(image_id)
        labels_form = []
        for i in range(len(labels)):
            if labels[i].find("defect") != -1:
                # print "box"
                labels_form.append("defect")
            # elif labels[i].find("PJ")!=-1:
            #     #print "column"
            #     labels_form.append("PJ")

        class_ids = np.array([self.class_names.index(s) for s in labels_form])
        return mask, class_ids.astype(np.int32)


def get_ax(rows=1, cols=1, size=8):
    """Return a Matplotlib Axes array to be used in
    all visualizations in the notebook. Provide a
    central point to control graph sizes.
    Change the default size attribute to control the size
    of rendered images
    """
    _, ax = plt.subplots(rows, cols, figsize=(size * cols, size * rows))
    return ax

def list2array(list):
    b = np.array(list[0])
    for i in range(1, len(list)):
        b = np.append(b, list[i],axis=0)
    return b

def text_save(filename, data):  # filename: path of the text file to write; data: list of values to save
    file = open(filename, 'a')
    for i in range(len(data)):
        s = str(data[i]).replace('[', '').replace(']', '')  # strip brackets; adjust to your data if needed
        s = s.replace("'", '').replace(',', '') + '\n'       # strip quotes and commas, append a newline
        file.write(s)
    file.close()
    print("Saved txt file successfully")


# Test set settings
dataset_root_path="test_data_L/"
img_floder = dataset_root_path + "pic"
mask_floder = dataset_root_path + "cv2_mask"
imglist = os.listdir(img_floder)
count = len(imglist)

# Prepare the test dataset
dataset_test = DrugDataset()
dataset_test.load_shapes(count, img_floder, mask_floder, imglist, dataset_root_path)
dataset_test.prepare()

# mAP
# Compute VOC-style mAP @ IoU=0.5
# Running on the images sampled into img_list below.
class InferenceConfig(ShapesConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1


inference_config = InferenceConfig()

# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
                          config=inference_config,
                          model_dir=MODEL_DIR)


model_path = os.path.join(MODEL_DIR, "KL1000.h5")  # change this to your own trained weights file

# Load trained weights
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)

img_list = np.random.choice(dataset_test.image_ids, 85)
APs = []
count1 = 0

# Iterate over the test set
for image_id in img_list:
    # Load the ground truth for this test image
    image, image_meta, gt_class_id, gt_bbox, gt_mask = \
        modellib.load_image_gt(dataset_test, inference_config,
                               image_id, use_mini_mask=False)
    # Accumulate and save all ground truth
    if count1 == 0:
        save_box, save_class, save_mask = gt_bbox, gt_class_id, gt_mask
    else:
        save_box = np.concatenate((save_box, gt_bbox), axis=0)
        save_class = np.concatenate((save_class, gt_class_id), axis=0)
        save_mask = np.concatenate((save_mask, gt_mask), axis=2)

    molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0)

    # Run detection
    results = model.detect([image], verbose=0)
    r = results[0]

    # Accumulate and save all detection results
    if count1 == 0:
        save_roi, save_id, save_score, save_m = r["rois"], r["class_ids"], r["scores"], r['masks']
    else:
        save_roi = np.concatenate((save_roi, r["rois"]), axis=0)
        save_id = np.concatenate((save_id, r["class_ids"]), axis=0)
        save_score = np.concatenate((save_score, r["scores"]), axis=0)
        save_m = np.concatenate((save_m, r['masks']), axis=2)

    count1 += 1

# Compute AP, precision, and recall over the accumulated results
AP, precisions, recalls, overlaps = \
        utils.compute_ap(save_box, save_class, save_mask,
                         save_roi, save_id, save_score, save_m)

print("AP: ", AP)
print("mAP: ", np.mean(AP))

# Plot the PR curve
plt.plot(recalls, precisions, 'b', label='PR')
plt.title('precision-recall curve')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend()
plt.show()

# Save the precision and recall values for re-plotting later
text_save('Kpreci.txt', precisions)
text_save('Krecall.txt', recalls)

Running this script plots the PR curve.
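For reference, the Matterport examples typically compute AP once per image and then average the per-image values, rather than concatenating all ground truths and detections as the script above does. A sketch of that variant, reusing the variables already defined in the script, would look like this:

# Alternative: per-image AP, averaged over the test set (reuses dataset_test,
# inference_config, model, img_list and the imports from the script above)
APs = []
for image_id in img_list:
    image, image_meta, gt_class_id, gt_bbox, gt_mask = \
        modellib.load_image_gt(dataset_test, inference_config,
                               image_id, use_mini_mask=False)
    r = model.detect([image], verbose=0)[0]
    AP, precisions, recalls, overlaps = \
        utils.compute_ap(gt_bbox, gt_class_id, gt_mask,
                         r["rois"], r["class_ids"], r["scores"], r['masks'])
    APs.append(AP)
print("mAP (mean of per-image APs): ", np.mean(APs))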

 

Copyright notice: this is an original article by CSDN blogger "Forizon", licensed under the CC 4.0 BY-SA agreement. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/hesongzefairy/article/details/106746216
