Training Your Own Object Detector with the TensorFlow Object Detection API (Part 1): Building the Dataset

        Object detection networks are quite mature these days, and GitHub is full of ready-made implementations. For phone apps there are convenient inference frameworks, but if you are not on a phone, have no NPU, and do not want to run inference on the CPU, the options are limited. Here I use Arm NN as the inference framework, running on the GPU of an RK3399. The network is SSD, but it only reaches 6-8 FPS, which does not meet the real-time requirement. Since I only need to detect a handful of classes while the stock SSD detects 80, there is clearly a lot of redundancy. Many network-compression techniques exist; channel pruning essentially removes redundant channels, and I am honestly not sure how that differs from simply designing the network with fewer channels in the first place (advice welcome). Following some write-ups online, I decided to first use the TensorFlow Object Detection API to train a small SSD that detects only the classes I care about.

Reference:

MobileNet SSD V2模型的压缩与tflite格式的转换 - 简书

1. Downloading specific classes from a dataset with fiftyone

        The first step of training is building the dataset. The object I want to detect is person. A fairly large dataset helps the model perform better, and the mainstream open datasets are roughly:

ImageNet: ~14M images, 27 top-level categories and 20k+ subcategories
Open Images: ~1.7M images, 600 classes
MS COCO: ~330k images, 91 classes
Pascal VOC: ~17k images, 20 classes

The pretrained models of the TensorFlow detection API were trained on COCO, so we use COCO too. Since we only detect people, we only download the images that contain a person. For this we use fiftyone, a dataset-management tool that supports all of the datasets above.

After installing fiftyone (e.g. `pip install fiftyone`), download from Python:

import fiftyone as fo
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset(
    "coco-2017",
    label_types=["detections"],
    classes=["person"],
    only_matching=True
)

The meaning of each parameter is described in the fiftyone documentation. Here I download every COCO image that contains a person; if you want to experiment first, you can cap the number of samples, as in this example from the official docs:

dataset = foz.load_zoo_dataset(
    "open-images-v6",    # this example uses Open Images
    split="validation",  # by default all of train/validation/test are loaded
    max_samples=100,     # only download 100 samples
    seed=51,
    shuffle=True,
)

COCO contains 61407 images with a person in them. The download sometimes breaks partway through; re-running the load command above resumes where it left off. I have not found a better workaround.

 Let's look at the directory structure of the COCO dataset.

This is fiftyone's dataset layout: after downloading COCO, fiftyone builds a dataset from the parameters passed at load time, which you can then browse and manipulate through the fiftyone app. The json files here are fiftyone's own bookkeeping; since we only use fiftyone to download specific classes, there is no need to study them. The raw folder holds the original COCO annotations, which contain the bboxes we want (plus other information that we will need to filter out). The data folders under /test, /train and /validation hold the images themselves.
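Since the files under raw are plain COCO-format JSON, you can inspect them without any tooling. A minimal sketch with only the standard library (using a tiny synthetic annotation dict in place of the real file; in COCO, category id 1 is person):

```python
import json

# A tiny synthetic stand-in for the contents of a COCO instances_*.json file
coco_json = json.dumps({
    "categories": [{"id": 1, "name": "person"}, {"id": 18, "name": "dog"}],
    "annotations": [
        {"image_id": 42, "category_id": 1, "bbox": [10, 20, 100, 200]},
        {"image_id": 42, "category_id": 18, "bbox": [5, 5, 50, 50]},
    ],
})

data = json.loads(coco_json)
# Look up the id of the class we care about, then keep only its boxes
person_id = next(c["id"] for c in data["categories"] if c["name"] == "person")
person_boxes = [a["bbox"] for a in data["annotations"] if a["category_id"] == person_id]
print(person_boxes)  # → [[10, 20, 100, 200]]
```

This is exactly the filtering that the conversion script below does for real, via the pycocotools COCO API instead of raw JSON.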

2. Converting the downloaded data into TensorFlow's tfrecord format

 Arrange the downloaded data into the following directory structure.

 The /annotations folder holds the annotation files.

/test2017, /train2017 and /val2017 hold the corresponding images. The script below walks the COCO annotations and generates a VOC-style xml annotation file for each image. Inside the script the names train2017 and val2017 are used for more than just paths, so rather than editing the code, adjust your directory layout to match. One more caveat: with a large download some images may be corrupted; the script does not check for this, so delete broken images by hand first.
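To weed out obviously truncated files before running the script, a quick heuristic using only the standard library can help (my own addition, not part of the original workflow): a complete JPEG starts with the SOI marker FF D8 and ends with the EOI marker FF D9, so a missing EOI usually means the download was cut off. This catches truncation but not every form of corruption.

```python
import os

def jpeg_looks_complete(path):
    """Heuristic: a well-formed JPEG starts with FF D8 and ends with FF D9."""
    with open(path, "rb") as f:
        head = f.read(2)
        f.seek(-2, os.SEEK_END)
        tail = f.read(2)
    return head == b"\xff\xd8" and tail == b"\xff\xd9"

# Demo with two synthetic files
with open("/tmp/ok.jpg", "wb") as f:
    f.write(b"\xff\xd8" + b"\x00" * 10 + b"\xff\xd9")
with open("/tmp/truncated.jpg", "wb") as f:
    f.write(b"\xff\xd8" + b"\x00" * 10)  # download cut off before the EOI marker

print(jpeg_looks_complete("/tmp/ok.jpg"))         # → True
print(jpeg_looks_complete("/tmp/truncated.jpg"))  # → False
```

Run it over the image folders and delete (or re-download) whatever fails the check.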

from pycocotools.coco import COCO
import os
import shutil
from tqdm import tqdm
import matplotlib.pyplot as plt
import cv2
from PIL import Image, ImageDraw, ImageFile, UnidentifiedImageError
ImageFile.LOAD_TRUNCATED_IMAGES = True
# Output paths that need to be set
home_path = os.environ['HOME']
savepath = home_path + "/COCO/"
img_dir = savepath + 'images/'
anno_dir = savepath + 'annotations/'
datasets_list = ['train2017', 'val2017']
# COCO has 80 classes; list the names of the classes to extract, person in this case
classes_names = ['person']
# Path of the downloaded COCO dataset
dataDir = home_path + '/coco_data/'
print(dataDir)
'''
Expected directory layout:
$COCO_PATH
----|annotations
----|train2017
----|val2017
----|test2017
'''

 
headstr = """\
<annotation>
    <folder>VOC</folder>
    <filename>%s</filename>
    <source>
        <database>My Database</database>
        <annotation>COCO</annotation>
        <image>flickr</image>
        <flickrid>NULL</flickrid>
    </source>
    <owner>
        <flickrid>NULL</flickrid>
        <name>company</name>
    </owner>
    <size>
        <width>%d</width>
        <height>%d</height>
        <depth>%d</depth>
    </size>
    <segmented>0</segmented>
"""
objstr = """\
    <object>
        <name>%s</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>%d</xmin>
            <ymin>%d</ymin>
            <xmax>%d</xmax>
            <ymax>%d</ymax>
        </bndbox>
    </object>
"""
 
tailstr = '''\
</annotation>
'''
 
# Create the directory (including parents) if it does not exist yet
def mkr(path):
    if not os.path.exists(path):
        os.makedirs(path)

# Build a dict mapping every category id in the dataset to its name
def id2name(coco):
    classes = dict()
    for cls in coco.dataset['categories']:
        classes[cls['id']] = cls['name']
    return classes
 
def write_xml(anno_path, head, objs, tail):
    with open(anno_path, "w") as f:
        f.write(head)
        for obj in objs:
            f.write(objstr % (obj[0], obj[1], obj[2], obj[3], obj[4]))
        f.write(tail)
 
 
def save_annotations_and_imgs(coco, dataset, filename, objs):
    # Write one xml per image, e.g. COCO_train2017_000000196610.jpg -> COCO_train2017_000000196610.xml
    dst_anno_dir = os.path.join(anno_dir, dataset)
    mkr(dst_anno_dir)
    anno_path = dst_anno_dir + '/' + filename[:-3] + 'xml'
    img_path = dataDir + dataset + '/' + filename
    dst_img_dir = os.path.join(img_dir, dataset)
    mkr(dst_img_dir)
    dst_imgpath = dst_img_dir + '/' + filename
    img = cv2.imread(img_path)
    shutil.copy(img_path, dst_imgpath)

    head = headstr % (filename, img.shape[1], img.shape[0], img.shape[2])
    tail = tailstr
    write_xml(anno_path, head, objs, tail)
 
# Args: coco api object, dataset split (train/val), image info dict,
# id->name map of all categories, ids of the wanted categories
def showimg(coco, dataset, img, classes, cls_id, show=True):
    global dataDir
    img_path = os.path.join(dataDir, dataset, img['file_name'])
    objs = []
    if not os.path.exists(img_path):
        print("no such file: " + img_path)
        return objs
    # Open the image; skip files that PIL cannot identify (corrupted download)
    try:
        I = Image.open(img_path)
    except UnidentifiedImageError:
        print("bad image, skipping: " + img_path)
        return objs
    # Get the annotation ids for this image, restricted to the wanted category ids
    annIds = coco.getAnnIds(imgIds=img['id'], catIds=cls_id, iscrowd=None)
    anns = coco.loadAnns(annIds)

    for ann in anns:  # walk this image's annotations
        class_name = classes[ann['category_id']]
        if class_name in classes_names and 'bbox' in ann:
            # COCO bbox is [x, y, width, height]; convert to VOC corner coordinates
            bbox = ann['bbox']
            xmin = int(bbox[0])
            ymin = int(bbox[1])
            xmax = int(bbox[2] + bbox[0])
            ymax = int(bbox[3] + bbox[1])
            objs.append([class_name, xmin, ymin, xmax, ymax])
            draw = ImageDraw.Draw(I)
            draw.rectangle([xmin, ymin, xmax, ymax])
    if show:
        plt.figure()
        plt.axis('off')
        plt.imshow(I)
        plt.show()

    return objs

# Walk the annotation files instances_train2017.json and instances_val2017.json
miss = 0
for dataset in datasets_list:
    # e.g. $COCO_PATH/annotations/instances_train2017.json
    annFile = '{}/annotations/instances_{}.json'.format(dataDir, dataset)

    # Initialize the COCO API with this annotation file
    coco = COCO(annFile)

    # Map every category id in the dataset to its name
    classes = id2name(coco)
    # Ids of the classes we want to extract (person -> 1)
    classes_ids = coco.getCatIds(catNms=classes_names)
    for cls in classes_names:
        # Get this class's id and all image ids containing it
        cls_id = coco.getCatIds(catNms=[cls])
        img_ids = coco.getImgIds(catIds=cls_id)
        # print(cls, len(img_ids))  # person appears in 64115 images of the annotation file
        for imgId in tqdm(img_ids):
            img = coco.loadImgs(imgId)[0]
            filename = img['file_name']
            objs = showimg(coco, dataset, img, classes, classes_ids, show=False)
            if objs:
                save_annotations_and_imgs(coco, dataset, filename, objs)
            else:
                miss += 1  # image had no usable bbox
print(miss)
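One detail worth calling out from the script: COCO stores a box as [x, y, width, height], while VOC XML wants corner coordinates, hence the bbox[2] + bbox[0] arithmetic. In isolation:

```python
def coco_bbox_to_voc(bbox):
    """[x, y, width, height] (COCO) -> [xmin, ymin, xmax, ymax] (VOC), truncated to ints."""
    x, y, w, h = bbox
    return [int(x), int(y), int(x + w), int(y + h)]

print(coco_bbox_to_voc([10.5, 20.0, 100.0, 200.0]))  # → [10, 20, 110, 220]
```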

Running the script produces the following directory tree.

 /annotations holds the xml files and /images holds the jpg images.

Then build the tfrecord following the method in the official TensorFlow documentation:

""" Sample TensorFlow XML-to-TFRecord converter

usage: generate_tfrecord.py [-h] [-x XML_DIR] [-l LABELS_PATH] [-o OUTPUT_PATH] [-i IMAGE_DIR] [-c CSV_PATH]

optional arguments:
  -h, --help            show this help message and exit
  -x XML_DIR, --xml_dir XML_DIR
                        Path to the folder where the input .xml files are stored.
  -l LABELS_PATH, --labels_path LABELS_PATH
                        Path to the labels (.pbtxt) file.
  -o OUTPUT_PATH, --output_path OUTPUT_PATH
                        Path of output TFRecord (.record) file.
  -i IMAGE_DIR, --image_dir IMAGE_DIR
                        Path to the folder where the input image files are stored. Defaults to the same directory as XML_DIR.
  -c CSV_PATH, --csv_path CSV_PATH
                        Path of output .csv file. If none provided, then no file will be written.
"""

import os
import glob
import pandas as pd
import io
import xml.etree.ElementTree as ET
import argparse

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'    # Suppress TensorFlow logging (1)
import tensorflow.compat.v1 as tf
from PIL import Image
from object_detection.utils import dataset_util, label_map_util
from collections import namedtuple

# Initiate argument parser
parser = argparse.ArgumentParser(
    description="Sample TensorFlow XML-to-TFRecord converter")
parser.add_argument("-x",
                    "--xml_dir",
                    help="Path to the folder where the input .xml files are stored.",
                    type=str)
parser.add_argument("-l",
                    "--labels_path",
                    help="Path to the labels (.pbtxt) file.", type=str)
parser.add_argument("-o",
                    "--output_path",
                    help="Path of output TFRecord (.record) file.", type=str)
parser.add_argument("-i",
                    "--image_dir",
                    help="Path to the folder where the input image files are stored. "
                         "Defaults to the same directory as XML_DIR.",
                    type=str, default=None)
parser.add_argument("-c",
                    "--csv_path",
                    help="Path of output .csv file. If none provided, then no file will be "
                         "written.",
                    type=str, default=None)

args = parser.parse_args()

if args.image_dir is None:
    args.image_dir = args.xml_dir

label_map = label_map_util.load_labelmap(args.labels_path)
label_map_dict = label_map_util.get_label_map_dict(label_map)


def xml_to_csv(path):
    """Iterates through all .xml files (generated by labelImg) in a given directory and combines
    them in a single Pandas dataframe.

    Parameters:
    ----------
    path : str
        The path containing the .xml files
    Returns
    -------
    Pandas DataFrame
        The produced dataframe
    """

    xml_list = []
    for xml_file in glob.glob(path + '/*.xml'):
        tree = ET.parse(xml_file)
        root = tree.getroot()
        filename = root.find('filename').text
        width = int(root.find('size').find('width').text)
        height = int(root.find('size').find('height').text)
        for member in root.findall('object'):
            bndbox = member.find('bndbox')
            value = (filename,
                     width,
                     height,
                     member.find('name').text,
                     int(bndbox.find('xmin').text),
                     int(bndbox.find('ymin').text),
                     int(bndbox.find('xmax').text),
                     int(bndbox.find('ymax').text),
                     )
            xml_list.append(value)
    column_name = ['filename', 'width', 'height',
                   'class', 'xmin', 'ymin', 'xmax', 'ymax']
    xml_df = pd.DataFrame(xml_list, columns=column_name)
    return xml_df


def class_text_to_int(row_label):
    return label_map_dict[row_label]


def split(df, group):
    data = namedtuple('data', ['filename', 'object'])
    gb = df.groupby(group)
    return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]


def create_tf_example(group, path):
    with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
        encoded_jpg = fid.read()
    encoded_jpg_io = io.BytesIO(encoded_jpg)
    image = Image.open(encoded_jpg_io)
    width, height = image.size

    filename = group.filename.encode('utf8')
    image_format = b'jpg'
    xmins = []
    xmaxs = []
    ymins = []
    ymaxs = []
    classes_text = []
    classes = []

    for index, row in group.object.iterrows():
        xmins.append(row['xmin'] / width)
        xmaxs.append(row['xmax'] / width)
        ymins.append(row['ymin'] / height)
        ymaxs.append(row['ymax'] / height)
        classes_text.append(row['class'].encode('utf8'))
        classes.append(class_text_to_int(row['class']))

    tf_example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename),
        'image/source_id': dataset_util.bytes_feature(filename),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(image_format),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
    return tf_example


def main(_):

    writer = tf.python_io.TFRecordWriter(args.output_path)
    path = os.path.join(args.image_dir)
    examples = xml_to_csv(args.xml_dir)
    grouped = split(examples, 'filename')
    for group in grouped:
        tf_example = create_tf_example(group, path)
        writer.write(tf_example.SerializeToString())
    writer.close()
    print('Successfully created the TFRecord file: {}'.format(args.output_path))
    if args.csv_path is not None:
        examples.to_csv(args.csv_path, index=None)
        print('Successfully created the CSV file: {}'.format(args.csv_path))


if __name__ == '__main__':
    tf.app.run()
Run it from the command line as described in the script's docstring:

# Create train data:
python generate_tfrecord.py -x [PATH_TO_IMAGES_FOLDER]/train -l [PATH_TO_ANNOTATIONS_FOLDER]/label_map.pbtxt -o [PATH_TO_ANNOTATIONS_FOLDER]/train.record

# Create test data:
python generate_tfrecord.py -x [PATH_TO_IMAGES_FOLDER]/test -l [PATH_TO_ANNOTATIONS_FOLDER]/label_map.pbtxt -o [PATH_TO_ANNOTATIONS_FOLDER]/test.record

# For example
# python generate_tfrecord.py -x C:/Users/sglvladi/Documents/Tensorflow/workspace/training_demo/images/train -l C:/Users/sglvladi/Documents/Tensorflow/workspace/training_demo/annotations/label_map.pbtxt -o C:/Users/sglvladi/Documents/Tensorflow/workspace/training_demo/annotations/train.record
# python generate_tfrecord.py -x C:/Users/sglvladi/Documents/Tensorflow/workspace/training_demo/images/test -l C:/Users/sglvladi/Documents/Tensorflow2/workspace/training_demo/annotations/label_map.pbtxt -o C:/Users/sglvladi/Documents/Tensorflow/workspace/training_demo/annotations/test.record

Here the /train folder contains both the images and the xml files; adjust the paths for your own layout.
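The -l argument expects a label map, which the original post does not show. For a single person class, the standard TF Object Detection API label map format would look like this (note that ids start at 1; 0 is reserved for the background class):

```
item {
  id: 1
  name: 'person'
}
```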

PS:

1. I previously used xml -> csv -> tfrecord scripts from other blogs, and the resulting tfrecord files came out far too large; following the official documentation fixed that, so stick with the official script.

2. I originally used fiftyone because I wanted to download only images with the person label, but in the end I downloaded the whole dataset anyway.

That completes the dataset. Next up: setting up the training environment.

Copyright notice: this is an original article by CSDN blogger 「陌生的天花板」, licensed under CC 4.0 BY-SA; please include the original link and this notice when reposting.
Original link: https://blog.csdn.net/weixin_41680653/article/details/121789804
