MMDetection3D: environment setup, training 3D object detection on your own dataset, testing, visualization, and common errors

1 MMDetection3D environment setup and testing

1.1 Setting up the environment with Docker

1.1.1 Building the Docker image

1. Clone the code (the URL below uses a GitHub mirror; you can also clone from github.com directly)

git clone https://github.com.cnpmjs.org/open-mmlab/mmdetection3d.git

2. After the download finishes, enter the code directory

cd mmdetection3d

3. Build the mmdetection3d image with Docker

docker build -t mmdetection3d docker/

[screenshot]

While building the mmdetection3d image, you may hit this error:

W: GPG error: https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu1804/x86_64  Release: The following signatures were invalid: BADSIG F60F4B3D7FA2AF80 cudatools <cudatools@nvidia.com>
E: The repository 'https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  Release' is not signed.
The command '/bin/sh -c apt-get update && apt-get install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libsm6 libxrender-dev libxext6     && apt-get clean     && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100

How to fix it

In docker/Dockerfile, add the line RUN rm /etc/apt/sources.list.d/cuda.list right below FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel:

FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
RUN rm /etc/apt/sources.list.d/cuda.list

4. A successful build of the mmdetection3d image looks like this:

(base) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ docker build -t mmdetection3d docker/
Sending build context to Docker daemon  3.072kB
Step 1/19 : ARG PYTORCH="1.6.0"
Step 2/19 : ARG CUDA="10.1"
Step 3/19 : ARG CUDNN="7"
Step 4/19 : FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
 ---> bb833e4d631f
Step 5/19 : RUN rm /etc/apt/sources.list.d/cuda.list
 ---> Using cache
 ---> 984b1a43fc0d
Step 6/19 : ENV TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0+PTX"
 ---> Using cache
 ---> 3c68ebdb8a2f
Step 7/19 : ENV TORCH_NVCC_FLAGS="-Xfatbin -compress-all"
......
  Stored in directory: /tmp/pip-ephem-wheel-cache-9v_li1t0/wheels/cc/fa/4a/067979eccddf6a22b46722493df8e07b0541956a5ab5bac8b1
Successfully built mmpycocotools
Installing collected packages: mmpycocotools
  Attempting uninstall: mmpycocotools
    Found existing installation: mmpycocotools 12.0.3
    Uninstalling mmpycocotools-12.0.3:
      Successfully uninstalled mmpycocotools-12.0.3
Successfully installed mmpycocotools-12.0.3
Removing intermediate container 3689dccd83f3
 ---> 2ee366dc3c2f
Successfully built 2ee366dc3c2f
Successfully tagged mmdetection3d:latest
(base) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$

You can see the newly built image with docker images:

[screenshot]

5. Create and start an mmdetection3d container

docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d

You can mount a local path into the container to hold your data, so the data is not lost when the container is removed!

Start the container:

docker run --gpus all --shm-size=8g -it -v /home/shl/shl_res/MMlab/mmdetection3d/mmdetect3d_data:/mmdetection3d/data mmdetection3d

(base) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ docker run --gpus all --shm-size=8g -it -v /home/shl/shl_res/MMlab/mmdetection3d/mmdetect3d_data:/mmdetection3d/data mmdetection3d
root@3672fb821035:/mmdetection3d# ls
LICENSE      README.md        build    data  docker  mmdet3d           requirements      resources  setup.py  tools
MANIFEST.in  README_zh-CN.md  configs  demo  docs    mmdet3d.egg-info  requirements.txt  setup.cfg  tests
root@3672fb821035:/mmdetection3d# 

1.1.2 Testing the demo

1. Download a pretrained model

Download page

[screenshot]

To download this model, just click the download link.

2. Test command

python demo/pcd_demo.py demo/data/kitti/kitti_000008.bin configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py data/predtrain_models/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth --out-dir data/output_result/

3. Test results:

root@3672fb821035:/mmdetection3d# tree data
data
|-- output_result
|   `-- kitti_000008
|       |-- kitti_000008_points.obj
|       `-- kitti_000008_pred.obj
`-- predtrain_models
    `-- hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth

3 directories, 3 files
root@3672fb821035:/mmdetection3d# 

Under the output_result directory, a kitti_000008 directory is created, containing the input point cloud (kitti_000008_points.obj) and the predicted boxes (kitti_000008_pred.obj).

4. Visualizing the demo results

Visualization reference

1.2 Installing the mmdetection3d environment directly on the host

1.2.1 Create and activate a virtual environment

conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab

1.2.2 Install pytorch and torchvision

conda install pytorch torchvision -c pytorch

My approach: download the torch and torchvision .whl packages from the [pypi] website first, then install them with pip install xxx.whl.

Note:

When installing torch and torchvision, their versions must match (for example, torch 1.7.0 pairs with torchvision 0.8.0, the combination used later in this post). The compatibility table:

[screenshot: torch / torchvision version compatibility table]

1.2.3 Install mmcv

1. Before installing mmcv, check your cuda and torch versions. Mine are:

  • cuda 10.2
  • torch 1.7.0

2. Install command:

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html

Substitute your own cuda and torch version numbers; for my environment the exact command is:

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.7.0/index.html

You can also open the mmcv-full link above, download the .whl file for the version you need, and install it locally. The version I installed is mmcv-full==1.3.1.
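
To verify the installation (a minimal check using helper functions bundled with mmcv-full), you can print the mmcv version and the CUDA/compiler it was built against:

import mmcv
from mmcv.ops import get_compiling_cuda_version, get_compiler_version

print(mmcv.__version__)              # e.g. 1.3.1
print(get_compiling_cuda_version())  # CUDA version the ops were compiled with
print(get_compiler_version())        # compiler used to build the CUDA ops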

1.2.4 Install MMDetection

There are two ways to install MMDetection:

  • install directly with pip
  • install from the MMDetection source

1. Install directly with pip

pip install git+https://github.com/open-mmlab/mmdetection.git

2. Install from the MMDetection source (this is what I did)

git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"

Note:

The mmdet version must match the mmcv-full version; see the table below:

[screenshot: mmdet / mmcv-full version compatibility table]

1.2.5 Clone and build mmdetection3d

1. Clone the mmdetection3d repository

git clone https://github.com/open-mmlab/mmdetection3d.git
cd mmdetection3d

2. Build mmdetection3d

pip install -v -e . # or "python setup.py develop"

Note:

For the build I recommend using the command python setup.py develop.

3. Once everything is installed, import the relevant packages as a quick sanity check:

(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ python
Python 3.7.10 (default, Feb 26 2021, 18:47:35) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>> import mmcv
>>> import mmdet
>>> import mmdet3d
>>> 

1.3 Testing the demo

1. Download a pretrained model

Download page

[screenshot]

To download a model, just click the download link.

2. Test command

python demo/pcd_demo.py demo/data/kitti/kitti_000008.bin configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py my_checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth

(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ python demo/pcd_demo.py demo/data/kitti/kitti_000008.bin configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py my_checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth 
Use load_from_local loader
(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ ls

3. Test results:

[screenshot]

By default the results are saved under the ./demo/ directory; kitti_000008 is the generated directory holding the results. You can also specify an output path with the --out-dir argument.

4. Visualizing the demo results

Visualization reference

2 Data conversion and the higher-level API

2.1 Converting ply files to bin files

1. If your point clouds are in ply format, you can convert them with the code below; first make sure the pandas and plyfile packages are installed:

import numpy as np
import pandas as pd
from plyfile import PlyData

def convert_ply(input_path, output_path):
    plydata = PlyData.read(input_path)  # read file
    data = plydata.elements[0].data  # read data
    data_pd = pd.DataFrame(data)  # convert to DataFrame
    data_np = np.zeros(data_pd.shape, dtype=np.float64)  # initialize array to store data
    property_names = data[0].dtype.names  # read names of properties
    for i, name in enumerate(
            property_names):  # read data by property
        data_np[:, i] = data_pd[name]
    data_np.astype(np.float32).tofile(output_path)

2. Then call the conversion function:

convert_ply('./test.ply', './test.bin')
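
As a quick sanity check, you can read the .bin back with numpy (a sketch; it assumes the ply had four properties per point, e.g. x, y, z, intensity):

import numpy as np

points = np.fromfile('./test.bin', dtype=np.float32).reshape(-1, 4)  # one row per point
print(points.shape)
print(points[:3])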

2.2 Converting other point cloud formats to ply with trimesh

1. If your point clouds are in another format, such as off or obj, you can convert them to ply with the trimesh module:

import trimesh

def to_ply(input_path, output_path, original_type):
    mesh = trimesh.load(input_path, file_type=original_type)  # read file
    mesh.export(output_path, file_type='ply')  # convert to ply

2. Call the function defined above:

to_ply('./test.obj', './test.ply', 'obj')

2.3 The higher-level API

1. Below is the higher-level inference API:

from mmdet3d.apis import init_detector, inference_detector

config_file = 'configs/votenet/votenet_8x8_scannet-3d-18class.py'
checkpoint_file = 'my_checkpoints/votenet_8x8_scannet-3d-18class_20200620_230238-2cea9c3a.pth'

# build the model from a config file and a checkpoint file
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# test a single point cloud and show the results
#point_cloud = 'test.bin'
point_cloud = './demo/data/kitti/kitti_000008.bin'
result, data = inference_detector(model, point_cloud)
# visualize the results and save the results in 'results' folder
model.show_results(data, result, out_dir='my_results')

The generated results are saved in the my_results directory:

(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ tree my_results/
my_results/
└── kitti_000008
    ├── kitti_000008_points.obj
    └── kitti_000008_pred.obj

1 directory, 2 files
(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ 

Note:

Visualization in the code above requires the open3d library, so install it first:

pip install open3d
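
If you want the raw predictions rather than the exported .obj files, they are already in the result returned by inference_detector (a sketch; the key names below are those used by mmdet3d 0.12, where each entry is a dict of boxes, scores, and labels):

boxes = result[0]['boxes_3d']    # predicted 3D boxes (x, y, z, dx, dy, dz, yaw)
scores = result[0]['scores_3d']  # confidence score per box
labels = result[0]['labels_3d']  # class index per box
print(boxes.tensor.shape, scores.shape, labels.shape)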

2. The displayed result:

[screenshot]

3 Visualizing obj data with pcl (I wrote this part myself)

First, make sure you have pcl (Point Cloud Library) installed; you can look up the installation instructions yourself.

1. Below is a simple script I wrote to visualize obj data: pcl_viewer_obj.py

__Author__ = "Shliang"
__Email__ = "shliang0603@gmail.com"

import sys
import os


obj_name = sys.argv[1]
obj_file_abspath = os.path.abspath(obj_name)
obj_file_dir = os.path.dirname(obj_file_abspath)
output_pcd_name = obj_file_abspath.split("/")[-1].split(".")[0] + ".pcd"
output_pcd_file_abspath = os.path.join(obj_file_dir, output_pcd_name)

def pcl_obj2pcd(rm_pcd_file=True):
    # convert the obj file to a pcd file
    cmd1 = "pcl_obj2pcd " + obj_name + " " + output_pcd_file_abspath
    print(f"cmd1: {cmd1}")
    os.system(cmd1)
    # visualize the pcd file with pcl_viewer
    cmd2 = "pcl_viewer " + output_pcd_file_abspath
    print(f"cmd2: {cmd2}")
    os.system(cmd2)

    # optionally delete the intermediate pcd file on exit
    if rm_pcd_file:
        os.remove(output_pcd_file_abspath)
        sys.exit()

if __name__ == '__main__':
    pcl_obj2pcd(False)

2. Then run:

python pcl_viewer_obj.py demo/kitti_000008/kitti_000008_points.obj

Note:

More pcl_viewer controls:

  • press h or H first to enter interactive mode
  • g displays the grid
  • Shift + left mouse button: drag the point cloud
  • j takes a screenshot of the current window and saves it as a png
  • c shows the current camera/window parameters
  • e exits the interactive view

3. The result looks like this:

[screenshot]

4 Downloading the KITTI 3D object dataset, with a detailed introduction to it

4.1 Downloading the KITTI 3D object dataset

1. Dataset download homepage:

  • https://s3.eu-central-1.amazonaws.com/avg-kitti/

2. For convenience, I provide direct download links here; just click to download. If the download is slow, consider using a proxy.

4.2 A detailed introduction to the KITTI 3D dataset

If you are not yet familiar with the KITTI 3D dataset, I suggest browsing a few blog posts about it first (see the references at the end of this post); a deeper understanding of the dataset will help before you start.


5 Training on your own dataset

I did not actually train on my own dataset here, because I do not have lidar point clouds and camera images that were collected and annotated myself; instead, this section uses the KITTI 3D object dataset. The KITTI format is supported officially, so if you have your own dataset, you only need to convert it into KITTI format!
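
For reference, each object in a KITTI training/label_2 file is one line of 15 fields. The snippet below parses a sample line (the numeric values are made up for illustration):

# one line of a KITTI label_2 .txt file (illustrative values)
line = "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
fields = line.split()
label = dict(
    type=fields[0],                               # object class, e.g. Car, Pedestrian
    truncated=float(fields[1]),                   # 0 (fully visible) .. 1 (truncated)
    occluded=int(fields[2]),                      # occlusion level 0..3
    alpha=float(fields[3]),                       # observation angle in [-pi, pi]
    bbox=[float(v) for v in fields[4:8]],         # 2D image box: x1 y1 x2 y2
    dimensions=[float(v) for v in fields[8:11]],  # 3D size: height, width, length (m)
    location=[float(v) for v in fields[11:14]],   # 3D position in camera coordinates (m)
    rotation_y=float(fields[14]),                 # yaw angle around the camera Y axis
)
print(label)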

5.1 Preparing and preprocessing the dataset

5.1.1 Preparing the dataset

1. First create the data directories:

mkdir -p data/kitti/ImageSets data/kitti/testing data/kitti/training

After creation:

(base) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ tree data/kitti/
data/kitti/
├── ImageSets
├── testing
└── training

2. Arrange the dataset in KITTI format. Below is the KITTI dataset I downloaded (I used symbolic links here; just store yours in the same layout):

[screenshot; the standard layout under data/kitti is:]

data/kitti
├── ImageSets
├── testing
│   ├── calib
│   ├── image_2
│   └── velodyne
└── training
    ├── calib
    ├── image_2
    ├── label_2
    └── velodyne

Numbers of training and testing images:

  • training/image_2: 7481 images
  • testing/image_2: 7518 images

3. The ImageSets directory is one you create yourself; it holds the files that split the dataset into train, validation, and test. Just copy the corresponding txt files into it:

[screenshot]

Some readers may still not know how to split the dataset. Don't panic; at first neither did I. Fortunately, when in doubt, read the docs: the official documentation shows exactly how the data is split.

4. Splitting the KITTI dataset: ready-made split files are in fact provided officially

wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/test.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/test.txt

wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/train.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/train.txt

wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/val.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/val.txt

wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/trainval.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/trainval.txt

Alternatively, download the split files from the links above straight into the ./data/kitti/ImageSets directory.

Each txt file stores frame names; for example, the contents of test.txt:

(base) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d/data/kitti/ImageSets$ cat test.txt 
000000
000001
000002
000003
000004
000005
......
007513
007514
007515
007516
007517
(base) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d/data/kitti/ImageSets$ 
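
If you ever need to build a split for your own KITTI-format data, a minimal sketch could look like this (it assumes 7481 sequentially numbered training frames and splits them roughly in half, like the official 3712/3769 split):

import random

ids = [f'{i:06d}' for i in range(7481)]  # frame names 000000 .. 007480
random.seed(0)
random.shuffle(ids)
n_train = len(ids) // 2
with open('data/kitti/ImageSets/train.txt', 'w') as f:
    f.write('\n'.join(sorted(ids[:n_train])) + '\n')
with open('data/kitti/ImageSets/val.txt', 'w') as f:
    f.write('\n'.join(sorted(ids[n_train:])) + '\n')
with open('data/kitti/ImageSets/trainval.txt', 'w') as f:
    f.write('\n'.join(sorted(ids)) + '\n')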

5.1.2 Preprocessing the dataset

1. Preprocess the data:

python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti

Processing takes quite a while, roughly half an hour, so be patient!

(mmlab) shl@zhihui-mint:~/shl_res/mmlab/mmdetection3d$ python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
Generate info. this may take several minutes.
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3712/3712, 18.2 task/s, elapsed: 205s, ETA:     0s
Kitti info train file is saved to data/kitti/kitti_infos_train.pkl
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3769/3769, 18.4 task/s, elapsed: 205s, ETA:     0s
Kitti info val file is saved to data/kitti/kitti_infos_val.pkl
Kitti info trainval file is saved to data/kitti/kitti_infos_trainval.pkl
Kitti info test file is saved to data/kitti/kitti_infos_test.pkl
create reduced point cloud for training set
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3712/3712, 18.7 task/s, elapsed: 199s, ETA:     0s
create reduced point cloud for validation set
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3769/3769, 18.8 task/s, elapsed: 200s, ETA:     0s
create reduced point cloud for testing set
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 7518/7518, 18.5 task/s, elapsed: 406s, ETA:     0s
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3712/3712, 38.2 task/s, elapsed: 97s, ETA:     0s
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3769/3769, 40.3 task/s, elapsed: 93s, ETA:     0s
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 7481/7481, 39.5 task/s, elapsed: 189s, ETA:     0s
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 7518/7518, 42.4 task/s, elapsed: 177s, ETA:     0s
Create GT Database of KittiDataset
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3712/3712, 17.3 task/s, elapsed: 215s, ETA:     0s
load 2207 Pedestrian database infos
load 14357 Car database infos
load 734 Cyclist database infos
load 1297 Van database infos
load 488 Truck database infos
load 224 Tram database infos
load 337 Misc database infos
load 56 Person_sitting database infos
(mmlab) shl@zhihui-mint:~/shl_res/mmlab/mmdetection3d$ 

2. After preprocessing, a number of pkl files are generated under ./data/kitti:

[screenshot]
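
To peek inside these info files (a quick sketch; the exact keys depend on your mmdet3d version), you can load one with mmcv:

import mmcv

infos = mmcv.load('data/kitti/kitti_infos_train.pkl')
print(len(infos))       # number of training frames, e.g. 3712
print(infos[0].keys())  # per-frame metadata: image, calib, annotations, ...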

5.1.3 Errors you may hit during preprocessing

1. Possible error: TypeError: expected dtype object, got 'numpy.dtype[float64]'

(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
Generate info. this may take several minutes.
[                                                  ] 0/3712, elapsed: 0s, ETA:points_v type: <class 'numpy.ndarray'>
Traceback (most recent call last):
  File "tools/create_data.py", line 243, in <module>
    out_dir=args.out_dir)
  File "tools/create_data.py", line 23, in kitti_data_prep
    kitti.create_kitti_info_file(root_path, info_prefix)
  File "/home/shl/shl_res/MMlab/mmdetection3d/tools/data_converter/kitti_converter.py", line 118, in create_kitti_info_file
    _calculate_num_points_in_gt(data_path, kitti_infos_train, relative_path)
  File "/home/shl/shl_res/MMlab/mmdetection3d/tools/data_converter/kitti_converter.py", line 66, in _calculate_num_points_in_gt
    points_v, rect, Trv2c, P2, image_info['image_shape'])
  File "/home/shl/shl_res/MMlab/mmdetection3d/mmdet3d/core/bbox/box_np_ops.py", line 639, in remove_outside_points
    frustum_surfaces = corner_to_surfaces_3d_jit(frustum[np.newaxis, ...])
TypeError: expected dtype object, got 'numpy.dtype[float64]'
(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ 

My versions:

(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ conda list mmcv
# packages in environment at /home/shl/anaconda3/envs/open-mmlab:
#
# Name                    Version                   Build  Channel
mmcv-full                 1.3.1                    pypi_0    pypi
(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ conda list mmdet
# packages in environment at /home/shl/anaconda3/envs/open-mmlab:
#
# Name                    Version                   Build  Channel
mmdet                     2.11.0                    dev_0    <develop>
mmdet3d                   0.12.0                    dev_0    <develop>
(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ conda list numba
# packages in environment at /home/shl/anaconda3/envs/open-mmlab:
#
# Name                    Version                   Build  Channel
numba                     0.48.0                   pypi_0    pypi
(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ conda list numpy
# packages in environment at /home/shl/anaconda3/envs/open-mmlab:
#
# Name                    Version                   Build  Channel
numpy                     1.20.2                   pypi_0    pypi
(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ 

  • The cause of this error is a version mismatch between numba and numpy!

  • numba==0.48.0 requires a numpy version in the range 1.15.0 < numpy < 1.20.0
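
A quick way to check the installed versions (a trivial sketch):

import numba
import numpy

print(numba.__version__)  # e.g. 0.48.0
print(numpy.__version__)  # must fall inside the range numba supports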

2. After switching to numpy 1.17.0, a new error appears: ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject (1.18.0 and 1.19.0 raise the same error)

(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
Traceback (most recent call last):
  File "tools/create_data.py", line 5, in <module>
    from tools.data_converter import kitti_converter as kitti
  File "/home/shl/shl_res/MMlab/mmdetection3d/tools/data_converter/kitti_converter.py", line 7, in <module>
    from mmdet3d.core.bbox import box_np_ops
  File "/home/shl/shl_res/MMlab/mmdetection3d/mmdet3d/core/__init__.py", line 1, in <module>
    from .anchor import *  # noqa: F401, F403
  File "/home/shl/shl_res/MMlab/mmdetection3d/mmdet3d/core/anchor/__init__.py", line 1, in <module>
    from mmdet.core.anchor import build_anchor_generator
  File "/home/shl/shl_res/MMlab/mmdetection/mmdet/core/__init__.py", line 5, in <module>
    from .mask import *  # noqa: F401, F403
  File "/home/shl/shl_res/MMlab/mmdetection/mmdet/core/mask/__init__.py", line 2, in <module>
    from .structures import BaseInstanceMasks, BitmapMasks, PolygonMasks
  File "/home/shl/shl_res/MMlab/mmdetection/mmdet/core/mask/structures.py", line 6, in <module>
    import pycocotools.mask as maskUtils
  File "/home/shl/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/pycocotools-2.0.2-py3.7-linux-x86_64.egg/pycocotools/mask.py", line 3, in <module>
    import pycocotools._mask as _mask
  File "pycocotools/_mask.pyx", line 1, in init pycocotools._mask
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
(open-mmlab) shl@zhihui-mint:~/shl_res/MMlab/mmdetection3d$ 

3. The final fix is to rebuild everything (mmcv and the other compiled packages all need rebuilding), deleting all previously built files first. What I actually did (for reference):

  • delete the old conda virtual environment, create a fresh one, and reinstall the whole environment (I had already installed a lot of packages, so I don't necessarily recommend you do the same)
  • install numpy==1.18.0 first
  • then install and build the remaining packages
  • the mmcv version installed: 1.3.2

5.2 Training

5.2.1 Starting the training

1. Once the data is ready, training can begin! First, let's look at the training script's options:

(mmlab) shl@zhihui-mint:~/shl_res/mmlab/mmdetection3d$ python tools/train.py -h
usage: train.py [-h] [--work-dir WORK_DIR] [--resume-from RESUME_FROM]
                [--no-validate]
                [--gpus GPUS | --gpu-ids GPU_IDS [GPU_IDS ...]] [--seed SEED]
                [--deterministic] [--options OPTIONS [OPTIONS ...]]
                [--cfg-options CFG_OPTIONS [CFG_OPTIONS ...]]
                [--launcher {none,pytorch,slurm,mpi}]
                [--local_rank LOCAL_RANK] [--autoscale-lr]
                config

Train a detector

positional arguments:
  config                train config file path

optional arguments:
  -h, --help            show this help message and exit
  --work-dir WORK_DIR   the dir to save logs and models
  --resume-from RESUME_FROM
                        the checkpoint file to resume from
  --no-validate         whether not to evaluate the checkpoint during training
  --gpus GPUS           number of gpus to use (only applicable to non-
                        distributed training)
  --gpu-ids GPU_IDS [GPU_IDS ...]
                        ids of gpus to use (only applicable to non-distributed
                        training)
  --seed SEED           random seed
  --deterministic       whether to set deterministic options for CUDNN
                        backend.
  --options OPTIONS [OPTIONS ...]
                        override some settings in the used config, the key-
                        value pair in xxx=yyy format will be merged into
                        config file (deprecate), change to --cfg-options
                        instead.
  --cfg-options CFG_OPTIONS [CFG_OPTIONS ...]
                        override some settings in the used config, the key-
                        value pair in xxx=yyy format will be merged into
                        config file. If the value to be overwritten is a list,
                        it should be like key="[a,b]" or key=a,b It also
                        allows nested list/tuple values, e.g.
                        key="[(a,b),(c,d)]" Note that the quotation marks are
                        necessary and that no white space is allowed.
  --launcher {none,pytorch,slurm,mpi}
                        job launcher
  --local_rank LOCAL_RANK
  --autoscale-lr        automatically scale lr with the number of gpus
(mmlab) shl@zhihui-mint:~/shl_res/mmlab/mmdetection3d$ 

2. Pick a model config that suits your needs, then start training:

python tools/train.py configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py

5.2.2 A problem you may hit during training: running out of GPU memory, RuntimeError: CUDA out of memory

First, my GPU:

  • NVIDIA GeForce GTX 1080
  • memory: 8 GB

1. The out-of-memory error reported during training:

2021-04-29 10:07:13,961 - mmdet - INFO - load 337 Misc database infos
2021-04-29 10:07:13,961 - mmdet - INFO - load 56 Person_sitting database infos
2021-04-29 10:07:30,144 - mmdet - INFO - Start running, host: shl@zhihui-mint, work_dir: /home/shl/shl_res/mmlab/mmdetection3d/work_dirs/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class
2021-04-29 10:07:30,144 - mmdet - INFO - workflow: [('train', 1)], max: 80 epochs
Traceback (most recent call last):
  File "tools/train.py", line 212, in <module>
    main()
  File "tools/train.py", line 208, in main
    meta=meta)
  File "/home/shl/anaconda3/envs/mmlab/lib/python3.7/site-packages/mmdet/apis/train.py", line 170, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/home/shl/anaconda3/envs/mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 125, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/shl/anaconda3/envs/mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 51, in train
    self.call_hook('after_train_iter')
  File "/home/shl/anaconda3/envs/mmlab/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 307, in call_hook
    getattr(hook, fn_name)(self)
  File "/home/shl/anaconda3/envs/mmlab/lib/python3.7/site-packages/mmcv/runner/hooks/optimizer.py", line 35, in after_train_iter
    runner.outputs['loss'].backward()
  File "/home/shl/anaconda3/envs/mmlab/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/shl/anaconda3/envs/mmlab/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: CUDA out of memory. Tried to allocate 472.00 MiB (GPU 0; 7.93 GiB total capacity; 5.03 GiB already allocated; 405.12 MiB free; 5.71 GiB reserved in total by PyTorch)
(mmlab) shl@zhihui-mint:~/shl_res/mmlab/mmdetection3d$ ls

2. How to fix it

My fix was to reduce the batch size, but I could not find a batch_size setting among the training arguments or in the config file, so I opened an issue on GitHub. It turns out the batch-size parameter is called samples_per_gpu; its default is 6, and after changing it to 2 training ran correctly:

Fix: in configs/_base_/datasets/kitti-3d-3class.py, set samples_per_gpu=2 (around line 99); adjust the value to your GPU's memory.
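
Alternatively (a sketch, not the only way), you can inspect or override the value through mmcv's Config API instead of editing the file:

from mmcv import Config

cfg = Config.fromfile(
    'configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py')
print(cfg.data.samples_per_gpu)  # default: 6
cfg.data.samples_per_gpu = 2     # smaller per-GPU batch for an 8 GB card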

3. After the change, training runs correctly:

(mmlab) shl@zhihui-mint:~/shl_res/mmlab/mmdetection3d$ ./5_train_pointpillars_kitti_3class 
2021-04-29 14:06:05,920 - mmdet - INFO - Environment info:
------------------------------------------------------------
sys.platform: linux
Python: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0]
CUDA available: True
GPU 0: GeForce GTX 1080
CUDA_HOME: /usr/local/cuda-10.2
NVCC: Cuda compilation tools, release 10.2, V10.2.89
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.7.0
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 10.2
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75
  - CuDNN 7.6.5
  - Magma 2.5.2
  - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 

TorchVision: 0.8.0
OpenCV: 4.5.1
MMCV: 1.3.2
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 10.2
MMDetection: 2.11.0
MMDetection3D: 0.12.0+86ef23d
------------------------------------------------------------

2021-04-29 14:06:06,763 - mmdet - INFO - Distributed training: False
2021-04-29 14:06:07,539 - mmdet - INFO - Config:
voxel_size = [0.16, 0.16, 4]
model = dict(
    type='VoxelNet',
    voxel_layer=dict(
        max_num_points=32,
        point_cloud_range=[0, -39.68, -3, 69.12, 39.68, 1],
......
    (pfn_layers): ModuleList(
      (0): PFNLayer(
        (norm): BatchNorm1d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (linear): Linear(in_features=9, out_features=64, bias=False)
      )
    )
  )
  (middle_encoder): PointPillarsScatter()
)
2021-04-29 14:06:07,828 - mmdet - INFO - load 2207 Pedestrian database infos
2021-04-29 14:06:07,828 - mmdet - INFO - load 14357 Car database infos
2021-04-29 14:06:07,828 - mmdet - INFO - load 734 Cyclist database infos
2021-04-29 14:06:07,828 - mmdet - INFO - load 1297 Van database infos
2021-04-29 14:06:07,828 - mmdet - INFO - load 488 Truck database infos
2021-04-29 14:06:07,828 - mmdet - INFO - load 224 Tram database infos
2021-04-29 14:06:07,828 - mmdet - INFO - load 337 Misc database infos
2021-04-29 14:06:07,828 - mmdet - INFO - load 56 Person_sitting database infos
2021-04-29 14:06:07,854 - mmdet - INFO - After filter database:
2021-04-29 14:06:07,854 - mmdet - INFO - load 2089 Pedestrian database infos
2021-04-29 14:06:07,854 - mmdet - INFO - load 13509 Car database infos
2021-04-29 14:06:07,854 - mmdet - INFO - load 684 Cyclist database infos
2021-04-29 14:06:07,855 - mmdet - INFO - load 1297 Van database infos
2021-04-29 14:06:07,855 - mmdet - INFO - load 488 Truck database infos
2021-04-29 14:06:07,855 - mmdet - INFO - load 224 Tram database infos
2021-04-29 14:06:07,855 - mmdet - INFO - load 337 Misc database infos
2021-04-29 14:06:07,855 - mmdet - INFO - load 56 Person_sitting database infos
2021-04-29 14:06:10,480 - mmdet - INFO - Start running, host: shl@zhihui-mint, work_dir: /home/shl/shl_res/mmlab/mmdetection3d/work_dirs/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class
2021-04-29 14:06:10,481 - mmdet - INFO - workflow: [('train', 1)], max: 80 epochs
2021-04-29 14:06:33,687 - mmdet - INFO - Epoch [1][50/3712]	lr: 1.000e-03, eta: 1 day, 12:30:51, time: 0.443, data_time: 0.188, memory: 1982, loss_cls: 0.7894, loss_bbox: 1.8513, loss_dir: 0.1458, loss: 2.7864, grad_norm: 12.9730
2021-04-29 14:06:45,074 - mmdet - INFO - Epoch [1][100/3712]	lr: 1.000e-03, eta: 1 day, 3:38:37, time: 0.228, data_time: 0.002, memory: 1982, loss_cls: 0.6169, loss_bbox: 1.3952, loss_dir: 0.1387, loss: 2.1508, grad_norm: 10.1342
2021-04-29 14:06:56,451 - mmdet - INFO - Epoch [1][150/3712]	lr: 1.000e-03, eta: 1 day, 0:40:45, time: 0.228, data_time: 0.002, memory: 1982, loss_cls: 0.5477, loss_bbox: 1.2741, loss_dir: 0.1361, loss: 1.9578, grad_norm: 9.0859
2021-04-29 14:07:08,080 - mmdet - INFO - Epoch [1][200/3712]	lr: 1.000e-03, eta: 23:17:57, time: 0.233, data_time: 0.002, memory: 1982, loss_cls: 0.4903, loss_bbox: 1.1924, loss_dir: 0.1352, loss: 1.8180, grad_norm: 9.9547
2021-04-29 14:07:19,705 - mmdet - INFO - Epoch [1][250/3712]	lr: 1.000e-03, eta: 22:28:06, time: 0.232, data_time: 0.002, memory: 1982, loss_cls: 0.4968, loss_bbox: 1.1553, loss_dir: 0.1337, loss: 1.7858, grad_norm: 8.9025
2021-04-29 14:07:31,165 - mmdet - INFO - Epoch [1][300/3712]	lr: 1.000e-03, eta: 21:52:06, time: 0.229, data_time: 0.002, memory: 1988, loss_cls: 0.4640, loss_bbox: 1.1292, loss_dir: 0.1308, loss: 1.7241, grad_norm: 8.6915
2021-04-29 14:07:42,547 - mmdet - INFO - Epoch [1][350/3712]	lr: 1.000e-03, eta: 21:25:14, time: 0.228, data_time: 0.002, memory: 1988, loss_cls: 0.4549, loss_bbox: 1.1149, loss_dir: 0.1269, loss: 1.6966, grad_norm: 8.3001
2021-04-29 14:07:53,968 - mmdet - INFO - Epoch [1][400/3712]	lr: 1.000e-03, eta: 21:05:31, time: 0.228, data_time: 0.002, memory: 1989, loss_cls: 0.4386, loss_bbox: 1.0360, loss_dir: 0.1252, loss: 1.5998, grad_norm: 8.4704
2021-04-29 14:08:05,299 - mmdet - INFO - Epoch [1][450/3712]	lr: 1.000e-03, eta: 20:49:09, time: 0.227, data_time: 0.002, memory: 1989, loss_cls: 0.4229, loss_bbox: 1.0270, loss_dir: 0.1207, loss: 1.5706, grad_norm: 7.9375
2021-04-29 14:08:16,598 - mmdet - INFO - Epoch [1][500/3712]	lr: 1.000e-03, eta: 20:35:42, time: 0.226, data_time: 0.002, memory: 1999, loss_cls: 0.3957, loss_bbox: 0.9792, loss_dir: 0.1196, loss: 1.4945, grad_norm: 7.5102
2021-04-29 14:08:27,929 - mmdet - INFO - Epoch [1][550/3712]	lr: 1.000e-03, eta: 20:24:57, time: 0.227, data_time: 0.002, memory: 1999, loss_cls: 0.3900, loss_bbox: 0.9470, loss_dir: 0.1154, loss: 1.4523, grad_norm: 7.0491
2021-04-29 14:08:39,170 - mmdet - INFO - Epoch [1][600/3712]	lr: 1.001e-03, eta: 20:15:13, time: 0.225, data_time: 0.002, memory: 1999, loss_cls: 0.3822, loss_bbox: 0.9552, loss_dir: 0.1166, loss: 1.4541, grad_norm: 7.0271
2021-04-29 14:08:50,525 - mmdet - INFO - Epoch [1][650/3712]	lr: 1.001e-03, eta: 20:07:49, time: 0.227, data_time: 0.002, memory: 1999, loss_cls: 0.3699, loss_bbox: 0.9163, loss_dir: 0.1167, loss: 1.4029, grad_norm: 7.1026
2021-04-29 14:09:01,906 - mmdet - INFO - Epoch [1][700/3712]	lr: 1.001e-03, eta: 20:01:38, time: 0.228, data_time: 0.002, memory: 1999, loss_cls: 0.3610, loss_bbox: 0.9148, loss_dir: 0.1121, loss: 1.3880, grad_norm: 7.1760
2021-04-29 14:09:13,433 - mmdet - INFO - Epoch [1][750/3712]	lr: 1.001e-03, eta: 19:57:12, time: 0.231, data_time: 0.002, memory: 1999, loss_cls: 0.3580, loss_bbox: 0.9246, loss_dir: 0.1131, loss: 1.3957, grad_norm: 7.2312
2021-04-29 14:09:24,840 - mmdet - INFO - Epoch [1][800/3712]	lr: 1.001e-03, eta: 19:52:34, time: 0.228, data_time: 0.002, memory: 1999, loss_cls: 0.3579, loss_bbox: 0.8869, loss_dir: 0.1129, loss: 1.3578, grad_norm: 6.2828
2021-04-29 14:09:36,273 - mmdet - INFO - Epoch [1][850/3712]	lr: 1.001e-03, eta: 19:48:37, time: 0.229, data_time: 0.002, memory: 1999, loss_cls: 0.3360, loss_bbox: 0.8947, loss_dir: 0.1105, loss: 1.3412, grad_norm: 7.1239
2021-04-29 14:09:47,786 - mmdet - INFO - Epoch [1][900/3712]	lr: 1.001e-03, eta: 19:45:30, time: 0.230, data_time: 0.002, memory: 1999, loss_cls: 0.3390, loss_bbox: 0.8851, loss_dir: 0.1093, loss: 1.3334, grad_norm: 6.7943
2021-04-29 14:09:59,886 - mmdet - INFO - Epoch [1][950/3712]	lr: 1.001e-03, eta: 19:45:45, time: 0.242, data_time: 0.003, memory: 1999, loss_cls: 0.3461, loss_bbox: 0.8913, loss_dir: 0.1123, loss: 1.3497, grad_norm: 6.1542
2021-04-29 14:10:11,571 - mmdet - INFO - Exp name: hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py
2021-04-29 14:10:11,571 - mmdet - INFO - Epoch [1][1000/3712]	lr: 1.002e-03, eta: 19:43:55, time: 0.234, data_time: 0.002, memory: 1999, loss_cls: 0.3369, loss_bbox: 0.8745, loss_dir: 0.1111, loss: 1.3225, grad_norm: 6.1611
2021-04-29 14:10:23,420 - mmdet - INFO - Epoch [1][1050/3712]	lr: 1.002e-03, eta: 19:43:00, time: 0.237, data_time: 0.003, memory: 1999, loss_cls: 0.3332, loss_bbox: 0.8765, loss_dir: 0.1118, loss: 1.3216, grad_norm: 6.1433
2021-04-29 14:10:34,900 - mmdet - INFO - Epoch [1][1100/3712]	lr: 1.002e-03, eta: 19:40:30, time: 0.230, data_time: 0.002, memory: 1999, loss_cls: 0.3238, loss_bbox: 0.8262, loss_dir: 0.1109, loss: 1.2610, grad_norm: 6.0072
2021-04-29 14:10:46,469 - mmdet - INFO - Epoch [1][1150/3712]	lr: 1.002e-03, eta: 19:38:35, time: 0.231, data_time: 0.002, memory: 1999, loss_cls: 0.3230, loss_bbox: 0.8378, loss_dir: 0.1102, loss: 1.2709, grad_norm: 6.3352
2021-04-29 14:10:58,090 - mmdet - INFO - Epoch [1][1200/3712]	lr: 1.002e-03, eta: 19:37:01, time: 0.232, data_time: 0.002, memory: 1999, loss_cls: 0.3116, loss_bbox: 0.8198, loss_dir: 0.1110, loss: 1.2423, grad_norm: 5.8147
2021-04-29 14:11:09,553 - mmdet - INFO - Epoch [1][1250/3712]	lr: 1.002e-03, eta: 19:34:56, time: 0.229, data_time: 0.002, memory: 1999, loss_cls: 0.3150, loss_bbox: 0.8013, loss_dir: 0.1077, loss: 1.2240, grad_norm: 5.8795
2021-04-29 14:11:21,253 - mmdet - INFO - Epoch [1][1300/3712]	lr: 1.003e-03, eta: 19:33:54, time: 0.234, data_time: 0.002, memory: 1999, loss_cls: 0.3214, loss_bbox: 0.8256, loss_dir: 0.1113, loss: 1.2583, grad_norm: 5.7060
......

4. GPU memory usage (nvidia-smi -lms):

^C(mmlab) shl@zhihui-mint:~/shl_res$ nvidia-smi
Thu Apr 29 14:18:47 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 00000000:01:00.0  On |                  N/A |
| 42%   62C    P2   106W / 198W |   4401MiB /  8116MiB |     87%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2599      G   /usr/lib/xorg/Xorg                649MiB |
|    0   N/A  N/A      4225      G   cinnamon                          270MiB |
|    0   N/A  N/A      6205      G   ...f_6596.log --shared-files       66MiB |
|    0   N/A  N/A      9651      G   ...AAAAAAAAA= --shared-files       11MiB |
|    0   N/A  N/A     17657      C   ...lib/image_view/image_view      105MiB |
|    0   N/A  N/A     18429      G   rviz                               23MiB |
|    0   N/A  N/A     20248      G   rviz                               10MiB |
|    0   N/A  N/A     27594      G   ...token=3315384324615884356       14MiB |
|    0   N/A  N/A     28907      G   obs                                41MiB |
|    0   N/A  N/A     29555      G   ...AAAAAAAAA= --shared-files        9MiB |
|    0   N/A  N/A     32591      C   python                           3191MiB |
+-----------------------------------------------------------------------------+
(mmlab) shl@zhihui-mint:~/shl_res$ 

5.3 Testing

5.3.1 Running the test

1. First check what options the test script accepts

2. Test command:

python tools/test.py configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py checkpoints/epoch_80.pth --eval mAP --options 'show=True' 'out_dir=./data/pointpillars/show_results'

You can also download the pretrained PointPillars model here.

Note:

When visualizing with show=True, first make sure the open3d package is installed: pip install open3d

(mmlab) shl@zhihui-mint:~/shl_res/mmlab/mmdetection3d$ python tools/test.py configs/pointpillars/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class.py my_checkpoints/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class_20200620_230421-aa0f3adb.pth --eval mAP --options 'show=True' 'out_dir=./data/pointpillars/show_results'
tools/test.py:96: UserWarning: --options is deprecated in favor of --eval-options
  warnings.warn('--options is deprecated in favor of --eval-options')
Use load_from_local loader
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3769/3769, 24.3 task/s, elapsed: 155s, ETA:     0s
Converting prediction to KITTI format
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 3769/3769, 916.4 task/s, elapsed: 4s, ETA:     0s
Result is saved to /tmp/tmpg_92aqys/results.pkl.

Pedestrian AP@0.50, 0.50, 0.50:
bbox AP:62.2413, 58.9157, 55.3660
bev  AP:59.0778, 53.3638, 48.4230
3d   AP:52.0263, 46.4037, 42.4841
aos  AP:46.00, 43.22, 40.94
Pedestrian AP@0.50, 0.25, 0.25:
bbox AP:62.2413, 58.9157, 55.3660
bev  AP:69.3504, 65.8573, 63.4631
3d   AP:69.2037, 65.4513, 62.8754
aos  AP:46.00, 43.22, 40.94
Cyclist AP@0.50, 0.50, 0.50:
bbox AP:82.6460, 72.3547, 68.4669
bev  AP:80.9328, 63.3447, 60.0618
3d   AP:78.7231, 59.9526, 57.2489
aos  AP:80.85, 67.20, 63.63
Cyclist AP@0.50, 0.25, 0.25:
bbox AP:82.6460, 72.3547, 68.4669
bev  AP:83.0013, 69.9254, 66.6552
3d   AP:83.0013, 69.9254, 66.6552
aos  AP:80.85, 67.20, 63.63
Car AP@0.70, 0.70, 0.70:
bbox AP:90.5939, 89.1638, 87.2511
bev  AP:89.9348, 86.5743, 85.1967
3d   AP:85.4118, 73.9780, 67.7630
aos  AP:90.37, 88.27, 86.07
Car AP@0.70, 0.50, 0.50:
bbox AP:90.5939, 89.1638, 87.2511
bev  AP:97.5800, 89.9874, 89.5006
3d   AP:90.7554, 89.8306, 89.2271
aos  AP:90.37, 88.27, 86.07

Overall AP@easy, moderate, hard:
bbox AP:78.4938, 73.4781, 70.3613
bev  AP:76.6485, 67.7609, 64.5605
3d   AP:72.0537, 60.1114, 55.8320
aos  AP:72.41, 66.23, 63.55

3. After the test completes, you will see the results visualized with open3d:

[screenshot]

Close the current visualization window (click the close button, or press Q or Esc), and the visualization of the next test sample appears:

[screenshot]

5.3.2 Using the open3d visualizer

In the visualization window, press h / H and open3d prints its controls to the terminal:

[Open3D INFO]   -- Mouse view control --
[Open3D INFO]     Left button + drag         : Rotate.
[Open3D INFO]     Ctrl + left button + drag  : Translate.
[Open3D INFO]     Wheel button + drag        : Translate.
[Open3D INFO]     Shift + left button + drag : Roll.
[Open3D INFO]     Wheel                      : Zoom in/out.
[Open3D INFO] 
[Open3D INFO]   -- Keyboard view control --
[Open3D INFO]     [/]          : Increase/decrease field of view.
[Open3D INFO]     R            : Reset view point.
[Open3D INFO]     Ctrl/Cmd + C : Copy current view status into the clipboard.
[Open3D INFO]     Ctrl/Cmd + V : Paste view status from clipboard.
[Open3D INFO] 
[Open3D INFO]   -- General control --
[Open3D INFO]     Q, Esc       : Exit window.
[Open3D INFO]     H            : Print help message.
[Open3D INFO]     P, PrtScn    : Take a screen capture.
[Open3D INFO]     D            : Take a depth capture.
[Open3D INFO]     O            : Take a capture of current rendering settings.
[Open3D INFO]     Alt + Enter  : Toggle between full screen and windowed mode.
[Open3D INFO] 
[Open3D INFO]   -- Render mode control --
[Open3D INFO]     L            : Turn on/off lighting.
[Open3D INFO]     +/-          : Increase/decrease point size.
[Open3D INFO]     Ctrl + +/-   : Increase/decrease width of geometry::LineSet.
[Open3D INFO]     N            : Turn on/off point cloud normal rendering.
[Open3D INFO]     S            : Toggle between mesh flat shading and smooth shading.
[Open3D INFO]     W            : Turn on/off mesh wireframe.
[Open3D INFO]     B            : Turn on/off back face rendering.
[Open3D INFO]     I            : Turn on/off image zoom in interpolation.
[Open3D INFO]     T            : Toggle among image render:
[Open3D INFO]                    no stretch / keep ratio / freely stretch.
[Open3D INFO] 
[Open3D INFO]   -- Color control --
[Open3D INFO]     0..4,9       : Set point cloud color option.
[Open3D INFO]                    0 - Default behavior, render point color.
[Open3D INFO]                    1 - Render point color.
[Open3D INFO]                    2 - x coordinate as color.
[Open3D INFO]                    3 - y coordinate as color.
[Open3D INFO]                    4 - z coordinate as color.
[Open3D INFO]                    9 - normal as color.
[Open3D INFO]     Ctrl + 0..4,9: Set mesh color option.
[Open3D INFO]                    0 - Default behavior, render uniform gray color.
[Open3D INFO]                    1 - Render point color.
[Open3D INFO]                    2 - x coordinate as color.
[Open3D INFO]                    3 - y coordinate as color.
[Open3D INFO]                    4 - z coordinate as color.
[Open3D INFO]                    9 - normal as color.
[Open3D INFO]     Shift + 0..4 : Color map options.
[Open3D INFO]                    0 - Gray scale color.
[Open3D INFO]                    1 - JET color map.
[Open3D INFO]                    2 - SUMMER color map.
[Open3D INFO]                    3 - WINTER color map.
[Open3D INFO]                    4 - HOT color map.

1. Mouse view control

  • left button + drag: rotate the 3D view
  • Ctrl + left button + drag: translate the 3D view
  • wheel button + drag: also translates the 3D view
  • Shift + left button + drag: roll the 3D view
  • wheel: zoom the 3D view in and out

2. Keyboard view control

  • r / R: reset the view point
  • Ctrl+C: copy the current view status to the clipboard
  • Ctrl+V: paste a view status from the clipboard

Below is a view status copied from the visualization window:

{
	"class_name" : "ViewTrajectory",
	"interval" : 29,
	"is_loop" : false,
	"trajectory" : 
	[
		{
			"boundingbox_max" : [ 15.840000152587891, 77.004997253417969, 2.0550000667572021 ],
			"boundingbox_min" : [ -32.341999053955078, -0.059999999999999998, -2.1480000019073486 ],
			"field_of_view" : 60.0,
			"front" : [ -0.29917197347768476, -0.57001121176759129, 0.76523417902280733 ],
			"lookat" : [ 6.7522963660420237, 29.49152986284853, -10.538585241474523 ],
			"up" : [ 0.40106739384242224, 0.65256627539464518, 0.64288583886566331 ],
			"zoom" : 0.45999999999999974
		}
	],
	"version_major" : 1,
	"version_minor" : 0
}

3. General control


  • Q or Esc: exit the window
  • H: print the help message
  • P or PrtScn: take a screen capture
  • D: take a depth capture
  • O: capture the current rendering settings
  • Alt+Enter: toggle between full-screen and windowed mode

The captured depth image is saved in the current directory, i.e. the mmdetection3d directory. Below is a depth capture I took:

[screenshot]

4. Point cloud color control

  • 0: default point cloud coloring; points are rendered gray
  • 1: render point color, same effect as 0
  • 2: color by the x coordinate
  • 3: color by the y coordinate
  • 4: color by the z coordinate
  • 9: normal as color, also gray here

1) Rendering with 0, 1, or 9

[screenshot]

2) Press 2 to color by the x coordinate

[screenshot]

3) Press 3 to color by the y coordinate

[screenshot]

4) Press 4 to color by the z coordinate

[screenshot]

5. Mesh color options

Press the shortcuts:

  • Ctrl + 0
  • Ctrl + 1
  • Ctrl + 2
  • Ctrl + 3
  • Ctrl + 4
  • Ctrl + 9

In my test these only change the color of the coordinate axes in the lower-right corner, probably because there is no mesh in this visualization:

[screenshot]

6. Color map options

Press the shortcuts:

  • Shift + 0
  • Shift + 1
  • Shift + 2
  • Shift + 3
  • Shift + 4
  • Shift + 9

Again, in my test these only change the color of the coordinate axes in the lower-right corner.

5.4 Plotting the classification and regression loss curves, and computing training time

5.4.1 Plotting the classification and regression loss curves

1. Loss curves are plotted from the training log files with python tools/analysis_tools/analyze_logs.py plot_curve. First, let's see what options analyze_logs.py defines:

(mmlab) shl@zhihui-mint:~/shl_res/mmlab/mmdetection3d$ python tools/analysis_tools/analyze_logs.py plot_curve -h
usage: analyze_logs.py plot_curve [-h] [--keys KEYS [KEYS ...]]
                                  [--title TITLE]
                                  [--legend LEGEND [LEGEND ...]]
                                  [--backend BACKEND] [--style STYLE]
                                  [--out OUT] [--mode MODE]
                                  [--interval INTERVAL]
                                  json_logs [json_logs ...]

positional arguments:
  json_logs             path of train log in json format

optional arguments:
  -h, --help            show this help message and exit
  --keys KEYS [KEYS ...]
                        the metric that you want to plot
  --title TITLE         title of figure
  --legend LEGEND [LEGEND ...]
                        legend of each plot
  --backend BACKEND     backend of plt
  --style STYLE         style of plt
  --out OUT
  --mode MODE
  --interval INTERVAL
(mmlab) shl@zhihui-mint:~/shl_res/mmlab/mmdetection3d$ 

2. Command to plot the loss curves:

python tools/analysis_tools/analyze_logs.py plot_curve work_dirs/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/20210429_140605.log.json --keys loss_cls loss_bbox --out losses.pdf


(mmlab) shl@zhihui-mint:~/shl_res/mmlab/mmdetection3d$ python tools/analysis_tools/analyze_logs.py plot_curve work_dirs/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/20210429_140605.log.json --keys loss_cls loss_bbox --out losses.pdf
plot curve of work_dirs/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/20210429_140605.log.json, metric is loss_cls
plot curve of work_dirs/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/20210429_140605.log.json, metric is loss_bbox
save curve to: losses.pdf
(mmlab) shl@zhihui-mint:~/shl_res/mmlab/mmdetection3d$

3. When the command finishes, a losses.pdf file is created in the mmdetection3d directory; the curves look like this:

[screenshot]

5.4.2 Computing training time

python tools/analysis_tools/analyze_logs.py cal_train_time work_dirs/hv_pointpillars_secfpn_6x8_160e_kitti-3d-3class/20210429_140605.log.json

5.5 Multi-GPU training

Since I only have one GPU, I have not tested multi-GPU training, but it follows the same conventions as mmdetection (distributed training is launched via tools/dist_train.sh); see the official documentation for details.

Reference: https://www.cnblogs.com/notesbyY/p/13475806.html
Reference: https://zhuanlan.zhihu.com/p/165647329
Reference: https://blog.csdn.net/weixin_38362784/article/details/111397440 # using the nuScenes dataset with mmdetection3d
Reference: https://blog.csdn.net/sinat_41667032/article/details/110400334
Reference: https://blog.csdn.net/qq_39732684/article/details/105762909 # visualization reference
Reference: https://blog.csdn.net/weixin_38362784/category_10543699.html # many posts on 3D object detection, worth reading
Reference: https://blog.csdn.net/qq_37534947/article/details/106628308 # detailed analysis of the KITTI data fields


If you found this helpful, leave a like and a comment before you go (๑◕ܫ←๑)

Copyright notice: this is an original article by CSDN blogger 「点亮~黑夜」, licensed under CC 4.0 BY-SA; please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/weixin_41010198/article/details/116133545
