A Hands-On Guide to Getting Started with Human Pose Estimation (MMPose)

I've recently been looking into how to quickly estimate human poses in images, i.e., the common pose estimation task. I spent some time comparing AlphaPose, BlazePose, and MMPose in practice: BlazePose is designed mainly for mobile devices, AlphaPose is fairly troublesome to install and configure, while MMPose is easier to use and supports a large number of models. So I ultimately chose MMPose as the pose estimation library. Everything below was tested on Ubuntu 20.04.

MMPose source code:

https://github.com/open-mmlab/mmpose

MMPose official documentation:

https://mmpose.readthedocs.io/en/latest

1. Installing MMPose

(1) Set up the base environment:

conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab
conda install pytorch torchvision -c pytorch
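
Before moving on, it helps to confirm that PyTorch can see the GPU and to note the exact PyTorch/CUDA versions, since the mmcv-full wheel installed in the next step has to match them. A minimal sanity check (the commented values are just examples):

import torch

print(torch.__version__)          # e.g. 1.10.0
print(torch.version.cuda)         # e.g. 11.3, which corresponds to the cu113 wheel index
print(torch.cuda.is_available())  # should print True if the GPU and driver are set up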

(2) Install mmcv-full, choosing the build that matches your CUDA and PyTorch versions; in my case that is cu113 and torch 1.10.0:

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10.0/index.html
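
If you are not sure the correct mmcv-full build was picked up, a quick way to verify it (including its compiled CUDA ops) is to import it and print the versions it was built against. This is just a sanity-check sketch:

import mmcv
from mmcv.ops import get_compiling_cuda_version, get_compiler_version

print(mmcv.__version__)              # installed mmcv-full version
print(get_compiling_cuda_version())  # CUDA version the ops were compiled with
print(get_compiler_version())        # compiler used to build the ops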

(3) Install MMPose:

git clone git@github.com:open-mmlab/mmpose.git  # if the clone fails (e.g. no SSH key configured), use the HTTPS URL https://github.com/open-mmlab/mmpose.git instead
cd mmpose
pip install -r requirements.txt
pip install -v -e .
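
To confirm the editable install succeeded, you can simply import mmpose and print its version:

import mmpose

print(mmpose.__version__)  # prints the installed MMPose version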

2. Installing MMDetection

Since MMPose's top-down approach relies on person bounding boxes being provided, we can first detect the person boxes in an image with MMDetection and then pass them to MMPose to produce the pose estimation results. MMDetection can be installed directly with pip, which is very convenient.

pip install mmdet
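
To make this detect-then-pose flow concrete before the full folder-processing demo below, here is a condensed single-image sketch using the same MMDetection/MMPose APIs; the config and checkpoint paths are placeholders, to be replaced with the ones used in the script further down:

from mmdet.apis import inference_detector, init_detector
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
                         process_mmdet_results, vis_pose_result)

# placeholder configs/checkpoints -- substitute your own
det_model = init_detector('det_config.py', 'det_checkpoint.pth', device='cuda:0')
pose_model = init_pose_model('pose_config.py', 'pose_checkpoint.pth', device='cuda:0')

img = 'demo.jpg'

# 1) detect bounding boxes, then keep only the person class (category id 1)
mmdet_results = inference_detector(det_model, img)
person_results = process_mmdet_results(mmdet_results, 1)

# 2) run top-down pose estimation inside the detected person boxes
pose_results, _ = inference_top_down_pose_model(
    pose_model, img, person_results, bbox_thr=0.3, format='xyxy')

# 3) draw the keypoints and save the visualization
vis_pose_result(pose_model, img, pose_results, out_file='vis_demo.jpg')

In practice you would also pass dataset and dataset_info from the pose model's config, exactly as the full demo below does; otherwise MMPose falls back to its default dataset settings and prints a deprecation warning.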

3. Testing

Once the installation above is done, you can test pose estimation on your own images. Here I slightly modify the official demo so that it processes every image in a given folder and saves the results to another folder.

The modified demo/pose_demo_mmdet.py is as follows:

# Copyright (c) OpenMMLab. All rights reserved.
import os
import warnings
from argparse import ArgumentParser

from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
                         process_mmdet_results, vis_pose_result)
from mmpose.datasets import DatasetInfo

try:
    from mmdet.apis import inference_detector, init_detector
    has_mmdet = True
except (ImportError, ModuleNotFoundError):
    has_mmdet = False


def main():
    """Visualize the demo images.

    Using mmdet to detect the human.
    """
    parser = ArgumentParser()
    parser.add_argument('det_config', help='Config file for detection')
    parser.add_argument('det_checkpoint', help='Checkpoint file for detection')
    parser.add_argument('pose_config', help='Config file for pose')
    parser.add_argument('pose_checkpoint', help='Checkpoint file for pose')
    parser.add_argument('--img-root', type=str, default='', help='Image root')
    parser.add_argument(
        '--show',
        action='store_true',
        default=False,
        help='whether to show img')
    parser.add_argument(
        '--out-img-root',
        type=str,
        default='',
        help='root of the output img file. '
        'Default not saving the visualization images.')
    parser.add_argument(
        '--device', default='cuda:0', help='Device used for inference')
    parser.add_argument(
        '--det-cat-id',
        type=int,
        default=1,
        help='Category id for bounding box detection model')
    parser.add_argument(
        '--bbox-thr',
        type=float,
        default=0.3,
        help='Bounding box score threshold')
    parser.add_argument(
        '--kpt-thr', type=float, default=0.3, help='Keypoint score threshold')
    parser.add_argument(
        '--radius',
        type=int,
        default=4,
        help='Keypoint radius for visualization')
    parser.add_argument(
        '--thickness',
        type=int,
        default=1,
        help='Link thickness for visualization')

    assert has_mmdet, 'Please install mmdet to run the demo.'

    args = parser.parse_args()

    assert args.show or (args.out_img_root != '')
    assert args.det_config is not None
    assert args.det_checkpoint is not None

    det_model = init_detector(
        args.det_config, args.det_checkpoint, device=args.device.lower())
    # build the pose model from a config file and a checkpoint file
    pose_model = init_pose_model(
        args.pose_config, args.pose_checkpoint, device=args.device.lower())

    dataset = pose_model.cfg.data['test']['type']
    dataset_info = pose_model.cfg.data['test'].get('dataset_info', None)
    if dataset_info is None:
        warnings.warn(
            'Please set `dataset_info` in the config.'
            'Check https://github.com/open-mmlab/mmpose/pull/663 for details.',
            DeprecationWarning)
    else:
        dataset_info = DatasetInfo(dataset_info)

    image_list = os.listdir(args.img_root)
    for img_name in image_list:
        image_name = os.path.join(args.img_root, img_name)

        # test a single image, the resulting box is (x1, y1, x2, y2)
        mmdet_results = inference_detector(det_model, image_name)

        # keep the person class bounding boxes.
        person_results = process_mmdet_results(mmdet_results, args.det_cat_id)

        # test a single image, with a list of bboxes.

        # optional
        return_heatmap = False

        # e.g. use ('backbone', ) to return backbone feature
        output_layer_names = None

        pose_results, returned_outputs = inference_top_down_pose_model(
            pose_model,
            image_name,
            person_results,
            bbox_thr=args.bbox_thr,
            format='xyxy',
            dataset=dataset,
            dataset_info=dataset_info,
            return_heatmap=return_heatmap,
            outputs=output_layer_names)

        if args.out_img_root == '':
            out_file = None
        else:
            os.makedirs(args.out_img_root, exist_ok=True)
            out_file = os.path.join(args.out_img_root, f'vis_{img_name}')

        # show the results
        vis_pose_result(
            pose_model,
            image_name,
            pose_results,
            dataset=dataset,
            dataset_info=dataset_info,
            kpt_score_thr=args.kpt_thr,
            radius=args.radius,
            thickness=args.thickness,
            show=args.show,
            out_file=out_file)


if __name__ == '__main__':
    main()
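
One caveat with the folder loop above: os.listdir returns every entry in --img-root, so a stray non-image file or subdirectory would make inference_detector fail. A simple tweak (my own suggestion, not part of the original demo) is to filter by extension before looping:

# keep only files with common image extensions
exts = ('.jpg', '.jpeg', '.png', '.bmp')
image_list = [f for f in os.listdir(args.img_root) if f.lower().endswith(exts)]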

Since the demo command takes quite a few long arguments, I put it in a new script.sh file:

python demo/pose_demo_mmdet.py \
    demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py \
    https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    configs/wholebody/2d_kpt_sview_rgb_img/topdown_heatmap/coco-wholebody/hrnet_w48_coco_wholebody_384x288_dark_plus.py \
    https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_wholebody_384x288_dark-f5726563_20200918.pth \
    --img-root /img_path/images/ \
    --out-img-root /out_path/vis_results_whole

Set --img-root and --out-img-root to the directory containing your input images and the directory where the visualized results should be saved, respectively.

Run the test with bash:

bash script.sh

4. Results

Finally, here are a few pose estimation results on test images; the quality is quite good.
