MMCV installation and usage steps

Environment configuration steps

Versions on my machine:

Python 3.7.6 (Anaconda virtual environment)
PyTorch 1.8.0
CUDA 11.1
VS 2019 (the compiler needs to be added to the PATH environment variable)
MMCV 1.4.0
mmdetection 2.19.0
mmsegmentation 0.19.0

Steps for setting up the CUDA environment: https://blog.csdn.net/qq_46107892/article/details/121469597?spm=1001.2014.3001.5501


Download PyTorch 1.8.0

pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
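
After the install finishes, a quick check like the following (a minimal sketch) confirms that the CUDA build of PyTorch was picked up:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"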

Run cl to check the VS compiler version

cl


Run ls env: in PowerShell to check the CUDA version from the environment variables

ls env:
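
Alternatively, if the CUDA toolkit's bin directory is already on the PATH, nvcc reports the installed toolkit version directly:

nvcc -V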


Download and install MMCV

git clone https://github.com/open-mmlab/mmcv.git
cd mmcv
$env:CUDA_HOME = "E:\USEAPP\CUDA111\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1"
$env:CUDA_HOME = $env:CUDA_PATH_V11_1   # or reuse CUDA_PATH_V11_1 if it is already set
$env:TORCH_CUDA_ARCH_LIST = "8.6"       # GPU compute capability (see the check below)
$env:MMCV_WITH_OPS = 1
$env:MAX_JOBS = 8
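
If you are unsure which value to put in TORCH_CUDA_ARCH_LIST, PyTorch can report your GPU's compute capability (a small check; 8.6 corresponds to RTX 30-series cards):

python -c "import torch; print(torch.cuda.get_device_capability(0))"   # e.g. (8, 6) -> use "8.6"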

Compile the extensions

python setup.py build_ext


Install (development mode)

python setup.py develop
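
To confirm that the CUDA ops were actually compiled into this mmcv build, a check along these lines is commonly used; it prints the CUDA and compiler versions the ops were built with:

python -c "from mmcv.ops import get_compiling_cuda_version, get_compiler_version; print(get_compiling_cuda_version()); print(get_compiler_version())"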


Download and install mmdetection

git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -r requirements/docs.txt
pip install -v -e .
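
A quick import check confirms the editable install worked (it should print 2.19.0 with the versions above):

python -c "import mmdet; print(mmdet.__version__)"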

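The demo below expects the Faster R-CNN weights under checkpoints/. One way to fetch them is PyTorch's download helper (a sketch; you can just as well download the file from the URL in the code comment with a browser):

mkdir checkpoints
python -c "import torch; torch.hub.download_url_to_file('http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth', 'checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth')"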

from mmdet.apis import init_detector, inference_detector, show_result_pyplot

config_file = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
# download the checkpoint from model zoo and put it in `checkpoints/`
# url: http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth
checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
device = 'cuda:0'
# init a detector
model = init_detector(config_file, checkpoint_file, device=device)
# inference the demo image
result = inference_detector(model, 'demo/demo.jpg')
# show the results
show_result_pyplot(model, 'demo/demo.jpg', result)
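
If you are working without a display, the detector can also write the visualization to disk instead of opening a window (assuming the standard show_result interface of mmdetection 2.x; the output file name here is arbitrary):

# save the visualization instead of showing it interactively
model.show_result('demo/demo.jpg', result, out_file='demo_result.jpg')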


Download and install mmsegmentation

git clone https://github.com/open-mmlab/mmsegmentation.git
cd mmsegmentation
pip install -e .
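
As with mmdetection, a quick import verifies the install (it should print 0.19.0 with the versions above):

python -c "import mmseg; print(mmseg.__version__)"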

Code test:

Model weights URL: https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth
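
As before, put the weights under checkpoints/ so the snippet below can find them; one possible way to download them (a sketch, same as for mmdetection):

mkdir checkpoints
python -c "import torch; torch.hub.download_url_to_file('https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth', 'checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth')"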

from mmseg.apis import inference_segmentor, init_segmentor

import mmcv

config_file = 'configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py'
#https://download.openmmlab.com/mmsegmentation/v0.5/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth
checkpoint_file = 'checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth'

# build the model from a config file and a checkpoint file
model = init_segmentor(config_file, checkpoint_file, device='cuda:0')

# test a single image and show the results
img = 'demo/demo.png'  # or img = mmcv.imread(img), which will only load it once

result = inference_segmentor(model, img)

# visualize the results in a new window
model.show_result(img, result, show=True)

# or save the visualization results to image files
# you can change the opacity of the painted segmentation map in (0, 1].
model.show_result(img, result, out_file='result.jpg', opacity=0.5)
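
The same model object can be reused for a whole folder of images; a minimal sketch (the images/ folder and output names here are hypothetical):

import glob

# run the segmentor over every .jpg in a hypothetical images/ folder and save the overlays
for path in glob.glob('images/*.jpg'):
    result = inference_segmentor(model, path)
    model.show_result(path, result, out_file=path.replace('.jpg', '_seg.jpg'), opacity=0.5)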

