Step 1: Download the source code
Official darknet repository: https://github.com/AlexeyAB/darknet
I have also mirrored it on Gitee: lishan/darknet
The project README explains how to install darknet on Windows:
————————————————————————————————————————
Requirements for Windows, Linux and macOS
- CMake >= 3.18: Download | CMake
- PowerShell (already installed on Windows): Install PowerShell on Windows, Linux, and macOS - PowerShell | Microsoft Docs
- CUDA >= 10.2: CUDA Toolkit Archive | NVIDIA Developer (on Linux do Post-installation Actions)
- OpenCV >= 2.4: use your preferred package manager (brew, apt), build from source using vcpkg or download from the OpenCV official site (on Windows set the system variable OpenCV_DIR=C:\opencv\build - where the include and x64 folders are)
- cuDNN >= 8.0.2: cuDNN Archive | NVIDIA Developer (on Linux and on Windows follow the steps described in Installation Guide :: NVIDIA Deep Learning cuDNN Documentation)
- GPU with CC >= 3.0: https://en.wikipedia.org/wiki/CUDA
————————————————————————————————————————
Step 2: Download the required tools
Download CMake, CUDA, OpenCV, cuDNN, and Visual Studio 2015/2017/2019.
Prerequisite for using the GPU: make sure your graphics card's compute capability is at least 3.0; otherwise you can only run detection on the CPU, and there is no need to download CUDA and cuDNN.
Compute capability lookup: CUDA GPUs | NVIDIA Developer
The RTX 3060 is not listed on that page, but its compute capability is 8.6.
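If your card is missing from that page, recent NVIDIA drivers can report the compute capability directly. A minimal check, assuming nvidia-smi is on PATH and the driver is new enough (roughly R470 or later) to support the compute_cap query field:
# Query name and compute capability of every installed GPU
nvidia-smi --query-gpu=name,compute_cap --format=csv
An RTX 3060 should print 8.6 here.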
Step 3: Install all dependencies and compile darknet
Install CUDA and cuDNN.
Pay attention to the version correspondence between CUDA and cuDNN.
For installing CUDA and cuDNN you can refer to my other article: 笔记本RTX3060+Win10_x86_64位搭建Pytorch深度学习本地环境 (CSDN blog by 乐观的lishan)
————————————————————————
cuda 10.2, 11.0, 11.1, 11.2 <——> cudnn 8.1.0
cuda 10.2, 11.0, 11.1, 11.2 <——> cudnn 8.1.1
cuda 10.2, 11.x <——> cudnn 8.2.0
cuda 10.2, 11.x <——> cudnn 8.2.1
cuda 10.2, 11.4 <——> cudnn 8.2.2
cuda 10.2, 11.4 <——> cudnn 8.2.4
————————————————————————
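Before compiling, it is worth confirming which versions are actually installed. A quick check in PowerShell, assuming the CUDA installer put nvcc on PATH and set the CUDA_PATH variable, and that cuDNN was unpacked into the CUDA toolkit folder (cuDNN 8+ keeps its version macros in cudnn_version.h):
# Print the installed CUDA toolkit version
nvcc --version
# Print the cuDNN version macros
Get-Content "$env:CUDA_PATH\include\cudnn_version.h" | Select-String "CUDNN_MAJOR|CUDNN_MINOR|CUDNN_PATCHLEVEL"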
Install CMake. Download link:
- CMake GUI: Windows win64-x64 Installer
https://cmake.org/download/
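After installing, confirm that the version meets the >= 3.18 requirement:
# Must report 3.18 or newer
cmake --version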
Install OpenCV. Assume the installation path is C:\opencv.
Create a system environment variable named OpenCV_DIR with the value:
C:\opencv\build\x64\vc14\lib
(vc14 corresponds to the VS2015 toolset; the prebuilt OpenCV package also ships a vc15 folder for VS2017.)
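Instead of clicking through the environment variable GUI, the variable can also be set from an elevated PowerShell or cmd prompt; a sketch assuming the same C:\opencv install path (adjust the vc folder to your toolset):
# /M writes to the system environment rather than the user one; requires admin
setx OpenCV_DIR "C:\opencv\build\x64\vc14\lib" /M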
Compile darknet:
Open cmake-gui and configure the build:
Where is the source code -> the darknet source folder
Where to build the binaries -> the output path for the generated build files (this determines where darknet.exe ends up)
Click Configure
Optional platform for generator: x64
Click Finish
If the OpenCV environment variable is missing or set incorrectly, Configure fails with an OpenCV-not-found error; point CMake at the correct OpenCV path to fix it.
Other errors, such as CUDA-related ones, are handled the same way: change the corresponding path variable directly in CMake.
Then run Configure again.
Click Generate.
Finally, click the Open Project button next to Generate and open the project with VS 2015/2017/2019.
Select Release and x64, then click Build -> Build Solution.
The build should complete successfully.
darknet.exe can then be found under darknet\build\darknet\x64\Release.
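The same Configure/Generate/build sequence can also be scripted. A minimal sketch run from the darknet source folder, assuming VS 2019 and that CMake picks up OpenCV and CUDA from the environment variables set above (the output folder layout may differ slightly from the cmake-gui walkthrough):
# Generate a VS 2019 x64 solution into .\build, then build the Release configuration
cmake -S . -B build -G "Visual Studio 16 2019" -A x64
cmake --build build --config Release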
Copy all files from darknet\build\darknet\x64\Release into the darknet\build\darknet\x64 folder.
Download the yolov4.weights file (245 MB; a Google Drive mirror is also linked from the README) into the darknet\build\darknet\x64 folder.
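Both steps as PowerShell commands, run from the darknet source root; the release URL below is the one the AlexeyAB repository publishes the weights under, so treat it as an assumption and verify it against the README:
# Put the built binaries next to the cfg/ and data/ folders
Copy-Item .\build\darknet\x64\Release\* .\build\darknet\x64\ -Force
# Download the pretrained COCO weights (about 245 MB)
Invoke-WebRequest -Uri "https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights" -OutFile .\build\darknet\x64\yolov4.weights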
Step 4: Test
Open cmd and change into the darknet\build\darknet\x64 folder.
Test an image whose path you type in:
.\darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -thresh 0.25
You will be prompted for an image path: Enter Image Path:
For example, enter: data/dog.jpg
Test a specific image in the data folder.
Adding the -ext_output flag prints the bounding-box coordinates to the console:
.\darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output dog.jpg
Test a video xxx.mp4 in the data folder (you need to supply your own test.mp4 file):
.\darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output test.mp4
Test the local webcam:
.\darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -c 0
Test a network video stream:
.\darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights http://192.168.0.80:8080/video?dummy=param.mjpg
The other test commands are as follows:
- Yolo v4 COCO - image: ./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -thresh 0.25
- Output coordinates of objects: ./darknet detector test cfg/coco.data yolov4.cfg yolov4.weights -ext_output dog.jpg
- Yolo v4 COCO - video: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output test.mp4
- Yolo v4 COCO - WebCam 0: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -c 0
- Yolo v4 COCO for net-videocam - Smart WebCam: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights http://192.168.0.80:8080/video?dummy=param.mjpg
- Yolo v4 - save result videofile res.avi: ./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -out_filename res.avi
- Yolo v3 Tiny COCO - video: ./darknet detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights test.mp4
- JSON and MJPEG server that allows multiple connections from your software or web browser on ip-address:8070 and 8090: ./darknet detector demo ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights test50.mp4 -json_port 8070 -mjpeg_port 8090 -ext_output
- Yolo v3 Tiny on GPU #1: ./darknet detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights -i 1 test.mp4
- Alternative method Yolo v3 COCO - image: ./darknet detect cfg/yolov4.cfg yolov4.weights -i 0 -thresh 0.25
- Train on Amazon EC2, to see mAP & loss chart using a URL like http://ec2-35-160-228-91.us-west-2.compute.amazonaws.com:8090 in Chrome/Firefox (darknet should be compiled with OpenCV): ./darknet detector train cfg/coco.data yolov4.cfg yolov4.conv.137 -dont_show -mjpeg_port 8090 -map
- 186 MB Yolo9000 - image: ./darknet detector test cfg/combine9k.data cfg/yolo9000.cfg yolo9000.weights
- Remember to put data/9k.tree and data/coco9k.map in the same folder as your app if you use the C++ API to build an app
- To process a list of images data/train.txt and save detection results to result.json, use: ./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output -dont_show -out result.json < data/train.txt (see the sketch after this list for building the list file and reading the JSON)
- To process a list of images data/train.txt and save detection results to result.txt, use: ./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -dont_show -ext_output < data/train.txt > result.txt
- Pseudo-labelling - to process a list of images data/new_train.txt and save detection results in Yolo training format for each image as <image_name>.txt (in this way you can increase the amount of training data), use: ./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights -thresh 0.25 -dont_show -save_labels < data/new_train.txt
- To calculate anchors: ./darknet detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416
- To check accuracy mAP@IoU=50: ./darknet detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
- To check accuracy mAP@IoU=75: ./darknet detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights -iou_thresh 0.75
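The list-processing commands above expect a plain text file with one image path per line. A sketch of building that list and skimming the JSON output in PowerShell (the result.json field names below follow the layout darknet emits, but verify them against your own output):
# Build data/train.txt with one image path per line
Get-ChildItem data\*.jpg | ForEach-Object { $_.FullName } | Set-Content data\train.txt
# After running detection with "-out result.json", count detections per image
$results = Get-Content result.json -Raw | ConvertFrom-Json
$results | ForEach-Object { "$($_.filename): $($_.objects.Count) objects" }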
Copyright notice: this is an original article by CSDN blogger 乐观的lishan, released under the CC 4.0 BY-SA license; please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/lishan132/article/details/121398293