Visual Servoing Platform (ViSP), Part 4: Blob Detection and Tracking

With ViSP you can track a blob using the vpDot or vpDot2 classes.

#include <iostream>

#include <visp/vp1394CMUGrabber.h>
#include <visp/vp1394TwoGrabber.h>
#include <visp/vpDisplayGDI.h>
#include <visp/vpDisplayX.h>
#include <visp/vpDot2.h>
int main()
{
#if (defined(VISP_HAVE_DC1394_2) || defined(VISP_HAVE_CMU1394))
  try {
    vpImage<unsigned char> I; // Create a gray level image container
#if defined(VISP_HAVE_DC1394_2)
    vp1394TwoGrabber g(false);
#elif defined(VISP_HAVE_CMU1394)
    vp1394CMUGrabber g;
#endif
    g.open(I);
    g.acquire(I);
#if defined(VISP_HAVE_X11)
    vpDisplayX d(I, 0, 0, "Camera view");
#elif defined(VISP_HAVE_GDI)
    vpDisplayGDI d(I, 0, 0, "Camera view");
#else
    std::cout << "No image viewer is available..." << std::endl;
#endif
    vpDisplay::display(I);
    vpDisplay::flush(I);
    vpDot2 blob;
    blob.setGraphics(true);
    blob.setGraphicsThickness(2);
    blob.initTracking(I);
    while(1) {
      g.acquire(I); // Acquire an image
      vpDisplay::display(I);
      blob.track(I);
      vpDisplay::flush(I);
      if (vpDisplay::getClick(I, false))
        break;
    }
  }
  catch (const vpException &e) {
    std::cout << "Catch an exception: " << e << std::endl;
  }
#endif
}

First, create an instance of the blob tracker.

vpDot2 blob;

Then we modify some default settings to enable a graphical overlay of the blob contour pixels and the center of gravity, drawn with a thickness of 2 pixels.

blob.setGraphics(true);
blob.setGraphicsThickness(2);

Then we wait for the user to initialize the tracking with a mouse click inside the blob.

blob.initTracking(I);

The tracker is now initialized. Tracking can then be performed on each new image:

blob.track(I);

Automatic blob detection and tracking

The next example shows how to detect blobs in a first image and then track all the detected blobs. This capability is only available with the vpDot2 class.

#include <iostream>
#include <list>

#include <visp/vpDisplayGDI.h>
#include <visp/vpDisplayX.h>
#include <visp/vpDot2.h>
#include <visp/vpImageIo.h>
#include <visp/vpTime.h>
int main()
{
  try {
    bool learn = false;
    vpImage<unsigned char> I; // Create a gray level image container
    vpImageIo::read(I, "./target.pgm");
#if defined(VISP_HAVE_X11)
    vpDisplayX d(I, 0, 0, "Camera view");
#elif defined(VISP_HAVE_GDI)
    vpDisplayGDI d(I, 0, 0, "Camera view");
#else
    std::cout << "No image viewer is available..." << std::endl;
#endif
    vpDisplay::display(I);
    vpDisplay::flush(I);
    vpDot2 blob;
    if (learn) {
      // Learn the characteristics of the blob to auto detect
      blob.setGraphics(true);
      blob.setGraphicsThickness(1);
      blob.initTracking(I);
      blob.track(I);
      std::cout << "Blob characteristics: " << std::endl;
      std::cout << " width : " << blob.getWidth() << std::endl;
      std::cout << " height: " << blob.getHeight() << std::endl;
#if VISP_VERSION_INT > VP_VERSION_INT(2,7,0)
      std::cout << " area: " << blob.getArea() << std::endl;
#endif
      std::cout << " gray level min: " << blob.getGrayLevelMin() << std::endl;
      std::cout << " gray level max: " << blob.getGrayLevelMax() << std::endl;
      std::cout << " grayLevelPrecision: " << blob.getGrayLevelPrecision() << std::endl;
      std::cout << " sizePrecision: " << blob.getSizePrecision() << std::endl;
      std::cout << " ellipsoidShapePrecision: " << blob.getEllipsoidShapePrecision() << std::endl;
    }
    else {
      // Set blob characteristics for the auto detection
      blob.setWidth(50);
      blob.setHeight(50);
#if VISP_VERSION_INT > VP_VERSION_INT(2,7,0)
      blob.setArea(1700);
#endif
      blob.setGrayLevelMin(0);
      blob.setGrayLevelMax(30);
      blob.setGrayLevelPrecision(0.8);
      blob.setSizePrecision(0.65);
      blob.setEllipsoidShapePrecision(0.65);
    }
    std::list<vpDot2> blob_list;
    blob.searchDotsInArea(I, 0, 0, I.getWidth(), I.getHeight(), blob_list);
    if (learn) {
      // The blob that is tracked by initTracking() is not in the list of auto detected blobs
      // We add it:
      blob_list.push_back(blob);
    }
    std::cout << "Number of auto detected blob: " << blob_list.size() << std::endl;
    std::cout << "A click to exit..." << std::endl;
    while(1) {
      vpDisplay::display(I);
      for(std::list<vpDot2>::iterator it=blob_list.begin(); it != blob_list.end(); ++it) {
        (*it).setGraphics(true);
        (*it).setGraphicsThickness(3);
        (*it).track(I);
      }
      vpDisplay::flush(I);
      if (vpDisplay::getClick(I, false))
        break;
      vpTime::wait(40);
    }
  }
  catch (const vpException &e) {
    std::cout << "Catch an exception: " << e << std::endl;
  }
}

Here is a detailed explanation of the source code.
First, we create a tracker instance.

vpDot2 blob;

Two cases are then handled. The first case, entered when learn is set to true, learns the blob characteristics: the user has to click inside a blob that serves as the reference. Its width, height, area, minimum and maximum gray levels, and some precision parameters are then used to search for similar blobs in the whole image.

if (learn) {
  // Learn the characteristics of the blob to auto detect
  blob.setGraphics(true);
  blob.setGraphicsThickness(1);
  blob.initTracking(I);
  blob.track(I);
  std::cout << "Blob characteristics: " << std::endl;
  std::cout << " width : " << blob.getWidth() << std::endl;
  std::cout << " height: " << blob.getHeight() << std::endl;
  std::cout << " area: " << blob.getArea() << std::endl;
  std::cout << " gray level min: " << blob.getGrayLevelMin() << std::endl;
  std::cout << " gray level max: " << blob.getGrayLevelMax() << std::endl;
  std::cout << " grayLevelPrecision: " << blob.getGrayLevelPrecision() << std::endl;
  std::cout << " sizePrecision: " << blob.getSizePrecision() << std::endl;
  std::cout << " ellipsoidShapePrecision: " << blob.getEllipsoidShapePrecision() << std::endl;
}

The second case sets the reference characteristics directly, which is appropriate when you already have precise knowledge of the dimensions and appearance of the blob you are searching for.

else {
  // Set blob characteristics for the auto detection
  blob.setWidth(50);
  blob.setHeight(50);
  blob.setArea(1700);
  blob.setGrayLevelMin(0);
  blob.setGrayLevelMax(30);
  blob.setGrayLevelPrecision(0.8);
  blob.setSizePrecision(0.65);
  blob.setEllipsoidShapePrecision(0.65);
}

Once the blob characteristics are known, similar blobs in the image are searched for with:

std::list<vpDot2> blob_list;
blob.searchDotsInArea(I, 0, 0, I.getWidth(), I.getHeight(), blob_list);

Here blob_list contains the list of blobs detected in image I. When learning is enabled, the blob tracked by initTracking() is not part of the auto-detected list, so we append it:

if (learn) {
  // The blob that is tracked by initTracking() is not in the list of auto detected blobs
  // We add it:
  blob_list.push_back(blob);
}

Finally, when a new image is available, we track all the blobs:

for(std::list<vpDot2>::iterator it=blob_list.begin(); it != blob_list.end(); ++it) {
  (*it).setGraphics(true);
  (*it).setGraphicsThickness(3);
  (*it).track(I);
}

Copyright notice: this is an original article by the CSDN blogger 难受啊!马飞..., licensed under CC 4.0 BY-SA. Please attach the original source link and this notice when reposting.
Original link: https://blog.csdn.net/qq_33328642/article/details/122353739
