
Combining OpenCV with Qt

2013-03-20 17:05
I've spent a few days going through material on this.

To summarize briefly: the main problem in combining OpenCV with Qt is displaying images, i.e. showing an IplImage in a Qt widget.

References:

Reference 1: http://www.qtcentre.org/threads/11655-OpenCV-integration

Reference 2: http://code.google.com/p/zarzamora/

Reference 3: http://www.morethantechnical.com/2009/03/05/qt-opencv-combined-for-face-detecting-qwidgets/

Most approaches convert the IplImage to a QImage for display.

Reference 2 provides the QOpencv package, which handles the conversion by swapping the RGB channel values and so on; I tried it, it works, and the speed is acceptable.

However, the package only covers images of depth IPL_DEPTH_8U. If you need other formats, the function posted in reply #10 of the thread in reference 1 is worth a look.

Reference 3 shares the image data buffer between the IplImage and the QImage by choosing the QImage format appropriately; in the author's own words, "how awesome is that" :) He also believes it is faster than the approach described in reference 2.

I tested it on my own slow machine, and it did indeed run somewhat faster than QOpencv.

The user sf in reference 1 also provides an OpenGL-based method using bindTexture() and related functions, which is said to be much faster; I'll try it when I have time.

For now I'm using Roy's method from reference 3. Well done.
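For reference, a minimal sketch of that shared-buffer idea is below, assuming an 8-bit, 3-channel (BGR) IplImage and Qt 4.4 or later (for QImage::Format_RGB888); the helper name wrapIplImage is mine, and this is not the literal code from reference 3:

#include <QImage>
#include <cv.h>   // include path as set in the .pro below

// Wrap the IplImage pixel buffer in a QImage without copying the data.
// Assumes IPL_DEPTH_8U with 3 channels (BGR), e.g. a camera frame.
QImage wrapIplImage(const IplImage *ipl)
{
    // The QImage shares ipl->imageData, so the IplImage must stay alive at
    // least as long as this QImage (or any other QImage sharing its data).
    QImage img((const uchar *) ipl->imageData,
               ipl->width, ipl->height,
               ipl->widthStep,              // honour OpenCV's 32-bit line padding
               QImage::Format_RGB888);

    // OpenCV stores pixels as BGR; rgbSwapped() returns an RGB copy so the
    // colours come out right.
    return img.rgbSwapped();
}

Since rgbSwapped() makes a copy, the truly zero-copy variant is to convert the frame to RGB on the OpenCV side first (e.g. with cvCvtColor) and then drop that call.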

Reposted from: /article/11668915.html

Qt and OpenCV

2010-03-27 16:31:26

A Qt program normally relies on qmake to generate its makefile, so to link against the OpenCV libraries the .pro file has to be modified. Below is the relevant configuration (the added part) of that file under Linux.


INCLUDEPATH += . /usr/local/include/opencv

LIBS += /usr/local/lib/libcv.so \
        /usr/local/lib/libcvaux.so \
        /usr/local/lib/libcxcore.so \
        /usr/local/lib/libhighgui.so \
        /usr/local/lib/libml.so
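
As an alternative (an assumption on my part, not part of the original post), if pkg-config knows about the installed OpenCV, qmake can pull the compile and link flags in automatically:

# assumes a working opencv.pc is visible to pkg-config
CONFIG += link_pkgconfig
PKGCONFIG += opencv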

Each frame captured from the camera by OpenCV comes as an IplImage; to display it in a Qt window it has to be converted into a QImage.


#include <QImage>
#include <QVector>
#include <cstring>

QImage MyThread::IplImageToQImage(const IplImage *iplImage, double mini, double maxi)
{
    uchar *qImageBuffer = NULL;

    int width = iplImage->width;

    /* Note that an OpenCV image is stored so that each line is 32-bit aligned,
     * hence the need to "skip" the last few bytes of each line of the OpenCV
     * image buffer. */
    int widthStep = iplImage->widthStep;
    int height = iplImage->height;

    switch (iplImage->depth)
    {
    case IPL_DEPTH_8U:
        if (iplImage->nChannels == 1)
        {
            /* The OpenCV image is stored with one byte per grey pixel.
             * We convert it to an 8-bit-deep QImage. */
            qImageBuffer = (uchar *) malloc(width * height * sizeof(uchar));
            uchar *QImagePtr = qImageBuffer;
            const uchar *iplImagePtr = (const uchar *) iplImage->imageData;

            for (int y = 0; y < height; y++)
            {
                // Copy line by line
                memcpy(QImagePtr, iplImagePtr, width);
                QImagePtr += width;
                iplImagePtr += widthStep;
            }
        }
        else if (iplImage->nChannels == 3)
        {
            /* The OpenCV image is stored with 3-byte colour pixels (3 channels).
             * We convert it to a 32-bit-deep QImage. */
            qImageBuffer = (uchar *) malloc(width * height * 4 * sizeof(uchar));
            uchar *QImagePtr = qImageBuffer;
            const uchar *iplImagePtr = (const uchar *) iplImage->imageData;

            for (int y = 0; y < height; y++)
            {
                for (int x = 0; x < width; x++)
                {
                    // We cannot help but copy manually.
                    QImagePtr[0] = iplImagePtr[0];
                    QImagePtr[1] = iplImagePtr[1];
                    QImagePtr[2] = iplImagePtr[2];
                    QImagePtr[3] = 0;

                    QImagePtr += 4;
                    iplImagePtr += 3;
                }
                iplImagePtr += widthStep - 3 * width;
            }
        }
        else
        {
            qDebug("IplImageToQImage: image format is not supported: depth=8U and %d channels", iplImage->nChannels);
        }
        break;

    case IPL_DEPTH_16U:
        if (iplImage->nChannels == 1)
        {
            /* The OpenCV image is stored with a 2-byte grey pixel.
             * We convert it to an 8-bit-deep QImage. */
            qImageBuffer = (uchar *) malloc(width * height * sizeof(uchar));
            uchar *QImagePtr = qImageBuffer;
            const unsigned short *iplImagePtr = (const unsigned short *) iplImage->imageData;

            for (int y = 0; y < height; y++)
            {
                for (int x = 0; x < width; x++)
                {
                    // Take only the high byte of the 16-bit value,
                    // which is the same as dividing by 256.
                    *QImagePtr++ = ((*iplImagePtr++) >> 8);
                }
                iplImagePtr += widthStep / sizeof(unsigned short) - width;
            }
        }
        else
        {
            qDebug("IplImageToQImage: image format is not supported: depth=16U and %d channels", iplImage->nChannels);
        }
        break;

    case IPL_DEPTH_32F:
        if (iplImage->nChannels == 1)
        {
            /* The OpenCV image is stored with a float (4-byte) grey pixel.
             * We scale it by [mini, maxi] and convert it to an 8-bit-deep QImage. */
            qImageBuffer = (uchar *) malloc(width * height * sizeof(uchar));
            uchar *QImagePtr = qImageBuffer;
            const float *iplImagePtr = (const float *) iplImage->imageData;

            for (int y = 0; y < height; y++)
            {
                for (int x = 0; x < width; x++)
                {
                    uchar p;
                    float pf = 255 * ((*iplImagePtr++) - mini) / (maxi - mini);

                    if (pf < 0) p = 0;
                    else if (pf > 255) p = 255;
                    else p = (uchar) pf;

                    *QImagePtr++ = p;
                }
                iplImagePtr += widthStep / sizeof(float) - width;
            }
        }
        else
        {
            qDebug("IplImageToQImage: image format is not supported: depth=32F and %d channels", iplImage->nChannels);
        }
        break;

    case IPL_DEPTH_64F:
        if (iplImage->nChannels == 1)
        {
            /* The OpenCV image is stored with a double (8-byte) grey pixel.
             * We scale it by [mini, maxi] and convert it to an 8-bit-deep QImage. */
            qImageBuffer = (uchar *) malloc(width * height * sizeof(uchar));
            uchar *QImagePtr = qImageBuffer;
            const double *iplImagePtr = (const double *) iplImage->imageData;

            for (int y = 0; y < height; y++)
            {
                for (int x = 0; x < width; x++)
                {
                    uchar p;
                    double pf = 255 * ((*iplImagePtr++) - mini) / (maxi - mini);

                    if (pf < 0) p = 0;
                    else if (pf > 255) p = 255;
                    else p = (uchar) pf;

                    *QImagePtr++ = p;
                }
                iplImagePtr += widthStep / sizeof(double) - width;
            }
        }
        else
        {
            qDebug("IplImageToQImage: image format is not supported: depth=64F and %d channels", iplImage->nChannels);
        }
        break;

    default:
        qDebug("IplImageToQImage: image format is not supported: depth=%d and %d channels", iplImage->depth, iplImage->nChannels);
    }

    QImage qImage;

    if (iplImage->nChannels == 1)
    {
        // Greyscale colour table for the indexed image.
        QVector<QRgb> vcolorTable(256);
        for (int i = 0; i < 256; i++)
            vcolorTable[i] = qRgb(i, i, i);

        // Pass the scanline stride (width) explicitly, because our buffer is
        // tightly packed and QImage would otherwise assume 32-bit alignment.
        qImage = QImage(qImageBuffer, width, height, width, QImage::Format_Indexed8).copy();
        qImage.setColorTable(vcolorTable);
    }
    else
    {
        qImage = QImage(qImageBuffer, width, height, QImage::Format_RGB32).copy();
    }
    free(qImageBuffer);

    return qImage;
}

You can then test it with the following (partial) code: the camera delivers each frame as an IplImage*, it is converted to a QImage, and update() posts a paintEvent(QPaintEvent*), so the displayed image keeps refreshing. (mini and maxi in IplImageToQImage are given default values of 0; they only matter for the float and double depths.)




void ImageViewer::paintEvent(QPaintEvent *)
{
    QPainter painter(this);
    painter.drawImage(QPoint(0, 0), image);   // 'image' is a QImage member of ImageViewer
}

bool ImageViewer::ShowImage()
{
    IplImage *pImage = NULL;
    CvCapture *pCapture = NULL;

    if ((pCapture = cvCaptureFromCAM(-1)) == NULL)
    {
        cout << "Open camera failed!" << endl;
        return false;
    }

    while ((pImage = cvQueryFrame(pCapture)) != NULL)
    {
        image = IplImageToQImage(pImage);
        update();
    }

    // Frames returned by cvQueryFrame() are owned by the capture,
    // so only the capture itself has to be released.
    cvReleaseCapture(&pCapture);

    return true;
}
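
One caveat with ShowImage() above: the while loop never returns control to the Qt event loop, so the widget may not actually repaint while capturing. A sketch of a timer-driven alternative is below, assuming hypothetical ImageViewer members pCapture (CvCapture *), image (QImage) and timer (QTimer), plus a grabFrame() slot, none of which are in the original post:

// Hypothetical rearrangement: grab one frame per timer tick so the Qt event
// loop keeps running and repaints are actually delivered.
void ImageViewer::startCapture()
{
    pCapture = cvCaptureFromCAM(-1);
    if (pCapture) {
        connect(&timer, SIGNAL(timeout()), this, SLOT(grabFrame()));
        timer.start(33);                       // roughly 30 frames per second
    }
}

void ImageViewer::grabFrame()                  // declared as a slot in the class
{
    if (IplImage *frame = cvQueryFrame(pCapture)) {
        image = IplImageToQImage(frame);       // frame is owned by pCapture
        update();                              // schedule a repaint
    }
}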

Reposted from: http://blog.163.com/lucien_cc/blog/static/130290562201022743126906/