I want to use OpenCV cv::Mat data as an OpenGL texture. I'm working on a Qt 4.8 application extending QGLWidget (and I'd really rather not go through QImage). But something is going wrong.
First the problem in screenshots, then the code I'm using.
If I don't resize the cv::Mat (grabbed from a video), everything works. If I scale it to half its size (scaleFactor = 2), everything works. If the scale factor is 2.8 or 2.9, everything still works. But at certain factors the rendering is buggy.
Here are the screenshots, with a red background to make the size of the OpenGL quad visible:
scaleFactor = 2
scaleFactor = 2.8
scaleFactor = 3
scaleFactor = 3.2
Now the code of the paint method. I found the code that copies the cv::Mat data into a GL texture in this nice blog post.
void VideoViewer::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glClearColor(1.0, 0.0, 0.0, 1.0);

    glEnable(GL_BLEND);
    // Use a simple blendfunc for drawing the background
    glBlendFunc(GL_ONE, GL_ZERO);

    if (!cvFrame.empty()) {
        glEnable(GL_TEXTURE_2D);
        GLuint tex = matToTexture(cvFrame);
        glBindTexture(GL_TEXTURE_2D, tex);

        glBegin(GL_QUADS);
        glTexCoord2f(1, 1); glVertex2f(0, cvFrame.size().height);
        glTexCoord2f(1, 0); glVertex2f(0, 0);
        glTexCoord2f(0, 0); glVertex2f(cvFrame.size().width, 0);
        glTexCoord2f(0, 1); glVertex2f(cvFrame.size().width, cvFrame.size().height);
        glEnd();

        glDeleteTextures(1, &tex);
        glDisable(GL_TEXTURE_2D);
        glFlush();
    }
}
GLuint VideoViewer::matToTexture(cv::Mat &mat, GLenum minFilter, GLenum magFilter, GLenum wrapFilter)
{
    // http://r3dux.org/2012/01/how-to-convert-an-opencv-cvmat-to-an-opengl-texture/

    // Generate a number for our textureID's unique handle
    GLuint textureID;
    glGenTextures(1, &textureID);

    // Bind to our texture handle
    glBindTexture(GL_TEXTURE_2D, textureID);

    // Catch silly-mistake texture interpolation method for magnification
    if (magFilter == GL_LINEAR_MIPMAP_LINEAR ||
        magFilter == GL_LINEAR_MIPMAP_NEAREST ||
        magFilter == GL_NEAREST_MIPMAP_LINEAR ||
        magFilter == GL_NEAREST_MIPMAP_NEAREST)
    {
        std::cout << "VideoViewer::matToTexture > "
                  << "You can't use MIPMAPs for magnification - setting filter to GL_LINEAR"
                  << std::endl;
        magFilter = GL_LINEAR;
    }

    // Set texture interpolation methods for minification and magnification
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter);

    // Set texture clamping method
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapFilter);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapFilter);

    // Set incoming texture format to:
    // GL_BGR for CV_CAP_OPENNI_BGR_IMAGE,
    // GL_LUMINANCE for CV_CAP_OPENNI_DISPARITY_MAP,
    // Work out other mappings as required ( there's a list in comments in main() )
    GLenum inputColourFormat = GL_BGR;
    if (mat.channels() == 1)
    {
        inputColourFormat = GL_LUMINANCE;
    }

    // Create the texture
    glTexImage2D(GL_TEXTURE_2D,     // Type of texture
                 0,                 // Pyramid level (for mip-mapping) - 0 is the top level
                 GL_RGB,            // Internal colour format to convert to
                 mat.cols,          // Image width i.e. 640 for Kinect in standard mode
                 mat.rows,          // Image height i.e. 480 for Kinect in standard mode
                 0,                 // Border width in pixels (can either be 1 or 0)
                 inputColourFormat, // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
                 GL_UNSIGNED_BYTE,  // Image data type
                 mat.ptr());        // The actual image data itself

    return textureID;
}
And here is how the cv::Mat is loaded and scaled:
void VideoViewer::retriveScaledFrame()
{
    video >> cvFrame;
    cv::Size s = cv::Size(cvFrame.size().width / scaleFactor, cvFrame.size().height / scaleFactor);
    cv::resize(cvFrame, cvFrame, s);
}
Sometimes the image is rendered correctly and sometimes it isn't. Why? Surely there is some mismatch in the pixel storage order between OpenCV and OpenGL. But how do I fix it? And why does it work for some sizes and not for others?
Best answer
Yes, it was a problem with how the pixels are stored in memory. OpenCV and OpenGL can lay out pixel rows differently, and I had to understand better how that works.
In OpenGL you control this with glPixelStorei, using the GL_UNPACK_ALIGNMENT and GL_UNPACK_ROW_LENGTH parameters to describe how the incoming rows are packed.
A good answer can be found here.
About "c++ - OpenGL doesn't like OpenCV resize", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/19965146/