android - How to render Android's YUV-YV12 camera image on the background in libgdx with OpenGLES 2.0 in real-time?

Tags: android opengl-es shader

This question builds on this one: How to render Android's YUV-NV21 camera image on the background in libgdx with OpenGLES 2.0 in real-time?

The accepted answer there explains everything very well, but my problem is slightly different: my frames are YV12 rather than NV21. (Some format specs: https://wiki.videolan.org/YUV and https://www.fourcc.org/yuv.php)

What about YUV-YV12? The Y buffer is the same, but the chroma samples are not interleaved, so I effectively have two separate buffers for V and U. How can I feed them to the shader? Using a Pixmap.Format.Intensity texture uploaded as GL_LUMINANCE, I suppose?

What I'm less clear about is how GL_LUMINANCE behaves here compared to the NV21 case, where GL_LUMINANCE_ALPHA (the Pixmap.Format.LuminanceAlpha pixmap format) turns the interleaved "VUVU" buffer into RGBA texels with R/G/B = V and A = U.

YV12 uses planar "VV...UU..." buffers instead, so it is easy to split them into separate V and U buffers; but how do I bind them and fetch u and v in the shader?

Thanks for your help; the original example is great! But I need something slightly different, and for that I need to understand the details of how the textures are bound and sampled in the shader.

Thanks!

Best Answer

OK, got it: YUV-YV12 is 12 bits per pixel: a full-resolution 8-bit Y plane followed by 2x2-subsampled 8-bit V and U planes.
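
To make the layout concrete, here is a small sketch of the plane sizes and offsets for a width x height YV12 frame (my own addition, not part of the original answer; it assumes stride == width, which holds for 640x480 since Android's ImageFormat.YV12 aligns row strides to 16 bytes):

    int width = 640, height = 480;
    int ySize = width * height;             //8-bit Y plane
    int cSize = (width / 2) * (height / 2); //each chroma plane is 2x2 subsampled
    int vOffset = ySize;                    //the V plane comes first in YV12...
    int uOffset = ySize + cSize;            //...followed by the U plane
    int frameSize = ySize + 2 * cSize;      //= width * height * 3 / 2, i.e. 12 bits per pixel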

Based on this answer, which walks through the complete YUV-NV21-to-RGB shader rendering (https://stackoverflow.com/a/22456885/4311503), let's make a few small changes.

So, we split the buffer into three parts:

    yBuffer = ByteBuffer.allocateDirect(640*480);
    uBuffer = ByteBuffer.allocateDirect(640*480/4); //(width/2)*(height/2) chroma samples, one byte each
    vBuffer = ByteBuffer.allocateDirect(640*480/4); //(width/2)*(height/2) chroma samples, one byte each

Then fill them from the camera frame:

yBuffer.put(frame.getData(), 0, size);
yBuffer.position(0);
//YV12: the full-size Y plane (size bytes) is followed by the V plane (size/4 bytes), then the U plane (size/4 bytes)
vBuffer.put(frame.getData(), size, size/4);
vBuffer.position(0);
uBuffer.put(frame.getData(), size * 5 / 4, size/4);
uBuffer.position(0);
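
For context, here is a minimal sketch of where such a frame could come from; the original answer just assumes frame.getData() returns the raw YV12 bytes. This uses the legacy android.hardware.Camera API (deprecated since API 21, but it matches the era of this question), and on most devices you also need to attach a preview surface before frames are delivered:

    Camera camera = Camera.open();
    Camera.Parameters params = camera.getParameters();
    params.setPreviewSize(640, 480);
    params.setPreviewFormat(ImageFormat.YV12); //planar: Y, then V, then U
    camera.setParameters(params);
    camera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera cam) {
            //'data' has exactly the layout assumed above: 640*480 Y bytes,
            //then 640*480/4 V bytes, then 640*480/4 U bytes
        }
    });
    camera.startPreview();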

Now, prepare the textures:

yTexture = new Texture(640, 480, Pixmap.Format.Intensity); //An 8-bit-per-pixel format
uTexture = new Texture(640 / 2, 480 / 2, Pixmap.Format.Intensity); //An 8-bit-per-pixel format
vTexture = new Texture(640 / 2, 480 / 2, Pixmap.Format.Intensity); //An 8-bit-per-pixel format

And change the binding slightly, since we now use three textures instead of two:

//Set texture slot 0 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
yTexture.bind();

//Y texture is (width*height) in size and each pixel is one byte;
//by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B
//components of the texture
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
        640, 480, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);

//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);

/*
 * Prepare the U channel texture
 */

//Set texture slot 1 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
uTexture.bind();

Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
        640 / 2, 480 / 2, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE,
        uBuffer);

//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);

//Set texture slot 2 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE2);
vTexture.bind();

//V texture is (width/2)*(height/2) in size; with GL_LUMINANCE, each texel maps to exactly one byte of the buffer
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
        640 / 2, 480 / 2, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE,
        vBuffer);

//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
        GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
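
Since the upload and filtering code is identical for the three planes, it can be factored into a small helper (a refactoring suggestion of mine, not part of the original answer). Call it right after each glActiveTexture/bind pair:

    //Uploads one 8-bit plane into whatever texture is currently bound to GL_TEXTURE_2D.
    //Note: for plane widths that are not a multiple of 4 you would also need
    //Gdx.gl.glPixelStorei(GL20.GL_UNPACK_ALIGNMENT, 1); widths 640 and 320 are fine.
    private static void uploadLuminancePlane(int width, int height, ByteBuffer plane) {
        Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
                width, height, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, plane);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
        Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
    }

    //For example, for the V plane:
    //Gdx.gl.glActiveTexture(GL20.GL_TEXTURE2);
    //vTexture.bind();
    //uploadLuminancePlane(640 / 2, 480 / 2, vBuffer);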


shader.begin();

//Set the uniform y_texture object to the texture at slot 0
shader.setUniformi("y_texture", 0);

//Set the uniform u_texture to the texture at slot 1 and v_texture to slot 2
shader.setUniformi("u_texture", 1);
shader.setUniformi("v_texture", 2);

mesh.render(shader, GL20.GL_TRIANGLES);

shader.end();

Finally, use the shader below (only the fragment shader's u and v sampling changed slightly):

    //Our vertex shader code; nothing special
    String vertexShader =
            "attribute vec4 a_position;                         \n" +
                    "attribute vec2 a_texCoord;                         \n" +
                    "varying vec2 v_texCoord;                           \n" +

                    "void main(){                                       \n" +
                    "   gl_Position = a_position;                       \n" +
                    "   v_texCoord = a_texCoord;                        \n" +
                    "}                                                  \n";

    //Our fragment shader; takes the Y, U and V values of each pixel and computes its R, G, B color,
    //effectively performing the YUV-to-RGB conversion
    String fragmentShader =
            "#ifdef GL_ES                                       \n" +
                    "precision highp float;                             \n" +
                    "#endif                                             \n" +

                    "varying vec2 v_texCoord;                           \n" +
                    "uniform sampler2D y_texture;                       \n" +
                    "uniform sampler2D u_texture;                       \n" +
                    "uniform sampler2D v_texture;                       \n" +

                    "void main (void){                                  \n" +
                    "   float r, g, b, y, u, v;                         \n" +

                    //GL_LUMINANCE put the Y value of each pixel into the R, G and B components,
                    //which is why we pull it from the R component (G or B would work just as well);
                    //see https://stackoverflow.com/questions/12130790/yuv-to-rgb-conversion-by-fragment-shader/17615696#17615696
                    //and https://stackoverflow.com/questions/22456884/how-to-render-androids-yuv-nv21-camera-image-on-the-background-in-libgdx-with-o
                    "   y = texture2D(y_texture, v_texCoord).r;         \n" +

                    //Since we use GL_LUMINANCE, each chroma plane is its own single-component texture
                    "   u = texture2D(u_texture, v_texCoord).r - 0.5;  \n" +
                    "   v = texture2D(v_texture, v_texCoord).r - 0.5;  \n" +


                    //The numbers are just YUV to RGB conversion constants
                    "   r = y + 1.13983*v;                              \n" +
                    "   g = y - 0.39465*u - 0.58060*v;                  \n" +
                    "   b = y + 2.03211*u;                              \n" +

                    //We finally set the RGB color of our pixel
                    "   gl_FragColor = vec4(r, g, b, 1.0);              \n" +
                    "}                                                  \n";

And there you go!

The original question and answers can be found on Stack Overflow: https://stackoverflow.com/questions/44031117/
