Do you know Apple's CameraRipple sample code? Well, I'm trying to record the camera output to a file after OpenGL has done all its cool water effects.
I did it with glReadPixels, reading all the pixels into a void * buffer, creating a CVPixelBufferRef and appending it to an AVAssetWriterInputPixelBufferAdaptor, but it's too slow because glReadPixels takes a lot of time. I found out that with an FBO and texture caches you can do the same thing, only faster. Here is my code in the drawInRect method that Apple uses:
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}

CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs2;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);
attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                   1,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs2,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);

//CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
CVPixelBufferRef pixiel_bufer4e = NULL;
CVPixelBufferCreate(kCFAllocatorDefault,
                    (int)_screenWidth,
                    (int)_screenHeight,
                    kCVPixelFormatType_32BGRA,
                    attrs2,
                    &pixiel_bufer4e);

CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             coreVideoTextureCashe, pixiel_bufer4e,
                                             NULL, // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA, // opengl format
                                             (int)_screenWidth,
                                             (int)_screenHeight,
                                             GL_BGRA, // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTexture);
CFRelease(attrs2);
CFRelease(empty);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);
if ([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
    float result = currentTime.value;
    NSLog(@"current time: %f", result);
    currentTime = CMTimeAdd(currentTime, frameLength);
}
CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);

CVPixelBufferRelease(pixiel_bufer4e);
CFRelease(renderTexture);
CFRelease(coreVideoTextureCashe);
It records video, and it is really fast, but the video is just black. I think the textureCacheRef is wrong, or I'm filling it incorrectly.
As an update, here is another way I tried. I must be missing something. In viewDidLoad, after I set up the OpenGL context, I do this:
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}
// creates the pixel buffer
pixel_buffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(NULL, [pixelAdapter pixelBufferPool], &pixel_buffer);

CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCashe, pixel_buffer,
                                             NULL, // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA, // opengl format
                                             (int)screenWidth,
                                             (int)screenHeight,
                                             GL_BGRA, // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTexture);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
Then in drawInRect I do this:
if (isRecording && writerInput.readyForMoreMediaData) {
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    if ([pixelAdapter appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
    CVPixelBufferRelease(pixel_buffer);
}
However, it crashes with EXC_BAD_ACCESS on renderTexture, which is not nil but 0x000000001.
UPDATE
With the code below I've actually managed to pull off the video file, but there are some green and red flashes. I use the BGRA pixelFormatType.
Here I create the texture cache:
CVReturn err2 = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err2)
{
    NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err2);
    return;
}
Then in drawInRect I call this:
if (isRecording && writerInput.readyForMoreMediaData) {
    [self cleanUpTextures];

    CFDictionaryRef empty; // empty value for attr value.
    CFMutableDictionaryRef attrs2;
    empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                               NULL,
                               NULL,
                               0,
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
    attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                       1,
                                       &kCFTypeDictionaryKeyCallBacks,
                                       &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(attrs2,
                         kCVPixelBufferIOSurfacePropertiesKey,
                         empty);

    //CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
    CVPixelBufferRef pixiel_bufer4e = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        (int)_screenWidth,
                        (int)_screenHeight,
                        kCVPixelFormatType_32BGRA,
                        attrs2,
                        &pixiel_bufer4e);

    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                 coreVideoTextureCashe, pixiel_bufer4e,
                                                 NULL, // texture attributes
                                                 GL_TEXTURE_2D,
                                                 GL_RGBA, // opengl format
                                                 (int)_screenWidth,
                                                 (int)_screenHeight,
                                                 GL_BGRA, // native iOS format
                                                 GL_UNSIGNED_BYTE,
                                                 0,
                                                 &renderTexture);
    CFRelease(attrs2);
    CFRelease(empty);

    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

    CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);
    if ([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
        float result = currentTime.value;
        NSLog(@"current time: %f", result);
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);

    CVPixelBufferRelease(pixiel_bufer4e);
    CFRelease(renderTexture);
    // CFRelease(coreVideoTextureCashe);
}
I know I could optimize this by not doing all of it here, but I wanted to get it working first. In cleanUpTextures I flush the texture cache with:
CVOpenGLESTextureCacheFlush(coreVideoTextureCashe, 0);
Something might be wrong with the RGBA stuff, or I don't know, but it still seems to be picking up a slightly wrong cache.
Best Answer
This isn't the approach I'd use for recording video. You're creating a new pixel buffer for every rendered frame, which will be slow, and you're never deallocating it, so it's no surprise you're getting memory warnings.
Instead, follow what I describe in this answer. I create a pixel buffer for the cached texture once, assign that texture to the FBO I'm rendering to, then append that pixel buffer using the AVAssetWriter's pixel buffer input on every frame. It's far faster to use a single pixel buffer than to re-create one every frame. You also want to leave the pixel buffer associated with your FBO's texture target, rather than associating it on every frame.
I encapsulate this recording code in the GPUImageMovieWriter of my open source GPUImage framework, if you'd like to see how this works in practice. As I indicate in the answer linked above, doing the recording this way leads to extremely fast encoding.
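A minimal sketch of the create-once / append-every-frame pattern the answer describes, assuming hypothetical ivars (`_textureCache`, `_recordingPixelBuffer`, `_recordingTexture`, `_recordingFBO`) and the question's own `pixelAdapter`, `currentTime`, and `frameLength`; this is an illustration, not GPUImageMovieWriter's actual source:

```objc
// One-time setup (e.g. in viewDidLoad, after the GL context exists):
// create the cache, a single pool-backed pixel buffer, and an FBO whose
// color attachment is the texture wrapping that pixel buffer.
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                             (__bridge void *)_context, NULL, &_textureCache);
CVPixelBufferPoolCreatePixelBuffer(NULL, [pixelAdapter pixelBufferPool],
                                   &_recordingPixelBuffer);
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache,
                                             _recordingPixelBuffer,
                                             NULL, GL_TEXTURE_2D, GL_RGBA,
                                             (int)_screenWidth, (int)_screenHeight,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0,
                                             &_recordingTexture);
glGenFramebuffers(1, &_recordingFBO);
glBindFramebuffer(GL_FRAMEBUFFER, _recordingFBO);
glBindTexture(CVOpenGLESTextureGetTarget(_recordingTexture),
              CVOpenGLESTextureGetName(_recordingTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(_recordingTexture), 0);

// Per frame (in drawInRect, after rendering into _recordingFBO):
// wait for the GPU to finish writing, then append the SAME buffer --
// no per-frame allocation, no per-frame release.
glFinish();
if (isRecording && writerInput.readyForMoreMediaData) {
    CVPixelBufferLockBaseAddress(_recordingPixelBuffer, 0);
    if ([pixelAdapter appendPixelBuffer:_recordingPixelBuffer
                   withPresentationTime:currentTime]) {
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(_recordingPixelBuffer, 0);
}
```

The glFinish() before appending is what prevents the black or half-rendered frames the question describes: without it, the pixel buffer can be handed to the writer before the GPU has finished drawing into it.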
Regarding ios - OpenGL ES to video in iOS (rendering to a texture with iOS 5 texture cache), we found a similar question on Stack Overflow: https://stackoverflow.com/questions/11720954/