ios - How do I handle GPUImage image buffers so that they can be used with things like TokBox?

Tags: ios swift gpuimage opentok tokbox

I am using OpenTok and replacing its Publisher with my own subclassed version that incorporates GPUImage. My goal is to add filters.

The app builds and runs, but it crashes here:

    func willOutputSampleBuffer(sampleBuffer: CMSampleBuffer!) {
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        CVPixelBufferLockBaseAddress(imageBuffer!, 0)
        videoFrame?.clearPlanes()
        for i in 0..<CVPixelBufferGetPlaneCount(imageBuffer!) {
            print(i)
            videoFrame?.planes.addPointer(CVPixelBufferGetBaseAddressOfPlane(imageBuffer!, i))
        }
        videoFrame?.orientation = OTVideoOrientation.Left
        videoCaptureConsumer.consumeFrame(videoFrame) // comment this out to stop the app from crashing; otherwise it crashes here
        CVPixelBufferUnlockBaseAddress(imageBuffer!, 0)
    }

If I comment that line out, the app runs without crashing. In fact, I can see the filter being applied correctly, but the preview flickers, and nothing is published to OpenTok.

My entire codebase is available for download. This is the specific file for the class. It is actually quite easy to run: just do a pod install before running it.

Upon inspection, it may be that videoCaptureConsumer is never initialized. (Protocol reference)

I don't really understand what my code is doing; I translated it directly from this Objective-C file: TokBox's sample project.

Best Answer

I analyzed both your Swift project and the Objective-C project, and found that neither one works.

In this post I want to give a first update and show a demo that actually works, demonstrating how to combine GPUImage filters with OpenTok.

Why your GPUImage filter implementation does not work with OpenTok

#1 Multiple target specification

let sepia = GPUImageSepiaFilter()
videoCamera?.addTarget(sepia)
sepia.addTarget(self.view)        
videoCamera?.addTarget(self.view) // <-- This is wrong and produces the flickering
videoCamera?.startCameraCapture()

Two sources are trying to render into the same view, which makes things flicker...
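
The fix, as a minimal sketch (written in Objective-C to match the working code later in this answer; the Swift version is analogous): let only one source render into the view.

    // Only the filter output drives the preview, so a single source
    // renders into the view and the flickering disappears.
    GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
    [videoCamera addTarget:sepia];
    [sepia addTarget:self.view];   // filtered preview
    // (removed) [videoCamera addTarget:self.view];
    [videoCamera startCameraCapture];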

The first part is solved. Next: why is nothing being published to OpenTok? To find the cause, I decided to start from the "working" Objective-C version.

#2 The original Objective-C codebase

The original Objective-C version does not do what you expect. Publishing the GPUImageVideoCamera to an OpenTok subscriber works fine, but no filtering is involved, and filtering is your core requirement. The point is that adding a filter is not as simple as one might expect, because of the differing image formats and the differing mechanisms for asynchronous programming.

So that is reason #2 why your code does not work as expected: the reference codebase you ported from is incorrect. It does not show how to put a GPU filter between the publisher and subscriber sides of the pipeline.
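
To make the difference concrete, here is my reading of the two pipelines (sketched as comments, not literal code):

    // TokBox sample project (what you ported from):
    //   camera --(willOutputSampleBuffer)--> OTVideoFrame --> videoCaptureConsumer   (unfiltered frames get published)
    //   camera --> sepia filter --> view                                             (the filter only feeds the preview)
    //
    // Working version (shown below):
    //   camera --> sepia filter --> rawOut --(newFrameAvailableBlock)--> OTVideoFrame --> videoCaptureConsumer
    //   camera --> view                                                              (unfiltered preview)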

A working Objective-C implementation

I modified the Objective-C version, and it now runs smoothly.

Final steps

Here is the complete code for the custom TokBox publisher. It is basically the original code (TokBoxGPUImagePublisher) from https://github.com/JayTokBox/TokBoxGPUImage/blob/master/TokBoxGPUImage/ViewController.m with the following notable modifications:

OTVideoFrame is instantiated with a new format

    ...
    format = [[OTVideoFormat alloc] init];
    format.pixelFormat = OTPixelFormatARGB;
    format.bytesPerRow = [@[@(imageWidth * 4)] mutableCopy];
    format.imageWidth = imageWidth;
    format.imageHeight = imageHeight;
    videoFrame = [[OTVideoFrame alloc] initWithFormat: format];
    ...
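
As a quick sanity check of those numbers (my own illustration, not part of the sample): OTPixelFormatARGB is a single-plane format with 4 bytes per pixel, which is why bytesPerRow holds exactly one entry.

    // For a 640x480 ARGB frame:
    size_t bytesPerRow = imageWidth * 4;            // 640 * 4 = 2560 bytes
    size_t frameBytes  = bytesPerRow * imageHeight; // 2560 * 480 = 1228800 bytes, about 1.2 MB per frame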

Replacing the willOutputSampleBuffer callback mechanism

This callback only fires when a sample buffer coming directly from the GPUImageVideoCamera is ready, not for frames produced by your custom filter. GPUImage filters do not provide such a callback/delegate mechanism. That is why we put a GPUImageRawDataOutput between the two and ask it for the finished image. This pipeline is implemented in the initCapture method, as follows:

    videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];

    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
    sepiaImageFilter = [[GPUImageSepiaFilter alloc] init];
    [videoCamera addTarget:sepiaImageFilter];
    // Create rawOut
    CGSize size = CGSizeMake(imageWidth, imageHeight);
    rawOut = [[GPUImageRawDataOutput alloc] initWithImageSize:size resultsInBGRAFormat:YES];

    // Filter into rawOut
    [sepiaImageFilter addTarget:rawOut];
    // Handle filtered images
    // We need a weak reference here to avoid a strong reference cycle.
    __weak GPUImageRawDataOutput* weakRawOut = self->rawOut;
    __weak OTVideoFrame* weakVideoFrame = self->videoFrame;
    __weak id<OTVideoCaptureConsumer> weakVideoCaptureConsumer = self.videoCaptureConsumer;
    //
    [rawOut setNewFrameAvailableBlock:^{
        [weakRawOut lockFramebufferForReading];
        // GLubyte is an uint8_t
        GLubyte* outputBytes = [weakRawOut rawBytesForImage];


        // About the video formats used by OTVideoFrame
        // --------------------------------------------
        // Both YUV video formats (i420, NV12) have the (for us) following important properties:
        //
        //  - Two planes
        //  - 8 bit Y plane
        //  - 8 bit 2x2 subsampled U and V planes (1/4 the pixels of the Y plane)
        //      --> 12 bits per pixel
        //
        // Further reading: www.fourcc.org/yuv.php
        //
        [weakVideoFrame clearPlanes];
        [weakVideoFrame.planes addPointer: outputBytes];
        [weakVideoCaptureConsumer consumeFrame: weakVideoFrame];
        [weakRawOut unlockFramebufferAfterReading];
    }];
    [videoCamera addTarget:self.view];
    [videoCamera startCameraCapture];

The complete code (what really matters is initCapture)

    //
    //  TokBoxGPUImagePublisher.m
    //  TokBoxGPUImage
    //
    //  Created by Jaideep Shah on 9/5/14.
    //  Copyright (c) 2014 Jaideep Shah. All rights reserved.
    //

    #import "TokBoxGPUImagePublisher.h"
    #import "GPUImage.h"

    static size_t imageHeight = 480;
    static size_t imageWidth = 640;

    @interface TokBoxGPUImagePublisher() <GPUImageVideoCameraDelegate, OTVideoCapture> {
        GPUImageVideoCamera *videoCamera;
        GPUImageSepiaFilter *sepiaImageFilter;
        OTVideoFrame* videoFrame;
        GPUImageRawDataOutput* rawOut;
        OTVideoFormat* format;
    }

    @end

    @implementation TokBoxGPUImagePublisher

    @synthesize videoCaptureConsumer;  // In OTVideoCapture protocol

    - (id)initWithDelegate:(id<OTPublisherDelegate>)delegate name:(NSString*)name
    {
        self = [super initWithDelegate:delegate name:name];
        if (self)
        {
            self.view = [[GPUImageView alloc] initWithFrame:CGRectMake(0, 0, 1, 1)];
            [self setVideoCapture:self];

            format = [[OTVideoFormat alloc] init];
            format.pixelFormat = OTPixelFormatARGB;
            format.bytesPerRow = [@[@(imageWidth * 4)] mutableCopy];
            format.imageWidth = imageWidth;
            format.imageHeight = imageHeight;
            videoFrame = [[OTVideoFrame alloc] initWithFormat: format];
        }
        return self;
    }

    #pragma mark GPUImageVideoCameraDelegate

    - (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        [videoFrame clearPlanes];
        for (int i = 0; i < CVPixelBufferGetPlaneCount(imageBuffer); i++) {
            [videoFrame.planes addPointer:CVPixelBufferGetBaseAddressOfPlane(imageBuffer, i)];
        }
        videoFrame.orientation = OTVideoOrientationLeft;

        [self.videoCaptureConsumer consumeFrame:videoFrame];

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }

    #pragma mark OTVideoCapture

    - (void) initCapture {
        videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                                          cameraPosition:AVCaptureDevicePositionBack];
        videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
        sepiaImageFilter = [[GPUImageSepiaFilter alloc] init];
        [videoCamera addTarget:sepiaImageFilter];
        // Create rawOut
        CGSize size = CGSizeMake(imageWidth, imageHeight);
        rawOut = [[GPUImageRawDataOutput alloc] initWithImageSize:size resultsInBGRAFormat:YES];
        // Filter into rawOut
        [sepiaImageFilter addTarget:rawOut];
        // Handle filtered images
        // We need a weak reference here to avoid a strong reference cycle.
        __weak GPUImageRawDataOutput* weakRawOut = self->rawOut;
        __weak OTVideoFrame* weakVideoFrame = self->videoFrame;
        __weak id<OTVideoCaptureConsumer> weakVideoCaptureConsumer = self.videoCaptureConsumer;

        [rawOut setNewFrameAvailableBlock:^{
            [weakRawOut lockFramebufferForReading];
            // GLubyte is an uint8_t
            GLubyte* outputBytes = [weakRawOut rawBytesForImage];

            // About the video formats used by OTVideoFrame
            // --------------------------------------------
            // Both YUV video formats (i420, NV12) have the (for us) following important properties:
            //
            //  - Two planes
            //  - 8 bit Y plane
            //  - 8 bit 2x2 subsampled U and V planes (1/4 the pixels of the Y plane)
            //      --> 12 bits per pixel
            //
            // Further reading: www.fourcc.org/yuv.php
            //
            [weakVideoFrame clearPlanes];
            [weakVideoFrame.planes addPointer: outputBytes];
            [weakVideoCaptureConsumer consumeFrame: weakVideoFrame];
            [weakRawOut unlockFramebufferAfterReading];
        }];
        [videoCamera addTarget:self.view];
        [videoCamera startCameraCapture];
    }

    - (void)releaseCapture
    {
        videoCamera.delegate = nil;
        videoCamera = nil;
    }

    - (int32_t) startCapture {
        return 0;
    }

    - (int32_t) stopCapture {
        return 0;
    }

    - (BOOL) isCaptureStarted {
        return YES;
    }

    - (int32_t)captureSettings:(OTVideoFormat*)videoFormat {
        videoFormat.pixelFormat = OTPixelFormatNV12;
        videoFormat.imageWidth = imageWidth;
        videoFormat.imageHeight = imageHeight;
        return 0;
    }

    @end
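
For completeness, a hypothetical usage sketch (my assumption of how the class gets wired up, based on the initializer above; session setup and delegate handling are elided): instantiate the custom publisher and hand it to a connected OTSession.

    // Hypothetical usage, e.g. inside the OTSessionDelegate sessionDidConnect: callback.
    TokBoxGPUImagePublisher *publisher =
        [[TokBoxGPUImagePublisher alloc] initWithDelegate:self name:@"GPUImage demo"];
    [self.view addSubview:publisher.view];      // the GPUImageView created in the initializer
    OTError *error = nil;
    [session publish:publisher error:&error];   // OpenTok then calls initCapture on our OTVideoCapture
    if (error) {
        NSLog(@"publish failed: %@", error);
    }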

Original question: https://stackoverflow.com/questions/33839047/
