ios - Why does AVAssetWriter bloat the video file?

Tags: ios ios5 avassetwriter core-video

Strange problem. I take frames from a video file (.mov) and write them to another file with AVAssetWriter, without any explicit processing. In effect I just copy each frame from one memory buffer to another and flush it out through the PixelBufferAdaptor. Then I take the resulting file, delete the original, put the generated file in its place, and run the same operation again. The interesting part is that the file keeps growing in size! Can anyone explain why?

if (adaptor.assetWriterInput.readyForMoreMediaData == YES) {
    CVImageBufferRef cvimgRef = nil;
    CMTime lastTime = CMTimeMake(fcounter++, 30);
    CMTime presentTime = CMTimeAdd(lastTime, frameTime);
    CMSampleBufferRef framebuffer = nil;
    CGImageRef frameImg = nil;
    if ([asr status] == AVAssetReaderStatusReading) {
        framebuffer = [asset_reader_output copyNextSampleBuffer];
        frameImg    = [self imageFromSampleBuffer:framebuffer withColorSpace:rgbColorSpace];
    }
    if (frameImg && screenshot) {
        //CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(framebuffer);
        CVReturn stat = CVPixelBufferLockBaseAddress(screenshot, 0);

        pxdata     = CVPixelBufferGetBaseAddress(screenshot);
        bufferSize = CVPixelBufferGetDataSize(screenshot);
        // Get the number of bytes per row for the pixel buffer.
        bytesPerRow = CVPixelBufferGetBytesPerRow(screenshot);
        // Get the pixel buffer width and height.
        width  = CVPixelBufferGetWidth(screenshot);
        height = CVPixelBufferGetHeight(screenshot);
        // Create a Quartz direct-access data provider that uses data we supply.
        CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, pxdata, bufferSize, NULL);

        CGImageAlphaInfo ai = CGImageGetAlphaInfo(frameImg);
        size_t bpx = CGImageGetBitsPerPixel(frameImg);
        CGColorSpaceRef fclr = CGImageGetColorSpace(frameImg);

        // Create a bitmap image from data supplied by the data provider.
        CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow, rgbColorSpace,
                                           kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big,
                                           dataProvider, NULL, true, kCGRenderingIntentDefault);
        CGDataProviderRelease(dataProvider);

        // Draw the decoded frame into the pixel buffer that will be appended to the writer.
        stat = CVPixelBufferLockBaseAddress(finalPixelBuffer, 0);
        pxdata = CVPixelBufferGetBaseAddress(finalPixelBuffer);
        bytesPerRow = CVPixelBufferGetBytesPerRow(finalPixelBuffer);
        CGContextRef context = CGBitmapContextCreate(pxdata, imgsize.width, imgsize.height, 8,
                                                     bytesPerRow, rgbColorSpace, kCGImageAlphaNoneSkipLast);
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(frameImg), CGImageGetHeight(frameImg)), frameImg);
        //CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        //CGImageRef myMaskedImage;
        const float myMaskingColors[6] = { 0, 0, 0, 1, 0, 0 };
        CGImageRef myColorMaskedImage = CGImageCreateWithMaskingColors(cgImage, myMaskingColors);
        //CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(myColorMaskedImage), CGImageGetHeight(myColorMaskedImage)), myColorMaskedImage);

        [adaptor appendPixelBuffer:finalPixelBuffer withPresentationTime:presentTime];
    }
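
For context, the snippet above assumes that an AVAssetReader/AVAssetWriter pipeline has already been created elsewhere. A minimal setup sketch is shown below; the names asr, asset_reader_output and adaptor match the snippet, while srcURL, dstURL and videoSettings (the writer output settings, see the answer below) are assumptions for illustration only.

#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>

// Sketch: open the source movie and decode its video track to 32BGRA pixel buffers.
AVAsset *asset = [AVAsset assetWithURL:srcURL];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

NSError *error = nil;
AVAssetReader *asr = [AVAssetReader assetReaderWithAsset:asset error:&error];
NSDictionary *readerSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                      @(kCVPixelFormatType_32BGRA) };
AVAssetReaderTrackOutput *asset_reader_output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                               outputSettings:readerSettings];
[asr addOutput:asset_reader_output];

// Sketch: create the writer, its video input and the pixel buffer adaptor used above.
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:dstURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];
AVAssetWriterInput *writerInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                   sourcePixelBufferAttributes:readerSettings];
[writer addInput:writerInput];

[asr startReading];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];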

Best Answer

Well, the mystery seems to be solved. The problem was an inappropriate codec configuration. Here is the set of configuration options I use now, and it seems to do the job:

NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               [NSNumber numberWithInt:1100000], AVVideoAverageBitRateKey,
                               [NSNumber numberWithInt:5], AVVideoMaxKeyFrameIntervalKey,
                               nil];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:[SharedApplicationData sharedData].overlayView.frame.size.width], AVVideoWidthKey,
                               [NSNumber numberWithInt:[SharedApplicationData sharedData].overlayView.frame.size.height], AVVideoHeightKey,
                               codecSettings, AVVideoCompressionPropertiesKey,
                               nil];

AVAssetWriterInput *writerInput = [AVAssetWriterInput
                                   assetWriterInputWithMediaType:AVMediaTypeVideo
                                   outputSettings:videoSettings];

Now the file size still grows, but much more slowly. There is a trade-off between file size and video quality: reducing the size hurts the quality.
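
As an illustration of that trade-off (a sketch, not part of the original answer): a common heuristic is to scale AVVideoAverageBitRateKey with the frame area, i.e. pick a bits-per-pixel budget per frame. The helper below and its rough 0.1-0.2 bpp guideline are assumptions for illustration only.

#import <AVFoundation/AVFoundation.h>
#import <CoreGraphics/CoreGraphics.h>

// Sketch: derive an average bitrate from the frame size, the frame rate and a
// bits-per-pixel budget; around 0.1-0.2 bpp at 30 fps is a reasonable starting point.
static NSDictionary *CompressedVideoSettings(CGSize size, CGFloat bitsPerPixel) {
    NSInteger bitrate = (NSInteger)(size.width * size.height * 30.0 * bitsPerPixel);
    return @{
        AVVideoCodecKey  : AVVideoCodecH264,
        AVVideoWidthKey  : @((NSInteger)size.width),
        AVVideoHeightKey : @((NSInteger)size.height),
        AVVideoCompressionPropertiesKey : @{
            AVVideoAverageBitRateKey      : @(bitrate),
            AVVideoMaxKeyFrameIntervalKey : @(30)   // one keyframe per second at 30 fps
        }
    };
}

// Usage: a lower bitsPerPixel shrinks the file at the cost of quality.
// NSDictionary *videoSettings = CompressedVideoSettings(CGSizeMake(640, 480), 0.15);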

Regarding "ios - Why does AVAssetWriter bloat the video file?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/10767918/
