iphone - AVCaptureSession only gets one video buffer

Tags: iphone audio video camera avassetwriter

I am trying to capture video and audio from the iPhone camera and write them out to a video file with AVAssetWriter, but the output file only contains the first video frame together with the audio. I have checked the AVCaptureSession delegate method,

 - (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection 
{ 

and it seems that the delegate method receives only a single video sample buffer at the very start, and from then on it keeps receiving audio sample buffers, as the trace log shows:

- Video SampleBuffer captured!
- Audio SampleBuffer captured!
- Audio SampleBuffer captured!
- Audio SampleBuffer captured!

Here is the code where I set up the audio/video capture inputs and outputs:

// Initialize the audio/video capture device components
NSError *error = nil;

// Setup the video input
videoDevice = [AVCaptureDevice defaultDeviceWithMediaType: AVMediaTypeVideo];
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];

// Setup the video output
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.alwaysDiscardsLateVideoFrames = NO;
videoOutput.minFrameDuration = CMTimeMake(20, 600);
videoOutput.videoSettings =
[NSDictionary dictionaryWithObject:
 [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];     

// Setup the audio input
audioDevice     = [AVCaptureDevice defaultDeviceWithMediaType: AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error ];     
// Setup the audio output
audioOutput = [[AVCaptureAudioDataOutput alloc] init];

// Create the session
captureSession = [[AVCaptureSession alloc] init];
[captureSession addInput:videoInput];
[captureSession addInput:audioInput];
[captureSession addOutput:videoOutput];
[captureSession addOutput:audioOutput];

captureSession.sessionPreset = AVCaptureSessionPreset640x480;     

// Setup the queue
dispatch_queue_t videoBufferQueue = dispatch_queue_create("videoBufferQueue", NULL);
// dispatch_queue_t audioBufferQueue = dispatch_get_global_queue("audioBufferQueue",0);
[videoOutput setSampleBufferDelegate:self queue:videoBufferQueue];
[audioOutput setSampleBufferDelegate:self queue:videoBufferQueue];
dispatch_release(videoBufferQueue);
//  dispatch_release(audioBufferQueue);

Here is the code where I set up the AVAssetWriter and the AVAssetWriterInputs:

     NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

        // Add video input
        NSDictionary *videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
                                               [NSNumber numberWithDouble:128.0*1024.0], AVVideoAverageBitRateKey,
                                               nil ];

        NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                       AVVideoCodecH264, AVVideoCodecKey,
                                       [NSNumber numberWithInt:480], AVVideoWidthKey,
                                       [NSNumber numberWithInt:320], AVVideoHeightKey,
                                       //videoCompressionProps, AVVideoCompressionPropertiesKey,
                                       nil];

        videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                               outputSettings:videoSettings];


        NSParameterAssert(videoWriterInput);
        videoWriterInput.expectsMediaDataInRealTime = YES;


        // Add the audio input
        AudioChannelLayout acl;
        bzero( &acl, sizeof(acl));
        acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;


        NSDictionary* audioOutputSettings = nil;          
       audioOutputSettings = [ NSDictionary dictionaryWithObjectsAndKeys:                       
                                   [ NSNumber numberWithInt:kAudioFormatAppleLossless ], AVFormatIDKey,
                                   [ NSNumber numberWithInt: 16 ], AVEncoderBitDepthHintKey,
                                   [ NSNumber numberWithFloat: 44100.0 ], AVSampleRateKey,
                                   [ NSNumber numberWithInt: 1 ], AVNumberOfChannelsKey,                                      
                                   [ NSData dataWithBytes: &acl length: sizeof( acl ) ], AVChannelLayoutKey,
                                   nil ];

        audioWriterInput = [AVAssetWriterInput 
                             assetWriterInputWithMediaType: AVMediaTypeAudio 
                             outputSettings: audioOutputSettings ];

        audioWriterInput.expectsMediaDataInRealTime = YES;

         NSError *error = nil;
        NSString *betaCompressionDirectory = [NSHomeDirectory() stringByAppendingPathComponent:videoURL];    
        unlink([betaCompressionDirectory UTF8String]);

        videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:betaCompressionDirectory]
                                                  fileType:AVFileTypeQuickTimeMovie
                                                     error:&error];

        if(error)
            NSLog(@"error = %@", [error localizedDescription]);


        // add input
        [videoWriter addInput:videoWriterInput];
        [videoWriter addInput:audioWriterInput];

The code that starts the capture:

NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                           //[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], 
                                                           [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange],
                                                           kCVPixelBufferPixelFormatTypeKey, nil];

    adaptor = [[AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                                                                sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary] retain];

    NSLog(@"Adaptor init finished. Going to start capture Session...");

    /*We start the capture*/

    [self.captureSession startRunning]; 

The code from the AVCaptureSession delegate captureOutput: method:

lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if( !CMSampleBufferDataIsReady(sampleBuffer) )
    {
        NSLog( @"sample buffer is not ready. Skipping sample" );
        return;
    }
    if( isRecording == YES )
    {
        switch (videoWriter.status) {
            case AVAssetWriterStatusUnknown:
                NSLog(@"First time execute");
                if (CMTimeCompare(lastSampleTime, kCMTimeZero) == 0) {
                    lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
                }

                [videoWriter startWriting];
                [videoWriter startSessionAtSourceTime:lastSampleTime];

                //Break if not ready, otherwise fall through.
                if (videoWriter.status != AVAssetWriterStatusWriting) {
                    break ;
                }

            case AVAssetWriterStatusWriting:
                if( captureOutput == audioOutput) {
                    NSLog(@"Audio Buffer capped!");
                    if( ![audioWriterInput isReadyForMoreMediaData]) { 
                        break;
                    }

                    @try {
                        if( ![audioWriterInput appendSampleBuffer:sampleBuffer] ) {
                            NSLog(@"Audio Writing Error");
                        } else {
                            [NSThread sleepForTimeInterval:0.03];
                        } 
                    }
                    @catch (NSException *e) {
                        NSLog(@"Audio Exception: %@", [e reason]);
                    }
                }
                else if( captureOutput == videoOutput ) {
                    NSLog(@"Video Buffer capped!");

                    if( ![videoWriterInput isReadyForMoreMediaData]) { 
                        break;
                    }

                    @try {
                        CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                        CMTime frameTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
                        if (buffer)
                        {
                            if ([videoWriterInput isReadyForMoreMediaData]) {
                                if (![adaptor appendPixelBuffer:buffer withPresentationTime:frameTime]) { //CMTimeMake(frame, fps)
                                    NSLog(@"FAIL");
                                } else {
                                    [NSThread sleepForTimeInterval:0.03];

                                    //  NSLog(@"Success:%d, Time diff with Zero: ", frame);
                                    //  CMTimeShow(frameTime);
                                }
                            } else {
                                NSLog(@"video writer input not ready for more data, skipping frame");
                            }
                        }
                        frame++;
                    }
                    @catch (NSException *e) {
                        NSLog(@"Video Exception Exception: %@", [e reason]);
                    }
                }

                break;
            case AVAssetWriterStatusCompleted:
                return;
            case AVAssetWriterStatusFailed: 
                NSLog(@"Critical Error Writing Queues");
                // bufferWriter->writer_failed = YES ;
                // _broadcastError = YES;
                return;
            case AVAssetWriterStatusCancelled:
                break;
            default:
                break;
        }

    }

Best Answer

The capture session stopped delivering the output audio sample buffers when it took too long to process the video output; that was my case. The video and audio output buffers are delivered to you on the same queue, so you have to leave enough time to process both before the next buffer arrives.

Most likely, this line is the cause: [NSThread sleepForTimeInterval:0.03];
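
Based on that observation, here is a minimal sketch (not part of the original answer) of one way to keep the delegate responsive: remove the sleepForTimeInterval: calls and give each data output its own serial dispatch queue, so a slow video append cannot starve audio delivery, and vice versa. The queue labels below are illustrative, and the rest assumes the same videoOutput/audioOutput/adaptor/writer-input variables from the question.

// Sketch: one serial queue per output instead of a shared videoBufferQueue
// (queue labels are made up for illustration).
dispatch_queue_t videoQueue = dispatch_queue_create("videoQueue", NULL);
dispatch_queue_t audioQueue = dispatch_queue_create("audioQueue", NULL);
[videoOutput setSampleBufferDelegate:self queue:videoQueue];
[audioOutput setSampleBufferDelegate:self queue:audioQueue];
dispatch_release(videoQueue);
dispatch_release(audioQueue);

// Inside captureOutput:didOutputSampleBuffer:fromConnection:, append and return
// immediately; no sleepForTimeInterval:. The isReadyForMoreMediaData check already
// provides the back-pressure that the sleep was trying to simulate.
if (captureOutput == videoOutput && [videoWriterInput isReadyForMoreMediaData]) {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CMTime frameTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (pixelBuffer && ![adaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime]) {
        NSLog(@"video append failed");
    }
} else if (captureOutput == audioOutput && [audioWriterInput isReadyForMoreMediaData]) {
    if (![audioWriterInput appendSampleBuffer:sampleBuffer]) {
        NSLog(@"audio append failed");
    }
}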

Regarding iphone - AVCaptureSession only gets one video buffer, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/9257052/
