Capturing video from the iPhone camera with AVCaptureSession and converting the CMSampleBufferRef to h.264 with ffmpeg is the problem. Please advise.

Tags: iphone ffmpeg video-streaming encode h.264

My goal is to stream h.264/AAC in mpeg2-ts from an iPhone device to a server.

Currently my source compiles successfully with FFmpeg + libx264. I am aware of the GNU license. I want a demo program.

I want to know:

1. Is the CMSampleBufferRef-to-AVPicture conversion succeeding?

 avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);

pFrame's linesize and data are not null, but its pts is -9233123123, and the same for outpic.
Because of this, I have to guess it is the cause of the 'non-strictly-monotonic PTS' message.

2. This log output repeats:

encoding frame (size= 0)
encoding frame = ""

'avcodec_encode_video' returning 0 means success, but the returned size is always 0.

I don't know what to do...
2011-06-01 15:15:14.199 AVCam[1993:7303] pFrame = avcodec_alloc_frame(); 
2011-06-01 15:15:14.207 AVCam[1993:7303] avpicture_fill = 1228800
Video encoding
2011-06-01 15:15:14.215 AVCam[1993:7303] codec = 5841844
[libx264 @ 0x1441e00] using cpu capabilities: ARMv6 NEON
[libx264 @ 0x1441e00] profile Constrained Baseline, level 2.0
[libx264 @ 0x1441e00] non-strictly-monotonic PTS
encoding frame (size=    0)
encoding frame 
[libx264 @ 0x1441e00] final ratefactor: 26.74

3. I have to guess that the 'non-strictly-monotonic PTS' message is the cause of all the problems.
What is this 'non-strictly-monotonic PTS'?

~~~~~~~~~ Here is the source code ~~~~~~~~~~~~~~~~~~~~~
- (void)        captureOutput:(AVCaptureOutput *)captureOutput 
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
               fromConnection:(AVCaptureConnection *)connection
{

    if( !CMSampleBufferDataIsReady(sampleBuffer) )
    {
        NSLog( @"sample buffer is not ready. Skipping sample" );
        return;
    }


    if( [isRecordingNow isEqualToString:@"YES"] )
    {
        lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        if( videoWriter.status != AVAssetWriterStatusWriting  )
        {
            [videoWriter startWriting];
            [videoWriter startSessionAtSourceTime:lastSampleTime];
        }

        if( captureOutput == videooutput )
        {
            [self newVideoSample:sampleBuffer];

            CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
            CVPixelBufferLockBaseAddress(pixelBuffer, 0); 

            // access the data 
            int width = CVPixelBufferGetWidth(pixelBuffer); 
            int height = CVPixelBufferGetHeight(pixelBuffer); 
            unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer); 

            AVFrame *pFrame; 
            pFrame = avcodec_alloc_frame(); 
            pFrame->quality = 0;

            NSLog(@"pFrame = avcodec_alloc_frame(); ");

//          int bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

//          int bytesSize = height * bytesPerRow ;  

//          unsigned char *pixel = (unsigned char*)malloc(bytesSize);

//          unsigned char *rowBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

//          memcpy (pixel, rowBase, bytesSize);


            int avpicture_fillNum = avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);//PIX_FMT_RGB32//PIX_FMT_RGB8
            //NSLog(@"rawPixelBase = %i , rawPixelBase -s = %s",rawPixelBase, rawPixelBase); 
            NSLog(@"avpicture_fill = %i",avpicture_fillNum);
            //NSLog(@"width = %i,height = %i",width, height);



            // Do something with the raw pixels here 

            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); 

            //avcodec_init();
            //avdevice_register_all();
            av_register_all();





            AVCodec *codec;
            AVCodecContext *c= NULL;
            int  out_size, size, outbuf_size;
            //FILE *f;
            uint8_t *outbuf;

            printf("Video encoding\n");

            /* find the H.264 video encoder */
            codec =avcodec_find_encoder(CODEC_ID_H264);//avcodec_find_encoder_by_name("libx264"); //avcodec_find_encoder(CODEC_ID_H264);//CODEC_ID_H264);
            NSLog(@"codec = %i",codec);
            if (!codec) {
                fprintf(stderr, "codec not found\n");
                exit(1);
            }

            c= avcodec_alloc_context();

            /* put sample parameters */
            c->bit_rate = 400000;
            c->bit_rate_tolerance = 10;
            c->me_method = 2;
            /* resolution must be a multiple of two */
            c->width = 352;//width;//352;
            c->height = 288;//height;//288;
            /* frames per second */
            c->time_base= (AVRational){1,25};
            c->gop_size = 10;//25; /* emit one intra frame every ten frames */
            //c->max_b_frames=1;
            c->pix_fmt = PIX_FMT_YUV420P;

            c ->me_range = 16;
            c ->max_qdiff = 4;
            c ->qmin = 10;
            c ->qmax = 51;
            c ->qcompress = 0.6f;

            /* open it */
            if (avcodec_open(c, codec) < 0) {
                fprintf(stderr, "could not open codec\n");
                exit(1);
            }


            /* alloc image and output buffer */
            outbuf_size = 100000;
            outbuf = malloc(outbuf_size);
            size = c->width * c->height;

            AVFrame* outpic = avcodec_alloc_frame();
            int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

            //create buffer for the output image
            uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

#pragma mark -  

            fflush(stdout);

//          int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
//          uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
//          
//          //UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"10%d", i]];
//          CGImageRef newCgImage = [self imageFromSampleBuffer:sampleBuffer];//[image CGImage];
//          
//          CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
//          CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
//          buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);   
//          
//          avpicture_fill((AVPicture*)pFrame, buffer, PIX_FMT_RGB8, c->width, c->height);
            avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

            struct SwsContext* fooContext = sws_getContext(c->width, c->height, 
                                                           PIX_FMT_RGB8, 
                                                           c->width, c->height, 
                                                           PIX_FMT_YUV420P, 
                                                           SWS_FAST_BILINEAR, NULL, NULL, NULL);

            //perform the conversion
            sws_scale(fooContext, pFrame->data, pFrame->linesize, 0, c->height, outpic->data, outpic->linesize);
            // Here is where I try to convert to YUV

            /* encode the image */

            out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
            printf("encoding frame (size=%5d)\n", out_size);
            printf("encoding frame %s\n", outbuf);


            //fwrite(outbuf, 1, out_size, f);

            //              free(buffer);
            //              buffer = NULL;      



            /* add sequence end code to have a real mpeg file */
//          outbuf[0] = 0x00;
//          outbuf[1] = 0x00;
//          outbuf[2] = 0x01;
//          outbuf[3] = 0xb7;
            //fwrite(outbuf, 1, 4, f);
            //fclose(f);
            free(outbuf);

            avcodec_close(c);
            av_free(c);
            av_free(pFrame);
            printf("\n");

Best answer

This is because you initialize the AVCodecContext on every `captureOutput:` iteration. The AVCodecContext keeps encoding information and state continuously across the frames as they arrive, so all of the initialization should be done only once per session, or again if the width, height, or anything else changes. This also saves you processing time. The messages you are getting are perfectly valid: they just inform you that the codec was opened and report codec-side details.

The original question on Stack Overflow: https://stackoverflow.com/questions/6197793/
