ios - Methods for inter-frame video compression in AVFoundation

Tags: ios video avfoundation swift3 video-compression

I have built a process that generates a video "slideshow" from a collection of photos and images in an app I am working on. The process works correctly, but it creates unnecessarily large files, because every photo included in the video is repeated unchanged for 100 to 150 frames. I have applied whatever compression I could find in AVFoundation, which mostly uses intra-frame techniques, and I have tried to find more information on inter-frame compression in AVFoundation. Unfortunately, all I could find were a few passing references, nothing that would let me get it working.

I am hoping someone can point me in the right direction. The code for the video generator is included below. I have not included the code that fetches and prepares the individual frames (called self.getFrame() below), because it seems to work fine and it gets fairly complex, since it handles photos and videos, adds title frames, and does fade transitions. For repeated frames it returns a struct containing the frame image and a counter for the number of frames to output (sketched just below).
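
For reference, here is a minimal sketch of what that struct might look like, inferred from how it is used in the code below; only the three members the code actually references are shown, and the real definition may differ:

    // Sketch of the struct returned by getFrame(), inferred from its usage below
    struct FramePack {
        let frame: CGImage      // The prepared frame image to output
        let noDisplays: Int     // How many times the frame should be output
        let isLast: Bool        // True when this is the final frame of the slideshow
    }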

        // Create a new AVAssetWriter Instance that will build the video

        assetWriter = createAssetWriter(path: filePathNew, size: videoSize!)
        guard assetWriter != nil else
        {
            print("Error converting images to video: AVAssetWriter not created.")
            inProcess = false
            return
        }

        let writerInput = assetWriter!.inputs.filter{ $0.mediaType == AVMediaTypeVideo }.first!

        let sourceBufferAttributes : [String : AnyObject] = [
            kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32ARGB) as AnyObject,
            kCVPixelBufferWidthKey as String : videoSize!.width as AnyObject,
            kCVPixelBufferHeightKey as String : videoSize!.height as AnyObject,
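            // NB: the two entries below are output-settings (AVVideoSettings) keys,
            // not pixel buffer attributes, so they are most likely ignored here -
            // compression settings belong in the AVAssetWriterInput's outputSettings
            // (see the sketch after this code block)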
            AVVideoMaxKeyFrameIntervalKey as String : 50 as AnyObject,
            AVVideoCompressionPropertiesKey as String : [
                AVVideoAverageBitRateKey: 725000,
                AVVideoProfileLevelKey: AVVideoProfileLevelH264Baseline30,
                ] as AnyObject
        ]

        let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: sourceBufferAttributes)

        // Start the writing session

        assetWriter!.startWriting()

        assetWriter!.startSession(atSourceTime: kCMTimeZero)

        if (pixelBufferAdaptor.pixelBufferPool == nil) {
            print("Error converting images to video: pixelBufferPool nil after starting session")
            inProcess = false
            return
        }

        // -- Create queue for <requestMediaDataWhenReadyOnQueue>

        let mediaQueue = DispatchQueue(label: "mediaInputQueue")

        // Initialize run time values

        var presentationTime = kCMTimeZero
        var done = false
        var nextFrame: FramePack?                // The FramePack struct has the frame to output, noDisplays - the number of times that it will be output
                                                 // and an isLast flag that is true when it's the final frame

        writerInput.requestMediaDataWhenReady(on: mediaQueue, using: { () -> Void in    // Keeps invoking the block to get input until call markAsFinished

            nextFrame = self.getFrame()          // Get the next frame to be added to the output with its associated values
            let imageCGOut = nextFrame!.frame    // The frame to output
            if nextFrame!.isLast { done = true } // Identifies the last frame so can drop through to markAsFinished() below

            var frames = 0                       // Counts how often we've output this frame
            var waitCount = 0                    // Used to avoid an infinite loop if there's trouble with writer.Input

            while (frames < nextFrame!.noDisplays) && (waitCount < 1000000)  // Need to wait for writerInput to be ready - count deals with potential hung writer
            {
                waitCount += 1
                if waitCount == 1000000     // Have seen it go into 100s of thousands and succeed
                {
                    print("Exceeded waitCount limit while attempting to output slideshow frame.")
                    self.inProcess = false
                    return
                }

                if (writerInput.isReadyForMoreMediaData)
                {
                    waitCount = 0
                    frames += 1

                    autoreleasepool
                        {
                            if  let pixelBufferPool = pixelBufferAdaptor.pixelBufferPool
                            {
                                let pixelBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.allocate(capacity: 1)
                                let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(
                                    kCFAllocatorDefault,
                                    pixelBufferPool,
                                    pixelBufferPointer
                                )

                                if let pixelBuffer = pixelBufferPointer.pointee, status == kCVReturnSuccess
                                {
                                    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
                                    let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer)
                                    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()

                                    // Set up a context for rendering using the PixelBuffer allocated above as the target

                                    let context = CGContext(
                                        data: pixelData,
                                        width: Int(self.videoWidth),
                                        height: Int(self.videoHeight),
                                        bitsPerComponent: 8,
                                        bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                        space: rgbColorSpace,
                                        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                    )

                                    // Draw the image into the PixelBuffer used for the context

                                    context?.draw(imageCGOut, in: CGRect(x: 0.0, y: 0.0, width: CGFloat(self.videoWidth), height: CGFloat(self.videoHeight)))   // Use the actual output size rather than a hard-coded 1280x720

                                    // Append the image (frame) from the context pixelBuffer onto the video file

                                    _ = pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
                                    presentationTime = CMTimeAdd(presentationTime, CMTimeMake(1, videoFPS))   // CMTime has no '+' operator in Swift 3

                                    // We're done with the PixelBuffer, so unlock it

                                    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
                                }

                                pixelBufferPointer.deinitialize()
                                pixelBufferPointer.deallocate(capacity: 1)

                            } else {
                                NSLog("Error: Failed to allocate pixel buffer from pool")
                            }
                    }
                }
            }

            if done     // Last frame handed off, so finish the input and the file
            {
                writerInput.markAsFinished()
                self.assetWriter!.endSession(atSourceTime: presentationTime)
                self.assetWriter!.finishWriting {
                    self.inProcess = false
                }
            }
        })
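
For context, createAssetWriter(path:size:) is not shown above, but a sketch along these lines would pair with the call at the top. The body here is an assumption, not the app's real function; the point is that the codec, dimensions, and AVVideoCompressionPropertiesKey go into the writer input's outputSettings, which is where AVFoundation actually reads them:

    // Hypothetical sketch - the app's real createAssetWriter(path:size:) is not shown
    func createAssetWriter(path: String, size: CGSize) -> AVAssetWriter? {
        guard let writer = try? AVAssetWriter(outputURL: URL(fileURLWithPath: path),
                                              fileType: AVFileTypeMPEG4) else { return nil }

        let outputSettings: [String : Any] = [
            AVVideoCodecKey  : AVVideoCodecH264,
            AVVideoWidthKey  : size.width,
            AVVideoHeightKey : size.height,
            AVVideoCompressionPropertiesKey : [
                AVVideoAverageBitRateKey      : 725000,
                AVVideoProfileLevelKey        : AVVideoProfileLevelH264Baseline30,
                AVVideoMaxKeyFrameIntervalKey : 50
            ]
        ]

        let input = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: outputSettings)
        input.expectsMediaDataInRealTime = false
        guard writer.canAdd(input) else { return nil }
        writer.add(input)
        return writer
    }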

Thanks in advance for any suggestions.

Best Answer

It looks like you are

  • adding a bunch of redundant frames to your video, and
  • labouring under a misapprehension: that video files must have a constant, high frame rate, e.g. 30 fps.

If, for example, you are showing a slideshow of 3 images over a duration of 15 seconds, then you only need to output 3 frames, with presentation timestamps of 0s, 5s and 10s, and an assetWriter.endSession(atSourceTime:) of 15s - not 15s * 30 FPS = 450 frames.

In other words, your frame rate is way too high. For the best inter-frame compression, lower your frame rate to the bare minimum number of frames you need and all will be well*. A sketch of this timing scheme follows below.

* I have seen some video services/players choke on unusually low frame rates, so you may need a minimum frame rate and some redundant frames, e.g. 1 frame per 5 seconds, ymmv.
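
As a concrete illustration, here is a minimal sketch of that timing scheme, assuming three prepared CGImages in an images array, five seconds per image, and a writerInput and assetWriter configured as in the question. appendImage(_:at:) is a hypothetical stand-in for the question's buffer-filling code (grab a buffer from the pool, draw the CGImage into it, then pixelBufferAdaptor.append(_:withPresentationTime:)); the isReadyForMoreMediaData check from the question still applies before each append:

    let secondsPerImage: Int64 = 5
    let timescale: Int32 = 600                    // Conventional media timescale

    for (index, image) in images.enumerated() {
        // One frame per image - the gap between presentation timestamps
        // (0s, 5s, 10s) is what keeps each image on screen
        let pts = CMTimeMake(Int64(index) * secondsPerImage * Int64(timescale), timescale)
        appendImage(image, at: pts)
    }

    // End the session where the last image should stop displaying, so it
    // gets its full 5 seconds, then finish the file
    let end = CMTimeMake(Int64(images.count) * secondsPerImage * Int64(timescale), timescale)
    writerInput.markAsFinished()
    assetWriter.endSession(atSourceTime: end)
    assetWriter.finishWriting {
        print("Wrote \(images.count) frames for a \(CMTimeGetSeconds(end))s slideshow")
    }

For a 15-second, 3-image slideshow this writes 3 frames instead of 450, which is where the real size saving comes from; no compression setting can recover that much redundancy on its own.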

Original question on Stack Overflow: https://stackoverflow.com/questions/39934290/
