ios - How do I record video and simultaneously play it back with a delay of a few seconds using AVFoundation?

Tags: ios, avfoundation

I'm trying to have my Swift iOS app record video and play it back on the same screen with a 30-second delay.

I've been recording video based on the official example. I then added a button that triggers playback of self.movieFileOutput?.outputFileURL with an AVPlayer in a separate view on the same screen. This is close to what I want, but obviously playback stops once it reaches the end of what has been written to disk so far, and it doesn't resume when the next buffered chunk is written.
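
Roughly, the playback part I added looks like the sketch below (recordedURL and playbackView are placeholder names for the output file URL and the separate view):

import UIKit
import AVFoundation

// Minimal sketch of the playback side: point an AVPlayer at the file being
// written by the movie file output and show it in a separate view.
// `recordedURL` and `playbackView` are placeholder names.
func startPlayback(of recordedURL: URL, in playbackView: UIView) {
    let player = AVPlayer(url: recordedURL)
    let playerLayer = AVPlayerLayer(player: player)
    playerLayer.frame = playbackView.bounds
    playbackView.layer.addSublayer(playerLayer)
    player.play() // plays up to the current end of the file, then stops
}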

I could stop the recording every 30 seconds and save each file's URL so I can play it back, but that would interrupt both the video capture and the playback.
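
Concretely, that workaround would look something like the sketch below (ChunkRecorder is an illustrative name, and it assumes the capture session is already configured with this movie file output attached); the gap comes from the stop/start boundary:

import AVFoundation

// Rough sketch of the stop-every-30-seconds workaround (illustrative only).
final class ChunkRecorder {
    let movieFileOutput = AVCaptureMovieFileOutput()
    var chunkURLs: [URL] = []
    var chunkNumber = 0

    func rotateChunk(delegate: AVCaptureFileOutputRecordingDelegate) {
        // Frames that arrive between stopRecording() and the next
        // startRecording(to:recordingDelegate:) are dropped, and the finished
        // file can only be handed to the player once recording has stopped,
        // so both capture and playback are interrupted at every boundary.
        if movieFileOutput.isRecording {
            movieFileOutput.stopRecording()
        }
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("chunk\(chunkNumber).mov")
        chunkNumber += 1
        chunkURLs.append(url)
        movieFileOutput.startRecording(to: url, recordingDelegate: delegate)
    }
}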

How can I keep recording video without ever stopping, while playback always appears on screen at whatever delay I choose?

I've seen a similar question where all the answers point to the AVFoundation documentation, but I can't find how to make AVFoundation write predictable chunks of video from memory to disk while recording.

Best Answer

You can achieve what you want by recording 30-second chunks of video and enqueuing them into an AVQueuePlayer for seamless playback. Recording the video chunks would be very easy with AVCaptureFileOutput on macOS, but sadly on iOS you can't start a new chunk without dropping frames, so you have to use the more verbose, lower-level AVAssetWriter API:

import UIKit
import AVFoundation

// TODO: delete old videos
// TODO: audio

class ViewController: UIViewController {
    // capture
    let captureSession = AVCaptureSession()

    // playback
    let player = AVQueuePlayer()
    var playerLayer: AVPlayerLayer! = nil

    // output. sadly not AVCaptureMovieFileOutput
    var assetWriter: AVAssetWriter! = nil
    var assetWriterInput: AVAssetWriterInput! = nil

    var chunkNumber = 0
    var chunkStartTime: CMTime! = nil
    var chunkOutputURL: URL! = nil

    override func viewDidLoad() {
        super.viewDidLoad()

        playerLayer = AVPlayerLayer(player: player)
        view.layer.addSublayer(playerLayer)

        // inputs
        let videoCaptureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
        let videoInput = try! AVCaptureDeviceInput(device: videoCaptureDevice)
        captureSession.addInput(videoInput)

        // outputs
        // iOS AVCaptureFileOutput/AVCaptureMovieFileOutput still don't support dynamically
        // switching files (?) so we have to re-implement with AVAssetWriter
        let videoOutput = AVCaptureVideoDataOutput()
        // TODO: probably something else
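        // (delivering buffers on the main queue keeps the example short; a dedicated serial queue is generally preferred)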
        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)
        captureSession.addOutput(videoOutput)

        captureSession.startRunning()
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        playerLayer.frame = view.layer.bounds
    }

    func createWriterInput(for presentationTimeStamp: CMTime) {
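        // Start a fresh AVAssetWriter/AVAssetWriterInput pair for the next chunk
        // file, and begin its writing session at the given presentation timestamp.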
        let fileManager = FileManager.default
        chunkOutputURL = fileManager.urls(for: .documentDirectory, in: .userDomainMask)[0].appendingPathComponent("chunk\(chunkNumber).mov")
        try? fileManager.removeItem(at: chunkOutputURL)

        assetWriter = try! AVAssetWriter(outputURL: chunkOutputURL, fileType: AVFileTypeQuickTimeMovie)
        // TODO: get dimensions from image CMSampleBufferGetImageBuffer(sampleBuffer)
        let outputSettings: [String: Any] = [AVVideoCodecKey:AVVideoCodecH264, AVVideoWidthKey: 1920, AVVideoHeightKey: 1080]
        assetWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: outputSettings)
        assetWriterInput.expectsMediaDataInRealTime = true
        assetWriter.add(assetWriterInput)

        chunkNumber += 1
        chunkStartTime = presentationTimeStamp

        assetWriter.startWriting()
        assetWriter.startSession(atSourceTime: chunkStartTime)
    }
}

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        let presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

        if assetWriter == nil {
            createWriterInput(for: presentationTimeStamp)
        } else {
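            // Rotate chunks: once the current chunk exceeds 30 seconds, finish it,
            // queue the completed file for playback, and start writing the next chunk.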
            let chunkDuration = CMTimeGetSeconds(CMTimeSubtract(presentationTimeStamp, chunkStartTime))

            if chunkDuration > 30 {
                assetWriter.endSession(atSourceTime: presentationTimeStamp)

                // make a copy, as finishWriting is asynchronous
                let newChunkURL = chunkOutputURL!
                let chunkAssetWriter = assetWriter!

                chunkAssetWriter.finishWriting {
                    print("finishWriting says: \(chunkAssetWriter.status.rawValue, chunkAssetWriter.error)")
                    print("queuing \(newChunkURL)")
                    self.player.insert(AVPlayerItem(url: newChunkURL), after: nil)
                    self.player.play()
                }
                createWriterInput(for: presentationTimeStamp)
            }
        }

        if !assetWriterInput.append(sampleBuffer) {
            print("append says NO: \(assetWriter.status.rawValue, assetWriter.error)")
        }
    }
}

P.S. Very curious to know what you were doing 30 seconds ago. What exactly are you building?

Regarding "ios - How do I record video and simultaneously play it back with a delay of a few seconds using AVFoundation?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/45900415/
