ios - Swift: Merge an Array of AVAsset Videos

Tags: ios arrays video avasset avmutablecomposition

I want to merge an array of AVAssets (arrayVideos) into a single video and save it to the camera roll. Raywenderlich.com has a great tutorial on merging two videos into one. I wrote the following code, but the video I get after exporting to the camera roll contains only the first and last videos of the array (the videos in the middle of arrayVideos are missing). Am I overlooking something here?

var arrayVideos = [AVAsset]() //Videos Array    
var atTimeM: CMTime = CMTimeMake(0, 0)
var lastAsset: AVAsset!
var layerInstructionsArray = [AVVideoCompositionLayerInstruction]()
var completeTrackDuration: CMTime = CMTimeMake(0, 1)
var videoSize: CGSize = CGSize(width: 0.0, height: 0.0)

func mergeVideoArray(){

    let mixComposition = AVMutableComposition()
    for videoAsset in arrayVideos{
        let videoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        do {
            if videoAsset == arrayVideos.first{
                atTimeM = kCMTimeZero
            } else{
                atTimeM = lastAsset!.duration
            }
            try videoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), of: videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0], at: atTimeM)  
            videoSize = videoTrack.naturalSize
        } catch let error as NSError {
            print("error: \(error)")
        }
        completeTrackDuration = CMTimeAdd(completeTrackDuration, videoAsset.duration)
        let videoInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
        if videoAsset != arrayVideos.last{
            videoInstruction.setOpacity(0.0, at: videoAsset.duration)
        }
        layerInstructionsArray.append(videoInstruction)
        lastAsset = videoAsset            
    }

    let mainInstruction = AVMutableVideoCompositionInstruction()
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, completeTrackDuration)
    mainInstruction.layerInstructions = layerInstructionsArray        

    let mainComposition = AVMutableVideoComposition()
    mainComposition.instructions = [mainInstruction]
    mainComposition.frameDuration = CMTimeMake(1, 30)
    mainComposition.renderSize = CGSize(width: videoSize.width, height: videoSize.height)

    let documentDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let dateFormatter = DateFormatter()
    dateFormatter.dateStyle = .long
    dateFormatter.timeStyle = .short
    let date = dateFormatter.string(from: NSDate() as Date)
    let savePath = (documentDirectory as NSString).appendingPathComponent("mergeVideo-\(date).mov")
    let url = NSURL(fileURLWithPath: savePath)

    let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
    exporter!.outputURL = url as URL
    exporter!.outputFileType = AVFileTypeQuickTimeMovie
    exporter!.shouldOptimizeForNetworkUse = true
    exporter!.videoComposition = mainComposition
    exporter!.exportAsynchronously {

        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: exporter!.outputURL!)
        }) { saved, error in
            if saved {
                let alertController = UIAlertController(title: "Your video was successfully saved", message: nil, preferredStyle: .alert)
                let defaultAction = UIAlertAction(title: "OK", style: .default, handler: nil)
                alertController.addAction(defaultAction)
                self.present(alertController, animated: true, completion: nil)
            } else{
                print("video error: \(error)")

            }
        }
    }
} 

Best Answer

You need to keep track of the total duration of all the assets processed so far and update it for each video.

The code in your question overwrites atTimeM with only the duration of the current video, so every clip after the first is inserted at the same start time. That's why only the first and last videos show up.

It should look like this:

...
var totalTime : CMTime = CMTimeMake(0, 1) // A valid (nonzero) timescale is required for CMTimeAdd.

func mergeVideoArray() {

    let mixComposition = AVMutableComposition()
    for videoAsset in arrayVideos {
        let videoTrack = 
            mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, 
                                           preferredTrackID: Int32(kCMPersistentTrackID_Invalid))          
        do {
            if videoAsset == arrayVideos.first {
                atTimeM = kCMTimeZero
            } else {
                atTimeM = totalTime // <-- Use the total time for all the videos seen so far.
            }
            try videoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), 
                                           of: videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0], 
                                           at: atTimeM)  
            videoSize = videoTrack.naturalSize
        } catch let error as NSError {
            print("error: \(error)")
        }
        totalTime = CMTimeAdd(totalTime, videoAsset.duration) // <-- Update the total time for all videos.
...

You can remove the use of lastAsset entirely.
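The bookkeeping behind the fix can be sketched without AVFoundation at all. The following is a minimal model (the function name and the use of plain Double durations in place of CMTime are illustrative, not part of the original answer): each clip must be inserted at the running total of all previous durations, not at the previous clip's duration.

```swift
// Minimal model of the fix: compute the insertion (start) time of each
// clip from the accumulated durations of the clips before it.
// Durations are plain Doubles (seconds) standing in for CMTime values.
func insertionTimes(for durations: [Double]) -> [Double] {
    var totalTime = 0.0
    var starts: [Double] = []
    for duration in durations {
        starts.append(totalTime)   // insert this clip at the accumulated time
        totalTime += duration      // then advance by the clip's duration
    }
    return starts
}

// With three 5-second clips, the clips start at 0, 5, and 10 seconds.
// The buggy version (inserting at the *previous* asset's duration) would
// place every clip after the first at t = 5, stacking clips 2 and 3 on
// top of each other - which is why only the first and last were visible.
```

The same accumulation is what CMTimeAdd does in the answer's loop, just with rational CMTime values instead of Doubles.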

Regarding "ios - Swift: Merge an Array of AVAsset Videos", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/38972829/
