ios - Adding an overlay to a video in Swift 3

Tags: ios swift video avfoundation

I'm learning AVFoundation, and I'm running into a problem trying to save a video with an overlay image in Swift 3. Using AVMutableComposition I can add the image to the video, but the video ends up zoomed in and doesn't constrain itself to the portrait size it was recorded in. I've tried:

  • Setting the natural size through AVAssetTrack.
  • Constraining the video to portrait size in the AVMutableVideoComposition renderFrame.
  • Locking the new video's bounds to the recorded video's width and height.

Apart from the issue I need help with, the code below does what I want. The image I'm adding covers the entire portrait view and has a border around the edges. The app also only allows portrait orientation.

func processVideoWithWatermark(video: AVURLAsset, watermark: UIImage, completion: @escaping (Bool) -> Void) {

    let composition = AVMutableComposition()
    let asset = AVURLAsset(url: video.url, options: nil)

    let track = asset.tracks(withMediaType: AVMediaTypeVideo)
    let videoTrack: AVAssetTrack = track[0]
    let timerange = CMTimeRangeMake(kCMTimeZero, asset.duration)

    let compositionVideoTrack:AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())

    do {
        try compositionVideoTrack.insertTimeRange(timerange, of: videoTrack, at: kCMTimeZero)
        compositionVideoTrack.preferredTransform = videoTrack.preferredTransform
    } catch {
        print(error)
    }

//      let compositionAudioTrack:AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
//      
//      for audioTrack in asset.tracks(withMediaType: AVMediaTypeAudio) {
//          do {
//              try compositionAudioTrack.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: kCMTimeZero)
//          } catch {
//              print(error)
//          }
//          
//      }
//      
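    // naturalSize is the track's raw pixel size, before its preferredTransform (rotation) is applied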
    let size = videoTrack.naturalSize

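    // screenWidth/screenHeight are assumed to be properties holding the device's screen dimensions (not shown)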
    let watermark = watermark.cgImage
    let watermarklayer = CALayer()
    watermarklayer.contents = watermark
    watermarklayer.frame = CGRect(x: 0, y: 0, width: screenWidth, height: screenHeight)
    watermarklayer.opacity = 1

    let videolayer = CALayer()
    videolayer.frame = CGRect(x: 0, y: 0, width: screenWidth, height: screenHeight)

    let parentlayer = CALayer()
    parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    parentlayer.addSublayer(videolayer)
    parentlayer.addSublayer(watermarklayer)

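    // The animation tool renders each video frame into videolayer, then rasterizes
    // the whole parentlayer tree (video + watermark) as the output frame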
    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(1, 30)
    layercomposition.renderSize = CGSize(width: screenWidth, height: screenHeight)
    layercomposition.renderScale = 1.0
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)

    let videotrack = composition.tracks(withMediaType: AVMediaTypeVideo)[0]
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)

    layerinstruction.setTransform(videoTrack.preferredTransform, at: kCMTimeZero)

    instruction.layerInstructions = [layerinstruction]
    layercomposition.instructions = [instruction]

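    // Export the composition to a temporary file, then save it to the photo library on success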
    let filePath = NSTemporaryDirectory() + self.fileName()
    let movieUrl = URL(fileURLWithPath: filePath)

    guard let assetExport = AVAssetExportSession(asset: composition, presetName:AVAssetExportPresetHighestQuality) else {return}
    assetExport.videoComposition = layercomposition
    assetExport.outputFileType = AVFileTypeMPEG4
    assetExport.outputURL = movieUrl

    assetExport.exportAsynchronously(completionHandler: {

        switch assetExport.status {
        case .completed:
            print("success")
            print(video.url)
            self.saveVideoToUserLibrary(fileURL: movieUrl, completion: { (success, error) in
                if success {
                    completion(true)
                } else {
                    completion(false)
                }
            })
        case .cancelled:
            print("cancelled")
            completion(false) // report cancellation back to the caller
        case .exporting:
            print("exporting")
        case .failed:
            print(video.url)
            print("failed: \(assetExport.error!)")
            completion(false) // report failure back to the caller
        case .unknown:
            print("unknown")
        case .waiting:
            print("waiting")
        }
    })

}

Best answer

The frame of your videolayer is not right if the video layer is supposed to fill the parent layer. You need to set its size to size instead of the screen size.
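For illustration, here is a minimal sketch of that change (assuming screenWidth/screenHeight in the question were the device's screen dimensions): the layers and the composition's renderSize are all derived from the track's naturalSize, so the exported video is not scaled to the screen:

    // Sketch of the suggested fix: size the layers and the render canvas
    // from the video track itself instead of from the screen.
    let size = videoTrack.naturalSize

    let videolayer = CALayer()
    videolayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)

    let watermarklayer = CALayer()
    watermarklayer.contents = watermark.cgImage
    watermarklayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)

    let parentlayer = CALayer()
    parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    parentlayer.addSublayer(videolayer)
    parentlayer.addSublayer(watermarklayer)

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(1, 30)
    layercomposition.renderSize = size
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)

Note that naturalSize is reported before preferredTransform is applied, so a portrait recording typically has a landscape naturalSize (e.g. 1920×1080) plus a 90° transform; in that case you may also need to swap width and height when computing the render size.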

On ios - Adding an overlay to a video in Swift 3, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/45535896/
