ios - How to add a camera preview view to three custom UIViews in iOS Swift

Tags: ios iphone video avfoundation video-processing

I need to create an application with video-processing features.

My requirement is to create 3 views, each with a camera preview layer. The first view should show the original captured video, the second should show a flipped version of it, and the last should show it with inverted colors.

I started development with this requirement. First I created the 3 views and the properties required for camera capture:

    @IBOutlet weak var captureView: UIView!
    @IBOutlet weak var flipView: UIView!
    @IBOutlet weak var invertView: UIView!
    
    // Camera capture required properties
    var videoDataOutput: AVCaptureVideoDataOutput!
    var videoDataOutputQueue: DispatchQueue!
    var previewLayer:AVCaptureVideoPreviewLayer!
    var captureDevice : AVCaptureDevice!
    let session = AVCaptureSession()
    var replicationLayer: CAReplicatorLayer!


Now I conform to AVCaptureVideoDataOutputSampleBufferDelegate in an extension and start the camera session:

extension ViewController:  AVCaptureVideoDataOutputSampleBufferDelegate{
    func setupAVCapture(){
        session.sessionPreset = AVCaptureSessionPreset640x480
        guard let device = AVCaptureDevice
            .defaultDevice(withDeviceType: .builtInWideAngleCamera,
                           mediaType: AVMediaTypeVideo,
                           position: .back) else{
                            return
        }
        captureDevice = device
        beginSession()
    }
    
    func beginSession(){
        // Create the device input, bailing out if the camera is unavailable
        let deviceInput: AVCaptureDeviceInput
        do {
            deviceInput = try AVCaptureDeviceInput(device: captureDevice)
        } catch let error as NSError {
            print("error: \(error.localizedDescription)")
            return
        }
        if self.session.canAddInput(deviceInput) {
            self.session.addInput(deviceInput)
        }
        
        // Configure the video data output on a dedicated serial queue
        videoDataOutput = AVCaptureVideoDataOutput()
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue")
        videoDataOutput.setSampleBufferDelegate(self, queue: self.videoDataOutputQueue)
        if session.canAddOutput(self.videoDataOutput){
            session.addOutput(self.videoDataOutput)
        }
        videoDataOutput.connection(withMediaType: AVMediaTypeVideo).isEnabled = true
        
        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
        self.previewLayer.frame = self.captureView.bounds
        self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspect
        
        self.replicationLayer = CAReplicatorLayer()
        self.replicationLayer.frame = self.captureView.bounds
        self.replicationLayer.instanceCount = 1
        // Each replicated instance is offset downwards by one view-height
        self.replicationLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.captureView.bounds.size.height, 0.0)
        
        self.replicationLayer.addSublayer(self.previewLayer)
        self.captureView.layer.addSublayer(self.replicationLayer)
        self.flipView.layer.addSublayer(self.replicationLayer)
        self.invertView.layer.addSublayer(self.replicationLayer)
        
        session.startRunning()
    }
    
    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        // do stuff here
    }
    
    // clean up AVCapture
    func stopCamera(){
        session.stopRunning()
    }
    
}

Here I used a CAReplicatorLayer to show the captured video in the 3 views. When I set self.replicationLayer.instanceCount to 1, I got the following output:

[Image: output with instanceCount = 1]

If I set self.replicationLayer.instanceCount to 3, I get this output:

[Image: output with instanceCount = 3]
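This output follows from two CALayer facts: a layer can have at most one superlayer, so the three addSublayer calls above simply move the replicator layer into the last view (invertView); and a CAReplicatorLayer draws its copies inside its own bounds, offset by instanceTransform, so it cannot distribute instances across sibling views. A minimal sketch of the replicator's actual behavior, reusing previewLayer and captureView from the code above:

    // All replicated copies live inside this one layer's coordinate space.
    let replicator = CAReplicatorLayer()
    replicator.frame = captureView.bounds
    replicator.instanceCount = 3
    // Each successive copy is shifted down by one view-height, so the
    // three copies stack vertically inside captureView only.
    replicator.instanceTransform = CATransform3DMakeTranslation(0.0, captureView.bounds.height, 0.0)
    replicator.addSublayer(previewLayer)
    captureView.layer.addSublayer(replicator)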

Please guide me on how to show the captured video in the 3 different views, and give me some ideas for converting the original captured video into the flipped and color-inverted versions. Thanks in advance.

Best Answer

Finally I found the answer with the help of the code in JohnnySlagle/Multiple-Camera-Feeds.

I created three views:

@property (weak, nonatomic) IBOutlet UIView *video1;
@property (weak, nonatomic) IBOutlet UIView *video2;
@property (weak, nonatomic) IBOutlet UIView *video3;

Then I slightly modified setupFeedViews:

- (void)setupFeedViews {
    NSUInteger numberOfFeedViews = 3;

    for (NSUInteger i = 0; i < numberOfFeedViews; i++) {
        // Create one feed view per container and tag it 1-3 for the filter switch below
        VideoFeedView *feedView = [self setupFeedViewWithFrame:CGRectMake(0, 0, self.video1.frame.size.width, self.video1.frame.size.height)];
        feedView.tag = i + 1;
        switch (i) {
            case 0:
                [self.video1 addSubview:feedView];
                break;
            case 1:
                [self.video2 addSubview:feedView];
                break;
            case 2:
                [self.video3 addSubview:feedView];
                break;
            default:
                break;
        }
        [self.feedViews addObject:feedView];
    }
}
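VideoFeedView is not defined in this snippet; in the linked Multiple-Camera-Feeds project it is a GLKView subclass whose viewBounds property caches the drawable size in pixels, since CIContext draws in pixel coordinates. A rough Swift sketch of that assumption (details inferred from the project, not authoritative):

    import GLKit

    class VideoFeedView: GLKView {
        // Drawable bounds in pixels; CIContext's draw(_:in:from:) uses pixels.
        var viewBounds: CGRect = .zero

        override func layoutSubviews() {
            super.layoutSubviews()
            viewBounds = CGRect(x: 0, y: 0,
                                width: CGFloat(drawableWidth),
                                height: CGFloat(drawableHeight))
        }
    }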

Then I apply the filters in the AVCaptureVideoDataOutputSampleBufferDelegate callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);

    // update the video dimensions information
    _currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];

    CGRect sourceExtent = sourceImage.extent;

    CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;


    for (VideoFeedView *feedView in self.feedViews) {
        CGFloat previewAspect = feedView.viewBounds.size.width / feedView.viewBounds.size.height;
        // we want to maintain the aspect ratio of the screen size, so we clip the video image
        CGRect drawRect = sourceExtent;
        if (sourceAspect > previewAspect) {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
            drawRect.size.width = drawRect.size.height * previewAspect;
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
            drawRect.size.height = drawRect.size.width / previewAspect;
        }
        [feedView bindDrawable];

        if (_eaglContext != [EAGLContext currentContext]) {
            [EAGLContext setCurrentContext:_eaglContext];
        }

        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // set the blend mode to "source over" so that CI will use that
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // This is necessary for non-power-of-two textures
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        if (feedView.tag == 1) {
            if (sourceImage) {
                [_ciContext drawImage:sourceImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        } else if (feedView.tag == 2) {
            // Vertical flip: mirror about the x-axis, then translate back into the extent.
            // Use a local image so sourceImage stays unmodified for the next feed view.
            CIImage *flippedImage = [sourceImage imageByApplyingTransform:CGAffineTransformMakeScale(1, -1)];
            flippedImage = [flippedImage imageByApplyingTransform:CGAffineTransformMakeTranslation(0, sourceExtent.size.height)];
            if (flippedImage) {
                [_ciContext drawImage:flippedImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        } else {
            CIFilter *effectFilter = [CIFilter filterWithName:@"CIColorInvert"];
            [effectFilter setValue:sourceImage forKey:kCIInputImageKey];
            CIImage *invertImage = [effectFilter outputImage];
            if (invertImage) {
                [_ciContext drawImage:invertImage inRect:feedView.viewBounds fromRect:drawRect];
            }
        }
        [feedView display];
    }
}

That's it. This successfully met my requirements.
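Since the question asked for Swift, here is a rough Swift 3-style sketch of just the three per-view drawing branches above, assuming the same setup (ciContext: CIContext, feedViews: [VideoFeedView], and sourceImage, sourceExtent, drawRect computed as in the Objective-C code):

    for feedView in feedViews {
        switch feedView.tag {
        case 1:
            // original feed
            ciContext.draw(sourceImage, in: feedView.viewBounds, from: drawRect)
        case 2:
            // vertical flip: mirror about the x-axis, then shift back into the extent
            var flipped = sourceImage.applying(CGAffineTransform(scaleX: 1, y: -1))
            flipped = flipped.applying(CGAffineTransform(translationX: 0, y: sourceExtent.size.height))
            ciContext.draw(flipped, in: feedView.viewBounds, from: drawRect)
        default:
            // color inversion via the built-in CIColorInvert filter
            let filter = CIFilter(name: "CIColorInvert")
            filter?.setValue(sourceImage, forKey: kCIInputImageKey)
            if let inverted = filter?.outputImage {
                ciContext.draw(inverted, in: feedView.viewBounds, from: drawRect)
            }
        }
        feedView.display()
    }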

The original question and answer for ios - How to add a camera preview view to three custom UIViews in iOS Swift can be found on Stack Overflow: https://stackoverflow.com/questions/43881197/
