ios - How to create a CIImage from AVCaptureStillImageOutput in Swift?

Tags: ios objective-c swift avcaptureoutput

So I am using some code that does this in Objective-C, and I have been converting it over to Swift, but I am struggling to create a CIImage from an AVCaptureStillImageOutput. If someone could look over this code and tell me where I am going wrong, that would be great.

Here is the Objective-C code:

- (void)captureImageWithCompletionHander:(void(^)(NSString *fullPath))completionHandler
{ 
dispatch_suspend(_captureQueue); 

AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in self.stillImageOutput.connections)
{
    for (AVCaptureInputPort *port in connection.inputPorts)
    {
        if ([port.mediaType isEqual:AVMediaTypeVideo] )
        {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) break;
}

__weak typeof(self) weakSelf = self;

[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
 {
     if (error)
     {
         dispatch_resume(_captureQueue);
         return;
     }

     __block NSArray *filePath = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); //create an array and store result of our search for the documents directory in it

     NSString *documentsDirectory = [filePath objectAtIndex:0]; //create NSString object, that holds our exact path to the documents directory

     NSString *fullPath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"/iScan_img_%i.pdf",(int)[NSDate date].timeIntervalSince1970]];


     @autoreleasepool
     {
         NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
         CIImage *enhancedImage = [[CIImage alloc] initWithData:imageData options:@{kCIImageColorSpace:[NSNull null]}];
         imageData = nil;

         if (weakSelf.cameraViewType == DocScannerCameraViewTypeBlackAndWhite)
         {
             enhancedImage = [self filteredImageUsingEnhanceFilterOnImage:enhancedImage];
         }
         else
         {
             enhancedImage = [self filteredImageUsingContrastFilterOnImage:enhancedImage];
         }

         if (weakSelf.isBorderDetectionEnabled && rectangleDetectionConfidenceHighEnough(_imageDedectionConfidence))
         {
             CIRectangleFeature *rectangleFeature = [self biggestRectangleInRectangles:[[self highAccuracyRectangleDetector] featuresInImage:enhancedImage]];

             if (rectangleFeature)
             {
                 enhancedImage = [self correctPerspectiveForImage:enhancedImage withFeatures:rectangleFeature];
             }
         }

         CIFilter *transform = [CIFilter filterWithName:@"CIAffineTransform"];
         [transform setValue:enhancedImage forKey:kCIInputImageKey];
         NSValue *rotation = [NSValue valueWithCGAffineTransform:CGAffineTransformMakeRotation(-90 * (M_PI/180))];
         [transform setValue:rotation forKey:@"inputTransform"];
         enhancedImage = transform.outputImage;

         if (!enhancedImage || CGRectIsEmpty(enhancedImage.extent)) return;

         static CIContext *ctx = nil;
         if (!ctx)
         {
             ctx = [CIContext contextWithOptions:@{kCIContextWorkingColorSpace:[NSNull null]}];
         }

         CGSize bounds = enhancedImage.extent.size;
         bounds = CGSizeMake(floorf(bounds.width / 4) * 4,floorf(bounds.height / 4) * 4);
         CGRect extent = CGRectMake(enhancedImage.extent.origin.x, enhancedImage.extent.origin.y, bounds.width, bounds.height);

         static int bytesPerPixel = 8;
         uint rowBytes = bytesPerPixel * bounds.width;
         uint totalBytes = rowBytes * bounds.height;
         uint8_t *byteBuffer = malloc(totalBytes);

         CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

         [ctx render:enhancedImage toBitmap:byteBuffer rowBytes:rowBytes bounds:extent format:kCIFormatRGBA8 colorSpace:colorSpace];

         CGContextRef bitmapContext = CGBitmapContextCreate(byteBuffer,bounds.width,bounds.height,bytesPerPixel,rowBytes,colorSpace,kCGImageAlphaNoneSkipLast);
         CGImageRef imgRef = CGBitmapContextCreateImage(bitmapContext);
         CGColorSpaceRelease(colorSpace);
         CGContextRelease(bitmapContext);
         free(byteBuffer);

         if (imgRef == NULL)
         {
             CFRelease(imgRef);
             return;
         }
         saveCGImageAsJPEGToFilePath(imgRef, fullPath);



         CFRelease(imgRef);

         dispatch_async(dispatch_get_main_queue(), ^
                        {
                            completionHandler(fullPath);

                            dispatch_resume(_captureQueue);
                        });

         _imageDedectionConfidence = 0.0f;
     }
 }];
}

Now basically it captures the image and, if certain if statements pass, it perspective-corrects the image to the contents of the detected CIRectangleFeature, then converts the CIImage to a CGImage to be used in the save function.
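At its core, the step I am trying to port is just the sample-buffer-to-CIImage conversion. A minimal sketch of that step on its own (same AVCaptureStillImageOutput API as above, deprecated since iOS 10; the helper name is just for illustration):

func ciImage(fromStillImage sampleBuffer: CMSampleBuffer?) -> CIImage? {
    guard let buffer = sampleBuffer,
        let jpegData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buffer) else {
            return nil
    }
    // NSNull for kCIImageColorSpace disables color management, matching the Objective-C code
    return CIImage(data: jpegData, options: [kCIImageColorSpace: NSNull()])
}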

I have translated it to this:

func captureImage(completionHandler: @escaping (_ imageFilePath: String) -> Void) {

    self.captureQueue?.suspend()
    var videoConnection: AVCaptureConnection!
    for connection in self.stillImageOutput.connections{
        for port in (connection as! AVCaptureConnection).inputPorts {
            if (port as! AVCaptureInputPort).mediaType.isEqual(AVMediaTypeVideo) {
                videoConnection = connection as! AVCaptureConnection
                break
            }
        }
        if videoConnection != nil {
            break
        }
    }
    weak var weakSelf = self
    self.stillImageOutput.captureStillImageAsynchronously(from: videoConnection) { (sampleBuffer, error) -> Void in
        if error != nil {
            self.captureQueue?.resume()
            return
        }
        let filePath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
        let documentsDirectory: String = filePath[0]
        let fullPath: String = URL(fileURLWithPath: documentsDirectory).appendingPathComponent("iScan_img_\(Int(Date().timeIntervalSince1970)).pdf").absoluteString
        autoreleasepool {
            let imageData = Data(AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer))
            var enhancedImage = CIImage(data: imageData, options: [kCIImageColorSpace: NSNull()])


            if weakSelf?.cameraViewType == DocScannerCameraViewType.blackAndWhite {
                enhancedImage = self.filteredImageUsingEnhanceFilter(on: enhancedImage!)
            }
            else {
                enhancedImage = self.filteredImageUsingContrastFilter(on: enhancedImage!)
            }
            if (weakSelf?.isEnableBorderDetection == true) && self.rectangleDetectionConfidenceHighEnough(confidence: self.imageDedectionConfidence) {
                let rectangleFeature: CIRectangleFeature? = self.biggestRectangles(rectangles: self.highAccuracyRectangleDetector().features(in: enhancedImage!))
                if rectangleFeature != nil {
                    enhancedImage = self.correctPerspective(for: enhancedImage!, withFeatures: rectangleFeature!)
                }
            }
            let transform = CIFilter(name: "CIAffineTransform")
            let rotation = NSValue(cgAffineTransform: CGAffineTransform(rotationAngle: -90 * (.pi / 180)))
            transform?.setValue(rotation, forKey: "inputTransform")
            enhancedImage = transform?.outputImage
            if (enhancedImage == nil) || (enhancedImage?.extent.isEmpty)! {
                return
            }
            var ctx: CIContext?
            if (ctx != nil) {
                ctx = CIContext(options: [kCIContextWorkingColorSpace: NSNull()])
            }
            var bounds: CGSize = (enhancedImage?.extent.size)!
            bounds = CGSize(width: CGFloat((floorf(Float(bounds.width)) / 4) * 4), height: CGFloat((floorf(Float(bounds.height)) / 4) * 4))
            let extent = CGRect(x: CGFloat((enhancedImage?.extent.origin.x)!), y: CGFloat((enhancedImage?.extent.origin.y)!), width: CGFloat(bounds.width), height: CGFloat(bounds.height))
            let bytesPerPixel: CGFloat = 8
            let rowBytes = bytesPerPixel * bounds.width
            let totalBytes = rowBytes * bounds.height
            let byteBuffer = malloc(Int(totalBytes))
            let colorSpace = CGColorSpaceCreateDeviceRGB()
            ctx!.render(enhancedImage!, toBitmap: byteBuffer!, rowBytes: Int(rowBytes), bounds: extent, format: kCIFormatRGBA8, colorSpace: colorSpace)
            let bitmapContext = CGContext(data: byteBuffer, width: Int(bounds.width), height: Int(bounds.height), bitsPerComponent: Int(bytesPerPixel), bytesPerRow: Int(rowBytes), space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)
            let imgRef = bitmapContext?.makeImage()
            free(byteBuffer)

            self.saveCGImageAsJPEGToFilePath(imgRef: imgRef!, filePath: fullPath)
            DispatchQueue.main.async(execute: {() -> Void in
                completionHandler(fullPath)
                self.captureQueue?.resume()
            })
            self.imageDedectionConfidence = 0.0
        }
    }
}

So it takes the capture from the AVCaptureStillImageOutput, converts it to a CIImage for all the intermediate processing, and then converts that to a CGImage for saving. What exactly am I doing wrong in the translation? Or is there a better way to do it?
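(One alternative I have wondered about but not tested: if the stillImageOutput's outputSettings request an uncompressed pixel format such as kCVPixelFormatType_32BGRA, the JPEG round-trip can be skipped entirely and the CIImage built straight from the sample buffer's pixel buffer. A sketch, helper name again just for illustration:)

func ciImage(fromUncompressed sampleBuffer: CMSampleBuffer?) -> CIImage? {
    // Requires the still image output to deliver an uncompressed pixel buffer, e.g.
    // outputSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    guard let buffer = sampleBuffer,
        let pixelBuffer = CMSampleBufferGetImageBuffer(buffer) else {
            return nil
    }
    return CIImage(cvPixelBuffer: pixelBuffer)
}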

I really hate to ask, but I can't seem to find any question like this one, or at least any that involves capturing from an AVCaptureStillImageOutput as a CIImage.

Thanks for any help!

Best answer

Here is the correct translation into Swift. Thanks again to Prientus for helping me find my mistakes. Comparing against my attempt above, the key fixes are: the CIAffineTransform filter was never given its input image (the transform?.setValue(enhancedImage, forKey: kCIInputImageKey) call was missing), the CIContext creation check was inverted (ctx != nil instead of ctx == nil, so ctx stayed nil and force-unwrapping it would crash), and fullPath is now built with appending(_:) on the directory string instead of a URL's absoluteString, which would prepend a file:// scheme.

func captureImage(completionHandler: @escaping (_ imageFilePath: String) -> Void) {

    self.captureQueue?.suspend()
    var videoConnection: AVCaptureConnection!
    for connection in self.stillImageOutput.connections{
        for port in (connection as! AVCaptureConnection).inputPorts {
            if (port as! AVCaptureInputPort).mediaType.isEqual(AVMediaTypeVideo) {
                videoConnection = connection as! AVCaptureConnection
                break
            }
        }
        if videoConnection != nil {
            break
        }
    }
    weak var weakSelf = self
    self.stillImageOutput.captureStillImageAsynchronously(from: videoConnection) { (sampleBuffer: CMSampleBuffer?, error) -> Void in
        if error != nil {
            self.captureQueue?.resume()
            return
        }
        let filePath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
        let documentsDirectory: String = filePath[0]
        let fullPath: String = documentsDirectory.appending("/iScan_img_\(Int(Date().timeIntervalSince1970)).pdf")
        autoreleasepool {

            let imageData = Data(AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer))
            var enhancedImage = CIImage(data: imageData, options: [kCIImageColorSpace: NSNull()])


            if weakSelf?.cameraViewType == DocScannerCameraViewType.blackAndWhite {
                enhancedImage = self.filteredImageUsingEnhanceFilter(on: enhancedImage!)
            }
            else {
                enhancedImage = self.filteredImageUsingContrastFilter(on: enhancedImage!)
            }
            if (weakSelf?.isEnableBorderDetection == true) && self.rectangleDetectionConfidenceHighEnough(confidence: self.imageDedectionConfidence) {
                let rectangleFeature: CIRectangleFeature? = self.biggestRectangles(rectangles: self.highAccuracyRectangleDetector().features(in: enhancedImage!))
                if rectangleFeature != nil {
                    enhancedImage = self.correctPerspective(for: enhancedImage!, withFeatures: rectangleFeature!)
                }
            }
            let transform = CIFilter(name: "CIAffineTransform")
            transform?.setValue(enhancedImage, forKey: kCIInputImageKey)
            let rotation = NSValue(cgAffineTransform: CGAffineTransform(rotationAngle: -90 * (.pi / 180)))
            transform?.setValue(rotation, forKey: "inputTransform")
            enhancedImage = (transform?.outputImage)!
            if (enhancedImage == nil) || (enhancedImage?.extent.isEmpty)! {
                return
            }
            var ctx: CIContext?
            if (ctx == nil) {
                ctx = CIContext(options: [kCIContextWorkingColorSpace: NSNull()])
            }
            var bounds: CGSize = (enhancedImage!.extent.size)
            bounds = CGSize(width: CGFloat((floorf(Float(bounds.width)) / 4) * 4), height: CGFloat((floorf(Float(bounds.height)) / 4) * 4))
            let extent = CGRect(x: CGFloat((enhancedImage?.extent.origin.x)!), y: CGFloat((enhancedImage?.extent.origin.y)!), width: CGFloat(bounds.width), height: CGFloat(bounds.height))
            let bytesPerPixel: CGFloat = 8
            let rowBytes = bytesPerPixel * bounds.width
            let totalBytes = rowBytes * bounds.height
            let byteBuffer = malloc(Int(totalBytes))
            let colorSpace = CGColorSpaceCreateDeviceRGB()
            ctx!.render(enhancedImage!, toBitmap: byteBuffer!, rowBytes: Int(rowBytes), bounds: extent, format: kCIFormatRGBA8, colorSpace: colorSpace)
            let bitmapContext = CGContext(data: byteBuffer, width: Int(bounds.width), height: Int(bounds.height), bitsPerComponent: Int(bytesPerPixel), bytesPerRow: Int(rowBytes), space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)
            let imgRef = bitmapContext?.makeImage()
            free(byteBuffer)
            if imgRef == nil {
                return
            }
            self.saveCGImageAsJPEGToFilePath(imgRef: imgRef!, filePath: fullPath)
            DispatchQueue.main.async(execute: {() -> Void in
                completionHandler(fullPath)
                self.captureQueue?.resume()
            })
            self.imageDedectionConfidence = 0.0
        }
    }
}
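As an aside, the manual malloc / render(toBitmap:) / CGBitmapContext sequence at the end can most likely be replaced by CIContext's createCGImage(_:from:), at the cost of skipping the rounding of the size to a multiple of 4 that the original performs. A minimal sketch, assuming ctx, enhancedImage, and fullPath as in the code above:

if let context = ctx, let image = enhancedImage,
    let imgRef = context.createCGImage(image, from: image.extent) {
    // Core Image allocates and renders the bitmap internally
    self.saveCGImageAsJPEGToFilePath(imgRef: imgRef, filePath: fullPath)
}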

Regarding "ios - How to create a CIImage from AVCaptureStillImageOutput in Swift?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/42539424/
