ios - Cropping to a face using face detection

Tags: ios

I'm modifying Apple's SquareCam sample face-detection app so that it crops the face before writing to the camera roll, instead of drawing the red square around the face. I use the same CGRect for the crop that was used to draw the red square, but the behavior is different. In portrait mode, if the face is at the horizontal center of the screen, it crops the face as expected (the same place the red square would be). If the face is off to the left or right, the crop always seems to be taken from the middle of the screen rather than from where the red square would be.

Apple's original code is as follows:

- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features 
                                            inCGImage:(CGImageRef)backgroundImage 
                                      withOrientation:(UIDeviceOrientation)orientation 
                                          frontFacing:(BOOL)isFrontFacing
{
    CGImageRef returnImage = NULL;
    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(backgroundImage), CGImageGetHeight(backgroundImage));
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
    CGContextClearRect(bitmapContext, backgroundImageRect);
    CGContextDrawImage(bitmapContext, backgroundImageRect, backgroundImage);
    CGFloat rotationDegrees = 0.;

    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotationDegrees = -90.;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotationDegrees = 90.;
            break;
        case UIDeviceOrientationLandscapeLeft:
            if (isFrontFacing) rotationDegrees = 180.;
            else rotationDegrees = 0.;
            break;
        case UIDeviceOrientationLandscapeRight:
            if (isFrontFacing) rotationDegrees = 0.;
            else rotationDegrees = 180.;
            break;
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
        default:
            break; // leave the layer in its last known orientation
    }
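    // 'square' is the red square UIImage ivar from the SquareCam sample;
    // imageRotatedByDegrees: is a UIImage category method defined in that project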
    UIImage *rotatedSquareImage = [square imageRotatedByDegrees:rotationDegrees];

    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect));
        CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]);
    }
    returnImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease (bitmapContext);

    return returnImage;
}

And my replacement:

- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features 
                                            inCGImage:(CGImageRef)backgroundImage 
                                      withOrientation:(UIDeviceOrientation)orientation 
                                          frontFacing:(BOOL)isFrontFacing
{
    CGImageRef returnImage = NULL;

    //I'm only taking pics with one face. This is just for testing
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        returnImage = CGImageCreateWithImageInRect(backgroundImage, faceRect);
    }

    return returnImage;
}

Update:

Following Wains' input, I tried to make my code more like the original, but the result is the same:

- (NSArray*)extractFaceImages:(NSArray *)features
              fromCGImage:(CGImageRef)sourceImage
          withOrientation:(UIDeviceOrientation)orientation
              frontFacing:(BOOL)isFrontFacing
{
    NSMutableArray *faceImages = [[[NSMutableArray alloc] initWithCapacity:1] autorelease];

    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(sourceImage), CGImageGetHeight(sourceImage));
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
    CGContextClearRect(bitmapContext, backgroundImageRect);
    CGContextDrawImage(bitmapContext, backgroundImageRect, sourceImage);
    CGFloat rotationDegrees = 0.;

    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotationDegrees = -90.;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotationDegrees = 90.;
            break;
        case UIDeviceOrientationLandscapeLeft:
            if (isFrontFacing) rotationDegrees = 180.;
            else rotationDegrees = 0.;
            break;
        case UIDeviceOrientationLandscapeRight:
            if (isFrontFacing) rotationDegrees = 0.;
            else rotationDegrees = 180.;
            break;
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
        default:
            break; // leave the layer in its last known orientation
    }

    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];

        NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect));

        CGImageRef wholeImage = CGBitmapContextCreateImage(bitmapContext);
        CGImageRef croppedFace = CGImageCreateWithImageInRect(wholeImage, faceRect);
        UIImage *clippedFace = [UIImage imageWithCGImage:croppedFace];
        [faceImages addObject:clippedFace];

        // release the intermediate CGImageRefs so they are not leaked
        CGImageRelease(croppedFace);
        CGImageRelease(wholeImage);
    }

    CGContextRelease(bitmapContext);

    return faceImages;
}
I took three pictures and logged faceRect for each:

Photo taken with the face near the left edge of the device. The captured image completely misses the face to the right: faceRect={{972, 43.0312}, {673.312, 673.312}}

Photo taken with the face in the middle of the device. The captured image is fine: faceRect={{1060.59, 536.625}, {668.25, 668.25}}

Photo taken with the face near the right edge of the device. The captured image completely misses the face to the left: faceRect={{982.125, 999.844}, {804.938, 804.938}}

So it looks like "x" and "y" are swapped. I'm holding the device in portrait, but faceRect appears to be based on a landscape orientation. However, I can't figure out which part of Apple's original code is responsible for this. The orientation code in that method only seems to affect the red square overlay image itself.
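One detail that may be related: Core Image reports feature bounds in a bottom-left-origin coordinate space, while CGImageCreateWithImageInRect measures its rect from the image's top-left corner, so a crop rect taken straight from a CIFaceFeature generally needs its y-origin flipped. A minimal sketch of that flip (flipFaceRect is a hypothetical helper, not part of the sample):

// Hypothetical helper: convert a CIFaceFeature bounds rect (bottom-left
// origin, Core Image convention) into the top-left-origin space that
// CGImageCreateWithImageInRect expects.
static CGRect flipFaceRect(CGRect faceRect, CGImageRef image)
{
    CGRect flipped = faceRect;
    flipped.origin.y = CGImageGetHeight(image) - CGRectGetMaxY(faceRect);
    return flipped;
}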

Best Answer

You should keep all of the original code and just add one line before the return (with a tweak to put the image generation inside the loop, since you're only cropping for the first face):

returnImage = CGImageCreateWithImageInRect(returnImage, faceRect);

This lets the image be rendered with the correct orientation, which means the face rect will be in the right place.
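Applied to the end of the original method, that could look roughly like this (a sketch, following the answer's single-face assumption and the original variable names):

    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]);

        // render the full, correctly oriented frame, then crop it to the face
        CGImageRef wholeImage = CGBitmapContextCreateImage(bitmapContext);
        returnImage = CGImageCreateWithImageInRect(wholeImage, faceRect);
        CGImageRelease(wholeImage);
    }
    CGContextRelease(bitmapContext);

    return returnImage;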

Regarding ios - Cropping to a face using face detection, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/17266910/
