I'm trying to write a routine that takes a UIImage and returns a new UIImage that contains just the face. This seems like it should be straightforward, but my brain is having trouble getting around the CoreImage vs. UIImage coordinate spaces.
Here are the basics:
- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    CGImageRef sourceImageRef = [image CGImage];
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    return newImage;
}
- (UIImage *)getFaceImage:(UIImage *)picture {
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];
    CIImage *ciImage = [CIImage imageWithCGImage:[picture CGImage]];
    NSArray *features = [detector featuresInImage:ciImage];

    // For simplicity, I'm grabbing the first one in this code sample,
    // and we can all pretend that the photo has one face for sure. :-)
    CIFaceFeature *faceFeature = [features objectAtIndex:0];

    return [self imageFromImage:picture inRect:faceFeature.bounds];
}
The image that comes back is cropped from the flipped image. I've tried adjusting faceFeature.bounds with something like this:
CGAffineTransform t = CGAffineTransformMakeScale(1.0f,-1.0f);
CGRect newRect = CGRectApplyAffineTransform(faceFeature.bounds,t);
...but that gives me results outside the image.
I'm sure there's something simple to fix this, but short of computing the bottom-up coordinate myself and using it as the new Y to build a fresh rect, is there a "proper" way to do this?
Thanks!
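For the record, the pure scale pushes the rect out of bounds because CGAffineTransformMakeScale(1, -1) maps y to -y, leaving the rect in negative coordinates; the flip also needs a translation by the image height. A sketch of that conversion (assuming `picture` and `faceFeature` from the code above, and an image whose point size matches its pixel size):

```objc
// Flip faceFeature.bounds from Core Image's bottom-left-origin space
// into UIKit's top-left-origin space.
CGFloat imageHeight = picture.size.height;
CGAffineTransform flip = CGAffineTransformMakeScale(1.0f, -1.0f);
flip = CGAffineTransformTranslate(flip, 0.0f, -imageHeight);
CGRect flippedBounds = CGRectApplyAffineTransform(faceFeature.bounds, flip);
// Equivalent arithmetic:
//   flippedBounds.origin.y = imageHeight - bounds.origin.y - bounds.size.height
```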
Best Answer
It's much easier and less messy to just crop the face out of the image with a CIContext. Something like this:
CGImageRef cgImage = [_ciContext createCGImage:[CIImage imageWithCGImage:inputImage.CGImage] fromRect:faceFeature.bounds];
UIImage *croppedFace = [UIImage imageWithCGImage:cgImage];
Here inputImage is your UIImage object, and faceFeature is the CIFaceFeature you get back from the [CIDetector featuresInImage:] call.
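Putting the answer together as one method, a sketch (assumes ARC; the answer's `_ciContext` ivar is replaced here with a locally created context, and the method name is illustrative):

```objc
- (UIImage *)faceImageFromImage:(UIImage *)inputImage {
    CIImage *ciImage = [CIImage imageWithCGImage:inputImage.CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
    CIFaceFeature *faceFeature = [[detector featuresInImage:ciImage] firstObject];
    if (!faceFeature) {
        return nil; // no face found
    }
    // faceFeature.bounds and the CIContext both use Core Image's
    // bottom-left-origin space, so no coordinate flip is needed.
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:ciImage fromRect:faceFeature.bounds];
    UIImage *croppedFace = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return croppedFace;
}
```

This avoids the coordinate-space confusion entirely: detection and cropping happen in the same space, which is why the accepted answer is "less messy" than CGImageCreateWithImageInRect.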
Regarding ios - UIImage face detection, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/9420188/