ios - Face comparison

Tags: ios objective-c image-comparison

I'm trying to build a face-recognition app that can identify a person once their face is detected. I've finished the face-detection part, but I can't find a way to compare the detected face against the photos stored in the app's album.

Here is the face-detection code:

-(void)markFaces:(UIImageView *)facePicture
{
    // Draw a CIImage from the previously loaded face-detection picture
    CIImage *image = [CIImage imageWithCGImage:facePicture.image.CGImage];

    // Create a face detector with high accuracy
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

    // Get all the faces the detector found in the image
    NSArray *features = [detector featuresInImage:image];

    for (CIFaceFeature *faceFeature in features)
    {
        // Width of the face, used to size the eye and mouth markers
        CGFloat faceWidth = faceFeature.bounds.size.width;

        // Create a UIView using the bounds of the face,
        // with a red border to box the face
        UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        [self.view addSubview:faceView];

        if (faceFeature.hasLeftEyePosition)
        {
            // Translucent blue circle over the left eye, sized relative to the face
            UIView *leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x - faceWidth * 0.15,
                                                                           faceFeature.leftEyePosition.y - faceWidth * 0.15,
                                                                           faceWidth * 0.3,
                                                                           faceWidth * 0.3)];
            [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            [leftEyeView setCenter:faceFeature.leftEyePosition];
            leftEyeView.layer.cornerRadius = faceWidth * 0.15;
            [self.view addSubview:leftEyeView];
        }

        if (faceFeature.hasRightEyePosition)
        {
            // Translucent blue circle over the right eye
            UIView *rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x - faceWidth * 0.15,
                                                                            faceFeature.rightEyePosition.y - faceWidth * 0.15,
                                                                            faceWidth * 0.3,
                                                                            faceWidth * 0.3)];
            [rightEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            [rightEyeView setCenter:faceFeature.rightEyePosition];
            rightEyeView.layer.cornerRadius = faceWidth * 0.15;
            [self.view addSubview:rightEyeView];
        }

        if (faceFeature.hasMouthPosition)
        {
            // Translucent green circle over the mouth
            UIView *mouthView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x - faceWidth * 0.2,
                                                                         faceFeature.mouthPosition.y - faceWidth * 0.2,
                                                                         faceWidth * 0.4,
                                                                         faceWidth * 0.4)];
            [mouthView setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
            [mouthView setCenter:faceFeature.mouthPosition];
            mouthView.layer.cornerRadius = faceWidth * 0.2;
            [self.view addSubview:mouthView];
        }
    }
}
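A note on coordinates: Core Image reports feature positions with the origin at the bottom-left, while UIKit's origin is at the top-left. Rather than flipping the whole view (as `faceDetector` does below), each detected rect can be converted individually. A minimal sketch, where `imageHeight` is an assumed variable holding the height of the image the detector ran on:

```objc
// Sketch: convert a Core Image rect (bottom-left origin) to UIKit (top-left origin).
// imageHeight is an assumption: the height of the image the detector ran on.
CGRect uikitRect = faceFeature.bounds;
uikitRect.origin.y = imageHeight - uikitRect.origin.y - uikitRect.size.height;
```

Converting each rect this way avoids mirroring every other subview in the hierarchy.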



-(void)faceDetector
{
    // Load the picture for face detection
    UIImageView *image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"testpicture.png"]];

    // Draw the face detection image
    [self.view addSubview:image];

    // Mark the faces. Note: markFaces creates and adds UIKit views, so it must
    // run on the main thread; invoking it with performSelectorInBackground
    // (as the original code did) is unsafe.
    [self markFaces:image];

    // Flip the image on the y-axis to match the coordinate system used by Core Image
    [image setTransform:CGAffineTransformMakeScale(1, -1)];

    // Flip the entire window to make everything right side up
    [self.view setTransform:CGAffineTransformMakeScale(1, -1)];
}
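Whatever comparison approach is eventually used, the first step is usually to extract the detected face region from the photo. A minimal sketch, assuming `sourceImage` is the `UIImage` the detector ran on (with scale 1.0) and `faceRect` is a face rectangle already converted to top-left-origin coordinates:

```objc
// Sketch: crop the detected face out of the source image.
// sourceImage and faceRect are assumptions, not part of the code above.
CGImageRef faceCG = CGImageCreateWithImageInRect(sourceImage.CGImage, faceRect);
UIImage *faceImage = [UIImage imageWithCGImage:faceCG];
CGImageRelease(faceCG);
```

The resulting `faceImage` is what a comparison routine would consume.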

Best Answer

From the documentation:

Core Image can analyze and find human faces in an image. It performs face detection, not recognition. Face detection is the identification of rectangles that contain human face features, whereas face recognition is the identification of specific human faces (John, Mary, and so on). After Core Image detects a face, it can provide information about face features, such as eye and mouth positions. It can also track the position of an identified face in a video.

Unfortunately, Apple does not currently provide an API for face recognition. You may want to look at third party libraries.
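For example, with OpenCV (one such third-party library) a crude similarity score between two face crops can be computed by comparing grayscale histograms in an Objective-C++ (`.mm`) file. This is a sketch only, and a rough heuristic at that: similar histograms do not imply the same person. It assumes the OpenCV iOS framework is linked and that the crops come from a step like the `CGImageCreateWithImageInRect` cropping described above.

```objc
// FaceCompare.mm — requires the third-party OpenCV framework for iOS.
#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>   // UIImageToMat
#import <UIKit/UIKit.h>

// Returns a correlation score in roughly [-1, 1]; values near 1 mean the two
// face crops have similar grayscale histograms. A rough heuristic only —
// histogram similarity says nothing about identity.
static double faceHistogramSimilarity(UIImage *faceA, UIImage *faceB)
{
    cv::Mat matA, matB;
    UIImageToMat(faceA, matA);
    UIImageToMat(faceB, matB);
    cv::cvtColor(matA, matA, cv::COLOR_RGBA2GRAY);
    cv::cvtColor(matB, matB, cv::COLOR_RGBA2GRAY);

    int channels[] = {0};            // single grayscale channel
    int histSize = 64;               // 64 brightness bins
    float range[] = {0, 256};
    const float *ranges[] = {range};
    cv::Mat histA, histB;
    cv::calcHist(&matA, 1, channels, cv::Mat(), histA, 1, &histSize, ranges);
    cv::calcHist(&matB, 1, channels, cv::Mat(), histB, 1, &histSize, ranges);
    cv::normalize(histA, histA, 0, 1, cv::NORM_MINMAX);
    cv::normalize(histB, histB, 0, 1, cv::NORM_MINMAX);

    return cv::compareHist(histA, histB, cv::HISTCMP_CORREL);
}
```

Actual recognition needs a trained model — for instance the LBPH or Eigenfaces recognizers in OpenCV's contrib `face` module, or a commercial SDK — with the histogram score at best serving as a cheap pre-filter.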

For "ios - Face comparison", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/34678730/
