ios - How to add a metric between two sets of points on a face and use it for object detection in digital images for face recognition

Tags: ios objective-c face-detection face-recognition cidetector

I want to add a metric between two sets of points on a face so that it can be used for object detection in digital images. We restrict this to two dimensions, as shown below.

I can identify the facial features with the following code:

 -(void)markFaces:(UIImageView *)facePicture
 {
     // draw a CI image with the previously loaded face detection picture
     CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];

     // create a face detector - since speed is not an issue we'll use a high accuracy
     // detector
     CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                context:nil
                                                options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];

     // create an array containing all the detected faces from the detector
     NSArray* features = [detector featuresInImage:image];

     // we'll iterate through every detected face.  CIFaceFeature provides us
     // with the width for the entire face, and the coordinates of each eye
     // and the mouth if detected.  Also provided are BOOL's for the eye's and
     // mouth so we can check if they already exist.
     for(CIFaceFeature* faceFeature in features)
     {
         // get the width of the face
         CGFloat faceWidth = faceFeature.bounds.size.width;

         // create a UIView using the bounds of the face
         UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];

         // add a border around the newly created UIView
         faceView.layer.borderWidth = 1;
         faceView.layer.borderColor = [[UIColor redColor] CGColor];

         // add the new view to create a box around the face
         [self.view addSubview:faceView];

         if(faceFeature.hasLeftEyePosition)
         {
             // create a UIView with a size based on the width of the face
             UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x-faceWidth*0.15, faceFeature.leftEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
             // change the background color of the eye view
             [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
             // set the position of the leftEyeView based on the face
             [leftEyeView setCenter:faceFeature.leftEyePosition];

           // round the corners
             leftEyeView.layer.cornerRadius = faceWidth*0.15;
             // add the view to the window
             [self.view addSubview:leftEyeView];
         }

         if(faceFeature.hasRightEyePosition)
         {
             // create a UIView with a size based on the width of the face
             UIView* rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x-faceWidth*0.15, faceFeature.rightEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
             // change the background color of the eye view
             [rightEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
             // set the position of the rightEyeView based on the face
             [rightEyeView setCenter:faceFeature.rightEyePosition];
             // round the corners
             rightEyeView.layer.cornerRadius = faceWidth*0.15;
             // add the new view to the window
             [self.view addSubview:rightEyeView];
         }

         if(faceFeature.hasMouthPosition)
         {
             // create a UIView with a size based on the width of the face
             UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x-faceWidth*0.2, faceFeature.mouthPosition.y-faceWidth*0.2, faceWidth*0.4, faceWidth*0.4)];
             // change the background color for the mouth to green
             [mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];

             // set the position of the mouthView based on the face
             [mouth setCenter:faceFeature.mouthPosition];

              // round the corners
             mouth.layer.cornerRadius = faceWidth*0.2;

             // add the new view to the window
             [self.view addSubview:mouth];
         }
     }
 }

 -(void)faceDetector
 {
     // Load the picture for face detection
     //UIImageView* image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"facedetectionpic.jpg"]];
     UIImageView* image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"timthumb.png"]];
     // Draw the face detection image
     [self.view addSubview:image];

     // run the face markup; UIKit views must only be touched on the main thread,
     // so call markFaces: directly instead of using performSelectorInBackground:
     [self markFaces:image];

     // flip image on y-axis to match coordinate system used by core image
     [image setTransform:CGAffineTransformMakeScale(1, -1)];

     // flip the entire window to make everything right side up
     [self.view setTransform:CGAffineTransformMakeScale(1, -1)];
 }

Now, before uploading to the database, I want to add points that act as references for the positions of the eyes, nose, and so on. Later, these images can be compared with existing images based on the positions of these measured points, as shown below:

[two example images illustrating the desired reference points on the face and the measurements between them]
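
One way to get measurable reference points before uploading is to derive distances from the landmarks CIDetector already reports (the two eyes and the mouth) and normalize them by the face width, so that the values stay comparable across images of different sizes. The following is only a rough, untested sketch of that idea; the helper name buildFaceMetrics:, the function distanceBetween, and the dictionary keys are made up for illustration and are not part of any API.

 // Sketch: derive a small, scale-invariant set of measurements from the
 // landmarks CIDetector already provides. Names and keys are illustrative only.
 // (Intended to live in the same view controller file as markFaces: above,
 // so the existing UIKit/CoreImage imports are assumed.)
 static CGFloat distanceBetween(CGPoint a, CGPoint b)
 {
     return hypot(a.x - b.x, a.y - b.y);
 }

 -(NSDictionary *)buildFaceMetrics:(CIFaceFeature *)faceFeature
 {
     if (!faceFeature.hasLeftEyePosition ||
         !faceFeature.hasRightEyePosition ||
         !faceFeature.hasMouthPosition) {
         return nil;    // not enough landmarks to measure anything useful
     }

     CGFloat faceWidth = faceFeature.bounds.size.width;

     // raw distances in image coordinates
     CGFloat eyeToEye        = distanceBetween(faceFeature.leftEyePosition,  faceFeature.rightEyePosition);
     CGFloat leftEyeToMouth  = distanceBetween(faceFeature.leftEyePosition,  faceFeature.mouthPosition);
     CGFloat rightEyeToMouth = distanceBetween(faceFeature.rightEyePosition, faceFeature.mouthPosition);

     // normalize by the face width so the values survive scaling and resizing
     return @{ @"eyeToEye"        : @(eyeToEye / faceWidth),
               @"leftEyeToMouth"  : @(leftEyeToMouth / faceWidth),
               @"rightEyeToMouth" : @(rightEyeToMouth / faceWidth) };
 }

Such a dictionary could be stored alongside the uploaded image and later compared against stored entries key by key, but with only three landmarks the comparison will be very coarse, which is also what the answer below points out.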

I referred to This Link but could not get it to work. If anyone knows how to do this, please advise.

Thanks

Best answer

I'm afraid this is not straightforward. Looking at the documentation, CIDetector does not include detectors for other facial landmarks. You will need to train your own detector on a set of manually annotated images. There are several open-source projects that can do this. A very good one (accurate and fast) is dlib: http://blog.dlib.net/2014/08/real-time-face-pose-estimation.html
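
For completeness, the dlib route linked above is usually wired into an iOS project through an Objective-C++ (.mm) file, with the C++ calls roughly following dlib's own face_landmark_detection example. The sketch below is untested and assumes dlib has been built for iOS and that the pretrained model file shape_predictor_68_face_landmarks.dat is shipped in the app bundle; the function name extractLandmarks is illustrative, and the landmark indices follow the standard 68-point annotation that model uses.

 // Sketch (Objective-C++ / .mm): run dlib's 68-point shape predictor on an
 // image file and read out two landmark points, from which any pairwise
 // distances can be computed. Assumes dlib is built for iOS and the
 // shape_predictor_68_face_landmarks.dat model is available in the app bundle.
 #include <string>
 #include <vector>
 #include <dlib/image_processing/frontal_face_detector.h>
 #include <dlib/image_processing.h>
 #include <dlib/image_io.h>

 static void extractLandmarks(const std::string& imagePath, const std::string& modelPath)
 {
     dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();

     dlib::shape_predictor predictor;
     dlib::deserialize(modelPath) >> predictor;      // load the pretrained model

     dlib::array2d<dlib::rgb_pixel> img;
     dlib::load_image(img, imagePath);

     std::vector<dlib::rectangle> faces = detector(img);
     if (faces.empty())
         return;

     // 68 landmark points for the first detected face
     dlib::full_object_detection shape = predictor(img, faces[0]);

     // example measurement: distance between the two outer eye corners
     // (indices 36 and 45 in the 68-point annotation scheme)
     double interOcular = dlib::length(shape.part(36) - shape.part(45));

     // interOcular (and any other pairwise distances) can be normalized by the
     // face width, faces[0].width(), and stored just like the CIDetector-based
     // metrics sketched earlier in the question.
     (void)interOcular;
 }

From there, comparing a new face against previously stored ones is the same idea as with the CIDetector metrics, only with many more points available to measure between.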

Regarding this question, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/32710137/
