I want to be able to track the user's face from the camera feed. I have seen this SO post. I used the code given in the answer, but it didn't seem to do anything. I have heard that
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)
has been changed to something else in Swift 4. Could this be the problem with the code?
While doing the face tracking, I would also like to monitor facial features using CIFaceFeature. How can I do that?
Best Answer
I found a starting point here: https://github.com/jeffreybergier/Blog-Getting-Started-with-Vision.
Basically, you can declare a lazy variable like this to instantiate the video capture session:
private lazy var captureSession: AVCaptureSession = {
    let session = AVCaptureSession()
    session.sessionPreset = AVCaptureSession.Preset.photo
    guard
        // use the front-facing wide-angle camera as the video input
        let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
        let input = try? AVCaptureDeviceInput(device: frontCamera)
        else { return session }
    session.addInput(input)
    return session
}()
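Note that captureOutput(_:didOutput:from:) will never be called unless the session also has an AVCaptureVideoDataOutput whose sample buffer delegate points at your controller, which may well be why the original code "didn't do anything". A minimal sketch, assuming self conforms to AVCaptureVideoDataOutputSampleBufferDelegate (the helper name and queue label are mine, not from the original answer):
private func configureVideoOutput() {
    let output = AVCaptureVideoDataOutput()
    // Drop frames that arrive while an earlier frame is still being processed.
    output.alwaysDiscardsLateVideoFrames = true
    // Deliver sample buffers to captureOutput(_:didOutput:from:) on a serial queue.
    output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video-frames"))
    if captureSession.canAddOutput(output) {
        captureSession.addOutput(output)
    }
}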
Then start the session in viewDidLoad:
self.captureSession.startRunning()
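Since startRunning() blocks the calling thread while the session spins up, it is worth kicking it off away from the main thread. A minimal sketch of that viewDidLoad (the queue choice is an assumption, not part of the original answer):
override func viewDidLoad() {
    super.viewDidLoad()
    // startRunning() is a blocking call, so keep it off the main thread.
    DispatchQueue.global(qos: .userInitiated).async {
        self.captureSession.startRunning()
    }
}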
Finally, you can perform your requests inside
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
}
For example:
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard
        // make sure the sample buffer can be converted to a pixel buffer
        let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        else { return }
    let faceRequest = VNDetectFaceRectanglesRequest(completionHandler: self.faceDetectedRequestUpdate)
    // perform the request on the current frame
    do {
        try self.visionSequenceHandler.perform([faceRequest], on: pixelBuffer)
    } catch {
        print("Throws: \(error)")
    }
}
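The visionSequenceHandler used above is not declared in the snippet; it is a VNSequenceRequestHandler that should be kept alive across frames so Vision can reuse state between them, e.g.:
// One handler reused for every frame, rather than a new one per callback.
private let visionSequenceHandler = VNSequenceRequestHandler()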
Then define your faceDetectedRequestUpdate function.
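As a minimal sketch of what that completion handler could look like (it just logs the detected rectangles; the body is an assumption rather than part of the original answer, and any overlay drawing is up to you):
private func faceDetectedRequestUpdate(_ request: VNRequest, error: Error?) {
    // Vision invokes this off the main thread; hop back before touching UI.
    DispatchQueue.main.async {
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox is normalized to 0...1 with the origin at the bottom-left.
            print("Face at \(face.boundingBox)")
        }
    }
}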
Anyway, I have to say that I couldn't figure out how to build a working example from this. The best working example I found is in Apple's documentation: https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
Regarding "ios - Real-time face tracking with the camera in Swift 4", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48116256/