swift - How to extract the SceneKit depth buffer at runtime in an AR scene?

Tags: swift, scenekit, augmented-reality, arkit, depth-buffer

How do I extract the SceneKit depth buffer? I've built an AR-based app that runs on Metal, and I'm really struggling to find any information on how to extract a 2D depth buffer so that I can render fancy 3D photos of my scene. Any help is greatly appreciated.

Best answer

Your question is unclear, but I'll try to answer it.

Depth pass for a VR view

If you need to render a depth pass from SceneKit's 3D environment, you should use, for example, the SCNGeometrySource.Semantic structure. It has vertex, normal, texcoord, color and tangent type properties. Let's look at what the vertex type property is:

static let vertex: SCNGeometrySource.Semantic

This semantic identifies data containing the positions of each vertex in the geometry. For a custom shader program, you use this semantic to bind SceneKit’s vertex position data to an input attribute of the shader. Vertex position data is typically an array of three- or four-component vectors.

Here is a code excerpt from the iOS Depth Sample project.

Update: with this code you can get the position of every point in an SCNScene and assign a color to those points (which is essentially what a zDepth channel is):

import SceneKit

struct PointCloudVertex {
    var x: Float, y: Float, z: Float
    var r: Float, g: Float, b: Float
}

@objc class PointCloud: NSObject {
    
    var pointCloud : [SCNVector3] = []
    var colors: [UInt8] = []
    
    public func pointCloudNode() -> SCNNode {
        let points = self.pointCloud
        var vertices = Array(repeating: PointCloudVertex(x: 0,
                                                         y: 0,
                                                         z: 0,
                                                         r: 0,
                                                         g: 0,
                                                         b: 0), 
                                                     count: points.count)
        
        // Copy each point's position and its RGB color (read from an RGBA
        // byte array, 4 bytes per point) into the interleaved vertex buffer.
        for i in 0..<points.count {
            let p = points[i]
            vertices[i].x = Float(p.x)
            vertices[i].y = Float(p.y)
            vertices[i].z = Float(p.z)
            vertices[i].r = Float(colors[i * 4]) / 255.0
            vertices[i].g = Float(colors[i * 4 + 1]) / 255.0
            vertices[i].b = Float(colors[i * 4 + 2]) / 255.0
        }
        
        let node = buildNode(points: vertices)
        return node
    }
    
    private func buildNode(points: [PointCloudVertex]) -> SCNNode {
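        // Interleave positions and colors in a single buffer; the two
        // geometry sources below read from it at different offsets.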
        let vertexData = NSData(
            bytes: points,
            length: MemoryLayout<PointCloudVertex>.size * points.count
        )
        let positionSource = SCNGeometrySource(
            data: vertexData as Data,
            semantic: SCNGeometrySource.Semantic.vertex,
            vectorCount: points.count,
            usesFloatComponents: true,
            componentsPerVector: 3,
            bytesPerComponent: MemoryLayout<Float>.size,
            dataOffset: 0,
            dataStride: MemoryLayout<PointCloudVertex>.size
        )
        let colorSource = SCNGeometrySource(
            data: vertexData as Data,
            semantic: SCNGeometrySource.Semantic.color,
            vectorCount: points.count,
            usesFloatComponents: true,
            componentsPerVector: 3,
            bytesPerComponent: MemoryLayout<Float>.size,
            dataOffset: MemoryLayout<Float>.size * 3,
            dataStride: MemoryLayout<PointCloudVertex>.size
        )
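        // Point primitives; nil index data means implicit indices 0..<count.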
        let element = SCNGeometryElement(
            data: nil,
            primitiveType: .point,
            primitiveCount: points.count,
            bytesPerIndex: MemoryLayout<Int>.size
        )

        element.pointSize = 1
        element.minimumPointScreenSpaceRadius = 1
        element.maximumPointScreenSpaceRadius = 5

        let pointsGeometry = SCNGeometry(sources: [positionSource, colorSource], elements: [element])
        
        return SCNNode(geometry: pointsGeometry)
    }
}
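
For context, here is a minimal usage sketch (the sample values are assumptions, not taken from the original project): fill pointCloud with world-space positions and colors with RGBA bytes, four per point, since the loop above reads colors[i * 4 ... i * 4 + 2], then add the returned node to a scene.

import SceneKit

let cloud = PointCloud()
cloud.pointCloud = [SCNVector3(0.0, 0.0, -0.5),
                    SCNVector3(0.1, 0.0, -0.6),
                    SCNVector3(-0.1, 0.05, -0.7)]
// Four bytes per point (RGBA); only R, G and B are read by pointCloudNode().
cloud.colors = [255, 0, 0, 255,
                0, 255, 0, 255,
                0, 0, 255, 255]

let scene = SCNScene()
scene.rootNode.addChildNode(cloud.pointCloudNode())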

Depth pass for an AR view

If you need to render a depth pass from an ARSCNView, that is only possible if you're using ARFaceTrackingConfiguration with the front-facing camera. If so, you can employ the capturedDepthData instance property, which gives you a depth map captured along with the video frame.

var capturedDepthData: AVDepthData? { get }

However, this depth-map image is only 15 fps and of lower resolution than the corresponding RGB image at 60 fps.

Face-based AR uses the front-facing, depth-sensing camera on compatible devices. When running such a configuration, frames vended by the session contain a depth map captured by the depth camera in addition to the color pixel buffer (see capturedImage) captured by the color camera. This property’s value is always nil when running other AR configurations.

Real code could look like this:

extension ViewController: ARSCNViewDelegate {
    
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        
        DispatchQueue.global().async {

            guard let frame = self.sceneView.session.currentFrame else {
                return
            }
            if let depthData = frame.capturedDepthData {
                // capturedDepthData is AVDepthData, not an image buffer; its
                // depthDataMap property is the CVPixelBuffer you can work with.
                self.depthImage = depthData.depthDataMap
            }
        }
    }
}
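
For completeness, here is a minimal sketch, assuming the same ViewController and sceneView as above, of running the session with ARFaceTrackingConfiguration; capturedDepthData stays nil under any other configuration:

import ARKit

extension ViewController {

    func runFaceTracking() {
        // Face tracking (and therefore capturedDepthData) requires a device
        // with a TrueDepth front camera.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        let configuration = ARFaceTrackingConfiguration()
        sceneView.session.run(configuration,
                              options: [.resetTracking, .removeExistingAnchors])
    }
}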

Depth pass for a video view

Also, you can extract a true depth pass using the two rear cameras and the AVFoundation framework.

The Image Depth Map tutorial introduces you to the concept of disparity.
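
As a rough sketch of that approach (the class and queue names are illustrative; permissions and error handling are simplified), depth from the rear dual camera can be streamed through AVCaptureDepthDataOutput:

import AVFoundation

final class DepthCaptureController: NSObject, AVCaptureDepthDataOutputDelegate {

    let session = AVCaptureSession()
    private let depthOutput = AVCaptureDepthDataOutput()
    private let depthQueue = DispatchQueue(label: "depth.output.queue")

    func configure() throws {
        session.beginConfiguration()
        defer { session.commitConfiguration() }
        session.sessionPreset = .photo

        // A dual rear camera is required for rear-facing depth capture.
        guard let device = AVCaptureDevice.default(.builtInDualCamera,
                                                   for: .video,
                                                   position: .back) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) { session.addInput(input) }

        if session.canAddOutput(depthOutput) {
            session.addOutput(depthOutput)
            depthOutput.isFilteringEnabled = true   // smooth holes in the map
            depthOutput.setDelegate(self, callbackQueue: depthQueue)
        }
    }

    // The delegate vends AVDepthData; its depthDataMap is a CVPixelBuffer.
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        let depthMap = depthData.depthDataMap
        _ = depthMap // process or display the depth buffer here
    }
}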

Original question on Stack Overflow: https://stackoverflow.com/questions/55917084/
