At WWDC24, visionOS hand tracking gained a new option that makes an entity track the hand faster (at the expense of some accuracy), but the session video only explains how to implement it with ARKit. How can the AnchorEntity be set up in a RealityView?
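For illustration, a minimal RealityView sketch, assuming the visionOS 2 AnchoringComponent(_:trackingMode:) initializer, where .predicted is the lower-latency, lower-accuracy mode and .continuous is the default:

import SwiftUI
import RealityKit

struct HandAnchoredView: View {
    var body: some View {
        RealityView { content in
            // Anchor a small sphere to the right palm.
            let handAnchor = AnchorEntity(.hand(.right, location: .palm))
            // .predicted lowers latency at the cost of some accuracy; .continuous is the default.
            handAnchor.anchoring = AnchoringComponent(.hand(.right, location: .palm),
                                                      trackingMode: .predicted)
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.02),
                                     materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
            handAnchor.addChild(sphere)
            content.add(handAnchor)
        }
    }
}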
In visionOS, virtual content is occluded by the user's hands by default. In a mixed immersive space, if an entity is positioned behind a real object, how can the objects in the room occlude the virtual content the same way the hands do?
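For illustration, one common direction (a sketch only, not verified on device) is to run a SceneReconstructionProvider and cover the resulting scene mesh with OcclusionMaterial. The makeMeshResource(from:) helper below is hypothetical and reuses the GeometrySource/GeometryElement extensions shown in the plane-detection post further down this page:

import ARKit
import RealityKit

@MainActor
final class RealWorldOcclusion {
    let session = ARKitSession()
    let provider = SceneReconstructionProvider()
    let rootEntity = Entity()                       // add this to your RealityView content
    private var meshEntities: [UUID: ModelEntity] = [:]

    func run() async {
        do {
            try await session.run([provider])
        } catch {
            print("Failed to start scene reconstruction: \(error)")
            return
        }
        for await update in provider.anchorUpdates {
            let anchor = update.anchor
            switch update.event {
            case .added, .updated:
                guard let mesh = try? makeMeshResource(from: anchor) else { continue }
                let entity = meshEntities[anchor.id] ?? ModelEntity()
                // OcclusionMaterial draws nothing itself but hides whatever is behind it,
                // so room geometry covers virtual content the same way hands do.
                entity.model = ModelComponent(mesh: mesh, materials: [OcclusionMaterial()])
                entity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)
                if meshEntities[anchor.id] == nil {
                    meshEntities[anchor.id] = entity
                    rootEntity.addChild(entity)
                }
            case .removed:
                meshEntities[anchor.id]?.removeFromParent()
                meshEntities[anchor.id] = nil
            }
        }
    }

    // Hypothetical helper: build a MeshResource from the MeshAnchor's vertices and faces,
    // using the asSIMD3(ofType:) and asUInt32Array() helpers from the plane-detection post below.
    private func makeMeshResource(from anchor: MeshAnchor) throws -> MeshResource {
        var descriptor = MeshDescriptor()
        descriptor.positions = MeshBuffers.Positions(anchor.geometry.vertices.asSIMD3(ofType: Float.self))
        descriptor.primitives = .triangles(anchor.geometry.faces.asUInt32Array())
        return try MeshResource.generate(from: [descriptor])
    }
}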
I've created a minimal iOS RealityKit/ARKit app with only this view controller:
import UIKit
import RealityKit
import ARKit

class ViewController: UIViewController {
    var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        arView = ARView(frame: view.bounds, cameraMode: .nonAR, automaticallyConfigureSession: true)
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(arView)
    }
}
When I run this app in the iOS 18.1 simulator using Xcode 16.1 beta 2 (16B5014f), RealityKit logs the warnings included below to the console.
I see similar warnings on the device, running iOS 18.0.
Should I be concerned about any of these warnings?
Please let me know if I should submit feedback reporting this issue.
Thank you.
Could not locate file 'default-binaryarchive.metallib' in bundle.
Registering library (/Library/Developer/CoreSimulator/Volumes/iOS_22B5045f/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 18.1.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/suFeatheringCreateMergedOcclusionMask.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arKitPassthrough.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arSegmentationComposite.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute0.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute1.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute2.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute3.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute4.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute5.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute6.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute7.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute8.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute9.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute10.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute11.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute12.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
...
Compiler failed to build request
makeRenderPipelineState failed [reading from a rendertarget is not supported].
Pipeline for technique meshShadowCasterProgrammableBlending failed compilation!
I'm porting over some code that uses ARKit to Swift 6 (with Complete Strict Concurrency Checking enabled).
Some methods on ARSCNViewDelegate, namely Coordinator.renderer(_:didAdd:for:) and at least one other, are causing a consistent crash. On Swift 5 this code works absolutely fine.
The above method consistently crashes with _dispatch_assert_queue_fail. My assumption is that in Swift 6 a trap has been inserted by the compiler to validate that my downstream code is running on the main thread.
In Implementing a Main Actor Protocol That’s Not @MainActor, Quinn “The Eskimo!” seems to address scenarios of this nature with 3 proposed workarounds yet none of them seem feasible here.
For #1, marking ContentView.addPlane(renderer:node:anchor:) nonisolated and using @preconcurrency import ARKit compiles but still crashes :(
For #2, applying @preconcurrency to the ARSCNViewDelegate conformance declaration site just yields this warning: @preconcurrency attribute on conformance to 'ARSCNViewDelegate' has no effect
For #3, as Quinn recognizes, this is a non-starter as ARSCNViewDelegate is out of our control.
The minimal reproducible set of code is below. Simply run the app, scan your camera back and forth across a well lit environment and the app should crash within a few seconds. Switch over to Swift Language Version 5 in build settings, retry and you'll see the current code works fine.
import ARKit
import SwiftUI

struct ContentView: View {
    @State private var arViewProxy = ARSceneProxy()
    private let configuration: ARWorldTrackingConfiguration
    @State private var planeFound = false

    init() {
        configuration = ARWorldTrackingConfiguration()
        configuration.worldAlignment = .gravityAndHeading
        configuration.planeDetection = [.horizontal]
    }

    var body: some View {
        ARScene(proxy: arViewProxy)
            .onAddNode { renderer, node, anchor in
                addPlane(renderer: renderer, node: node, anchor: anchor)
            }
            .onAppear {
                arViewProxy.session.run(configuration)
            }
            .onDisappear {
                arViewProxy.session.pause()
            }
            .overlay(alignment: .top) {
                if !planeFound {
                    Text("Slowly move device horizontally side to side to calibrate")
                } else {
                    Text("Plane found!")
                        .bold()
                        .foregroundStyle(.green)
                }
            }
    }

    private func addPlane(renderer: SCNSceneRenderer, node: SCNNode, anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor,
              let device = renderer.device,
              let planeGeometry = ARSCNPlaneGeometry(device: device)
        else { return }
        planeFound = true
        planeGeometry.update(from: planeAnchor.geometry)
        let material = SCNMaterial()
        material.isDoubleSided = true
        material.diffuse.contents = UIColor.white.withAlphaComponent(0.65)
        planeGeometry.materials = [material]
        let planeNode = SCNNode(geometry: planeGeometry)
        node.addChildNode(planeNode)
    }
}

struct ARScene {
    private(set) var onAddNodeAction: ((SCNSceneRenderer, SCNNode, ARAnchor) -> Void)?
    private let proxy: ARSceneProxy

    init(proxy: ARSceneProxy) {
        self.proxy = proxy
    }

    func onAddNode(
        perform action: @escaping (SCNSceneRenderer, SCNNode, ARAnchor) -> Void
    ) -> Self {
        var view = self
        view.onAddNodeAction = action
        return view
    }
}

extension ARScene: UIViewRepresentable {
    func makeUIView(context: Context) -> ARSCNView {
        let arView = ARSCNView()
        arView.delegate = context.coordinator
        arView.session.delegate = context.coordinator
        proxy.arView = arView
        return arView
    }

    func updateUIView(_ uiView: ARSCNView, context: Context) {
        context.coordinator.onAddNodeAction = onAddNodeAction
    }

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }
}

extension ARScene {
    class Coordinator: NSObject, ARSCNViewDelegate, ARSessionDelegate {
        var onAddNodeAction: ((SCNSceneRenderer, SCNNode, ARAnchor) -> Void)?

        func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
            onAddNodeAction?(renderer, node, anchor)
        }
    }
}

@MainActor
class ARSceneProxy: NSObject, @preconcurrency ARSessionProviding {
    fileprivate var arView: ARSCNView!

    @objc dynamic var session: ARSession {
        arView.session
    }
}
Any help is greatly appreciated!
Hello,
I am a developer currently working on an AR application using ARKit. I aim to implement a Zoom feature that allows users to enlarge and reduce objects within the AR scene while simultaneously measuring the distance to those objects. Specifically, I want to incorporate Optical Zoom to provide a more natural and precise user experience. I have considered several approaches and would appreciate your advice on the most effective methods.
Approaches Being Considered:
Using UIPinchGestureRecognizer to adjust the camera's field of view
Modifying the scale property of SCNNode to enlarge/reduce specific objects
Leveraging AVFoundation to control the camera's optical zoom
Questions:
Compatibility Between ARKit and Optical Zoom: Is it feasible to control the camera's optical zoom using AVFoundation while utilizing ARKit's features? What should be considered when integrating these two frameworks?
Integrating Object Distance Measurement with Zoom Functionality: What is the most effective approach to measure and display the distance to an object in real-time when a user zooms in on it?
User Experience Considerations: Do you have any UI/UX design tips for implementing optical zoom to ensure a natural and intuitive experience? For example, how can visual feedback for zoom actions and distance measurements be effectively presented to users?
Performance Optimization: What optimization strategies can minimize potential performance issues when implementing both optical zoom and distance measurement features simultaneously?
Example Code and Reference Materials: Could you share any example code or reference materials that demonstrate similar functionalities?
Thank you.
Example Code Request:
If possible, providing sample code that integrates optical zoom with distance measurement would be extremely helpful.
Reference Links:
Please share any tutorials or resources that demonstrate the combined use of ARKit and AVFoundation.
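In the meantime, a minimal ARSCNView sketch of the node-scaling approach plus real-time distance measurement (names such as ZoomViewController and targetNode are placeholders; it deliberately leaves the AVFoundation optical-zoom question aside):

import ARKit
import SceneKit
import UIKit
import simd

class ZoomViewController: UIViewController {
    var sceneView: ARSCNView!
    var targetNode: SCNNode?          // hypothetical: the object being inspected

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView = ARSCNView(frame: view.bounds)
        view.addSubview(sceneView)
        sceneView.session.run(ARWorldTrackingConfiguration())

        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:)))
        sceneView.addGestureRecognizer(pinch)
    }

    // Pinch-to-scale the node itself (a "zoom" on the object, not on the camera).
    @objc private func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        guard let node = targetNode, gesture.state == .changed else { return }
        node.simdScale *= Float(gesture.scale)
        gesture.scale = 1   // reset so each callback applies an incremental factor
    }

    // Distance in meters from the device camera to the node, e.g. for an on-screen label.
    func distanceToTarget() -> Float? {
        guard let node = targetNode,
              let frame = sceneView.session.currentFrame else { return nil }
        let cameraPosition = frame.camera.transform.columns.3
        return simd_distance(node.simdWorldPosition,
                             SIMD3(cameraPosition.x, cameraPosition.y, cameraPosition.z))
    }
}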
Hi,
I currently have Enterprise API access and have observed that the main camera API only provides RGB data. I am trying to access point cloud information from LiDAR, but it seems ARKit doesn't offer this directly via the standard APIs that the iPad uses.
I wanted to ask if there are any possible options to access depth data or enhanced camera capabilities using the Enterprise API.
Specifically:
Does having Enterprise API access unlock any additional camera-related APIs in AVFoundation that could provide depth information or more advanced control over the camera?
Are there any workarounds or alternative methods to obtain depth data from the camera?
Hey, captureHighResolutionFrame() produces the normal camera shutter sound and that really doesn't fit the ARKit context. I can't override it the usual way because there's no AVCaptureSession object in ARSession. Any ideas on what to do? Thanks!
I am working on a RealityView on iOS 18 that needs to render objects farther away than 1,000 meters. My app is used outside in open areas. I am using RealityView with content.camera = .spatialTracking, and I have turned off occlusion, collisions, and plane detection with a minimal scene-understanding configuration like this:
let configuration = SpatialTrackingSession.Configuration(
    tracking: [.camera],
    sceneUnderstanding: [], // We don't want occlusions, collisions, etc.
    camera: .back)
let session = SpatialTrackingSession()
if let unavailable = await session.run(configuration) {
    print("unavailable \(unavailable)")
}
Is this possible with spatialTracking with RealityView or with ARView?
I have my RealityView working on visionOS inside an ImmersiveSpace. On visionOS I don't have the camera as a passthrough; it is a virtual scene with world tracking set up via the WorldTrackingProvider, and I can render objects farther away than 1,000 meters. I would like to do the same thing on iOS. I don't need the camera passthrough, but I do need world tracking.
I see that PerspectiveCameraComponent lets me set the near and far clipping planes, but I don't see how I can use that camera with world tracking.
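For reference, a sketch of PerspectiveCameraComponent clipping planes in a RealityView that uses a virtual camera; whether the same component can override the clipping planes while content.camera = .spatialTracking drives the camera is exactly the open question (names here are placeholders):

import SwiftUI
import RealityKit

struct FarCameraView: View {
    var body: some View {
        RealityView { content in
            content.camera = .virtual

            // A camera entity with a far clipping plane of 10 km.
            let camera = Entity()
            camera.components.set(PerspectiveCameraComponent(near: 0.1,
                                                             far: 10_000,
                                                             fieldOfViewInDegrees: 60))
            camera.position = [0, 1.5, 0]
            content.add(camera)

            // Something far away to verify it still renders.
            let distantSphere = ModelEntity(mesh: .generateSphere(radius: 50),
                                            materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            distantSphere.position = [0, 1.5, -5_000]
            content.add(distantSphere)
        }
    }
}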
I'm working on a school project that allows users to open a .USDZ file (using Quick Look) on a webpage while using Apple Vision Pro to place the object in their physical environment; the project is deployed on Vercel. I'm testing the page with my Apple Vision Pro, and when I tap to open the .USDZ file, I see a triangle with an exclamation mark while it's trying to load, but it never loads. Does anybody know how to troubleshoot this issue?
How can I make a specified entity in a RealityView graspable by the user?
The entity has physics and collision components, and nothing should change while the user is not performing the grab gesture. However, when the user makes a grab gesture very close to the entity (a small deviation is acceptable), an anchor component should be enabled to bind the entity to the hand; when the user lets go, the entity should fall along the y-axis from its current position (driven by its physics component).
I hope you can help me. Thank you.
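For illustration, a rough sketch of the grab/release toggle described above (not a complete solution: detecting the grab pose itself, e.g. from HandTrackingProvider joint distances, is not shown, and reading the anchored entity's world transform assumes a SpatialTrackingSession with hand tracking is running):

import RealityKit

@MainActor
func setGrabbed(_ grabbed: Bool, entity: ModelEntity) {
    if grabbed {
        // Follow the palm; a kinematic body ignores gravity while held.
        entity.components.set(AnchoringComponent(.hand(.right, location: .palm)))
        if var body = entity.components[PhysicsBodyComponent.self] {
            body.mode = .kinematic
            entity.components.set(body)
        }
    } else {
        // Drop from the current pose: keep the world transform, remove the hand anchor,
        // and restore a dynamic body so gravity takes over again.
        let worldTransform = entity.transformMatrix(relativeTo: nil)
        entity.components.remove(AnchoringComponent.self)
        entity.setTransformMatrix(worldTransform, relativeTo: nil)
        if var body = entity.components[PhysicsBodyComponent.self] {
            body.mode = .dynamic
            entity.components.set(body)
        }
    }
}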
I'm currently developing a 3D motion capture app using ARKit.
I tested this sample code, but in iOS 18 the hands' and legs' orientations seem to be wrong.
The following images are the sample app's screen captures in iOS 17 and iOS 18.
To set the stage:
I made a prototype of an app for a company; the app is to be used internally for now. The prototype runs perfectly on iOS, so I got a Vision Pro to port the app to its final destination. The first thing I found out is that image tracking on the Vision Pro is useless for moving images (and that's the core of my app). Also, the distance at which an image is lost seems to be much shorter on the Vision Pro. Now I'm trying to figure out whether it's possible to fix or work around this in any way, and I'm wondering if the Enterprise API would change anything.
So:
Is it possible to request Enterprise API access as a single person with a basic Apple Developer subscription? I looked around the forum and only got more confused.
Does QR code detection and tracking work any better than image detection, or are anchor updates the same?
Does the increased "object detection" frequency affect image/QR tracking in any way, or is it (as the name implies) only for object tracking?
Would increasing the CPU/GPU headroom make any difference to image/QR detection frequency?
Is there something to disable to make anchor updates more frequent? I don't need complex models, shadows, physics, etc.
Greetings
Michal
How can I create a 3D model of clothing that behaves like real fabric, with realistic physics? Is it possible to achieve such a model with photogrammetry? I want to use this model in the Apple Vision Pro and interact with it using hand gestures.
https://developer.apple.com/documentation/realitykit/model3d/ontapgesture(count:coordinatespace:perform:)
The link above shows the double-tap gesture (onTapGesture(count:coordinateSpace:)) as deprecated in visionOS 2.0, to be used only on watchOS, right?
So how can I implement a double-tap gesture in visionOS?
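For illustration, one option is SpatialTapGesture(count: 2) combined with targetedToAnyEntity() (a sketch; entity and view names are placeholders):

import SwiftUI
import RealityKit

struct DoubleTapView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            // Input target + collision shapes are required for the entity to receive gestures.
            box.components.set(InputTargetComponent())
            box.generateCollisionShapes(recursive: true)
            content.add(box)
        }
        .gesture(
            SpatialTapGesture(count: 2)
                .targetedToAnyEntity()
                .onEnded { value in
                    print("Double-tapped \(value.entity.name)")
                }
        )
    }
}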
In my visionOS app I am using plane detection, and I want to create planes that have physics so that my RealityKit entities rest on real-world detected planes.
I was curious whether the code below, which I found in the samples, is the most efficient way of doing this.
func processPlaneDetectionUpdates() async {
    for await anchorUpdate in planeTracking.anchorUpdates {
        let anchor = anchorUpdate.anchor
        if anchorUpdate.event == .removed {
            planeAnchors.removeValue(forKey: anchor.id)
            if let entity = planeEntities.removeValue(forKey: anchor.id) {
                entity.removeFromParent()
            }
            return
        }
        planeAnchors[anchor.id] = anchor
        let entity = Entity()
        entity.name = "Plane \(anchor.id)"
        entity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)
        // Generate a mesh for the plane (for occlusion).
        var meshResource: MeshResource? = nil
        do {
            let contents = MeshResource.Contents(planeGeometry: anchor.geometry)
            meshResource = try MeshResource.generate(from: contents)
        } catch {
            print("Failed to create a mesh resource for a plane anchor: \(error).")
            return
        }
        var material = UnlitMaterial(color: .red)
        material.blending = .transparent(opacity: .init(floatLiteral: 0))
        if let meshResource {
            // Make this plane occlude virtual objects behind it.
            entity.components.set(ModelComponent(mesh: meshResource, materials: [material]))
        }
        // Generate a collision shape for the plane (for object placement and physics).
        var shape: ShapeResource? = nil
        do {
            let vertices = anchor.geometry.meshVertices.asSIMD3(ofType: Float.self)
            shape = try await ShapeResource.generateStaticMesh(positions: vertices,
                                                               faceIndices: anchor.geometry.meshFaces.asUInt16Array())
        } catch {
            print("Failed to create a static mesh for a plane anchor: \(error).")
            return
        }
        if let shape {
            entity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
            let physics = PhysicsBodyComponent(mode: .static)
            entity.components.set(physics)
        }
        let existingEntity = planeEntities[anchor.id]
        planeEntities[anchor.id] = entity
        contentEntity.addChild(entity)
        existingEntity?.removeFromParent()
    }
}
extension MeshResource.Contents {
    init(planeGeometry: PlaneAnchor.Geometry) {
        self.init()
        self.instances = [MeshResource.Instance(id: "main", model: "model")]
        var part = MeshResource.Part(id: "part", materialIndex: 0)
        part.positions = MeshBuffers.Positions(planeGeometry.meshVertices.asSIMD3(ofType: Float.self))
        part.triangleIndices = MeshBuffer(planeGeometry.meshFaces.asUInt32Array())
        self.models = [MeshResource.Model(id: "model", parts: [part])]
    }
}

extension GeometrySource {
    func asArray<T>(ofType: T.Type) -> [T] {
        assert(MemoryLayout<T>.stride == stride, "Invalid stride \(MemoryLayout<T>.stride); expected \(stride)")
        return (0..<count).map {
            buffer.contents().advanced(by: offset + stride * Int($0)).assumingMemoryBound(to: T.self).pointee
        }
    }

    func asSIMD3<T>(ofType: T.Type) -> [SIMD3<T>] {
        asArray(ofType: (T, T, T).self).map { .init($0.0, $0.1, $0.2) }
    }

    subscript(_ index: Int32) -> (Float, Float, Float) {
        precondition(format == .float3, "This subscript operator can only be used on GeometrySource instances with format .float3")
        return buffer.contents().advanced(by: offset + (stride * Int(index))).assumingMemoryBound(to: (Float, Float, Float).self).pointee
    }
}

extension GeometryElement {
    subscript(_ index: Int) -> [Int32] {
        precondition(bytesPerIndex == MemoryLayout<Int32>.size,
                     """
                     This subscript operator can only be used on GeometryElement instances with bytesPerIndex == \(MemoryLayout<Int32>.size).
                     This GeometryElement has bytesPerIndex == \(bytesPerIndex)
                     """
        )
        var data = [Int32]()
        data.reserveCapacity(primitive.indexCount)
        for indexOffset in 0 ..< primitive.indexCount {
            data.append(buffer
                .contents()
                .advanced(by: (Int(index) * primitive.indexCount + indexOffset) * MemoryLayout<Int32>.size)
                .assumingMemoryBound(to: Int32.self).pointee)
        }
        return data
    }

    func asInt32Array() -> [Int32] {
        var data = [Int32]()
        let totalNumberOfInt32 = count * primitive.indexCount
        data.reserveCapacity(totalNumberOfInt32)
        for indexOffset in 0 ..< totalNumberOfInt32 {
            data.append(buffer.contents().advanced(by: indexOffset * MemoryLayout<Int32>.size).assumingMemoryBound(to: Int32.self).pointee)
        }
        return data
    }

    func asUInt16Array() -> [UInt16] {
        asInt32Array().map { UInt16($0) }
    }

    public func asUInt32Array() -> [UInt32] {
        asInt32Array().map { UInt32($0) }
    }
}
I was also curious to know whether I can do this without ARKit, using SpatialTrackingSession. My understanding is that with SpatialTrackingSession in RealityKit I can only get the transforms of the AnchorEntities, but I won't have the geometry information needed to create the collision shapes.
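For reference, a sketch of that RealityKit-only path as I understand it (API names assumed, not verified): the plane AnchorEntity provides a tracked transform and the requested bounds, but no mesh geometry, which matches the limitation described above; the collision shape below is therefore only an approximate box.

import RealityKit

@MainActor
func makePlaneAnchor() async -> AnchorEntity {
    let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])
    let session = SpatialTrackingSession()
    if let unavailable = await session.run(configuration) {
        print("Unavailable capabilities: \(unavailable)")
    }

    // Anchor to any horizontal plane at least 0.5 m x 0.5 m.
    let planeAnchor = AnchorEntity(.plane(.horizontal,
                                          classification: .any,
                                          minimumBounds: [0.5, 0.5]))

    // Approximate the plane with a thin static collision box so entities can rest on it.
    let shape = ShapeResource.generateBox(size: [0.5, 0.01, 0.5])
    planeAnchor.components.set(CollisionComponent(shapes: [shape], isStatic: true))
    planeAnchor.components.set(PhysicsBodyComponent(mode: .static))
    return planeAnchor
}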
We are developing a visionOS app and have applied for the Enterprise APIs for visionOS, including Main Camera Access for Vision Pro. We have already received the "Enterprise.license" file in the mail Apple sent us, and we used our developer account to import the license file into Xcode:
but in Xcode we cannot find the entitlement for the Enterprise API:
If we put com.apple.developer.arkit.main-camera-access.allow into the project's entitlement file manually, Xcode raises an error:
and we find that the app itself doesn't have the "Additional Capabilities" section that includes the Enterprise API:
What should we do to get the entitlement file for the Enterprise API, so we can use the Enterprise API?
I stumbled across the function setWorldOrigin(relativeTransform:) from the ARSession which is documented here:
https://developer.apple.com/documentation/arkit/arsession/2942278-setworldorigin
I made a custom ARSession subclass where I override this function to print and modify the relativeTransform parameter. The print shows that this function is called with an updated relativeTransform value, but it seems to have no impact, e.g. on the world origin when starting or continuing a scan, on the tiny puppet house in RoomPlan, or on any tracking position I get from ARKit.
Does anybody have experience with this method, or know which parts are influenced by setWorldOrigin()?
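For comparison, a minimal sketch of how setWorldOrigin(relativeTransform:) is normally called on a running session rather than overridden in a subclass (the documented effect is to change the basis of the world coordinate space, so anchors and the camera transform are then reported relative to the new origin):

import ARKit
import simd

func shiftWorldOrigin(of session: ARSession) {
    // Shift the world origin 1 m forward along -Z relative to the current origin.
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -1.0
    session.setWorldOrigin(relativeTransform: translation)
}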
We tried out our Unity-based AR app for the very first time under iOS 18 and noticed an immediate, repeatable crash.
When run in Xcode 16, we get this error message:
Assert: /Library/Caches/com.apple.xbs/Sources/AppleCV3D/library/VIO/CAPI/src/SlamAnchor.cpp:37 : HasValidPose()
Assert: /Library/Caches/com.apple.xbs/Sources/AppleCV3D/library/VIO/CAPI/src/SlamAnchor.cpp:37 : HasValidPose()
That's a blocker to us.
We're using Unity 2022.3.27f1.
Dear all,
We are building an XR application demonstrating our research on open-vocabulary 3D instance segmentation for assistive technology. We intend to bring it to visionOS using the new Enterprise APIs. Our method was trained on datasets resembling ScanNet, which contain the following:
localized (1) RGB camera frames (2) with Depth (3) and camera intrinsics (4)
point cloud (5)
I understand we can query (1), (2), and (4) from the CameraFrameProvider. As for (3) and (5), it is unclear to me if/how we can obtain that data.
In handheld ARKit, this example project demos how the depthMap can be used to simulate raw point clouds. However, this property doesn't seem to be available in visionOS.
Is there some way for us to obtain depth data associated with camera frames?
"Faking" depth data from the SceneReconstructionProvider-generated meshes is too coarse for our method. I hope I'm just missing some detail and there's some way to configure CameraFrameProvider to also deliver depth and/or point clouds.
Thanks for any help or pointer in the right direction!
~ Alex
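For reference, the handheld ARKit route that the example project relies on looks roughly like this (sceneDepth requires a LiDAR device); whether CameraFrameProvider on visionOS can be configured to deliver anything equivalent is exactly the open question above:

import ARKit

func runWithSceneDepth(session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)   // per-frame LiDAR depth
    }
    session.run(configuration)
}

func depthMap(from frame: ARFrame) -> CVPixelBuffer? {
    // Float32 depth in meters, aligned with capturedImage via the camera intrinsics.
    frame.sceneDepth?.depthMap
}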
I'm experiencing an issue with QuickLook in iOS 18 that affects .reality files with audio playback. When I open a .reality file that includes audio, the audio track plays twice: once from the moment the file is opened, and again from the start of the animation. This results in duplicate audio playback.
I've tested this issue on multiple devices running iOS 16, 17, and 18, and the problem only occurs on iOS 18. I've tried restarting the devices and checking for any software updates, but the issue persists.
Steps to reproduce:
Open a .reality file with audio playback in QuickLook on an iOS 18 device.
Observe the audio playback.
Expected result:
The audio track should play only once, from the start of the animation.
Actual result:
The audio track plays twice, once from the moment the file is opened and again from the start of the animation.
Device and iOS version:
I've tested this issue on an iPhone 12 Pro and an iPhone 13 Pro running iOS 18, an iPhone 13 running iOS 16, and an iPhone 11 Pro running iOS 17.