Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

Posts under ARKit tag

200 Posts

RealityKit crashes randomly in the simulator but not on the device
I'm writing a RealityKit/ARKit app that runs on iOS. Starting with Xcode 16.0 beta 1, and at least through Xcode 16.1 beta 2 (16B5014f), in the iOS 18 simulator, my app randomly crashes in about 20% of app sessions the first time it attempts to present an ARView. The crashes seem to occur at multiple points within RealityKit and Metal. Below, I've included screenshots of the call stacks of the crashes, which occur as a result of both EXC_BAD_ACCESS and assertion failures within RealityKit.

The app only crashes in the iOS 18 simulator, and does not crash in the iOS 17 simulator or earlier. The app only crashes in the simulator, and does not crash on a device running iOS 18.

Before I investigate further, I'd appreciate it if an Apple engineer could give me a sense of whether these crashes are most likely the result of known issues within RealityKit and/or the simulator, or whether they more likely point to bugs in my app's code. I've submitted several feedback issues in the past, and I'd love to submit this one too, but I expect I would spend many hours attempting to create a repro case in a sample app. Understandably, I'd rather not spend that time if an Apple engineer could tell me this is a known issue, for example. Thank you.
1 reply · 0 boosts · 102 views · 1d
Issue with tracking multiple images using ARKit on visionOS
We are using the ARKit image tracking feature on visionOS 2.0 with three pre-registered images. The image tracking works, but only one image is actively tracked at a time. When more than one target image is visible to the camera, it has difficulty detecting and tracking the other images. Is this the expected behavior in visionOS, or is there something we need to do to resolve this issue?
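For reference, a minimal sketch of the setup in question, assuming the three targets live in a reference-image group named "AR Resources" (a placeholder name). Logging isTracked per anchor makes the one-image-at-a-time behavior easy to observe:

import ARKit

// Sketch: track several pre-registered images and log which anchors are tracked.
// "AR Resources" is a placeholder for whatever group name the asset catalog uses.
func runImageTracking() async {
    let session = ARKitSession()
    let referenceImages = ReferenceImage.loadReferenceImages(inGroupNamed: "AR Resources")
    let imageTracking = ImageTrackingProvider(referenceImages: referenceImages)

    do {
        try await session.run([imageTracking])
    } catch {
        print("Failed to run image tracking: \(error)")
        return
    }

    for await update in imageTracking.anchorUpdates {
        let anchor = update.anchor
        // With several targets in view, check how many anchors report isTracked == true.
        print("\(anchor.referenceImage.name ?? "unnamed"): tracked = \(anchor.isTracked)")
    }
}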
0 replies · 0 boosts · 87 views · 1d
Capturing External Object Images via Vision Pro Passthrough Camera with Enterprise APIs
We are currently working with the Enterprise APIs for visionOS 2 and have successfully obtained the necessary entitlements for passthrough camera access. Our goal is to capture images of external real-world objects using the passthrough camera of the Vision Pro, not just take screenshots or screen captures. Our specific use case involves:

1. Accessing the raw passthrough camera feed.
2. Capturing high-resolution images of objects in the real world through the camera.
3. Processing and saving these images for further analysis within our custom enterprise app.

We would greatly appreciate any guidance, tutorials, or sample code that could help us achieve this functionality. If there are specific APIs or best practices for handling real-world image capture via passthrough cameras with the Enterprise APIs, please let us know.
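For illustration, a rough sketch of the main-camera flow this use case points at, built on the Enterprise API's CameraFrameProvider. The format selection and the .left camera position follow Apple's enterprise-API example code and are assumptions here, not a verified implementation:

import ARKit
import CoreVideo

// Sketch: pull pixel buffers from the Vision Pro main camera (requires the
// main-camera-access Enterprise entitlement). Format choice and .left position are assumptions.
final class PassthroughCapture {
    let session = ARKitSession()
    let provider = CameraFrameProvider()

    func start() async throws {
        // Pick a supported video format for the main camera.
        guard let format = CameraVideoFormat
            .supportedVideoFormats(for: .main, cameraPositions: [.left])
            .first else { return }

        try await session.run([provider])

        guard let updates = provider.cameraFrameUpdates(for: format) else { return }
        for await frame in updates {
            guard let sample = frame.sample(for: .left) else { continue }
            // The sample's pixel buffer can be handed to Vision/Core Image, encoded, and saved.
            let pixelBuffer = sample.pixelBuffer
            _ = pixelBuffer
        }
    }
}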
0 replies · 0 boosts · 68 views · 1d
How is the hand tracking in Happy Beam avoiding data races?
I am new to learning about concurrency and I am working on an app that uses the HandTrackingProvider class. In the Happy Beam sample code, there is a HeartGestureModel which has a reference to the HandTrackingProvider() and seems to write to a struct called HandUpdates inside the HeartGestureModel class through the publishHandTrackingUpdates() function. On another thread, there is a function called computeTransformOfUserPerformedHeartGesture() which reads the values of HandUpdates to determine whether the user is making the appropriate gesture. My question is: how is the code handling the constant reads and writes to the HandUpdates struct?
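For context, a simplified sketch (not the Happy Beam source) of the usual way this is made safe: isolate the mutable struct to the main actor, so the async update loop and the gesture check run on the same executor and never touch it concurrently:

import ARKit

// Sketch: main-actor isolation serializes the writes from the anchor-update loop
// and the reads from the gesture check, so there is no data race on HandUpdates.
struct HandUpdates {
    var left: HandAnchor?
    var right: HandAnchor?
}

@MainActor
final class HeartGestureModel {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    private(set) var latestHandTracking = HandUpdates()

    func publishHandTrackingUpdates() async {
        // Each anchor update is applied on the main actor, one at a time.
        for await update in handTracking.anchorUpdates {
            switch update.anchor.chirality {
            case .left:  latestHandTracking.left = update.anchor
            case .right: latestHandTracking.right = update.anchor
            }
        }
    }

    func computeTransformOfUserPerformedHeartGesture() -> simd_float4x4? {
        // Also main-actor isolated, so it always sees a consistent HandUpdates value.
        guard let left = latestHandTracking.left,
              let right = latestHandTracking.right,
              left.isTracked, right.isTracked else { return nil }
        // ... joint math elided ...
        return left.originFromAnchorTransform
    }
}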
1 reply · 0 boosts · 105 views · 2d
Are these RealityKit warnings a cause for concern?
I've created a minimal iOS RealityKit/ARKit app with only this view controller:

import UIKit
import RealityKit
import ARKit

class ViewController: UIViewController {
    var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        arView = ARView(frame: view.bounds, cameraMode: .nonAR, automaticallyConfigureSession: true)
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(arView)
    }
}

When I run this app in the iOS 18.1 simulator using Xcode 16.1 beta 2 (16B5014f), RealityKit logs the warnings included below to the console. I see similar warnings on the device, running iOS 18.0. Should I be concerned about any of these warnings? Please let me know if I should submit feedback reporting this issue. Thank you.

Could not locate file 'default-binaryarchive.metallib' in bundle.
Registering library (/Library/Developer/CoreSimulator/Volumes/iOS_22B5045f/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 18.1.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/suFeatheringCreateMergedOcclusionMask.rematerial' in bundle at '/Users/drew/Library/Developer/CoreSimulator/Devices/6BCC578D-5046-49E6-B149-390576C7241D/data/Containers/Bundle/Application/75E15C63-F1D2-487F-A057-EBA2D45582C3/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arKitPassthrough.rematerial' in bundle at '…/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arSegmentationComposite.rematerial' in bundle at '…/ARViewFailApp.app'. Loading via asset path.
Could not resolve material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute0.rematerial' in bundle at '…/ARViewFailApp.app'. Loading via asset path.
[The same "Could not resolve material name … Loading via asset path." warning repeats, with the same bundle path, for arInPlacePostProcessCombinedPermute1 through arInPlacePostProcessCombinedPermute12.]
...
Compiler failed to build request
makeRenderPipelineState failed [reading from a rendertarget is not supported].
Pipeline for technique meshShadowCasterProgrammableBlending failed compilation!
1 reply · 0 boosts · 128 views · 3d
ARKit delegate code broken by Swift 6
I'm porting over some code that uses ARKit to Swift 6 (with Complete Strict Concurrency Checking enabled). Some methods on ARSCNViewDelegate, namely Coordinator.renderer(_:didAdd:for:) among at least one other, are causing a consistent crash. On Swift 5 this code works absolutely fine.

The above method consistently crashes with _dispatch_assert_queue_fail. My assumption is that in Swift 6 a trap has been inserted by the compiler to validate that my downstream code is running on the main thread. In Implementing a Main Actor Protocol That's Not @MainActor, Quinn "The Eskimo!" seems to address scenarios of this nature with three proposed workarounds, yet none of them seems feasible here:

For #1, marking ContentView.addPlane(renderer:node:anchor:) nonisolated and using @preconcurrency import ARKit compiles but still crashes :(
For #2, applying @preconcurrency to the ARSCNViewDelegate conformance declaration site just yields this warning: @preconcurrency attribute on conformance to 'ARSCNViewDelegate' has no effect
For #3, as Quinn recognizes, this is a non-starter as ARSCNViewDelegate is out of our control.

The minimal reproducible set of code is below. Simply run the app, scan your camera back and forth across a well-lit environment, and the app should crash within a few seconds. Switch over to Swift Language Version 5 in build settings, retry, and you'll see the current code works fine.

import ARKit
import SwiftUI

struct ContentView: View {
    @State private var arViewProxy = ARSceneProxy()
    private let configuration: ARWorldTrackingConfiguration
    @State private var planeFound = false

    init() {
        configuration = ARWorldTrackingConfiguration()
        configuration.worldAlignment = .gravityAndHeading
        configuration.planeDetection = [.horizontal]
    }

    var body: some View {
        ARScene(proxy: arViewProxy)
            .onAddNode { renderer, node, anchor in
                addPlane(renderer: renderer, node: node, anchor: anchor)
            }
            .onAppear {
                arViewProxy.session.run(configuration)
            }
            .onDisappear {
                arViewProxy.session.pause()
            }
            .overlay(alignment: .top) {
                if !planeFound {
                    Text("Slowly move device horizontally side to side to calibrate")
                } else {
                    Text("Plane found!")
                        .bold()
                        .foregroundStyle(.green)
                }
            }
    }

    private func addPlane(renderer: SCNSceneRenderer, node: SCNNode, anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor,
              let device = renderer.device,
              let planeGeometry = ARSCNPlaneGeometry(device: device)
        else { return }
        planeFound = true
        planeGeometry.update(from: planeAnchor.geometry)
        let material = SCNMaterial()
        material.isDoubleSided = true
        material.diffuse.contents = UIColor.white.withAlphaComponent(0.65)
        planeGeometry.materials = [material]
        let planeNode = SCNNode(geometry: planeGeometry)
        node.addChildNode(planeNode)
    }
}

struct ARScene {
    private(set) var onAddNodeAction: ((SCNSceneRenderer, SCNNode, ARAnchor) -> Void)?
    private let proxy: ARSceneProxy

    init(proxy: ARSceneProxy) {
        self.proxy = proxy
    }

    func onAddNode(
        perform action: @escaping (SCNSceneRenderer, SCNNode, ARAnchor) -> Void
    ) -> Self {
        var view = self
        view.onAddNodeAction = action
        return view
    }
}

extension ARScene: UIViewRepresentable {
    func makeUIView(context: Context) -> ARSCNView {
        let arView = ARSCNView()
        arView.delegate = context.coordinator
        arView.session.delegate = context.coordinator
        proxy.arView = arView
        return arView
    }

    func updateUIView(_ uiView: ARSCNView, context: Context) {
        context.coordinator.onAddNodeAction = onAddNodeAction
    }

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }
}

extension ARScene {
    class Coordinator: NSObject, ARSCNViewDelegate, ARSessionDelegate {
        var onAddNodeAction: ((SCNSceneRenderer, SCNNode, ARAnchor) -> Void)?

        func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
            onAddNodeAction?(renderer, node, anchor)
        }
    }
}

@MainActor
class ARSceneProxy: NSObject, @preconcurrency ARSessionProviding {
    fileprivate var arView: ARSCNView!
    @objc dynamic var session: ARSession { arView.session }
}

Any help is greatly appreciated!
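For illustration, a sketch of one possible direction (not from the original post): treat the callback as nonisolated, since SceneKit delivers it on its render thread, and hop to the main actor before invoking the main-actor-bound closure. UncheckedSendable is a hypothetical helper for deliberately moving the non-Sendable SceneKit objects across the isolation boundary:

import ARKit

// Sketch of a modified Coordinator: the delegate method stays nonisolated and the
// work is re-dispatched onto the main actor, where the SwiftUI state lives.
extension ARScene {
    @MainActor
    final class HoppingCoordinator: NSObject, ARSCNViewDelegate, ARSessionDelegate {
        var onAddNodeAction: ((SCNSceneRenderer, SCNNode, ARAnchor) -> Void)?

        nonisolated func renderer(_ renderer: SCNSceneRenderer,
                                  didAdd node: SCNNode,
                                  for anchor: ARAnchor) {
            // SceneKit calls this off the main thread; SCNNode/ARAnchor aren't Sendable,
            // so box them and take responsibility for the transfer ourselves.
            let box = UncheckedSendable((renderer, node, anchor))
            Task { @MainActor in
                let (renderer, node, anchor) = box.value
                self.onAddNodeAction?(renderer, node, anchor)
            }
        }
    }
}

/// Minimal wrapper for moving non-Sendable values across an isolation boundary on purpose.
struct UncheckedSendable<T>: @unchecked Sendable {
    let value: T
    init(_ value: T) { self.value = value }
}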
1 reply · 7 boosts · 172 views · 2d
Compatibility Between ARKit and Optical Zoom
Hello, I am a developer currently working on an AR application using ARKit. I aim to implement a zoom feature that allows users to enlarge and reduce objects within the AR scene while simultaneously measuring the distance to those objects. Specifically, I want to incorporate optical zoom to provide a more natural and precise user experience. I have considered several approaches and would appreciate your advice on the most effective methods.

Approaches being considered:
- Using UIPinchGestureRecognizer to adjust the camera's field of view
- Modifying the scale property of SCNNode to enlarge/reduce specific objects
- Leveraging AVFoundation to control the camera's optical zoom

Questions:
1. Compatibility between ARKit and optical zoom: Is it feasible to control the camera's optical zoom using AVFoundation while utilizing ARKit's features? What should be considered when integrating these two frameworks?
2. Integrating object distance measurement with zoom functionality: What is the most effective approach to measure and display the distance to an object in real time when a user zooms in on it?
3. User experience considerations: Do you have any UI/UX design tips for implementing optical zoom to ensure a natural and intuitive experience? For example, how can visual feedback for zoom actions and distance measurements be effectively presented to users?
4. Performance optimization: What optimization strategies can minimize potential performance issues when implementing both optical zoom and distance measurement features simultaneously?
5. Example code and reference materials: Could you share any example code or reference materials that demonstrate similar functionalities?

Thank you. If possible, sample code that integrates optical zoom with distance measurement would be extremely helpful, as would any tutorials or resources that demonstrate the combined use of ARKit and AVFoundation.
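On the first question, there is a mechanism for this since iOS 16: ARKit exposes the capture device it manages through ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera, and you can adjust videoZoomFactor on it while the session is running. A sketch of the mechanics (my own example; whether a zoomed feed keeps tracking and distance measurements acceptable still needs testing):

import ARKit
import AVFoundation

// Sketch: adjust zoom on the camera that ARKit itself manages (iOS 16+).
// Call this while the ARSession is already running.
func setCameraZoom(to factor: CGFloat) {
    guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else {
        print("The primary camera isn't configurable on this device/configuration.")
        return
    }
    do {
        try device.lockForConfiguration()
        defer { device.unlockForConfiguration() }
        // Clamp to what the hardware supports; large factors become digital rather than optical zoom.
        device.videoZoomFactor = min(max(factor, device.minAvailableVideoZoomFactor),
                                     device.maxAvailableVideoZoomFactor)
    } catch {
        print("Failed to lock the capture device: \(error)")
    }
}

A UIPinchGestureRecognizer can feed its scale into this function; how the zoomed feed interacts with raycast-based distance measurement is worth validating on device.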
1 reply · 0 boosts · 89 views · 3d
Raw point cloud access
Hi, I currently have Enterprise API access and have observed that the main camera API only provides RGB data. I am trying to access point cloud information from LiDAR, but it seems ARKit doesn't offer this directly via the standard APIs that iPad uses. I wanted to ask if there are any possible options to access depth data or enhanced camera capabilities using the Enterprise API. Specifically:

- Does having Enterprise API access unlock any additional camera-related APIs in AVFoundation that could provide depth information or more advanced control over the camera?
- Are there any workarounds or alternative methods to obtain depth data from the camera?
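For reference, the iPad-style depth access referred to above is the sceneDepth frame semantics in ARKit for iOS; a minimal sketch of that path is below, shown only for comparison (it is not a visionOS Enterprise API):

import ARKit

// Sketch (iOS, LiDAR devices): opt into sceneDepth and read per-pixel depth off each frame.
final class DepthReader: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            configuration.frameSemantics.insert(.sceneDepth)
        }
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // depthMap is a CVPixelBuffer of depth in meters; confidenceMap rates each pixel.
        guard let depth = frame.sceneDepth else { return }
        _ = depth.depthMap
        _ = depth.confidenceMap
    }
}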
1 reply · 0 boosts · 59 views · 4d
Change zFar in RealityView
I am working on a RealityView on iOS 18 that needs to render objects farther away than 1,000 meters. My app is used outside in open areas. I am using RealityView with content.camera = .spatialTracking, and I have turned off occlusion, collisions, and plane detection with a simple scene understanding like this:

let configuration = SpatialTrackingSession.Configuration(
    tracking: [.camera],
    sceneUnderstanding: [], // We don't want occlusions, collisions, etc.
    camera: .back)
let session = SpatialTrackingSession()
if let unavailable = await session.run(configuration) {
    print("unavailable \(unavailable)")
}

Is this possible with spatialTracking with RealityView or with ARView? I have my RealityView working on visionOS inside an ImmersiveSpace. On visionOS I don't have the camera as a passthrough; it is a virtual scene with world tracking set up via the WorldTrackingProvider, and I can render objects farther away than 1,000 meters. I would like to do the same thing on iOS. I don't need to have the camera passthrough, but I do need world tracking. I see that PerspectiveCameraComponent lets me set the near and far clipping planes, but I don't see how I can use that camera with world tracking.
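For reference, a sketch of the PerspectiveCameraComponent piece mentioned at the end, with the far plane pushed out for a virtual camera; whether an equivalent override applies when .spatialTracking drives the camera is exactly the open question here:

import RealityKit

// Sketch: a virtual camera entity with an extended far clipping plane.
// This is straightforward when RealityView uses a virtual camera; it's unclear
// how (or whether) it can be combined with the spatial-tracking camera on iOS.
@MainActor
func makeFarSeeingCamera() -> Entity {
    let cameraEntity = Entity()
    var camera = PerspectiveCameraComponent()
    camera.near = 0.1
    camera.far = 10_000 // meters
    cameraEntity.components.set(camera)
    return cameraEntity
}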
0 replies · 0 boosts · 164 views · 1w
USDZ file not loading on Apple Vision Pro
I'm working on a school project that allows users to open a .USDZ file (using Quick Look) on a webpage while using Apple Vision Pro, so they can place the object in their physical environment; the project is deployed on Vercel. I'm testing the page with my Apple Vision Pro, and when I tap to open the .USDZ file, I see a triangle with an exclamation mark while it's trying to load, but it never loads. Does anybody know how to troubleshoot this issue?
4 replies · 0 boosts · 213 views · 1w
Seize the entity
How can I make a specific entity in a RealityView grabbable by the user? The entity has physics and collision components, and it should stay put while the user isn't grabbing it. When the user makes a grab gesture very close to the entity (a small deviation is acceptable), an anchoring component should be enabled to bind the entity to the hand; when the user lets go, the entity should fall along the y-axis from its current position (under the physics component). I hope you can help me. Thank you.
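For illustration, a rough sketch of one way to implement this (my own example, untested): anchor the entity to the hand and switch its physics body to kinematic while it is held, then remove the anchor and go back to dynamic on release so it falls under gravity:

import RealityKit

// Sketch: toggle hand anchoring and the physics mode when a grab starts/ends.
// Preserving the entity's world transform when the anchor is removed may need extra care.
@MainActor
func setGrabbed(_ entity: Entity, grabbed: Bool) {
    var body = entity.components[PhysicsBodyComponent.self] ?? PhysicsBodyComponent()
    if grabbed {
        body.mode = .kinematic // don't let gravity fight the hand while held
        entity.components.set(body)
        entity.components.set(AnchoringComponent(.hand(.right, location: .palm)))
    } else {
        entity.components.remove(AnchoringComponent.self)
        body.mode = .dynamic   // fall along -y from wherever it was released
        entity.components.set(body)
    }
}

The grab itself can be detected by comparing a HandAnchor's palm position against the entity's position within a small tolerance, matching the "small deviation" described above.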
1 reply · 0 boosts · 170 views · 1w
visionOS slow image tracking and enterprise API questions
To set the stage: I made a prototype of an app for a company; the app is to be used internally for now. The prototype runs perfectly on iOS, so I got a Vision Pro to port the app to its final destination. The first thing I found out is that image tracking on the Vision Pro is useless for moving images (and that's the core of my app). Also, the distance at which an image is lost seems to be much shorter on the Vision Pro. Now I'm trying to figure out whether it's possible to fix or work around this in any way, and I'm wondering if the Enterprise API would change anything. So:

1. Is it possible to request Enterprise API access as a single person with a basic Apple Developer subscription? I looked around the forum and only got more confused.
2. Does QR code detection and tracking work any better than image detection, or are the anchor updates the same?
3. Does the increased "object detection" frequency affect image/QR tracking in any way, or is it (as the name implies) only for object tracking?
4. Would increasing the CPU/GPU headroom make any difference to image/QR detection frequency?
5. Is there something to disable to make anchor updates more frequent? I don't need complex models, shadows, physics, etc.

Greetings
Michal
3 replies · 0 boosts · 271 views · 1w
Is this the easiest way to create scene planes that allow for collision with RealityKit entities?
In my visionOS app I am using plane detection, and I want to create planes that have physics so that my RealityKit entities can rest on real-world detected planes. I was curious whether the code below, which I found in the samples, is the most efficient way of doing this.

func processPlaneDetectionUpdates() async {
    for await anchorUpdate in planeTracking.anchorUpdates {
        let anchor = anchorUpdate.anchor
        if anchorUpdate.event == .removed {
            planeAnchors.removeValue(forKey: anchor.id)
            if let entity = planeEntities.removeValue(forKey: anchor.id) {
                entity.removeFromParent()
            }
            return
        }
        planeAnchors[anchor.id] = anchor
        let entity = Entity()
        entity.name = "Plane \(anchor.id)"
        entity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)

        // Generate a mesh for the plane (for occlusion).
        var meshResource: MeshResource? = nil
        do {
            let contents = MeshResource.Contents(planeGeometry: anchor.geometry)
            meshResource = try MeshResource.generate(from: contents)
        } catch {
            print("Failed to create a mesh resource for a plane anchor: \(error).")
            return
        }
        var material = UnlitMaterial(color: .red)
        material.blending = .transparent(opacity: .init(floatLiteral: 0))
        if let meshResource {
            // Make this plane occlude virtual objects behind it.
            entity.components.set(ModelComponent(mesh: meshResource, materials: [material]))
        }

        // Generate a collision shape for the plane (for object placement and physics).
        var shape: ShapeResource? = nil
        do {
            let vertices = anchor.geometry.meshVertices.asSIMD3(ofType: Float.self)
            shape = try await ShapeResource.generateStaticMesh(positions: vertices,
                                                               faceIndices: anchor.geometry.meshFaces.asUInt16Array())
        } catch {
            print("Failed to create a static mesh for a plane anchor: \(error).")
            return
        }
        if let shape {
            entity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
            let physics = PhysicsBodyComponent(mode: .static)
            entity.components.set(physics)
        }
        let existingEntity = planeEntities[anchor.id]
        planeEntities[anchor.id] = entity
        contentEntity.addChild(entity)
        existingEntity?.removeFromParent()
    }
}

extension MeshResource.Contents {
    init(planeGeometry: PlaneAnchor.Geometry) {
        self.init()
        self.instances = [MeshResource.Instance(id: "main", model: "model")]
        var part = MeshResource.Part(id: "part", materialIndex: 0)
        part.positions = MeshBuffers.Positions(planeGeometry.meshVertices.asSIMD3(ofType: Float.self))
        part.triangleIndices = MeshBuffer(planeGeometry.meshFaces.asUInt32Array())
        self.models = [MeshResource.Model(id: "model", parts: [part])]
    }
}

extension GeometrySource {
    func asArray<T>(ofType: T.Type) -> [T] {
        assert(MemoryLayout<T>.stride == stride, "Invalid stride \(MemoryLayout<T>.stride); expected \(stride)")
        return (0..<count).map {
            buffer.contents().advanced(by: offset + stride * Int($0)).assumingMemoryBound(to: T.self).pointee
        }
    }

    func asSIMD3<T>(ofType: T.Type) -> [SIMD3<T>] {
        asArray(ofType: (T, T, T).self).map { .init($0.0, $0.1, $0.2) }
    }

    subscript(_ index: Int32) -> (Float, Float, Float) {
        precondition(format == .float3, "This subscript operator can only be used on GeometrySource instances with format .float3")
        return buffer.contents().advanced(by: offset + (stride * Int(index))).assumingMemoryBound(to: (Float, Float, Float).self).pointee
    }
}

extension GeometryElement {
    subscript(_ index: Int) -> [Int32] {
        precondition(bytesPerIndex == MemoryLayout<Int32>.size,
                     """
                     This subscript operator can only be used on GeometryElement instances with bytesPerIndex == \(MemoryLayout<Int32>.size).
                     This GeometryElement has bytesPerIndex == \(bytesPerIndex)
                     """)
        var data = [Int32]()
        data.reserveCapacity(primitive.indexCount)
        for indexOffset in 0 ..< primitive.indexCount {
            data.append(buffer
                .contents()
                .advanced(by: (Int(index) * primitive.indexCount + indexOffset) * MemoryLayout<Int32>.size)
                .assumingMemoryBound(to: Int32.self).pointee)
        }
        return data
    }

    func asInt32Array() -> [Int32] {
        var data = [Int32]()
        let totalNumberOfInt32 = count * primitive.indexCount
        data.reserveCapacity(totalNumberOfInt32)
        for indexOffset in 0 ..< totalNumberOfInt32 {
            data.append(buffer.contents().advanced(by: indexOffset * MemoryLayout<Int32>.size).assumingMemoryBound(to: Int32.self).pointee)
        }
        return data
    }

    func asUInt16Array() -> [UInt16] {
        asInt32Array().map { UInt16($0) }
    }

    public func asUInt32Array() -> [UInt32] {
        asInt32Array().map { UInt32($0) }
    }
}

I was also curious to know whether I can do this without ARKit, using SpatialTrackingSession. My understanding is that with SpatialTrackingSession in RealityKit I can only get the transforms of the AnchorEntities, but I won't have the geometry information needed to create the collision shapes.
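For comparison, a sketch of the SpatialTrackingSession route mentioned at the end, under the assumption that plane tracking can be requested in its configuration: it yields a tracked AnchorEntity you can parent content or an approximate collider to, but no plane mesh, so an exact static-mesh collision shape isn't available this way:

import RealityKit

// Sketch: RealityKit-only plane anchoring. The collider size is a guess because the
// anchor exposes no detected-plane geometry; only its transform is tracked.
@MainActor
func makeApproximatePlaneCollider() async -> AnchorEntity {
    let session = SpatialTrackingSession()
    let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])
    _ = await session.run(configuration)

    let planeAnchor = AnchorEntity(.plane(.horizontal,
                                          classification: .table,
                                          minimumBounds: [0.3, 0.3]))
    let collider = Entity()
    collider.components.set(CollisionComponent(shapes: [.generateBox(size: [1, 0.01, 1])],
                                               isStatic: true))
    collider.components.set(PhysicsBodyComponent(mode: .static))
    planeAnchor.addChild(collider)
    return planeAnchor
}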
2 replies · 0 boosts · 171 views · 1w