Discuss Spatial Computing on Apple Platforms.

Presenting immersive content in UIKit app
I have a UIKit app and would like to provide a spatial experience when it runs on visionOS. I added visionOS support, but I'm not sure how to present an immersive view. All the tutorials are in SwiftUI, but my app is in UIKit. This is an example from a SwiftUI project, but how do I declare this ImmersiveView in UIKit?

    struct VirtualApp: App {
        var body: some Scene {
            WindowGroup {
                ContentView()
            }
            .windowStyle(.volumetric)

            ImmersiveSpace(id: "ImmersiveSpace") {
                ImmersiveView()
            }
        }
    }

And in UIKit, how do I make the call to open the ImmersiveView?
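A minimal sketch of one possible approach, assuming the existing UIKit screens can be hosted from a SwiftUI entry point: declare the ImmersiveSpace in a SwiftUI App, wrap the UIKit root view controller with UIViewControllerRepresentable, and open the space through the openImmersiveSpace environment action. LegacyRootView, MyLegacyRootViewController, ImmersiveView, and OpenSpaceButton are hypothetical names, not part of the original project.

    import SwiftUI
    import UIKit

    // Hypothetical wrapper around the app's existing UIKit root view controller.
    struct LegacyRootView: UIViewControllerRepresentable {
        func makeUIViewController(context: Context) -> UIViewController {
            MyLegacyRootViewController() // assumption: the app's current UIKit entry screen
        }
        func updateUIViewController(_ uiViewController: UIViewController, context: Context) {}
    }

    @main
    struct VirtualApp: App {
        var body: some Scene {
            WindowGroup {
                LegacyRootView()
            }

            ImmersiveSpace(id: "ImmersiveSpace") {
                ImmersiveView()
            }
        }
    }

    // A SwiftUI control (e.g. overlaid on LegacyRootView) that opens the space.
    struct OpenSpaceButton: View {
        @Environment(\.openImmersiveSpace) private var openImmersiveSpace

        var body: some View {
            Button("Enter Immersive Space") {
                Task { await openImmersiveSpace(id: "ImmersiveSpace") }
            }
        }
    }

If the rest of the app stays in UIKit, the button (or any SwiftUI view holding the environment action) can be embedded with a UIHostingController.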
Replies: 5 · Boosts: 1 · Views: 1.5k · Activity: Jul ’23
How to display stereo images in Apple Vision Pro?
Hi community, I have a pair of stereo images, one for each eye. How should I render them on visionOS? I know that for 3D videos, AVPlayerViewController can display them in fullscreen mode, but I couldn't find any docs about 3D stereo images. I guess my question can be put more generally: is there any way to render different content for each eye? This could also be helpful to someone who only has sight in one eye.
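One route that is often suggested is a Reality Composer Pro shader graph that uses a Camera Index Switch node, so the material samples a different texture for the left and right eye. The sketch below assumes such a material already exists in a RealityKitContent package; the material path ("/Root/StereoMaterial"), scene file name ("Immersive.usda"), and the "leftImage"/"rightImage" parameter names are placeholders chosen for illustration.

    import RealityKit
    import RealityKitContent   // assumption: the project's Reality Composer Pro package

    // Builds a plane whose material shows `left` to the left eye and `right` to the
    // right eye, assuming the shader graph routes the two textures through a
    // Camera Index Switch node.
    func makeStereoPlane(left: TextureResource, right: TextureResource) async throws -> ModelEntity {
        var material = try await ShaderGraphMaterial(named: "/Root/StereoMaterial",
                                                     from: "Immersive.usda",
                                                     in: realityKitContentBundle)
        try material.setParameter(name: "leftImage", value: .textureResource(left))
        try material.setParameter(name: "rightImage", value: .textureResource(right))
        return ModelEntity(mesh: .generatePlane(width: 1.0, height: 0.5), materials: [material])
    }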
Replies: 8 · Boosts: 0 · Views: 3k · Activity: Jul ’23
Graphic Engineer
Hey there, I recently tried out the iOS 17 photogrammetry sample app. The results are very promising compared to the iOS 16 apps; the real-world scale retention works amazingly well. However, my use case involves keeping the camera still and rotating the object instead, which was an option in iOS 16 but was unfortunately removed in iOS 17. I wonder if there's a way to do this in the iOS 17 app!
Replies: 1 · Boosts: 0 · Views: 610 · Activity: Sep ’23
Variables related to real scale in Apple PhotogrammetrySession
In ARKit, I captured a few color CVPixelBuffers and depth CVPixelBuffers and ran PhotogrammetrySession with PhotogrammetrySamples. In my service, precise real-world scale is important, so I tried to figure out what determines whether the created model comes out at real scale. I ran some experiments with the same number of images (10), the same object, the same shot angles, and the same distances to the object (30 cm, 50 cm, 100 cm). Even with these variables controlled, the session sometimes produces a model at real scale and sometimes does not. Since I can't look at the photogrammetry source code to see how it works internally, I'd like to know what I'm missing and how I can get real scale every time, if that's possible.
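A minimal sketch of the input path the post describes, assuming the macOS sample-sequence initializer is being used; attaching a depthDataMap to every PhotogrammetrySample is what gives the reconstruction a chance to recover metric scale, so it is worth confirming depth is present and in a supported format on each frame. The frame sources and output URL are placeholders.

    import RealityKit
    import CoreVideo
    import Foundation

    // Pair each color frame with its depth map so the session can recover real-world scale.
    func makeSamples(colorBuffers: [CVPixelBuffer],
                     depthBuffers: [CVPixelBuffer]) -> [PhotogrammetrySample] {
        zip(colorBuffers, depthBuffers).enumerated().map { index, pair in
            var sample = PhotogrammetrySample(id: index, image: pair.0)
            sample.depthDataMap = pair.1   // e.g. kCVPixelFormatType_DepthFloat32
            return sample
        }
    }

    func reconstruct(samples: [PhotogrammetrySample], outputURL: URL) throws -> PhotogrammetrySession {
        var configuration = PhotogrammetrySession.Configuration()
        configuration.featureSensitivity = .normal
        let session = try PhotogrammetrySession(input: samples, configuration: configuration)
        try session.process(requests: [.modelFile(url: outputURL)])
        return session
    }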
Replies: 1 · Boosts: 0 · Views: 676 · Activity: Sep ’23
Question about Checkpoint Directory
Hello! I have a question about using snapshots from the iOS 17 sample app on macOS 14. I exported the "Photos" and "Snapshots" folders captured on iOS and then wrote:

    let checkpointDirectoryPath = "/path/to/the/Snapshots/"
    let checkpointDirectoryURL = URL(fileURLWithPath: checkpointDirectoryPath, isDirectory: true)
    if #available(macOS 14.0, *) {
        configuration.checkpointDirectory = checkpointDirectoryURL
    } else {
        // Fallback on earlier versions
    }

But I didn't notice any speed or performance improvement. It looks like the "Snapshots" folder was simply ignored. Please advise what I can do so that the "Snapshots" folder is actually used during reconstruction.
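One thing worth checking, as an assumption about the capture side rather than a confirmed fix: the snapshots can only be reused if the iOS capture wrote them through ObjectCaptureSession's own checkpoint directory, so that the macOS PhotogrammetrySession can later point at the same folder. A sketch, with the folder layout as a placeholder:

    import RealityKit
    import Foundation

    // On the iOS capture side: route snapshots into a known folder during capture,
    // so the same folder can be handed to PhotogrammetrySession.Configuration.checkpointDirectory.
    @MainActor
    func startCapture(session: ObjectCaptureSession, captureFolder: URL) {
        var configuration = ObjectCaptureSession.Configuration()
        configuration.checkpointDirectory = captureFolder.appendingPathComponent("Snapshots", isDirectory: true)
        session.start(imagesDirectory: captureFolder.appendingPathComponent("Images", isDirectory: true),
                      configuration: configuration)
    }

The macOS reconstruction would then set configuration.checkpointDirectory to that same "Snapshots" folder, as in the snippet above.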
Replies: 0 · Boosts: 0 · Views: 434 · Activity: Oct ’23
Downgrade iPad OS
I have several iPads that have been upgraded to 17.0.3, but I need to be able to bring them back to version 16.6.1. We have apps that currently do not work on 17. I have downloaded the 16.6.1 .ipsw file, and every time I try to use it I get "OS cannot be restored on 'iPad'. Personalization failed." Is there any way to get an OS file that would work?
Replies: 0 · Boosts: 0 · Views: 430 · Activity: Oct ’23
Object Capture: Pose Information
Hi, in the newly released Object Capture API, for a PhotogrammetrySession we can get the poses of the saved images, and the same images will be used to create the model. But in the sample project (https://developer.apple.com/documentation/realitykit/guided-capture-sample), only the generated 3D model is saved; for the others (pose, poses, bounds, point cloud, and model entity) there is a comment saying "// Not supported yet". When will this be available to developers? Can you at least give us a tentative date?
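For reference, a sketch of what requesting poses looks like on macOS, where PhotogrammetrySession accepts the .poses request (the post notes the iOS guided-capture sample does not support it yet); the input folder and output path are placeholders.

    import RealityKit
    import Foundation

    func requestPosesAndModel(imagesFolder: URL, modelURL: URL) async throws {
        let session = try PhotogrammetrySession(input: imagesFolder)
        try session.process(requests: [.poses, .modelFile(url: modelURL)])

        // Consume results as they arrive from the reconstruction pipeline.
        for try await output in session.outputs {
            switch output {
            case .requestComplete(_, .poses(let poses)):
                print("Camera poses received:", poses)
            case .requestComplete(_, .modelFile(let url)):
                print("Model written to", url)
            default:
                break
            }
        }
    }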
Replies: 1 · Boosts: 0 · Views: 560 · Activity: Nov ’23
View frame / bounds incorrect (iOS app on visionOS)
I'm running into an issue with the view frame/bounds of a Metal-based iOS app on the visionOS simulator. The attached snapshot shows the result of downloading Apple's sample code and running it in the simulator (Apple Vision Pro (Designed for iPad)). Is this a bug in the simulator / iOS-to-visionOS emulation, or is the sample code doing something odd that isn't compatible with visionOS? Thanks! Eddy
Replies: 1 · Boosts: 0 · Views: 514 · Activity: Nov ’23
How to replicate ObjectCaptureSession's boundary restriction?
Hello, I want to use Apple's PhotogrammetrySession to scan a window. However, ObjectCaptureSession seems to be a monotasker and won't allow capture to occur with anything but a small object on a flat surface. So, I need to manually feed data into PhotogrammetrySession. But when I do, it focuses way too much on the scene behind the window, sacrificing detail on the window itself. Is there a way for me to either coax ObjectCaptureSession into capturing an area on the wall, or for me to restrict PhotogrammetrySession's target bounding box manually? How does ObjectCaptureSession communicate the limited bounding box to PhotogrammetrySession? Thanks, Sebastian
Replies: 1 · Boosts: 0 · Views: 534 · Activity: Dec ’23
visionOS RealityView: control problem after placing entities inside a panoramic sphere
In Full immersion mode, I create a sphere with a radius of 10 and add a CollisionComponent and an InputTargetComponent to it. I then create a 0.2 cube, add the same two components to it, and also add an attachment. The code is as follows:

    RealityView { content, attachments in
        let meshgenerate = MeshResource.generateSphere(radius: 10)
        let collisionShape = ShapeResource.generateSphere(radius: 10)
        var sp = ModelEntity(mesh: meshgenerate)
        sp.components.set(CollisionComponent(shapes: [collisionShape]))
        sp.components.set(InputTargetComponent())
        sp.transform.scale *= .init(-1, 1, 1)
        sp.name = "sp"
        content.add(sp)

        let ont = ModelEntity(mesh: MeshResource.generateBox(size: 0.2))
        ont.components.set(CollisionComponent(shapes: [ShapeResource.generateBox(size: .init(x: 0.2, y: 0.2, z: 0.2))]))
        ont.components.set(InputTargetComponent())
        ont.name = "ont"
        ont.position = .init(x: 0, y: 0, z: -2)
        content.add(ont)

        if let stack = attachments.entity(for: "aid") {
            stack.name = "sssssss"
            stack.setPosition(.init(x: 0, y: 1.5, z: -1), relativeTo: nil)
            // stack.generateCollisionShapes(recursive: false)
            // stack.components.set(InputTargetComponent())
            content.add(stack)
        }
    } attachments: {
        let rostion = Rotation3D(angle: Angle2D(degrees: 30), axis: .x)
        Attachment(id: "aid") {
            Button {
                print("sss", "Button")
            } label: {
                Text("New Color")
                    .font(.extraLargeTitle)
                    .padding(40)
            }
            .background(.yellow)
        }
    }
    .gesture(TapGesture().targetedToAnyEntity().onEnded({ value in
        print("sss", "TapGesture", value.entity.name)
        // openwind(id: "main")
    }))

Only the sphere can trigger the gesture; the other ModelEntity and the attachment cannot trigger it at all. I know the problem is that the other entities are placed inside the sphere, which also has an InputTargetComponent. Without removing the sphere's InputTargetComponent, how can I make the attachment trigger the gesture as well?
Replies: 0 · Boosts: 0 · Views: 413 · Activity: Dec ’23
Using Vision Pro in multiple rooms
Suppose I want to use the Vision Pro device in multiple rooms in my home. Say I was wearing the device when I entered my home, checked some notifications on it, and closed the apps. With the device still on my head, I move to my bedroom. Now I want to open some other application without removing the headset and putting it on again. Is this possible?
Replies: 1 · Boosts: 0 · Views: 542 · Activity: Jan ’24
TestFlight for iPad (iOS) App in Apple Vision Pro issue (visionOS)
I have an iPad app that works and is available on the visionOS App Store. However, TestFlight releases show it as an iOS-only app that is "Incompatible on this Apple Vision Pro". How do I enable my iPadOS app for TestFlight on visionOS? P.S. Native visionOS apps can appear there; I don't have any approved or released builds yet for visionOS. I also see the same "app not compatible" issue in TestFlight when no visionOS section is present. The same app is available in the App Store under visionOS/iPad apps.
Replies: 3 · Boosts: 0 · Views: 824 · Activity: Feb ’24