I have an application that is meant to be a "watch together" GroupActivity using SharePlay, coordinating video playback with AVPlayerPlaybackCoordinator. In the current implementation, the activity begins before the AVPlayer opens. However, when the user taps the back button within the AVPlayer view, they are prompted to "End Activity for Everyone" or "End Activity for just me"; there is no option to continue the group activity. My goal is to retain the same GroupSession even if a user exits the AVPlayer view. Is there a way to avoid ending the session when coordinating playback using the AVPlayerPlaybackCoordinator?
private func startObservingSessions() async {
    sessionInfo = .init()
    // Await new sessions to watch video together.
    for await session in MyActivity.sessions() {
        // Clean up the old session, if it exists.
        cleanUpSession(groupSession)

        #if os(visionOS)
        // Retrieve the new session's system coordinator object to update its configuration.
        guard let systemCoordinator = await session.systemCoordinator else { continue }

        // Create a new configuration that enables all participants to share the same immersive space.
        var configuration = SystemCoordinator.Configuration()
        // Sets up the spatial Persona configuration.
        configuration.spatialTemplatePreference = .sideBySide
        configuration.supportsGroupImmersiveSpace = true
        // Update the coordinator's configuration.
        systemCoordinator.configuration = configuration
        #endif

        // Set the app's active group session before joining.
        groupSession = session
        // Store the session for use in sending messages.
        sessionInfo?.session = session

        // Observe changes to the session's state.
        let stateListener = Task {
            await self.handleStateChanges(groupSession: session)
        }
        subscriptions.insert(.init { stateListener.cancel() })

        // Observe when the local user or a remote participant changes the activity on the GroupSession.
        let activityListener = Task {
            await self.handleActivityChanges(groupSession: session)
        }
        subscriptions.insert(.init { activityListener.cancel() })

        // Join the session to participate in playback coordination.
        session.join()
    }
}
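The observer above calls a `handleStateChanges(groupSession:)` helper that isn't shown. A minimal sketch of what it might look like, assuming `MyActivity` is your `GroupActivity` type and `cleanUpSession` is the same helper used above:

```swift
import GroupActivities

func handleStateChanges(groupSession: GroupSession<MyActivity>) async {
    // GroupSession publishes its state; iterate the published values.
    for await state in groupSession.$state.values {
        if case .invalidated = state {
            // The session ended (locally or remotely). Clean up app state,
            // but leave the UI in place so the user can start a new session.
            cleanUpSession(groupSession)
            break
        }
    }
}
```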
/// An implementation of `AVPlayerPlaybackCoordinatorDelegate` that determines how
/// the playback coordinator identifies local and remote media.
private class CoordinatorDelegate: NSObject, AVPlayerPlaybackCoordinatorDelegate {
    var video: Video?

    // Adopting this delegate method is required when playing local media,
    // or any time you need a custom strategy for identifying media. Without
    // implementing this method, coordinated playback won't function correctly.
    func playbackCoordinator(_ coordinator: AVPlayerPlaybackCoordinator,
                             identifierFor playerItem: AVPlayerItem) -> String {
        // Return the video id as the player item identifier.
        "\(video?.id ?? -1)"
    }
}
/// Initializes the playback coordinator for synchronizing video playback.
func initPlaybackCoordinator(playbackCoordinator: AVPlayerPlaybackCoordinator) async {
    self.playbackCoordinator = playbackCoordinator
    if let coordinator = self.playbackCoordinator {
        coordinator.delegate = coordinatorDelegate
    }
    if let activeSession = groupSession {
        // Set the group session on the AVPlayer instance's playback coordinator
        // so it can synchronize playback with other devices.
        playbackCoordinator.coordinateWithSession(activeSession)
    }
}
/// A coordinator that acts as the player view controller's delegate object.
final class PlayerViewControllerDelegate: NSObject, AVPlayerViewControllerDelegate {
    let player: PlayerModel

    init(player: PlayerModel) {
        self.player = player
    }

    #if os(visionOS)
    // The app adopts this method to reset the state of the player model when a user
    // taps the back button in the visionOS player UI.
    func playerViewController(_ playerViewController: AVPlayerViewController,
                              willEndFullScreenPresentationWithAnimationCoordinator coordinator: UIViewControllerTransitionCoordinator) {
        Task { @MainActor in
            // Calling reset dismisses the full-window player.
            player.reset()
        }
    }
    #endif
}
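One thing worth checking is that the reset path tears down only the player, never the session. A minimal sketch, assuming `PlayerModel` owns both the `AVPlayer` and the optional `groupSession` (this body is illustrative, not Apple's implementation, and it does not by itself suppress the system's "End Activity" prompt):

```swift
import AVFoundation

extension PlayerModel {
    @MainActor
    func reset() {
        // Stop and unload the player itself.
        player.pause()
        player.replaceCurrentItem(with: nil)

        // Intentionally do NOT call groupSession?.leave() or groupSession?.end() here.
        // Leaving the session object untouched keeps the activity running, so the
        // playback coordinator can be reattached with coordinateWithSession(_:)
        // the next time the user opens the player.
    }
}
```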
Posts under the visionOS tag:
How do I directly input landmarks to the activity classifier rather than inputting an image/video?
I am trying to attach a button to the user's left hand; the position is tracked, and the button stays above the user's left hand, but it doesn't face the user, or even face where the wrist is pointing. This is the main code snippet:
if model.editWindowAdded {
    let originalMatrix = model.originFromWristLeft
    let theattachment = attachments.entity(for: "sample")!

    entityDummy.addChild(theattachment)
    let testrotvalue = simd_quatf(real: 0.9906431,
                                  imag: SIMD3<Float>(-0.028681312, entityDummy.orientation.imag.y, 0.025926698))
    entityDummy.orientation = testrotvalue
    theattachment.position = [0, 0.1, 0]

    let timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        let originalMatrix = model.originFromWristLeft
        print(originalMatrix.columns.0.y)
        let testrotvalue = simd_quatf(real: 0.9906431,
                                      imag: SIMD3<Float>(-0.028681312, 0.1, 0.025926698))
        entityDummy.orientation = testrotvalue
    }
}
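Instead of hard-coding quaternion components, one option is to derive the orientation from the tracked wrist transform itself. A sketch, assuming `originFromWristLeft` is a `simd_float4x4` and using RealityKit's `Transform` to extract the rotation (`headPosition` in the commented alternative is a hypothetical value you would obtain from the device anchor):

```swift
import RealityKit

// Derive the attachment's orientation from the tracked wrist transform
// rather than from hard-coded quaternion components.
let wristTransform = Transform(matrix: model.originFromWristLeft)
entityDummy.orientation = wristTransform.rotation

// Alternatively, make the attachment face the user's head each frame:
// entityDummy.look(at: headPosition,
//                  from: entityDummy.position(relativeTo: nil),
//                  relativeTo: nil)
```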
Here is what I have discovered:
When window A pushes window B, and B's onAppear dismisses A by its id, A will not reappear when B later dismisses itself unless B calls openWindow/pushWindow(id: A).
However, if I then open an immersive space from A and dismiss it, several copies of B appear, depending on how many times the process above was repeated.
It admittedly makes little sense to use onAppear to dismiss A when we later want to reuse it, but is this behavior expected?
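A sketch of the workaround pattern being described, using SwiftUI's window environment actions (the window ids and view name are assumptions):

```swift
import SwiftUI

struct ViewB: View {
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Close B") {
            // Reopen A explicitly before dismissing B; otherwise, because
            // B's onAppear dismissed A, no window remains on screen.
            openWindow(id: "WindowA")
            dismissWindow(id: "WindowB")
        }
    }
}
```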
In Apple Maps, some areas have a very realistic, real-life 3D map. I want to display that content in 3D in visionOS (like Model3D). How can I call it?
Note: What I'm asking for is not an effect similar to 3D on a flat screen, as in iOS, but displaying the USDZ model in visionOS.
Hi everyone,
We are looking for Apple Vision Pro users who would like to test our new app. You'll get an opportunity to experience a unique digital pet app still in alpha before we release it to the public and help shape its future by providing your valuable feedback.
We know there are only a handful of spatial native apps optimized for Apple Vision Pro out there, so wouldn’t you like to be one of the first to check out what we’re developing exclusively for Apple Vision Pro? You will see for yourself that the experience we’re creating is nothing like anyone has ever seen before.
How it works:
Sign up here: https://bootloader.studio/usertestingprogram#signup
Confirm: Wait for a confirmation email with more detailed information.
Participate: Start providing your feedback in our Discord channel!
Thank you!
Hello,
is it possible to take a screenshot of the whole immersive view, including or excluding SwiftUI components? ARView has a snapshot method for this, but it seems there's no equivalent for RealityView.
I've tried to use ImageRenderer on a parent of RealityView, but I'm only getting plain white bitmap so far.
Thanks in advance,
Rlu
I have an application made with Flutter that can run on visionOS as a "Designed for iPad" app, and I would like it to be possible to enter mixed reality from inside this application somehow. What I have tried so far is embedding my visionOS project inside the Swift application that Flutter generates, but in this attempt Xcode told me this is not possible. Is there another way I could achieve my goal?
This is a visionOS app. I added a contextMenu to a composed view, but when I long-press the view there is no response. I tried the same contextMenu on other views, where it works normally, so I think there is something wrong with this composed view, but I don't know what the problem is. I'd appreciate any pointers. Thank you!
Views with problems:
struct NAMEView: View {
    @StateObject private var placeStore = PlaceStore()

    var body: some View {
        ZStack {
            Group {
                HStack(spacing: 2) {
                    Image(systemName: "mappin.circle.fill")
                        .font(.system(size: 50))
                        .symbolRenderingMode(.multicolor)
                        .accessibilityLabel("your location")
                        .accessibilityAddTraits([.isHeader])
                        .padding(.leading, 5.5)
                    VStack {
                        Text("\(placeStore.locationName)")
                            .font(.title3)
                            .accessibilityLabel(placeStore.locationName)
                        Text("You are here in App")
                            .font(.system(size: 13))
                            .foregroundColor(.secondary)
                            .accessibilityLabel("You are here in App")
                    }
                    .hoverEffect { effect, isActive, _ in
                        effect.opacity(isActive ? 1 : 0)
                    }
                    .padding()
                }
            }
            .onAppear {
                placeStore.updateLocationName()
            }
            .glassBackgroundEffect()
            .hoverEffect { effect, isActive, proxy in
                effect.clipShape(.capsule.size(
                    width: isActive ? proxy.size.width : proxy.size.height,
                    height: proxy.size.height,
                    anchor: .leading
                ))
                .scaleEffect(isActive ? 1.05 : 1.0)
            }
        }
    }
}
In RealityKit on visionOS, I scan the room and use the resulting mesh to create occlusion and physical boundaries. That works well, and I can place cubes (with physics enabled) on top of it.
However, I also want to update the mesh with versions from new scans, and that makes all my cubes jump.
Is there a way to prevent this? I understand that the inaccuracies will produce a slightly different mesh each time, and I don't want to anchor the objects, so my guess is I need to somehow determine a fixed floor height and alter the scanned meshes so they adhere to that fixed height.
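One way to implement the fixed-floor idea is to snap near-floor vertices of each new scan to a single canonical floor height before rebuilding the collision shape. A minimal sketch of that post-processing step (the function name and tolerance are assumptions):

```swift
// Snap any vertex whose height is within `tolerance` of the chosen floor
// height onto exactly that height, so successive scans agree on the floor.
func snapToFloor(vertices: [SIMD3<Float>],
                 floorY: Float,
                 tolerance: Float = 0.05) -> [SIMD3<Float>] {
    vertices.map { v in
        abs(v.y - floorY) <= tolerance ? SIMD3<Float>(v.x, floorY, v.z) : v
    }
}
```

Running each incoming mesh through a step like this before regenerating its collision shape keeps resting objects from being nudged by millimeter-level scan noise at the floor.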
Any thoughts or ideas appreciated
/Andreas
Hi,
My goal is to obtain the device location (6 DoF) of the Apple Vision Pro, and I found a function that might satisfy my need:
final func queryDeviceAnchor(atTimestamp timestamp: TimeInterval) -> DeviceAnchor?
which returns a device anchor (containing the position and orientation of the headset).
However, I couldn't find any documentation specifying where exactly the device anchor is located on the headset.
Is it at the midpoint between the user's eyes? At the centroid of the six world-facing tracking cameras?
It would be really helpful if someone could provide a local transformation matrix (similar to a camera extrinsic) from a visible rigid component (say the Digital Crown, the top button, or the laser scanner) to the device anchor.
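For reference, querying the anchor looks roughly like this on visionOS (a sketch; error handling and the check that the provider is running are omitted):

```swift
import ARKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()
try await session.run([worldTracking])

// Query the device pose for the current time.
if let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
    // 4x4 transform from the world origin to the device anchor.
    let originFromDevice = deviceAnchor.originFromAnchorTransform
    print(originFromDevice)
}
```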
Thanks.
Hi, I'm working on visionOS and I find I can't get the onDisappear event for the first window opened at app launch. It goes like this:
WindowGroup(id: "WindowA") {
    MyView()
        .onDisappear {
            print("WindowA disappear")
        }
}

WindowGroup(id: "WindowB") {
    MyView()
        .onDisappear {
            print("WindowB disappear")
        }
}

WindowGroup(id: "WindowC") {
    MyView()
        .onDisappear {
            print("WindowC disappear")
        }
}
When the app first launches, it opens WindowA automatically. I then open WindowB and WindowC programmatically, and tap the close button on the window bar below each window.
If I close WindowB or WindowC, I receive the onDisappear event.
If I close WindowA, I don't receive the onDisappear event.
If I reopen WindowA after it is closed and then close it again by tapping the close button below the window, I do receive the onDisappear event.
Is there any difference in logic for the first window at app launch? How can I get the onDisappear event for it?
I'm using Xcode 16 beta 2.
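As a possible workaround (a sketch, not a confirmed fix), the initial window's lifecycle can also be observed through scenePhase, which is reported for the window the system opens at launch:

```swift
import SwiftUI

struct MyView: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Hello")
            .onChange(of: scenePhase) { _, newPhase in
                // .background is reported when the window's scene goes away,
                // including for the first window opened automatically at launch.
                if newPhase == .background {
                    print("window scene moved to background")
                }
            }
    }
}
```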
Platform and Version
Development Environment: Xcode 16 Beta 3
visionOS 2 Beta 3
Description of Problem
I am currently working on integrating SharePlay into my visionOS 2 application. The application features a fully immersive space where users can interact. However, I have encountered an issue during testing on TestFlight.
When a user taps a button to activate SharePlay via the GroupActivity's activate() method within the immersive space, the immersive space visually disappears but is not properly dismissed. Instead, the immersive space can be made to reappear by turning the Digital Crown. Unfortunately, when it reappears, it overlaps with the built-in OS immersive space, resulting in a mixed and confusing user interface. This behavior is particularly concerning because the immersive space is not progressive and should not work with the Digital Crown being turned.
It is important to note that this problem is only present when testing the app via TestFlight. When the same build is compiled with the Release configuration and run directly through Xcode, the immersive space behaves as expected, and the issue does not occur.
Steps to Reproduce
Build a project that includes a fully immersive space and incorporates GroupActivity support.
Add a button within a window or through a RealityView attachment that triggers the GroupActivity's activate() method.
Upload the build to TestFlight.
Connect to a FaceTime call.
Open the app, enter an immersive space, then press the button to activate the Group Activity.
https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc_and_spatial_video
When I use this sample project to convert SBS video into spatial MV-HEVC video, the output is not recognized as spatial video on visionOS 2.0 Beta 3.
I want to create an xcframework for this repo: https://github.com/BradLarson/GPUImage
but failed.
I downloaded the repo and ran the following:
xcodebuild archive \
  -project GPUImage.xcodeproj \
  -scheme GPUImage \
  -destination "generic/platform=iOS" \
  -archivePath "archives/GPUImage"

xcodebuild archive \
  -project GPUImage.xcodeproj \
  -scheme GPUImage \
  -destination "generic/platform=iOS Simulator" \
  -archivePath "archivessimulator/GPUImage"

xcodebuild -create-xcframework \
  -archive archives/GPUImage.xcarchive -framework GPUImage.framework \
  -archive archivessimulator/GPUImage.xcarchive -framework GPUImage.framework \
  -output xcframeworks/GPUImage.xcframework
There are errors: 'cryptexDiskImage' is an unknown content type
and 'com.apple.platform.xros' is an unknown platform identifier.
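As a side note, xcframework archives usually also need SKIP_INSTALL=NO and BUILD_LIBRARY_FOR_DISTRIBUTION=YES, or the archive won't contain a usable framework. A sketch of the first archive command with those settings (this may not address the 'unknown platform identifier' error, which suggests an Xcode version that doesn't know the visionOS platform):

```shell
xcodebuild archive \
  -project GPUImage.xcodeproj \
  -scheme GPUImage \
  -destination "generic/platform=iOS" \
  -archivePath "archives/GPUImage" \
  SKIP_INSTALL=NO \
  BUILD_LIBRARY_FOR_DISTRIBUTION=YES
```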
I've been having some networking issues since updating to Sequoia and AVP 2.0. The problem has existed on all betas released so far, up to Beta 3.
My session will connect (though it occasionally fails). I often get a frozen screen on the Vision Pro, which toggles between that and a grey connection-issue screen, and the session often disconnects with a miscellaneous error -455.
I've tried logging out of and back in to my Apple ID on both devices. I've tried creating a new Mac user. I've even changed the Wi-Fi networks to match and to differ. Nothing seems to remedy the issue.
This was working under Sonoma, and it continues to work from the AVP beta 3 to Sonoma on a different Mac.
I have reported this via Feedback Assistant (FB13888947). I'm curious whether anyone else is seeing this and whether the symptoms are the same. If you are seeing it, please report feedback too, for traction on the back end.
Thanks much.
I want to automate tests for my iOS app and start writing UITests.
Sometimes system alerts appear, and my tests have to simulate button tapping.
In iOS and iPadOS these alerts are available via the system Springboard application:
let springboard = XCUIApplication(bundleIdentifier: "com.apple.springboard")
let cancelButton = springboard.alerts.buttons["Cancel"].firstMatch
if cancelButton.exists {
    cancelButton.tap() // <-- Cancel button taps, and the test continues
}
But when I launch my test in the Vision Pro simulator, the springboard is not available:
let springboard = XCUIApplication(bundleIdentifier: "com.apple.springboard")
print(springboard.exists) // <-- "false" will be printed, springboard does not exist
It means that I can't automate button tapping in system alerts.
So, my question is: how can I access system alerts in visionOS and tap their buttons from UI tests?
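One alternative that may work where the Springboard query doesn't (a sketch; whether interruption monitors fire on the visionOS simulator is untested here) is XCTest's interruption-monitor API:

```swift
import XCTest

final class AlertHandlingTests: XCTestCase {
    func testDismissesSystemAlert() {
        let app = XCUIApplication()
        app.launch()

        // Register a handler that runs whenever a system alert interrupts the test.
        addUIInterruptionMonitor(withDescription: "System alert") { alert in
            let cancel = alert.buttons["Cancel"]
            if cancel.exists {
                cancel.tap()
                return true // Handled; the test resumes.
            }
            return false
        }

        // Interruption monitors only run when the test next interacts with the app.
        app.tap()
    }
}
```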
I am building a Vision Pro project in Unity and tried to add a GroupActivity to it.
But I found that the visionOS simulator project generated by Unity doesn't have any items under Signing & Capabilities, and the signing configuration isn't displayed either.
Even when I try "+ Capability", it says "Capabilities are not supported for Project-Name".
Thanks for any help.
Hey, is there a way to create a good ground-shadow shader? I'm using a ground plane with an unlit material and I can't get the ground shadow to work properly. If I use a PBR material it works better, but I can barely see the shadow and I want more control over its intensity.
Hey, I need help achieving realistic fog and clouds in immersive spaces. Making 3D planes with transparent fog/cloud textures works, but they create issues when many of them overlap each other. I also can't get a good result with particles.
Thanks in advance!