Explore the power of machine learning and Apple Intelligence within apps. Discuss feature integration, share best practices, and explore what's possible for your app here.

Vision framework not working on Apple Vision Pro
I'm getting this error from the Vision framework on Apple Vision Pro:

    com.apple.Vision Code=9 "Could not build inference plan - ANECF error: failed to load ANE model file:///System/Library/Frameworks/Vision.framework/anodv4_drop6_fp16.H14G.espresso.hwx"

This code raises the error:

    func imageToHeadBox(image: CVPixelBuffer) async throws -> [CGRect] {
        let request: DetectFaceRectanglesRequest = DetectFaceRectanglesRequest()
        let faceResult: [FaceObservation] = try await request.perform(on: image)
        let faceBoxs: [CGRect] = faceResult.map { face in
            let faceBoundingBox: CGRect = face.boundingBox.cgRect
            return faceBoundingBox
        }
        return faceBoxs
    }

(A possible CPU-only fallback is sketched after this entry.)
Replies: 0 · Boosts: 0 · Views: 83 · Activity: 20h
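One possible mitigation, offered as a sketch rather than a confirmed fix: the older VNDetectFaceRectanglesRequest API can be pinned to the CPU, which may sidestep the ANE model that fails to load above. The usesCPUOnly flag is deprecated but still available.

    import Vision
    import CoreVideo

    // Hedged fallback sketch: use the older Vision API and keep it off the
    // Neural Engine, since the ANE model load is what fails above.
    func imageToHeadBoxFallback(image: CVPixelBuffer) throws -> [CGRect] {
        let request = VNDetectFaceRectanglesRequest()
        request.usesCPUOnly = true  // deprecated, but avoids the ANE code path
        let handler = VNImageRequestHandler(cvPixelBuffer: image, options: [:])
        try handler.perform([request])
        // VNFaceObservation.boundingBox is normalized to the image's coordinate space.
        return (request.results ?? []).map { $0.boundingBox }
    }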
Error in TensorFlow on MacBook Air M1 (macOS Monterey)
I keep getting this error even after reinstalling:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/__init__.py", line 439, in <module>
        _ll.load_library(_plugin_dir)
      File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
        py_tf.TF_LoadLibrary(lib)
    tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: _OBJC_CLASS_$_MPSGraphRandomOpDescriptor
      Referenced from: /Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib
      Expected in: /System/Library/Frameworks/MetalPerformanceShadersGraph.framework/Versions/A/MetalPerformanceShadersGraph
Replies: 0 · Boosts: 0 · Views: 90 · Activity: 2d
CreateML - problems with asynchronous training
I can successfully train an ActionClassifier using CreateML. However, I get crashes when I attempt to do the same asynchronously. The model parameters and training data source are the same in both cases:

    let modelParameters = MLActionClassifier.ModelParameters(validation: validationDataSet,
                                                             batchSize: 5,
                                                             maximumIterations: 10,
                                                             predictionWindowSize: 120,
                                                             targetFrameRate: 30)

    let trainingDataSource = MLActionClassifier.DataSource.directoryWithVideosAndAnnotation(
        at: myStudyParticipantURLFinal,
        annotationFile: documentURLFinal,
        videoColumn: "file",
        labelColumn: "category",
        startTimeColumn: "startTime",
        endTimeColumn: "endTime")

The only thing I add to attempt asynchronous training is sessionParameters:

    let sessionDirectory = URL(fileURLWithPath: "\(NSHomeDirectory())/test")
    // Session parameters can be provided to the `train` method.
    let sessionParameters = MLTrainingSessionParameters(
        sessionDirectory: sessionDirectory,
        reportInterval: 10,
        checkpointInterval: 100,
        iterations: 10
    )

passed to the final method:

    let trainJob = try MLActionClassifier.train(trainingData: trainingDataSource,
                                                parameters: modelParameters,
                                                sessionParameters: sessionParameters)

The job crashes saying it cannot find plist files. I notice that only one plist file is written: meta.plist. It seems there should also be a parameters.plist, but it is not there. (A job-monitoring sketch follows this entry.)
Replies: 1 · Boosts: 0 · Views: 150 · Activity: 5d
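For reference, a minimal monitoring sketch, assuming the same trainingDataSource, modelParameters, sessionParameters, and sessionDirectory as in the post. It shows the usual Combine-based way to observe the async job's outcome, which at least surfaces the underlying error instead of an opaque crash; it does not by itself resolve the missing-plist problem.

    import CreateML
    import Combine
    import Foundation

    var subscriptions = Set<AnyCancellable>()

    do {
        let job = try MLActionClassifier.train(trainingData: trainingDataSource,
                                               parameters: modelParameters,
                                               sessionParameters: sessionParameters)
        job.result
            .sink(receiveCompletion: { completion in
                if case .failure(let error) = completion {
                    print("Training failed:", error)  // should surface the plist error
                }
            }, receiveValue: { classifier in
                // Persist the finished model next to the session checkpoints;
                // the file name here is illustrative.
                try? classifier.write(to: sessionDirectory.appendingPathComponent("ActionClassifier.mlmodel"))
            })
            .store(in: &subscriptions)
    } catch {
        print("Could not start training job:", error)
    }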
What is the correct way to modify a SceneStorage variable from an AppIntent for only one scene in iPad Split View mode
Apple's sample code 'Trails' supports multiple scenes, but everything uses shared state across the scenes. Put the app in Split View mode with two windows of the app running and navigate: both mirror each other. That works as designed, because it uses a shared 'navigation model' across all scenes. https://developer.apple.com/documentation/appintents/acceleratingappinteractionswithappintents I would like to know if there is a supported or recommended way to modify individual scene storage from within the perform body of an AppIntent. The objective is to have App Shortcuts that launch different tabs in a TabView or different selections in a List. In short, I want to deep link to features, but account for more than one scene being open on iPad and only have programmatic navigation happen on the scene that is 'foremost' or 'activated' in Split View. I have it working with either a @Dependency or posting a Notification that my main ContentView listens for, but it changes all scenes. (One possible approach is sketched after this entry.)
Replies: 0 · Boosts: 0 · Views: 210 · Activity: 1w
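One possible approach, sketched under the assumption that key-window status is a usable "foremost scene" signal (in Split View both scenes report an .active scenePhase, so scenePhase alone cannot break the tie): broadcast the deep-link target from the intent's perform body, and let each scene act only if it owns the key window. WindowReader, OpenFavoritesIntent, and .deepLinkToTab are illustrative names, not Apple API.

    import SwiftUI
    import AppIntents
    import UIKit

    extension Notification.Name {
        // Illustrative name for the deep-link broadcast.
        static let deepLinkToTab = Notification.Name("deepLinkToTab")
    }

    // Hypothetical intent that broadcasts the desired tab to every scene.
    struct OpenFavoritesIntent: AppIntent {
        static var title: LocalizedStringResource = "Open Favorites"

        @MainActor
        func perform() async throws -> some IntentResult {
            NotificationCenter.default.post(name: .deepLinkToTab, object: 1)
            return .result()
        }
    }

    // Captures the UIWindow hosting this SwiftUI hierarchy so a scene can
    // ask whether it is the key (frontmost) window.
    struct WindowReader: UIViewRepresentable {
        @Binding var window: UIWindow?
        func makeUIView(context: Context) -> UIView {
            let view = UIView(frame: .zero)
            DispatchQueue.main.async { window = view.window }
            return view
        }
        func updateUIView(_ uiView: UIView, context: Context) {
            DispatchQueue.main.async { window = uiView.window }
        }
    }

    struct ContentView: View {
        @SceneStorage("selectedTab") private var selectedTab = 0
        @State private var window: UIWindow?

        var body: some View {
            TabView(selection: $selectedTab) {
                Text("Trails").tag(0)
                Text("Favorites").tag(1)
            }
            .background(WindowReader(window: $window))
            .onReceive(NotificationCenter.default.publisher(for: .deepLinkToTab)) { note in
                // Only the scene whose window is key follows the deep link;
                // the other Split View scene ignores the broadcast.
                guard window?.isKeyWindow == true, let tab = note.object as? Int else { return }
                selectedTab = tab
            }
        }
    }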
Apple Intelligence returned to Waitlist on my MacBook Pro
I've been running Sequoia 15.1 since it was released, and soon thereafter I was taken off the waitlist; I had been using Apple Intelligence until this morning. My first hint that something was wrong was that Writing Tools, which I'd been using extensively, disappeared. I tried in another app, and it wasn't there, either. I then looked at the Siri icon in my menu bar - which looks different under Apple Intelligence - and it had reverted to the old icon. I then checked my Apple Intelligence settings and, sure enough, not only was it off, but I'd been returned to the waitlist. My iOS and iPadOS devices continue to work just fine with Apple Intelligence; only my MacBook Pro is experiencing this issue. Has anyone else seen this?
Replies: 1 · Boosts: 0 · Views: 274 · Activity: 1w
Apple Intelligence cannot be downloaded (macOS)
Hello, I still have problems activating Apple Intelligence, even on beta 2. I recently wrote a post here but had to create a new one because I accidentally marked the old post as solved. I have tried restarting everything and reinstalling macOS; activation always gets stuck at "Preparing", and no download is visible in storage. I even tried a VPN. I constantly use a mobile hotspot - could that be the cause? Unfortunately, I don't have Wi-Fi at home, only an unlimited mobile data contract. MacBook Air M2 with 170 GB of free storage.
Replies: 4 · Boosts: 0 · Views: 287 · Activity: 1w
MLTensor computation took more time than expected.
    func testMLTensor() {
        let t1 = MLTensor(shape: [2000, 1], scalars: [Float](repeating: Float.random(in: 0.0...10.0), count: 2000), scalarType: Float.self)
        let t2 = MLTensor(shape: [1, 3000], scalars: [Float](repeating: Float.random(in: 0.0...10.0), count: 3000), scalarType: Float.self)
        for _ in 0...50 {
            let t = Date()
            let x = (t1 * t2)
            print("MLTensor", t.timeIntervalSinceNow * 1000, "ms")
        }
    }

    testMLTensor()

The above code took more time than expected, especially in the early iterations. (A timing sketch that materializes the result follows this entry.)
Replies: 1 · Boosts: 0 · Views: 152 · Activity: 1w
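A plausible explanation, offered as a sketch rather than a confirmed diagnosis: MLTensor work is dispatched asynchronously, so timing the multiply expression alone mostly measures dispatch, and the first iterations additionally pay one-time warm-up costs. Awaiting the materialized result and discarding a warm-up pass should give steadier numbers.

    import CoreML
    import Foundation

    func timedMatmul() async {
        let t1 = MLTensor(shape: [2000, 1],
                          scalars: [Float](repeating: 1.0, count: 2000),
                          scalarType: Float.self)
        let t2 = MLTensor(shape: [1, 3000],
                          scalars: [Float](repeating: 1.0, count: 3000),
                          scalarType: Float.self)

        _ = await (t1 * t2).shapedArray(of: Float.self)  // warm-up pass, discarded

        for _ in 0..<50 {
            let start = Date()
            _ = await (t1 * t2).shapedArray(of: Float.self)  // forces the computation to finish
            print("MLTensor", -start.timeIntervalSinceNow * 1000, "ms")
        }
    }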
Using OpenIntents with voice
Hi, I am working with AppIntents and have created an OpenIntent whose target uses a MyAppContact AppEntity that I created. This works fine when running from Shortcuts, because it pops up a list of options from the 'suggestedEntities' method. But it doesn't work well with Siri: the AppIntent is invoked, but Siri keeps repeatedly asking for the value of the 'target' entity, which you can't really pass in by voice. What's the workaround here? Can an OpenIntent be activated by voice as well? (A voice-resolution sketch follows this entry.)
Replies: 0 · Boosts: 0 · Views: 315 · Activity: 2w
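A hedged sketch of how voice resolution usually becomes possible: give the parameter a spoken prompt (requestValueDialog) and back the entity with an EntityStringQuery, so Siri can match the user's utterance against entity names. The MyAppContact definition below is a minimal illustrative stand-in for the poster's entity.

    import AppIntents

    struct MyAppContact: AppEntity {
        static var typeDisplayRepresentation: TypeDisplayRepresentation = "Contact"
        static var defaultQuery = MyAppContactQuery()

        var id: String
        var name: String
        var displayRepresentation: DisplayRepresentation {
            DisplayRepresentation(title: "\(name)")
        }
    }

    struct MyAppContactQuery: EntityStringQuery {
        // Illustrative in-memory data; a real query would hit your store.
        private var allContacts: [MyAppContact] {
            [MyAppContact(id: "1", name: "Alice"), MyAppContact(id: "2", name: "Bob")]
        }

        func entities(for identifiers: [MyAppContact.ID]) async throws -> [MyAppContact] {
            allContacts.filter { identifiers.contains($0.id) }
        }
        func suggestedEntities() async throws -> [MyAppContact] { allContacts }
        // The voice path: Siri hands over what the user said, and you match it.
        func entities(matching string: String) async throws -> [MyAppContact] {
            allContacts.filter { $0.name.localizedCaseInsensitiveContains(string) }
        }
    }

    struct OpenContactIntent: AppIntent, OpenIntent {
        static var title: LocalizedStringResource = "Open Contact"

        @Parameter(title: "Contact", requestValueDialog: IntentDialog("Which contact do you want to open?"))
        var target: MyAppContact

        func perform() async throws -> some IntentResult {
            .result()
        }
    }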
Ideas to improve Apple Watch
Dear Apple Team, I have a suggestion to enhance the Apple Watch user experience. A new feature could provide personalized recommendations based on weather conditions and the user's mood. For example, during hot weather it could suggest drinking something cold, or if the user is feeling down, it could offer ways to boost their mood. This kind of feature could make the Apple Watch not just a health and fitness tracker but also a more capable personal assistant. "Improve communication with Apple Watch" - Feature #1: Noise detection and location suggestions. Imagine having your Apple Watch detect ambient noise levels and suggest a quieter location for your call. Feature #2: Context-aware call response options. If you can't answer a call, your Apple Watch could offer pre-set responses to communicate your status and reduce missed-call anxiety. For example, if you're in a busy restaurant, your Apple Watch could suggest moving to a quieter spot nearby for a better conversation. Or if you're in a movie theater, your Apple Watch could send an automatic "I'm at the movies" text to the caller. "Improve user experience and app management" - Automated Sleep Notifications: the ability for the Apple Watch to automatically turn off notifications or change the watch face when the user is sleeping would provide a more seamless experience. For instance, when the watch detects that the user is in sleep mode, it could enable Do Not Disturb to silence calls and alerts. Caller Notification: in addition, it would be great if the Apple Watch could inform callers that the user is currently sleeping. This could help manage expectations for those attempting to reach the user at night. App Management to Conserve Battery: implementing a feature that detects power-draining apps while the user is asleep could further extend battery life. The watch and the iPhone could close or pause apps that are using significant power while the user is not active. I believe these features could meaningfully enhance the Apple Watch's usability for those who prioritize a restful night's sleep. Thank you for considering my suggestions. Best regards, Mahmut Ötgen, Istanbul, Turkey
Replies: 1 · Boosts: 0 · Views: 694 · Activity: 2w
iOS 18.1 beta - App crashes at runtime while using Translation.TranslationError in project
I'm trying to cast the error thrown by TranslationSession.translations(from:) as Translation.TranslationError. However, the app crashes at runtime whenever Translation.TranslationError is used in the project. Environment: iOS 18.1 beta, Xcode 16 beta.

    dyld[14615]: Symbol not found: _$s11Translation0A5ErrorVMa
      Referenced from: <3426152D-A738-30C1-8F06-47D2C6A1B75B> /private/var/containers/Bundle/Application/043A25BC-E53E-4B28-B71A-C21F77C0D76D/TranslationAPI.app/TranslationAPI.debug.dylib
      Expected in: /System/Library/Frameworks/Translation.framework/Translation

(A workaround sketch follows this entry.)
Replies: 1 · Boosts: 1 · Views: 330 · Activity: 2w
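A possible workaround while the symbol is missing from the beta, offered as a sketch: avoid referencing the Translation.TranslationError type at all (the crash is dyld failing to resolve that type's metadata accessor) and inspect the thrown error dynamically via NSError bridging instead.

    import Translation

    // `session` and `requests` are assumed to come from the surrounding code,
    // e.g. a .translationTask view modifier.
    func translate(_ requests: [TranslationSession.Request],
                   in session: TranslationSession) async -> [TranslationSession.Response] {
        do {
            return try await session.translations(from: requests)
        } catch {
            // Bridging to NSError exposes domain/code without emitting a
            // reference to the TranslationError type metadata in your binary.
            let nsError = error as NSError
            print("Translation failed:", nsError.domain, nsError.code, nsError.localizedDescription)
            return []
        }
    }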
CoreML crash on iOS 18 beta 5
Hello, my app works well on iOS 17 and previous iOS 18 betas, but it crashes on the latest iOS 18 beta 5 when calling the model's predictionFromFeatures. The crash's call stack is:

    *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Unrecognized ANE execution priority MLANEExecutionPriority_Unspecified'

    Last Exception Backtrace:
    0   CoreFoundation  0x000000019bd6408c __exceptionPreprocess + 164
    1   libobjc.A.dylib 0x000000019906b2e4 objc_exception_throw + 88
    2   CoreFoundation  0x000000019be5f648 -[NSException initWithCoder:]
    3   CoreML          0x00000001b7507340 -[MLE5ExecutionStream _setANEExecutionPriorityWithOptions:] + 248
    4   CoreML          0x00000001b7508374 -[MLE5ExecutionStream _prepareForInputFeatures:options:error:] + 248
    5   CoreML          0x00000001b7507ddc -[MLE5ExecutionStream executeForInputFeatures:options:error:] + 68
    6   CoreML          0x00000001b74ce5c4 -[MLE5Engine _predictionFromFeatures:stream:options:error:] + 80
    7   CoreML          0x00000001b74ce7fc -[MLE5Engine _predictionFromFeatures:options:error:] + 208
    8   CoreML          0x00000001b74cf110 -[MLE5Engine _predictionFromFeatures:usingState:options:error:] + 400
    9   CoreML          0x00000001b74cf270 -[MLE5Engine predictionFromFeatures:options:error:] + 96
    10  CoreML          0x00000001b74ab264 -[MLDelegateModel _predictionFromFeatures:usingState:options:error:] + 684
    11  CoreML          0x00000001b70991bc -[MLDelegateModel predictionFromFeatures:options:error:] + 124

My model file is an ML package. The source code is:

    // model
    MLModel *_model;
    ......
    // model init
    MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
    config.computeUnits = MLComputeUnitsCPUAndNeuralEngine;
    _model = [MLModel modelWithContentsOfURL:compileUrl configuration:config error:&error];
    .....
    // model prediction
    MLPredictionOptions *option = [[MLPredictionOptions alloc] init];
    id<MLFeatureProvider> outFeatures = [_model predictionFromFeatures:_modelInput options:option error:&error];

Is there anything wrong? Any advice would be appreciated. (A mitigation sketch follows this entry.)
Replies: 3 · Boosts: 1 · Views: 229 · Activity: 2w
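A hedged mitigation while the beta bug stands (sketched in Swift; it maps one-to-one onto the Objective-C above): the exception is thrown while configuring Neural Engine execution priority, so keeping the model off the ANE may avoid the crashing code path, at some performance cost. compiledURL and input stand in for the post's compileUrl and _modelInput.

    import CoreML

    func predict(compiledURL: URL, input: MLFeatureProvider) throws -> MLFeatureProvider {
        let config = MLModelConfiguration()
        config.computeUnits = .cpuAndGPU  // skip .cpuAndNeuralEngine while the bug stands
        let model = try MLModel(contentsOf: compiledURL, configuration: config)
        return try model.prediction(from: input, options: MLPredictionOptions())
    }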
Chat gpt audio in background
Dear Apple Development Team, I'm writing to express my concerns and request a feature enhancement regarding the ChatGPT app for iOS. Currently, the app's audio functionality does not work when the app is in the background. This limitation significantly affects the user experience, particularly for those of us who rely on the app for ongoing, interactive voice conversations. Given that many apps, particularly media and streaming services, are allowed to continue audio playback when minimized, it's frustrating that the ChatGPT app cannot do the same. This restriction interrupts the flow of conversation, forcing users to stay within the app to maintain an audio connection. For users who multitask on their iPhones, being able to switch between apps while continuing to listen or interact with ChatGPT is essential. The ability to reference notes, browse the web, or even respond to messages while maintaining an ongoing conversation with ChatGPT would greatly enhance the app's usability and align it with other background-capable apps. I understand that Apple prioritizes resource management and device performance, but I believe there's a strong case for allowing apps like ChatGPT to operate with background audio. Given its growing importance as a tool for productivity, learning, and communication, adding this capability would provide significant value to users. I hope you will consider this feedback for future updates to iOS, or provide guidance on any existing APIs that could be leveraged to enable such functionality (one such API is sketched after this entry). Thank you for your time and consideration. Best regards, Luke. (Yes, I used GPT to write this.)
Replies: 1 · Boosts: 0 · Views: 172 · Activity: 2w
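For what it's worth, iOS already exposes the needed mechanism to app developers: an app that declares the audio entry in its Info.plist UIBackgroundModes array and runs a playback-capable audio session keeps audio alive in the background. Whether ChatGPT adopts it is up to its developers rather than iOS. A minimal sketch of the session side:

    import AVFoundation

    // Requires the "audio" background mode in Info.plist (UIBackgroundModes).
    func configureBackgroundVoiceAudio() throws {
        let session = AVAudioSession.sharedInstance()
        // playAndRecord + voiceChat suits two-way voice; playback alone would
        // suffice for listen-only audio.
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetooth])
        try session.setActive(true)
    }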