Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

What format for writeHEIFRepresentation preserves HDR?
In the WWDC 24 session "Use HDR for dynamic image experiences in your app", it's noted that this is how you save edits for Adaptive HDR:

SDR + HDR: writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrImage: hdrImage])
SDR + Gain: writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrGainMapImage: gainImage])

This won't compile because the format argument is missing. What format should be used? In the WWDC 23 session "Support HDR images in your app", RGBAf, RGBAh, RGBA16, and RGB10 were mentioned, but I'm not sure which one to use. If relevant, I'm editing photos from the user's photo library, so the image was probably taken on an iPhone, but perhaps not. Thanks!
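For reference, a minimal sketch of the complete call with the missing format argument filled in. This is an assumption rather than a confirmed answer: it uses .RGBA8 on the theory that the base image being written in the gain-map case is 8-bit SDR, and the function and parameter names are placeholders.

```swift
import CoreImage

// Sketch only: fills in the missing `format:` argument. Assumes an 8-bit SDR base
// image plus gain map, hence .RGBA8; a deeper format such as .RGBA16 (or the
// separate writeHEIF10Representation API) would be the analogue for a deeper base.
func saveAdaptiveHDR(context: CIContext, sdrImage: CIImage, gainImage: CIImage, to url: URL) throws {
    let p3Space = CGColorSpace(name: CGColorSpace.displayP3)!
    try context.writeHEIFRepresentation(
        of: sdrImage,
        to: url,
        format: .RGBA8,
        colorSpace: p3Space,
        options: [.hdrGainMapImage: gainImage]
    )
}
```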
Replies: 0 · Boosts: 0 · Views: 26 · Activity: 3h
iOS 18 Apple CarPlay Audio Error
After updating to the iOS 18.1 beta, I have a lot of issues with Apple CarPlay:

- Audio quality is really bad (it plays over the voice channel instead of the media channel). Sometimes it plays through the phone speakers even though CarPlay is connected via cable.
- It doesn't respond well to the car's steering-wheel controls, such as volume up/down and the skip button.
- After receiving a phone call, the audio quality goes back to normal, but once my phone screen locks it goes bad again.

I'd love to see these problems fixed soon, as I like to play music while I drive.
Replies: 1 · Boosts: 0 · Views: 88 · Activity: 1d
SoundRecognition causes Input/Output callbacks to have varying Buffer sizes and introduces Glitching
Hello, we have noticed an issue with Sound Recognition that causes glitching with our AudioUnit setup in Smule. Input and output frame sizes are inconsistent, and the input frame size does not match [AVAudioSession sharedInstance].IOBufferDuration. My best guess is that Sound Recognition influences the input frame size but not the output frame size.

To reproduce, use the example app here: https://github.com/MarkoGill/SoundRecognitionBug

Hardware/OS:
- iPhone 14 Pro on iOS 18 → experiences the problem
- iPhone 11 on iOS 18 → experiences the problem
- iPhone 15 on iOS 18 → does not experience the problem

Reproduction steps:
1. Enable Sound Recognition (Settings > Accessibility > Sound Recognition > On).
2. Enable a sound for detection (Sounds > Dog > On).
3. Open the example app with a headset connected (it routes input to output).
4. Notice that glitching occurs.
5. Check the logs: the record and playback buffer sizes vary.

Example log:
AU input sample rate: 48000.000000
AU output sample rate: 48000.000000
hardware sample rate: 48000.000000
hardware buffer size: 1104.000000
updated record frame counts: 1024
updated playback frame counts: 1104

Note: if you disable Sound Recognition and restart the app, playback behaves correctly.
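To illustrate the mismatch being described, here is a minimal sketch (an illustration, not the app's actual AudioUnit code) that logs the session's nominal buffer size against the frame counts actually delivered, using an AVAudioEngine input tap:

```swift
import AVFoundation

// Illustration only: compares the session's requested I/O buffer size with the
// frame counts actually delivered to an input tap.
func logInputBufferSizes(engine: AVAudioEngine) throws {
    let session = AVAudioSession.sharedInstance()
    let expectedFrames = session.ioBufferDuration * session.sampleRate

    let input = engine.inputNode
    input.installTap(onBus: 0, bufferSize: 1024, format: input.outputFormat(forBus: 0)) { buffer, _ in
        // With Sound Recognition enabled, frameLength can drift from the expected size.
        print("expected ~\(Int(expectedFrames)) frames, got \(buffer.frameLength)")
    }
    try engine.start()
}
```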
Replies: 2 · Boosts: 0 · Views: 115 · Activity: 1d
Distorted Audio When Recording External Mics With AVCaptureSession and AVAssetWriter
I’m working on a macOS app, written in Swift. My goal is to record audio from an external microphone, e.g., one connected via USB. For this, I’m using an AVCaptureSession and recording its output with an AVAssetWriter. This works perfectly in principle (and reliably with internal microphones, for example).

The problem occurs after my app has successfully completed the first recording and I then want to make additional recordings (which makes me think it might be process-dependent, because it works again after restarting the app). The problem: noisy or distorted-sounding audio files. In addition, the following error message appears in the Console from CoreAudio / its AudioConverter:

Input data proc returned inconsistent 512 packets for 2048 bytes; at 3 bytes per packet, that is actually 682 packets

It is easy to reproduce. This problem is reproducible even if I don’t configure the AVAssetWriter manually and instead let it receive its audioSettings using a preset from an AVOutputSettingsAssistant.

I’m running on macOS 15.0 (24A335). I’ve filed a feedback including a demo project → FB15333298 🎟️

I would greatly appreciate any help! Have a great day, Martin
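For context, a stripped-down sketch of the pipeline described above (capture session → audio data output → asset writer input). The class and queue names are placeholders, error handling and canAdd checks are omitted, and startWriting/startSession handling is reduced to the bare minimum:

```swift
import AVFoundation

// Minimal sketch of the described setup; not the app's real code.
final class ExternalMicRecorder: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let writer: AVAssetWriter
    private let writerInput: AVAssetWriterInput

    init(device: AVCaptureDevice, outputURL: URL) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .m4a)
        // Audio settings from an output-settings assistant, as in the original setup.
        let settings = AVOutputSettingsAssistant(preset: .preset1280x720)?.audioSettings
        writerInput = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
        writerInput.expectsMediaDataInRealTime = true
        writer.add(writerInput)
        super.init()

        session.addInput(try AVCaptureDeviceInput(device: device))
        let output = AVCaptureAudioDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "audio.capture"))
        session.addOutput(output)
    }

    func start() { session.startRunning() }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        if writer.status == .unknown {
            writer.startWriting()
            writer.startSession(atSourceTime: sampleBuffer.presentationTimeStamp)
        }
        if writerInput.isReadyForMoreMediaData {
            writerInput.append(sampleBuffer)
        }
    }
}
```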
Replies: 2 · Boosts: 0 · Views: 107 · Activity: 1d
'You don’t have permission. - The AVPlayerItem instance has failed with the error code 257 and domain "NSCocoaErrorDomain".'
```objc
[[PHImageManager defaultManager] requestAVAssetForVideo:asset
                                                options:videoOptions
                                          resultHandler:^(AVAsset *_Nullable avAsset, AVAudioMix *_Nullable audioMix, NSDictionary *_Nullable info) {
    if ([avAsset isKindOfClass:[AVURLAsset class]]) {
        AVURLAsset *urlAsset = (AVURLAsset *)avAsset;
        NSURL *videoURL = urlAsset.URL;
        mediaInfo[@"path"] = videoURL.absoluteString;
    } else {
        // Failed to get video asset
        completion(nil);
    }
}];
```

Before iOS 18, I was able to access the AVAsset video using the method above with the URL, but starting from iOS 18, the following error appears: 'You don’t have permission. - The AVPlayerItem instance has failed with the error code 257 and domain "NSCocoaErrorDomain".'
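One possible workaround (an assumption, not a confirmed fix): rather than pulling the file URL out of the AVURLAsset and handing it to other code, keep playback inside the Photos framework by requesting a player item directly, which avoids depending on read permission for the underlying file. A Swift sketch:

```swift
import Photos
import AVKit

// Sketch: request an AVPlayerItem for the PHAsset instead of extracting the file URL.
// `asset` is the PHAsset from the user's library; allowing network access covers iCloud items.
func play(asset: PHAsset, completion: @escaping (AVPlayer?) -> Void) {
    let options = PHVideoRequestOptions()
    options.isNetworkAccessAllowed = true
    PHImageManager.default().requestPlayerItem(forVideo: asset, options: options) { playerItem, _ in
        DispatchQueue.main.async {
            guard let playerItem else { completion(nil); return }
            completion(AVPlayer(playerItem: playerItem))
        }
    }
}
```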
Replies: 2 · Boosts: 0 · Views: 107 · Activity: 1d
Destination Video Sample
I have been using the Destination Video sample project as a template to recreate a similar project. I have been able to update all the text and images, but when I go to change out the videos, it still plays the old videos. If I rename the new video to the old video's name, the new video appears. The problem is, there are only two videos, and they repeat on different links. Changing the URLs in SampleData.swift to the new file path does not work. Any suggestions on how to replace a video with a newly imported asset? Here is an example of the code.

```swift
Video(
    id: 1,
    name: "Landing",
    synopsis: """
        After a long journey through the stars, the robot botanist and its trusty spaceship finally arrive at Wolf 1069 B,
        ready to explore the mysteries that lie on the planet’s surface. New plants to catalog, new animals to discover,
        and cool rocks to unearth. Follow along as the botanist’s mission begins!
        """,
    categoryIDs: [1004, 1005],
    url: URL(string: "file://BOT-anist_video.mov")!,
    imageName: "landing",
    yearOfRelease: 2024,
    duration: 66,
    contentRating: "NR",
    isFeatured: true
),
```
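A hedged suggestion (not something the sample itself confirms): a hard-coded file:// string will not point at a clip you add to the app bundle, so it may help to resolve the new video by name at runtime. A minimal sketch, assuming the new file is called MyNewClip.mov and is a member of the app target:

```swift
import Foundation

// Sketch: resolve a bundled video by name instead of a hard-coded "file://..." string.
// "MyNewClip" is a placeholder; check the file's Target Membership if this returns nil.
guard let videoURL = Bundle.main.url(forResource: "MyNewClip", withExtension: "mov") else {
    fatalError("MyNewClip.mov is not in the app bundle")
}
// Then pass `videoURL` as the `url:` argument of the Video initializer above.
```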
Replies: 1 · Boosts: 0 · Views: 78 · Activity: 2d
AVAssetWriterInput -- inserting sample buffers with pauses in between not working
Hi, I'm trying to insert CMSampleBuffers into an AVAssetWriterInput that has been configured with expectsMediaDataInRealTime = false, with pauses. That is, I insert fixed-length audio at specific (increasing and non-overlapping) time points with large gaps in between, e.g., 5 seconds of audio at t=3.0, 5 seconds of audio at t=12.0, etc. The first audio sample plays at t=3 in the final output video as expected, but then all the other samples are bunched up immediately after it instead of being scheduled at the correct time. Below is my code. I'm just loading the asset and then readjusting its timestamps to be correct in the absolute timeline. Why do they not get scheduled correctly when the timestamps and durations are definitely correct and non-overlapping?

```swift
func addFrame(_ pixelBuffer: CVPixelBuffer) {
    guard CGSize(width: pixelBuffer.width, height: pixelBuffer.height) == outputSize else { return }
    let frameTime = CMTimeMake(value: frameCount, timescale: frameRate)
    if videoInput?.isReadyForMoreMediaData == true {
        pixelBufferAdaptor?.append(pixelBuffer, withPresentationTime: frameTime)
        frameCount += 1
        currentTime = frameTime
    }
}

func addMP3AudioClip(_ audioData: Data) async throws {
    let tempURL = FileManager.default.temporaryDirectory.appendingPathComponent(UUID().uuidString + ".mp3")
    defer {
        try? FileManager.default.removeItem(at: tempURL)
    }
    try audioData.write(to: tempURL)

    let asset = AVAsset(url: tempURL)
    let duration = try await asset.load(.duration)
    let audioTrack = try await asset.loadTracks(withMediaType: .audio).first!

    let audioReader = try AVAssetReader(asset: asset)
    let outputSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 44100,
        AVNumberOfChannelsKey: 2,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]
    let audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: outputSettings)
    audioReader.add(audioReaderOutput)

    guard audioReader.startReading() else {
        throw NSError(domain: "AudioReaderError", code: 0, userInfo: [NSLocalizedDescriptionKey: "Failed to start reading audio"])
    }

    // Capture the current video time when the method is called
    let baseInsertionTime = currentTime.convertScale(duration.timescale, method: .default)
    print("Adding audio clip at \(baseInsertionTime.seconds) seconds, duration: \(duration.seconds) seconds")

    var audioTime = CMTime.zero
    var totalDuration: Double = 0

    while let sampleBuffer = audioReaderOutput.copyNextSampleBuffer() {
        let bufferDuration = CMSampleBufferGetDuration(sampleBuffer)
        let adjustedBuffer = adjustTimeStamp(of: sampleBuffer, by: baseInsertionTime)
        while !audioInput!.isReadyForMoreMediaData {
            try await Task.sleep(nanoseconds: 100_000_000) // 0.1 second
        }
        audioInput!.append(adjustedBuffer)
        print("  Adjusted time: \(adjustedBuffer.presentationTimeStamp.seconds)")
        audioTime = CMTimeAdd(audioTime, bufferDuration)
        totalDuration += bufferDuration.seconds
    }

    print("Finished adding audio clip. Last sample at: \(CMTimeAdd(baseInsertionTime, audioTime).seconds) seconds")
    print("  totalDuration=\(totalDuration)")
}

private func adjustTimeStamp(of sampleBuffer: CMSampleBuffer, by timeOffset: CMTime) -> CMSampleBuffer {
    var count: CMItemCount = 0
    CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: 0, arrayToFill: nil, entriesNeededOut: &count)

    var timingInfo = [CMSampleTimingInfo](repeating: CMSampleTimingInfo(), count: count)
    CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: count, arrayToFill: &timingInfo, entriesNeededOut: nil)

    for i in 0..<count {
        timingInfo[i].presentationTimeStamp = CMTimeAdd(timingInfo[i].presentationTimeStamp, timeOffset)
        if timingInfo[i].decodeTimeStamp != .invalid {
            timingInfo[i].decodeTimeStamp = CMTimeAdd(timingInfo[i].decodeTimeStamp, timeOffset)
        } else {
            timingInfo[i].decodeTimeStamp = timingInfo[i].presentationTimeStamp
        }
    }

    var adjustedBuffer: CMSampleBuffer?
    CMSampleBufferCreateCopyWithNewTiming(allocator: nil, sampleBuffer: sampleBuffer, sampleTimingEntryCount: count, sampleTimingArray: &timingInfo, sampleBufferOut: &adjustedBuffer)
    return adjustedBuffer!
}
```
Replies: 0 · Boosts: 0 · Views: 76 · Activity: 2d
AVPlayerViewController not displaying playback controls in iOS 18
Hi everyone, I’ve encountered an issue with the showsPlaybackControls property in AVPlayerViewController after updating to iOS 18. Even though it’s set to true, the native playback controls (play, pause, etc.) are no longer appearing as they used to in previous iOS versions. This behavior was consistent and worked perfectly prior to iOS 18.

Additionally, I’m seeing the same problem when using VideoPlayer in SwiftUI. The native controls that should appear by default seem to have vanished after the update.

Has anyone else experienced this? Is there any workaround or additional configuration required to restore the native controls? Any help or insights would be appreciated. Thanks!

```swift
struct CustomPlayerView: UIViewControllerRepresentable {
    let player: AVPlayer

    func updateUIViewController(_ playerController: AVPlayerViewController, context: Context) {
        playerController.player = player
        playerController.showsPlaybackControls = true
        player.play()
    }

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        return AVPlayerViewController()
    }
}
```
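One hedged thing to try (an assumption, not a confirmed iOS 18 fix): configure the controller once in makeUIViewController and avoid calling play() from updateUIViewController, which SwiftUI may invoke many times per state change. A sketch:

```swift
import SwiftUI
import AVKit

// Sketch: one-time configuration in makeUIViewController; updateUIViewController
// deliberately does nothing for this simple case.
struct CustomPlayerView: UIViewControllerRepresentable {
    let player: AVPlayer

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.player = player
        controller.showsPlaybackControls = true
        player.play()
        return controller
    }

    func updateUIViewController(_ playerController: AVPlayerViewController, context: Context) {
        // Nothing to reconcile here.
    }
}
```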
Replies: 2 · Boosts: 0 · Views: 96 · Activity: 2d
Now Playing info error with CarPlay on iOS 18
Our app, Universalis (Apple ID 284942719), plays audio successfully on all versions of iOS up to and including iOS 17. It uses the old MediaPlayer interface because it is targeted at versions all the way down to iOS 12.

On iOS 18, it plays audio, but CarPlay fails to show the Now Playing screen. Instead, a message box pops up in CarPlay saying "There was a problem loading this content", with an OK button. Nevertheless, the audio plays correctly. This has been reported in the wild by a user of iOS 18 with CarPlay. I am also able to reproduce it locally, running the app in Xcode with the CarPlay Simulator, with an iPhone using iOS 18.0 or iOS 18.1. Earlier versions work correctly.

Looking at the console log in CarPlay, the following error message appears about 10 seconds before the message box pops up:

MSVEntitlementUtilities - Process Universalis PID[1173] - Group: (null) - Entitlement: com.apple.mediaremote.external-artwork-validation - Entitled: NO - Error: (null)

The message has an orange background, which appears to mean that it does not come directly from NSLog in the app. The message appears immediately after the request handler of MPMediaItemArtwork has been called, requesting a 128 x 128 image, and has successfully returned a 128 x 128 UIImage object.

This has been reported through Feedback Assistant: the bug report ID is FB15343941. How can we work around this error?
Replies: 1 · Boosts: 0 · Views: 100 · Activity: 2d
Cancel or quit from loadValuesAsynchronouslyForKeys?
The app needs to play remote videos. Sometimes it takes a very long time (~10 seconds) to load the media and play it with AVPlayer, so I use a timer to check and try to play the next video if loading takes more than 5 seconds:

```objc
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoUrl options:nil]; // line 1
NSArray *keys = @[@"playable"];
mediaLoaded = NO;
[asset loadValuesAsynchronouslyForKeys:keys completionHandler:^() { // line 2
    mediaLoaded = YES; // line 4
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.player replaceCurrentItemWithPlayerItem:[AVPlayerItem playerItemWithAsset:asset]];
        [self.player playImmediatelyAtRate:playSpeed];
    });
}];
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    if (!mediaLoaded) {
        [self playNextVideo]; // line 3
    }
});
```

So the flow is: line 1 (of video 1) → line 2 (of video 1) → line 3 (if over 5 seconds and video 1 is not playing) → line 1 (of video 2) → ... Now the problem is that line 2 seems to be blocking line 1: only after line 4 runs (for video 1, after ~10 seconds), i.e. after the completionHandler executes, will line 2 (for video 2) be executed. Can anybody give any insight? Thanks!
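One hedged suggestion (my assumption, not a verified fix): cancel the outstanding load when the 5-second timeout fires, so video 1's pending request stops holding things up before video 2's load begins; AVAsset's cancelLoading aborts pending loadValuesAsynchronouslyForKeys requests. A Swift sketch of the same flow, with placeholder names for the existing state:

```swift
import AVFoundation

// Sketch: cancel the in-flight key loading when the timeout fires, before moving on.
// `playNextVideo` stands in for the poster's existing method.
func loadAndPlay(_ videoUrl: URL, on player: AVPlayer, rate playSpeed: Float, playNextVideo: @escaping () -> Void) {
    let asset = AVURLAsset(url: videoUrl)
    var mediaLoaded = false
    asset.loadValuesAsynchronously(forKeys: ["playable"]) {
        mediaLoaded = true
        DispatchQueue.main.async {
            player.replaceCurrentItem(with: AVPlayerItem(asset: asset))
            player.playImmediately(atRate: playSpeed)
        }
    }
    DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
        if !mediaLoaded {
            asset.cancelLoading()   // abort video 1's pending load so it can't block the next one
            playNextVideo()
        }
    }
}
```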
Replies: 1 · Boosts: 0 · Views: 57 · Activity: 2d
Glitch in AVPlayer while playing HLS videos
We have observed consistent glitches in video playback when using AVPlayer to stream HLS (HTTP Live Streaming) videos on iOS. The issue manifests as intermittent frame drops, stuttering, and playback instability during HLS streams. However, the same behavior is not present when playing MP4 videos using the same AVPlayer instance. The HLS streams being used follow standard encoding practices, and network conditions have been ruled out as a cause of this problem.

https://drive.google.com/file/d/1lhdpHTyjPYCYLHjzvb6ZF6P6jehIuwY0/view?usp=sharing

Steps to reproduce:
1. Load an HLS video into AVPlayer and initiate playback.
2. Observe intermittent glitches and stuttering during video playback.
3. Load and play an MP4 video in the same AVPlayer instance.
4. Notice that MP4 playback is smooth without any glitches.
Replies: 1 · Boosts: 0 · Views: 64 · Activity: 2d
Take correctly sized screenshots with ScreenCaptureKit
I've been using CGWindowListCreateImage, which automatically creates an image with the size of the captured window. But SCScreenshotManager.captureImage(contentFilter:configuration:) always creates images with the width and height specified in the provided SCStreamConfiguration. I could set the size explicitly by reading SCWindow.frame or SCContentFilter.contentRect and multiplying the width and height by SCContentFilter.pointPixelScale, but that won't work if I want to keep the window shadow with SCStreamConfiguration.ignoreShadowsSingleWindow = false. Is there a way, and what's the best way, to take full-resolution screenshots of the correct size?

```swift
import Cocoa
import ScreenCaptureKit

class ViewController: NSViewController {
    @IBOutlet weak var imageView: NSImageView!

    override func viewDidAppear() {
        imageView.imageScaling = .scaleProportionallyUpOrDown
        view.wantsLayer = true
        view.layer!.backgroundColor = .init(red: 1, green: 0, blue: 0, alpha: 1)

        Task {
            let windows = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true).windows
            let window = windows[0]
            let filter = SCContentFilter(desktopIndependentWindow: window)

            let configuration = SCStreamConfiguration()
            configuration.ignoreShadowsSingleWindow = false
            configuration.showsCursor = false
            configuration.width = Int(Float(filter.contentRect.width) * filter.pointPixelScale)
            configuration.height = Int(Float(filter.contentRect.height) * filter.pointPixelScale)
            print(filter.contentRect)

            let windowImage = try await SCScreenshotManager.captureImage(contentFilter: filter, configuration: configuration)
            imageView.image = NSImage(cgImage: windowImage, size: CGSize(width: windowImage.width, height: windowImage.height))
        }
    }
}
```
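One possibility worth checking (this is an assumption on my part and depends on a recent macOS SDK): SCStreamConfiguration has a captureResolution setting intended for full-resolution capture, which may remove the need to compute the pixel width and height yourself. A sketch:

```swift
import ScreenCaptureKit

// Sketch only: relies on SCStreamConfiguration.captureResolution (macOS 14+, if my
// recollection of the API is right). The shadow-preserving flag stays as before.
let configuration = SCStreamConfiguration()
configuration.ignoreShadowsSingleWindow = false
configuration.showsCursor = false
configuration.captureResolution = .best   // ask for full native resolution
```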
Replies: 2 · Boosts: 0 · Views: 62 · Activity: 2d
How to insert multiple AVAssets into AVMutableCompositionTrack with silence in between?
Hi, I'm recording videos frame by frame, and occasionally a sound plays (from an MP3 asset). I want to composite these sounds into the video at the correct timings, but this doesn't work. Really pulling my hair out here. I've tried everything, including adding one clip after another and then inserting silence in between (allegedly this pushes subsequent clips back), but nothing works.

Here, _currentTime is the current time according to the video frames added, which are added at 20 Hz. You can see I am adding silence long enough to cover the time from the end of the last audio clip to now, plus extra padding to contain the audio we are about to add. It doesn't matter if I remove this; it just doesn't work. Sometimes I can get two pieces of audio to play, but never a third, and usually only the first audio plays and then nothing after. I'm completely stumped.

```swift
func addFrame(_ pixelBuffer: CVPixelBuffer) {
    guard CGSize(width: pixelBuffer.width, height: pixelBuffer.height) == _outputSize else { return }
    let frameTime = CMTimeMake(value: Int64(_frameCount), timescale: _frameRate)
    if _videoInput?.isReadyForMoreMediaData == true {
        _pixelBufferAdaptor?.append(pixelBuffer, withPresentationTime: frameTime)
        _frameCount += 1
        _currentTime = frameTime
    }
}

func addMP3AudioClip(_ audioData: Data) async throws {
    let tempURL = FileManager.default.temporaryDirectory.appendingPathComponent(UUID().uuidString + ".mp3")
    try audioData.write(to: tempURL)

    let asset = AVAsset(url: tempURL)
    let duration = try await asset.load(.duration)
    let audioTrack = try await asset.loadTracks(withMediaType: .audio).first!

    let currentAudioTime = _currentTime.convertScale(duration.timescale, method: .default)

    _audioTrack?.insertEmptyTimeRange(CMTimeRangeFromTimeToTime(start: _lastAudioClipEndTime, end: currentAudioTime))
    _audioTrack?.insertEmptyTimeRange(CMTimeRangeFromTimeToTime(start: currentAudioTime, end: CMTimeAdd(currentAudioTime, duration)))

    let timeRange = CMTimeRangeMake(start: .zero, duration: duration)
    try _audioTrack?.insertTimeRange(timeRange, of: audioTrack, at: currentAudioTime)

    _lastAudioClipEndTime = CMTimeAdd(currentAudioTime, duration)
    try FileManager.default.removeItem(at: tempURL)
    _audioClipTimeRanges.append(CMTimeRangeMake(start: _currentTime, duration: duration))
}
```

Thank you,
-- B.
Replies: 0 · Boosts: 0 · Views: 83 · Activity: 3d
Why Aren't All Songs Being Added to the Queue?
Hi, I've recently been working with the Apple Music API and have had success in loading all the playlists on my account, loading songs from each playlist, and adding songs to the ApplicationMusicPlayer.shared.queue. The problem I'm running into is that not all songs from the playlist are being added to the queue, despite confirming that all the songs are there, based on the PlaybackView.swift I'm about to share with you. I would also like to get answers to some other underlying questions if possible, and I am open to any other suggestions. In this scenario, we're also assuming isShuffled is true every time.

- How can I determine when a song has ended?
- How can I get the album title information?
- How can I get the current song title, album information, and artist information without pressing play? I can only seem to update the screen when I press play, meaning ApplicationMusicPlayer.shared.play() is being called.
- How do I get the endTime of the song? I believe it should be ApplicationMusicPlayer.shared.queue.currentEntry.endTime, but this doesn't seem to work.

```swift
//
//  PlayBackView.swift
//
//  Created by Justin on 8/16/24.
//

import SwiftUI
import MusicKit
import Foundation

enum PlayState {
    case play
    case pause
}

struct PlayBackView: View {
    @State var song: Track
    @Binding var songs: [Track]?
    @State var isShuffled = false
    @State private var playState: PlayState = .pause
    @State private var isFirstPlay = true

    private let player = ApplicationMusicPlayer.shared

    private var isPlaying: Bool {
        return (player.state.playbackStatus == .playing)
    }

    var body: some View {
        VStack {
            // Album Cover
            HStack(spacing: 20) {
                if let artwork = player.queue.currentEntry?.artwork {
                    ArtworkImage(artwork, height: 100)
                } else {
                    Image(systemName: "music.note")
                        .resizable()
                        .frame(width: 100, height: 100)
                }

                VStack(alignment: .leading) {
                    // Song Title
                    Text(player.queue.currentEntry?.title ?? "Song Title Not Found")
                        .font(.title)
                        .fixedSize(horizontal: false, vertical: true)

                    // How do I get AlbumTitle from here??

                    // Artist Name
                    Text(player.queue.currentEntry?.subtitle ?? "Artist Name Not Found")
                        .font(.caption)
                }
            }
            .padding()

            Spacer()

            // Progress View
            // endTime doesn't work and not sure why.
            ProgressView(value: player.playbackTime, total: player.queue.currentEntry?.endTime ?? 1.00)
                .progressViewStyle(.linear)
                .tint(.red.opacity(0.5))

            // Duration View
            HStack {
                Text(durationStr(from: player.playbackTime))
                    .font(.caption)

                Spacer()

                if let duration = player.queue.currentEntry?.endTime {
                    Text(durationStr(from: duration))
                        .font(.caption)
                }
            }

            Spacer()

            Button {
                Task {
                    do {
                        try await player.skipToNextEntry()
                    } catch {
                        print(error.localizedDescription)
                    }
                }
            } label: {
                Label("", systemImage: "forward.fill")
                    .tint(.white)
            }

            Spacer()

            // Play/Pause Button
            Button(action: {
                handlePlayButton()
                isFirstPlay = false
            }, label: {
                Text(playState == .play ? "Pause" : isFirstPlay ? "Play" : "Resume")
                    .frame(maxWidth: .infinity)
            })
            .buttonStyle(.borderedProminent)
            .padding()
            .font(.largeTitle)
            .tint(.red)
        }
        .padding()
        .onAppear {
            if isShuffled {
                songs = songs?.shuffled()

                if let songs, let firstSong = songs.first {
                    player.queue = .init(for: songs, startingAt: firstSong)
                    player.state.shuffleMode = .songs
                }
            }
        }
        .onDisappear {
            player.stop()
            player.queue = []
            player.playbackTime = .zero
        }
    }

    private func handlePlayButton() {
        Task {
            if isPlaying {
                player.pause()
                playState = .pause
            } else {
                playState = .play
                await playTrack()
            }
        }
    }

    @MainActor
    private func playTrack() async {
        do {
            try await player.play()
        } catch {
            print(error.localizedDescription)
        }
    }

    private func durationStr(from duration: TimeInterval) -> String {
        let seconds = Int(duration)
        let minutes = seconds / 60
        let remainder = seconds % 60

        // Format the string to ensure two digits for the remainder (seconds)
        return String(format: "%d:%02d", minutes, remainder)
    }
}

//#Preview {
//    PlayBackView()
//}
```
Replies: 0 · Boosts: 0 · Views: 108 · Activity: 3d
dyld[434]: Library not loaded: error when running LockedCameraCapture compatible app on iOS 15
Hello, I am getting the following error while attempting to run my LockedCameraCapture-compatible app on an iOS 15 device:

dyld[434]: Library not loaded: '/System/Library/Frameworks/LockedCameraCapture.framework/LockedCameraCapture'
Referenced from: '/private/var/containers/Bundle/Application/.../MyApp.app/MyApp.debug.dylib'
Reason: tried: '/System/Library/Frameworks/LockedCameraCapture.framework/LockedCameraCapture' (no such file)

Of course iOS 15 doesn't have the library for LockedCameraCapture, but I have had no issue including Lock Screen widgets (which require iOS 16), so I am not sure why the error is popping up. Thank you!
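A hedged idea (an assumption, not a confirmed cause): the error looks like the framework being hard-linked into the debug dylib, so it may be worth checking that LockedCameraCapture is weak-linked when the deployment target is below iOS 18, for example by marking it Optional in "Link Binary With Libraries" or via a linker flag in an xcconfig:

```
// Hypothetical build setting (Other Linker Flags), assuming weak linking is the issue:
OTHER_LDFLAGS = $(inherited) -weak_framework LockedCameraCapture
```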
Replies: 1 · Boosts: 0 · Views: 85 · Activity: 3d
CMSampleBuffer: audio PCM to MP4 AAC
Hello, as explained in this link, AVAssetReaderTrackOutput.copyNextSampleBuffer() returns a CMSampleBuffer in linear PCM audio format. I want to place this audio buffer into an AVAssetWriterInput of type kAudioFormatMPEG4AAC, but I can't manage the conversion. Could you help me by providing an extension that returns a CMSampleBuffer converted from linear PCM audio format to kAudioFormatMPEG4AAC?

Example:

```swift
extension CMSampleBuffer {
    func fromPCMToAAC() -> CMSampleBuffer? {
        // Here, get a new AudioStreamBasicDescription, create a CMSampleBuffer and a CMBlockBuffer
    }
}
```

I've tried multiple times but without success.

Software: iOS 18.1
Xcode: 16.0

Thank you!
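One hedged alternative (an assumption about the goal, not a drop-in answer to the extension request): AVAssetWriterInput performs the compression itself when it is configured with AAC output settings, so the linear PCM buffers coming out of the reader can usually be appended directly without converting the CMSampleBuffer by hand. A sketch of such a writer input, with an illustrative bit rate:

```swift
import AVFoundation

// Sketch: let the writer input compress PCM to AAC itself.
let aacSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 2,
    AVEncoderBitRateKey: 128_000
]
let audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: aacSettings)
// Append the PCM CMSampleBuffers from AVAssetReaderTrackOutput to this input as usual.
```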
Replies: 1 · Boosts: 0 · Views: 81 · Activity: 3d
Accessing Events from Video Device
I have an intra-**** video device that's supported by Apple's AVCaptureDevice. I can use the AV classes to connect to the device and get video. However, this device has a button that's used to acquire still images from the video stream. I can't use the IOUSBDeviceInterface to do an asynchronous read, because the Apple driver has the device opened exclusively. How do I go about receiving the button event in this scenario? I know which pipe to read, based on a bus analyzer trace from running this on Windows; I just need to know how to access that pipe when the device is opened by another process.
Replies: 0 · Boosts: 0 · Views: 58 · Activity: 3d
Receiving eventMetadata from AVPlayerItemMetadataOutput stops working on iOS 18 only
Case-ID: 9391388

Our application uses timed metadata as part of a rating control system. We noticed a problem in production, and diagnosis shows that we stop receiving timed metadata on iOS 18 only.

Our live streams are primed with metadata at least once per second, but we are seeing extended gaps in receiving this content, in excess of 10 minutes. We have also observed that this happens more as the player climbs the bitrate ladder, and it doesn't happen if we cap to a low resolution, i.e. a preferredMaximumResolution of 768x432. Furthermore, if we throttle network conditions after we stop receiving metadata, we start receiving it again.

Following is a simple example that demonstrates the above behaviour. Unfortunately, I cannot share the live stream endpoint (which is primed with metadata) publicly, but I can provide it privately to Apple to reproduce the problem.

```swift
import UIKit
import AVKit

class ViewController: UIViewController, AVPlayerItemMetadataOutputPushDelegate {
    var player: AVPlayer?
    var itemMetadataOutput: AVPlayerItemMetadataOutput?

    override func viewDidAppear(_ animated: Bool) {
        guard let url = URL(string: "endpoint redacted") else { return }
        let player = AVPlayer(url: url)
        let controller = AVPlayerViewController()
        controller.player = player
        self.player = player

        present(controller, animated: true) {
            player.play()
            let currentItem = player.currentItem
            let itemMetadataOutput = AVPlayerItemMetadataOutput(identifiers: nil)
            self.itemMetadataOutput = itemMetadataOutput
            self.itemMetadataOutput?.setDelegate(self, queue: .main)
            currentItem?.add(itemMetadataOutput)
        }
    }

    public func metadataOutput(_ output: AVPlayerItemMetadataOutput,
                               didOutputTimedMetadataGroups groups: [AVTimedMetadataGroup],
                               from track: AVPlayerItemTrack?) {
        print("received metadata \(Date())")
    }
}
```
Replies: 1 · Boosts: 0 · Views: 146 · Activity: 3d
AVAudioEngine change playback rate in real time (AVAudioUnitVarispeed is non real time)
I am using AVAudioEngine to play back samples in an iOS game, and I would like to change the playback rate of a sample in real time. When using AVAudioUnitVarispeed to change the playback rate, it creates stutters in the game, as it isn't processed in real time (as stated here: AVAudioUnitTimeEffect). The other option I found to change the rate is using an AVAudioEnvironmentNode and changing the rate of the AVAudioPlayerNode. That works without creating stutters, but it limits the valid rate values to 0.5 through 2.0, and I need rates higher than 2.0 (see AVAudio3DMixing). Are there any other ways to play back a sample with rate control in real time?
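A hedged alternative (not from this thread, just one technique worth considering): generate the audio in an AVAudioSourceNode render block and do the rate change yourself by stepping a fractional read position through the sample data; the rate is then just a number you change from game code, with no 0.5–2.0 clamp. A rough sketch with no interpolation, where the sample loading is assumed to happen elsewhere:

```swift
import AVFoundation

// Sketch: manual resampling in a source node's render block. `rate` is unbounded
// and can be changed at any time; real use would add interpolation and locking.
final class VariRatePlayer {
    var rate: Float = 1.0                 // values above 2.0 are fine here
    private var position: Double = 0
    private let samples: [Float]          // mono Float32 sample data

    lazy var node = AVAudioSourceNode { [unowned self] (_, _, frameCount, audioBufferList) -> OSStatus in
        let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
        for frame in 0..<Int(frameCount) {
            let index = Int(self.position)
            let sample: Float = index < self.samples.count ? self.samples[index] : 0
            for buffer in buffers {       // write the same sample to every output channel
                buffer.mData?.assumingMemoryBound(to: Float.self)[frame] = sample
            }
            self.position += Double(self.rate)
        }
        return noErr
    }

    init(samples: [Float]) { self.samples = samples }
}
```

Attach and connect it like any other node (engine.attach(player.node); engine.connect(player.node, to: engine.mainMixerNode, format: nil)); linear interpolation between adjacent samples would improve quality at fractional rates.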
Replies: 0 · Boosts: 0 · Views: 65 · Activity: 3d