I take a picture using the iPhone's camera. The captured resolution is 3024 × 4032. I then have to apply a watermark to this image. After a bunch of trial and error, the method I settled on was taking a snapshot of a watermark UIView and drawing it over the image, like so:
// Create the watermarked photo.
let result: UIImage=UIGraphicsImageRenderer(size: image.size).image(actions: { _ in
image.draw(in: .init(origin: .zero, size: image.size))
let watermark: Watermark = .init(
size: image.size,
scaleFactor: image.size.smallest / self.frame.size.smallest
)
watermark.drawHierarchy(in: .init(origin: .zero, size: image.size), afterScreenUpdates: true)
})
Then, with the final image, I save it to a file in a temporary directory and save it to the user's photo library from that file. (The client wanted it to have a filename when viewed in and exported from the Photos app; this also took much trial and error.) The difference between saving the image directly and saving it from a file is that, when saved from a file, the filename is used as the filename within the Photos app; otherwise it's just a default photo name generated by Apple.
The problem is that in the image saving code I'm getting the following error:
[Metal] 9072 by 12096 iosurface is too large for GPU
And when I view the saved photo, it's basically just a completely black image. This problem only started when I changed the AVCaptureSession preset to .photo. Before that there were no errors.
Now, the worst problem is that the app completely crashes when drawing the watermark view in the first place. With .photo the resolution is significantly higher, so the image is larger, and the watermark has to be commensurately larger as well. iOS appears to be okay with the size of the watermark UIView; however, when I try to draw it over the image, the app crashes with this message from Xcode:
So there's that problem, but I figured it could be resolved by drawing the watermark more manually rather than using a UIView snapshot, so it's not the most pressing issue. The most pressing issue is that even with the drawing code commented out, I still get the iosurface is too large error.
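For what it's worth, the numbers in that error are consistent with a render-scale explanation: UIGraphicsImageRenderer renders at the screen scale by default (3x on recent iPhones), and 3024 x 4032 points at 3x is exactly 9072 x 12096 pixels. A minimal sketch of that arithmetic; the renderedPixelSize helper is my own illustrative name, not a UIKit API:

```swift
import Foundation

// UIGraphicsImageRenderer sizes its backing surface in pixels:
// points multiplied by the renderer format's scale (the screen scale by default).
func renderedPixelSize(pointWidth: Double, pointHeight: Double, scale: Double) -> (width: Double, height: Double) {
    (pointWidth * scale, pointHeight * scale)
}

// 3024 x 4032 points at the default 3x scale: 9072 x 12096 pixels,
// matching the surface size in the Metal error.
let oversized = renderedPixelSize(pointWidth: 3024, pointHeight: 4032, scale: 3)

// The same canvas with an explicit 1x scale stays at 3024 x 4032 pixels.
let oneToOne = renderedPixelSize(pointWidth: 3024, pointHeight: 4032, scale: 1)
```

If this is the cause, constructing the renderer with a UIGraphicsImageRendererFormat whose scale is set to 1 should keep the surface at the image's own pixel size rather than three times it.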
Here's the code that saves the image to the file and then to the Photos library:
extension UIImage {
/// Save us with the given name to the user's photo album.
/// - Parameters:
/// - filename: The filename to be used for the saved photo. Behavior is undefined if the filename contains characters other than those matched by the regular expression [A-Za-z0-9-_]. A decimal point for the file extension is permitted.
/// - location: A GPS location to save with the photo.
fileprivate func save(_ filename: String, _ location: Optional<Coordinates>) throws {
// Create a path to a temporary directory. Adding filenames to photos saved to the Photos app is accomplished by first creating an image file on the file system, saving the photo using the URL to that file, and then deleting the file.
// A documented way of adding filenames to photos saved to Photos was never found.
// Furthermore, we save everything to a `tmp` directory: if we deleted individual photos after saving and a deletion failed, we would need extra logic to ensure the undeleted files are eventually cleaned up.
// By using a `tmp` directory instead, we can save all temporary photos to it and delete the entire directory after each picture is taken.
guard
let tmpUrl: URL=try {
guard let documentsDirUrl=NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first else {
throw GeneralError("Failed to create URL to documents directory.")
}
// Use a file URL so the path round-trips without scheme or percent-encoding issues.
let url: Optional<URL> = .init(fileURLWithPath: documentsDirUrl + "/tmp/", isDirectory: true)
return url
}()
else {
throw GeneralError("Failed to create URL to temporary directory.")
}
// A path to the image file.
let filePath: String=try {
// Reduce the likelihood of photos taken in quick succession from overwriting each other.
let collisionResistantPath: String="\(tmpUrl.path(percentEncoded: false))\(UUID())/"
// Make sure all directories required by the path exist before trying to write to it.
try FileManager.default.createDirectory(atPath: collisionResistantPath, withIntermediateDirectories: true, attributes: nil)
// Done.
return collisionResistantPath + filename
}()
// Create `CFURL` analogue of file path.
guard let cfPath: CFURL=CFURLCreateWithFileSystemPath(nil, filePath as CFString, CFURLPathStyle.cfurlposixPathStyle, false) else {
throw GeneralError("Failed to create `CFURL` analogue of file path.")
}
// Create image destination object.
//
// You can change your exif type here.
// This is a note from the original author; I'm not quite sure what they mean by it. The link in the method documentation can be used to refer back to the original context.
guard let destination=CGImageDestinationCreateWithURL(cfPath, UTType.jpeg.identifier as CFString, 1, nil) else {
throw GeneralError("Failed to create `CGImageDestination` from file url.")
}
// Metadata properties.
let properties: CFDictionary={
// Place your metadata here.
// Keep in mind that metadata follows a standard. You cannot use custom property names here.
let tiffProperties: Dictionary<String, Any>=[:]
return [
kCGImagePropertyTIFFDictionary as String: tiffProperties
] as CFDictionary
}()
// Create image file.
guard let cgImage=self.cgImage else {
throw GeneralError("Failed to retrieve `CGImage` analogue of `UIImage`.")
}
CGImageDestinationAddImage(destination, cgImage, properties)
// Finalize writes the file; check the result so a failed write isn't silently ignored.
guard CGImageDestinationFinalize(destination) else {
throw GeneralError("Failed to finalize image file.")
}
// Save to the photo library.
PHPhotoLibrary.shared().performChanges({
guard let creationRequest: PHAssetChangeRequest = .creationRequestForAssetFromImage(atFileURL: URL(fileURLWithPath: filePath)) else {
return
}
// Add metadata to the photo.
creationRequest.creationDate = .init()
if let location=location {
creationRequest.location = .init(latitude: location.latitude, longitude: location.longitude)
}
}, completionHandler: { _, _ in
try? FileManager.default.removeItem(at: tmpUrl)
})
}
}
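As an aside, the Documents + `tmp` dance in the code above can also be built on FileManager's dedicated temporary directory, which the system may clean up for you. A minimal sketch under that assumption; makeScratchDirectory and the photo-export folder name are mine, illustrative only:

```swift
import Foundation

// Build a collision-resistant scratch directory: a per-save UUID folder inside
// the app's temporary directory, mirroring the UUID trick in the code above.
func makeScratchDirectory() throws -> URL {
    let dir = FileManager.default.temporaryDirectory
        .appendingPathComponent("photo-export", isDirectory: true)
        .appendingPathComponent(UUID().uuidString, isDirectory: true)
    try FileManager.default.createDirectory(at: dir, withIntermediateDirectories: true)
    return dir
}
```

Cleanup then amounts to removing the single photo-export folder after each save, exactly as the comment block in the original code describes.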
If anyone can provide some insight as to what's causing the iosurface is too large error and what can be done to resolve it, that'd be awesome.
Image I/O
Read and write most image file formats, manage color, and access image metadata using Image I/O.
Posts under Image I/O tag
I would like to use a third-party app to edit the metadata of a photo to change its Caption and then be able to search in the Photos app to find that image with the edited caption.
I have managed to do this by duplicating the photo with the edited metadata. The Photos app recognizes it as a new photo and indexes it with the new caption, making it searchable. However, when editing the photo in-place, the Photos app will not re-index the photo, therefore it will not be searchable.
Is there a way to edit photos in-place and have them searchable with the new metadata?
Hello! After the recent WWDC 2023 talk about HDR support and finding the documentation page on applying the Apple HDR effect to photos, I became very interested in the HDR Gain Map format. From the documentation page it is clear how we can restore the original HDR from the SDR and Gain Map representation, but my question is: how can we convert back from HDR to the SDR + Gain Map representation? As I understand it right now, the conversion from HDR to SDR + Gain Map involves two steps:
Tone mapping the HDR to get a correct SDR
Once we have both HDR and SDR, calculating the Gain Map from the equation on the documentation page
Am I correct? If so, what tone mapping algorithm is used for the HDR -> SDR conversion right now? I can't find any information about this on the internet. :(
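If the reconstruction on that documentation page really has the shape hdr = sdr × (1 + (headroom − 1) × gain) — my paraphrase and an assumption, not a confirmed Apple formula — then step 2 is just solving that equation for the gain per pixel. A sketch:

```swift
import Foundation

// ASSUMED model (not confirmed by Apple docs):
//   hdr = sdr * (1 + (headroom - 1) * gain)
// Reconstruct HDR from SDR plus a gain value, given the display headroom.
func reconstructedHDR(sdr: Double, gain: Double, headroom: Double) -> Double {
    sdr * (1 + (headroom - 1) * gain)
}

// Step 2 of the conversion: with the HDR value and its tone-mapped SDR value
// in hand, invert the model to recover the gain for that pixel.
func gain(hdr: Double, sdr: Double, headroom: Double) -> Double {
    precondition(sdr > 0 && headroom > 1)
    return (hdr / sdr - 1) / (headroom - 1)
}
```

Step 1 — which tone-mapping operator Apple actually uses — is exactly the part the question asks about, and nothing in this sketch answers it.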
Would be very grateful for your response!
HEIF Decompression Crash on iOS 17.
struct ContentView: View {
@State var listOfImages: [String] = ["One", "Two", "Three", "Four"]
@State var counter = 0
var body: some View {
VStack {
Button(action: {
counter += 1
}, label: {
Text("Next Image")
})
}
.background(Image(listOfImages[counter]))
.padding()
}
}
When I click the button, the counter increases and the next image is displayed as the background. The app's memory usage increases with each image change. Is there any way to maintain steady memory use?
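A back-of-envelope check makes the growth unsurprising: a decoded bitmap costs roughly width × height × 4 bytes once drawn, regardless of its compressed file size. A sketch of that estimate, assuming 8-bit RGBA (my assumption, not something stated in the post):

```swift
import Foundation

// Approximate in-memory cost of a decoded bitmap at 4 bytes per pixel (8-bit RGBA).
func decodedBitmapBytes(width: Int, height: Int, bytesPerPixel: Int = 4) -> Int {
    width * height * bytesPerPixel
}

// A 4032 x 3024 photo decodes to roughly 46.5 MiB, however small its file is.
let photoCost = decodedBitmapBytes(width: 4032, height: 3024)
```

Keeping memory steady generally means letting previously displayed images be released rather than accumulating them, and decoding at display size instead of full resolution.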
Hello All,
I am trying to compress a PNG image by applying PNG filters (Sub, Up, Average, Paeth). I am applying the filters using the property kCGImagePropertyPNGCompressionFilter, but there is no change in the resulting images after trying any of the filters. What is the issue here? Can someone help me with this?
Do I have to compress the image data after applying the filter? If yes, how do I do that?
Here is my source code
CGImageDestinationRef outImageDestRef = NULL;
long keyCounter = kzero;
CFStringRef dstImageFormatStrRef = NULL;
CFMutableDataRef destDataRef = CFDataCreateMutable(kCFAllocatorDefault,0);
Handle srcHndl = //source image handle;
ImageTypes srcImageType = //'JPEG', 'PNGf', etc.;
CGImageRef inImageRef = CreateCGImageFromHandle(srcHndl,srcImageType);
if(inImageRef)
{
CFTypeRef keys[4] = {nil};
CFTypeRef values[4] = {nil};
dstImageFormatStrRef = CFSTR("public.png");
long png_filter = IMAGEIO_PNG_FILTER_SUB; //IMAGEIO_PNG_FILTER_SUB, IMAGEIO_PNG_FILTER_UP, IMAGEIO_PNG_FILTER_AVG, IMAGEIO_PNG_FILTER_PAETH .. it is one of this at a time
keys[keyCounter] = kCGImagePropertyPNGCompressionFilter;
values[keyCounter] = CFNumberCreate(NULL,kCFNumberLongType,&png_filter);
keyCounter++;
outImageDestRef = CGImageDestinationCreateWithData(destDataRef, dstImageFormatStrRef, 1, NULL);
if(outImageDestRef)
{
// keys[keyCounter] = kCGImagePropertyDPIWidth;
// values[keyCounter] = CFNumberCreate(NULL,kCFNumberLongType,&Resolution);
// keyCounter++;
//
// keys[keyCounter] = kCGImagePropertyDPIHeight;
// values[keyCounter] = CFNumberCreate(NULL,kCFNumberLongType,&Resolution);
// keyCounter++;
CFDictionaryRef options = CFDictionaryCreate(NULL,keys,values,keyCounter,&kCFTypeDictionaryKeyCallBacks,&kCFTypeDictionaryValueCallBacks);
CGImageDestinationAddImage(outImageDestRef,inImageRef, options);
CFRelease(options);
status = CGImageDestinationFinalize(outImageDestRef);
if(status == true)
{
UInt8 *destImagePtr = CFDataGetMutableBytePtr(destDataRef);
destSize = CFDataGetLength(destDataRef);
//using destImagePtr after this ...
}
CFRelease(outImageDestRef);
}
for(long cnt = kzero; cnt < keyCounter; cnt++)
if(values[cnt])
CFRelease(values[cnt]);
if(inImageRef)
CGImageRelease(inImageRef);
}
Using the screencapture CLI on macOS Sonoma 14.0 (23A344) results in a 72dpi image file, no matter if it was captured on a retina display or not.
For example, using
screencapture -i ~/Desktop/test.png in Terminal lets me create a selective screenshot, but the resulting file does not contain any DPI metadata (checked using mdls ~/Desktop/test.png), nor does the image itself have the correct DPI information (should be 144, but it's always 72; checked using Preview.app).
I noticed a (new?) flag option, -r, for which the documentation states:
-r Do not add screen dpi meta data to captured file.
Is that flag somehow automatically set? Setting it myself makes no difference and obviously results in a no-dpi-in-metadata and wrong-dpi-in-image file.
The only two ways I got the correct DPI information into a resulting image file were using the default options (forced by -p): screencapture -i -p, and making the capture go to the clipboard: screencapture -i -c. Sadly, I can't use those in my case.
Feedback filed: FB13208235
I'd appreciate any pointers,
Matthias
Our iOS app can access the photo library when running on an M1 Mac. The app was written in Objective-C using Xcode. However, we cannot select a photo from the library, and we need Objective-C code to accomplish this task. None of our attempts have been successful.
I am facing an issue with a blog post: I cannot view the image included in the post. The image in the blog post is a BMP. I have read some earlier posts, and it seems that BMP was not supported on earlier iOS versions. My iOS version is 17.0.2.
Browser: Safari
OS: iOS 17.0.2
Device: iPhone 14
URL: https://www.optimabatteries.com/experience/blog/if-a-cars-charging-system-isnt-working-properly-why-cant-we-just-jump-start-it-with-an-external-booster-pack
Please let me know what could be the issue.
I have written and used code to get the colors from a CGImage, and it worked fine up to iOS 16.
However, when I use the same code on iOS 17, the Red and Blue of RGB are reversed.
Is this a temporary bug in the OS that will be fixed in the future? Or has the specification changed, and it will remain this way from iOS 17 on?
Here is my code:
let pixelDataByteSize = 4
guard let cfData = image.cgImage?.dataProvider?.data else { return }
let pointer: UnsafePointer<UInt8> = CFDataGetBytePtr(cfData)
let scale = UIScreen.main.nativeScale
let address = ( Int(image.size.width * scale) * Int(image.size.height * scale / 2) + Int(image.size.width * scale / 2) ) * pixelDataByteSize
let r = CGFloat(pointer[address]) / 255
let g = CGFloat(pointer[address+1]) / 255
let b = CGFloat(pointer[address+2]) / 255
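One robust response, regardless of whether iOS 17 changed behavior, is to stop hard-coding the R, G, B byte positions and instead derive them from the image's actual pixel format (its bitmapInfo and alphaInfo). The enum below is an illustrative stand-in for that inspection — the names are mine — covering RGBA8888 versus the equally common BGRA8888 (32-bit little-endian, alpha first):

```swift
import Foundation

// Illustrative model: where R, G and B live within a 4-byte pixel depends on
// the pixel format the decoder chose, not on a fixed convention.
enum PixelLayout {
    case rgba, bgra

    // Byte offsets of (r, g, b) within one 4-byte pixel.
    var rgbOffsets: (r: Int, g: Int, b: Int) {
        switch self {
        case .rgba: return (0, 1, 2)
        case .bgra: return (2, 1, 0)
        }
    }
}

// Read one pixel's channels using the layout-derived offsets.
func rgb(of pixel: [UInt8], layout: PixelLayout) -> (r: UInt8, g: UInt8, b: UInt8) {
    let o = layout.rgbOffsets
    return (pixel[o.r], pixel[o.g], pixel[o.b])
}
```

With this approach, the same bytes decode correctly whichever layout the OS hands back, instead of silently swapping red and blue.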
It appears I can't add a WebP image as an Image Set in an Asset Catalog. Is that correct?
As a workaround, I added the WebP image as a Data Set. I'm then loading it as a CGImage with the following code:
guard let asset = NSDataAsset(name: imageName),
let imageSource = CGImageSourceCreateWithData(asset.data as CFData, nil),
let image = CGImageSourceCreateImageAtIndex(imageSource, 0, nil)
else {
return nil
}
// Use image
Is it fine to store and load WebP images in this way? If not, then what's best practice?
I am able to create a UIImage from WebP data using UIImage(data: data) on iOS and iPadOS. When I try to do the same thing on watchOS 10, it fails. Is there a workaround for displaying WebP images on watchOS, if this isn't expected to work?
guard let rawfilter = CoreImage.CIRAWFilter(imageData: data, identifierHint: nil) else { return }
guard let ciImage = rawfilter.outputImage else { return }
let width = Int(ciImage.extent.width)
let height = Int(ciImage.extent.height)
let rect = CGRect(x: 0, y: 0, width: width, height: height)
let context = CIContext()
guard let cgImage = context.createCGImage(ciImage, from: rect, format: .RGBA16, colorSpace: CGColorSpaceCreateDeviceRGB()) else { return }
print("cgImage prepared")
guard let dataProvider = cgImage.dataProvider else { return }
let rgbaData = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, dataProvider.data)
This process is much faster on iOS 16 than on iOS 17.
Is there a way to speed up the decoding?
This is my test code.
import SwiftUI
extension View {
@MainActor func render(scale: CGFloat) -> UIImage? {
let renderer = ImageRenderer(content: self)
renderer.scale = scale
return renderer.uiImage
}
}
struct ContentView: View {
@Environment(\.colorScheme) private var colorScheme
@State private var snapImg: UIImage = UIImage()
var snap: some View {
Text("I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
.foregroundStyle(colorScheme == .dark ? .red : .green)
}
@ViewBuilder
func snapEx() -> some View {
VStack {
Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
.foregroundStyle(colorScheme == .dark ? .red : .green)
Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
.background(.pink)
Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
.background(.purple)
Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
.foregroundStyle(colorScheme == .dark ? .red : .green)
Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
.foregroundStyle(colorScheme == .dark ? .red : .green)
}
}
@ViewBuilder
func snapView() -> some View {
VStack {
Text("Text")
Text("Test2")
.background(.green)
snap
snapEx()
}
}
var body: some View {
let snapView = snapView()
VStack {
snapView
Image(uiImage: snapImg)
Button("Snap") {
snapImg = snapView.render(scale: UIScreen.main.scale) ?? UIImage()
}
}
}
}
When using ImageRenderer, there are some problems with converting a View to an image.
For example, Text does not automatically adapt its foreground color for Dark Mode.
This is just simple test code; the issue is not limited to Text.
How should I solve this?
WebP images work on iOS and iPadOS, but they don't work on tvOS.
In fact, Apple says that:
But why doesn't it work? Is there a way to make it work on tvOS?
I want to read metadata from image files, such as copyright, author, etc.
I did a web search, and the closest thing I found is CGImageSourceCopyPropertiesAtIndex:
- (void)tableViewSelectionDidChange:(NSNotification *)notif {
NSDictionary* metadata = [[NSDictionary alloc] init];
//get selected item
NSString* rowData = [fileList objectAtIndex:[tblFileList selectedRow]];
//set path to file selected
NSString* filePath = [NSString stringWithFormat:@"%@/%@", objPath, rowData];
//declare a file manager
NSFileManager* fileManager = [[NSFileManager alloc] init];
//check to see if the file exists
if ([fileManager fileExistsAtPath:filePath] == YES) {
//escape all the garbage in the string
NSString *percentEscapedString = (NSString *)CFURLCreateStringByAddingPercentEscapes(NULL, (CFStringRef)filePath, NULL, NULL, kCFStringEncodingUTF8);
//convert path to NSURL
NSURL* filePathURL = [[NSURL alloc] initFileURLWithPath:percentEscapedString];
NSError* error;
NSLog(@"%d", [filePathURL checkResourceIsReachableAndReturnError:&error]);
//declare a cg source reference
CGImageSourceRef sourceRef;
//set the cg source references to the image by passign its url path
sourceRef = CGImageSourceCreateWithURL((CFURLRef)filePathURL, NULL);
//set a dictionary with the image metadata from the source reference
metadata = (NSDictionary *)CGImageSourceCopyPropertiesAtIndex(sourceRef,0,NULL);
NSLog(@"%@", metadata);
[filePathURL release];
} else {
[self showAlert:@"I cannot find this file."];
}
[fileManager release];
}
Is there a better or easier approach than this?
Hello,
I'm wondering if there is a way to programmatically write a series of UIImages into an APNG, similar to what the code below does for GIFs (credit: https://github.com/AFathi/ARVideoKit/tree/swift_5). I've tried implementing a similar solution, but it doesn't seem to work. My code is included below.
I've also done a lot of searching and have found lots of code for displaying APNGs, but have had no luck with code for writing them.
Any hints or pointers would be appreciated.
func generate(gif images: [UIImage], with delay: Float, loop count: Int = 0, _ finished: ((_ status: Bool, _ path: URL?) -> Void)? = nil) {
currentGIFPath = newGIFPath
gifQueue.async {
let gifSettings = [kCGImagePropertyGIFDictionary as String : [kCGImagePropertyGIFLoopCount as String : count]]
let imageSettings = [kCGImagePropertyGIFDictionary as String : [kCGImagePropertyGIFDelayTime as String : delay]]
guard let path = self.currentGIFPath else { return }
guard let destination = CGImageDestinationCreateWithURL(path as CFURL, __UTTypeGIF as! CFString, images.count, nil)
else { finished?(false, nil); return }
//logAR.message("\(destination)")
CGImageDestinationSetProperties(destination, gifSettings as CFDictionary)
for image in images {
if let imageRef = image.cgImage {
CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
}
}
if !CGImageDestinationFinalize(destination) {
finished?(false, nil); return
} else {
finished?(true, path)
}
}
}
My adaptation of the above code for APNGs (doesn't work; outputs an empty file):
func generateAPNG(images: [UIImage], delay: Float, count: Int = 0) {
let apngSettings = [kCGImagePropertyPNGDictionary as String : [kCGImagePropertyAPNGLoopCount as String : count]]
let imageSettings = [kCGImagePropertyPNGDictionary as String : [kCGImagePropertyAPNGDelayTime as String : delay]]
guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.png.identifier as CFString, images.count, nil)
else { fatalError("Failed") }
CGImageDestinationSetProperties(destination, apngSettings as CFDictionary)
for image in images {
if let imageRef = image.cgImage {
CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
}
}
}
I have applied some filters (like applyingGaussianBlur) to a CIImage that was converted from a UIImage. The resulting image data gets corrupted, but only on lower-end devices. What could be the reason?
Is it impossible to write tEXt chunk data to a PNG on iOS?
I successfully read the chunk data, modified it to add tEXt data, and saved it as an image to the photo gallery.
But the tEXt data keeps disappearing when I read the chunk data back from the image in the gallery.
Does iOS prevent preserving tEXt data when saving an image to the gallery?
- (void)cameraDevice:(ICCameraDevice*)camera
didReceiveMetadata:(NSDictionary* _Nullable)metadata
forItem:(ICCameraItem*)item
error:(NSError* _Nullable) error API_AVAILABLE(ios(13.0)){
NSLog(@"metadata = %@",metadata);
if (item) {
ICCameraFile *file = (ICCameraFile *)item;
NSURL *downloadsDirectoryURL = [[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask].firstObject;
downloadsDirectoryURL = [downloadsDirectoryURL URLByAppendingPathComponent:@"Downloads"];
NSDictionary *downloadOptions = @{ ICDownloadsDirectoryURL: downloadsDirectoryURL,
ICSaveAsFilename: item.name,
ICOverwrite: @YES,
ICDownloadSidecarFiles: @YES
};
[self.cameraDevice requestDownloadFile:file options:downloadOptions downloadDelegate:self didDownloadSelector:@selector(didDownloadFile:error:options:contextInfo:) contextInfo:nil];
}
}
- (void)didDownloadFile:(ICCameraFile *)file
error:(NSError* _Nullable)error
options:(NSDictionary<NSString*, id>*)options
contextInfo:(void* _Nullable) contextInfo API_AVAILABLE(ios(13.0)){
if (error) {
NSLog(@"Download failed with error: %@", error);
}
else {
NSLog(@"Download completed for file: %@", file);
}
}
I don't know what's wrong, or whether this is the right way to get the camera's pictures. I hope someone can help me.