Creating a large GIF with CGImageDestinationFinalize - running out of memory


I'm trying to fix a performance issue when creating GIFs with lots of frames. For example, some GIFs could contain more than 1200 frames. With my current code I run out of memory. I'm trying to figure out how to solve this; could it be done in batches? My first idea was to append images together, but I don't think there is a method for that, nor is that how GIFs are created by the ImageIO framework. It would be nice if there were a plural CGImageDestinationAddImages method, but there isn't, so I'm lost on how to solve this. I appreciate any help offered. Sorry in advance for the lengthy code, but I felt it was necessary to show the step-by-step creation of the GIF.

Making a video file instead of a GIF would also be acceptable, as long as the differing GIF frame delays are possible in the video and recording it doesn't take as long as the GIF animation runs.

Note: jump to Latest Update heading below to skip the backstory.

Updates 1 - 6: The thread lock was fixed by using GCD, but the memory issue still remains. 100% CPU utilization is not the concern here, as I show a UIActivityIndicatorView while the work is performed. Using the drawViewHierarchyInRect method might be more efficient/speedy than the renderInContext method; however, I discovered you can't use drawViewHierarchyInRect on a background thread with the afterScreenUpdates parameter set to YES, because it locks up the thread.

There must be some way of writing the GIF out in batches. I believe I've narrowed the memory problem down to CGImageDestinationFinalize. This method seems pretty inefficient for making images with lots of frames, since everything has to be in memory to write out the entire image. I've confirmed this because I use little memory while grabbing the rendered containerView layer images and calling CGImageDestinationAddImage. The moment I call CGImageDestinationFinalize, the memory meter spikes up instantly, sometimes up to 2GB depending on the number of frames. The amount of memory required just seems crazy to make a ~20-1000KB GIF.
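For context, this is roughly the ImageIO flow I'm describing, simplified down (the frame delay, loop count, and method name here are illustrative); everything stays buffered until that final CGImageDestinationFinalize call:

#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

- (void)writeGIFToURL:(NSURL *)url frames:(NSArray *)frames // NSArray of UIImage
{
    // Loop forever; applied once to the whole file.
    NSDictionary *fileProperties = @{ (__bridge id)kCGImagePropertyGIFDictionary :
                                          @{ (__bridge id)kCGImagePropertyGIFLoopCount : @0 } };

    // Per-frame delay in seconds.
    NSDictionary *frameProperties = @{ (__bridge id)kCGImagePropertyGIFDictionary :
                                           @{ (__bridge id)kCGImagePropertyGIFDelayTime : @0.05 } };

    CGImageDestinationRef destination =
        CGImageDestinationCreateWithURL((__bridge CFURLRef)url, kUTTypeGIF, frames.count, NULL);

    CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)fileProperties);

    // Frames are only queued up here; nothing is written to disk yet.
    for (UIImage *frame in frames)
    {
        CGImageDestinationAddImage(destination, frame.CGImage,
                                   (__bridge CFDictionaryRef)frameProperties);
    }

    // This is where the memory spikes: the entire GIF is encoded and written in one shot.
    CGImageDestinationFinalize(destination);
    CFRelease(destination);
}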

Update 2: I found a method that might offer some hope:

CGImageDestinationCopyImageSource(CGImageDestinationRef idst,
                                  CGImageSourceRef isrc,
                                  CFDictionaryRef options,
                                  CFErrorRef *err)

My new idea is that every 10 (or some other arbitrary number of) frames, I will write those to a destination, and then in the next loop the prior completed destination with 10 frames will become my new source. However, there is a problem; the documentation states this:

Losslessly copies the contents of the image source, 'isrc', to the destination, 'idst'.
The image data will not be modified. No other images should be added to the image destination.
CGImageDestinationFinalize() should not be called afterward -
the result is saved to the destination when this function returns.

This makes me think my idea won't work, but alas I tried. Continue to Update 3.

Update 3: I've been trying the CGImageDestinationCopyImageSource method with my updated code below; however, I'm always getting back an image with only one frame, most likely because of the documentation quoted in Update 2 above. There is yet one more method to perhaps try, CGImageSourceCreateIncremental, but I doubt that is what I need.
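For clarity, this is roughly the batching attempt from Updates 2 - 3, simplified (the batch plumbing and names are just illustrative); per the documentation above, appending frames after the copy isn't supported, which is presumably why I only get one frame back:

#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

- (void)appendFrames:(NSArray *)frames toGIFAt:(NSURL *)previousURL writingTo:(NSURL *)newURL
{
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)previousURL, NULL);

    CGImageDestinationRef destination =
        CGImageDestinationCreateWithURL((__bridge CFURLRef)newURL, kUTTypeGIF,
                                        CGImageSourceGetCount(source) + frames.count, NULL);

    // Losslessly copy the frames written in the previous batch...
    CGImageDestinationCopyImageSource(destination, source, NULL, NULL);

    // ...then try to append the next batch (this is exactly what the docs say not to do).
    for (UIImage *frame in frames)
    {
        CGImageDestinationAddImage(destination, frame.CGImage, NULL);
    }

    // Also not supposed to be called after CGImageDestinationCopyImageSource.
    CGImageDestinationFinalize(destination);

    CFRelease(destination);
    CFRelease(source);
}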

It seems like I need some way of writing/appending the GIF frames to disk incrementally so I can purge each new chunk out of memory. Perhaps a CGImageDestinationCreateWithDataConsumer with the appropriate callbacks to save the data incrementally would be ideal?

Update 4: I started to try the CGImageDestinationCreateWithDataConsumer method to see if I could manage writing the bytes out as they come in using an NSFileHandle, but again the problem is that calling CGImageDestinationFinalize sends all of the bytes in one shot, which is the same as before - I run out of memory. I really need help to get this solved and will offer a large bounty.
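For reference, this is roughly the data consumer setup I mean (a sketch; the function names are mine); the callback does fire, but only when CGImageDestinationFinalize flushes everything at once:

#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

// Called by ImageIO when it hands bytes to the consumer. 'info' is the
// NSFileHandle passed to CGDataConsumerCreate below; the caller must keep it alive.
static size_t WriteBytesToFileHandle(void *info, const void *buffer, size_t count)
{
    NSFileHandle *handle = (__bridge NSFileHandle *)info;
    [handle writeData:[NSData dataWithBytes:buffer length:count]];
    return count;
}

static CGImageDestinationRef CreateGIFDestination(NSFileHandle *handle, size_t frameCount)
{
    CGDataConsumerCallbacks callbacks = { WriteBytesToFileHandle, NULL };
    CGDataConsumerRef consumer = CGDataConsumerCreate((__bridge void *)handle, &callbacks);

    CGImageDestinationRef destination =
        CGImageDestinationCreateWithDataConsumer(consumer, kUTTypeGIF, frameCount, NULL);

    CGDataConsumerRelease(consumer);
    return destination;
}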

Update 5: I've posted a large bounty. I would like to see some brilliant solutions, without a 3rd-party library or framework, that append the raw NSData GIF bytes to each other and write them out incrementally to disk with an NSFileHandle - essentially creating the GIF manually. Or, if you think there is a solution to be found using ImageIO, like what I've tried, that would be amazing too. Swizzling, subclassing, etc. are all fair game.

Update 6: I have been researching how GIFs are made at the lowest level, and I wrote a small test that is along the lines of what I'm going for with the bounty's help. I need to grab the rendered UIImage, get the bytes from it, compress them using LZW, and append the bytes, along with some other work like determining the global color table. Source of info: http://ift.tt/1JoQ77k.
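As one piece of that, this is roughly how I'd pull raw RGBA bytes out of a rendered UIImage for the LZW and color table work (a sketch; the fixed RGBA layout is an assumption):

#import <UIKit/UIKit.h>

// Draws the image into a bitmap context backed by an NSMutableData buffer and
// returns the raw RGBA pixel bytes.
static NSData *RGBABytesForImage(UIImage *image)
{
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4;

    NSMutableData *pixels = [NSMutableData dataWithLength:bytesPerRow * height];

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels.mutableBytes, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);

    return pixels;
}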

Latest Update:

I've spent all week researching this from every angle to see what exactly goes on when building decent-quality GIFs given their limitations (such as a maximum of 256 colors). I believe and assume that ImageIO is creating a single bitmap context under the hood with all image frames merged as one, and is performing color quantization on this bitmap to generate a single global color table to be used in the GIF. Using a hex editor on some successful GIFs made from ImageIO confirms they have a global color table and never have a local one unless you set it for each frame yourself. Color quantization is performed on this huge bitmap to build a color palette (again an assumption, but one I strongly believe).

I have this weird and crazy idea: the frame images from my app can only differ by one color per frame, and better yet, I know what small sets of colors my app uses. The first/background frame contains colors that I cannot control (user-supplied content such as photos), so what I'm thinking is that I will snapshot this view, then snapshot another view that has the known colors my app deals with, and merge these into a single bitmap context that I can pass into the normal ImageIO GIF-making routines. What's the advantage? Well, this gets it down from ~1200 frames to one by merging two images into a single image. ImageIO will then do its thing on the much smaller bitmap and write out a single GIF with one frame.
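Something like this is what I mean by merging the two snapshots into one bitmap before handing it to ImageIO (a sketch; the parameter names are just placeholders for my two views):

// Composites the user-content snapshot and the overlay with my app's known
// colors into a single image; that one image then goes through the normal
// ImageIO path as a one-frame GIF.
- (UIImage *)combinedSnapshotWithBackground:(UIImage *)background overlay:(UIImage *)overlay
{
    CGRect bounds = CGRectMake(0, 0, background.size.width, background.size.height);

    UIGraphicsBeginImageContextWithOptions(background.size, YES, 1.0);

    [background drawInRect:bounds];
    [overlay drawInRect:bounds];

    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return combined;
}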

Now what can I do to build the actual 1200-frame GIF? I'm thinking I can take that single-frame GIF and extract the color table bytes nicely, because they fall between two GIF protocol blocks. I will still need to build the GIF manually, but now I shouldn't have to compute the color palette. I will be stealing the palette ImageIO thought was best and using that for my byte buffer. I still need an LZW compressor implementation with the bounty's help, but that should be a lot easier than color quantization, which can be painfully slow. LZW can be slow too, so I'm not sure if it's even worth it; I have no idea how LZW will perform sequentially across ~1200 frames.
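For what it's worth, this is how I picture stealing the palette: per the GIF89a spec, the logical screen descriptor's packed byte sits at offset 10, its low three bits give the table size as 2^(N+1) three-byte RGB entries, and the global color table starts at offset 13 (a sketch; it assumes the global color table flag is actually set in the file ImageIO wrote):

static NSData *GlobalColorTableFromGIFData(NSData *gifData)
{
    if (gifData.length < 13)
    {
        return nil;
    }

    const uint8_t *bytes = gifData.bytes;
    uint8_t packed = bytes[10]; // Packed field of the logical screen descriptor

    if ((packed & 0x80) == 0)
    {
        return nil; // No global color table present
    }

    NSUInteger entryCount = 1 << ((packed & 0x07) + 1);
    NSUInteger tableLength = entryCount * 3; // 3 bytes (RGB) per entry

    if (gifData.length < 13 + tableLength)
    {
        return nil;
    }

    // The table sits immediately after the 6-byte header and 7-byte screen descriptor.
    return [gifData subdataWithRange:NSMakeRange(13, tableLength)];
}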

What are your thoughts?

@property (nonatomic, strong) NSFileHandle *outputHandle;    

- (void)makeGIF
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0),^
    {
        NSString *filePath = @"/Users/Test/Desktop/Test.gif";

        [[NSFileManager defaultManager] createFileAtPath:filePath contents:nil attributes:nil];

        self.outputHandle = [NSFileHandle fileHandleForWritingAtPath:filePath];

        NSMutableData *openingData = [[NSMutableData alloc]init];

        // GIF89a header

        const uint8_t gif89aHeader [] = { 0x47, 0x49, 0x46, 0x38, 0x39, 0x61 };

        [openingData appendBytes:gif89aHeader length:sizeof(gif89aHeader)];


        // Logical screen descriptor: 10x10 canvas, global color table flag set (4 colors)

        const uint8_t screenDescriptor [] = { 0x0A, 0x00, 0x0A, 0x00, 0x91, 0x00, 0x00 };

        [openingData appendBytes:screenDescriptor length:sizeof(screenDescriptor)];


        // Global color table

        const uint8_t globalColorTable [] = { 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0xFF, 0x00, 0x00, 0x00 };

        [openingData appendBytes:globalColorTable length:sizeof(globalColorTable)];


        // 'Netscape 2.0' - Loop forever

        const uint8_t applicationExtension [] = { 0x21, 0xFF, 0x0B, 0x4E, 0x45, 0x54, 0x53, 0x43, 0x41, 0x50, 0x45, 0x32, 0x2E, 0x30, 0x03, 0x01, 0x00, 0x00, 0x00 };

        [openingData appendBytes:applicationExtension length:sizeof(applicationExtension)];

        [self.outputHandle writeData:openingData];

        for (NSUInteger i = 0; i < 1200; i++)
        {
            // Graphic control extension: 0.5 second frame delay (0x0032 hundredths)

            const uint8_t graphicsControl [] = { 0x21, 0xF9, 0x04, 0x04, 0x32, 0x00, 0x00, 0x00 };

            NSMutableData *imageData = [[NSMutableData alloc]init];

            [imageData appendBytes:graphicsControl length:sizeof(graphicsControl)];


            // Image descriptor: 10x10 frame at (0, 0), no local color table

            const uint8_t imageDescriptor [] = { 0x2C, 0x00, 0x00, 0x00, 0x00, 0x0A, 0x00, 0x0A, 0x00, 0x00 };

            [imageData appendBytes:imageDescriptor length:sizeof(imageDescriptor)];


            // LZW minimum code size (2), a 22-byte image data sub-block, and the block terminator

            const uint8_t image [] = { 0x02, 0x16, 0x8C, 0x2D, 0x99, 0x87, 0x2A, 0x1C, 0xDC, 0x33, 0xA0, 0x02, 0x75, 0xEC, 0x95, 0xFA, 0xA8, 0xDE, 0x60, 0x8C, 0x04, 0x91, 0x4C, 0x01, 0x00 };

            [imageData appendBytes:image length:sizeof(image)];


            [self.outputHandle writeData:imageData];
        }


        NSMutableData *closingData = [[NSMutableData alloc]init];

        // Comment extension containing the text 'Hi'

        const uint8_t appSignature [] = { 0x21, 0xFE, 0x02, 0x48, 0x69, 0x00 };

        [closingData appendBytes:appSignature length:sizeof(appSignature)];


        const uint8_t trailer [] = { 0x3B };

        [closingData appendBytes:trailer length:sizeof(trailer)];


        [self.outputHandle writeData:closingData];

        [self.outputHandle closeFile];

        self.outputHandle = nil;

        dispatch_async(dispatch_get_main_queue(),^
        {
           // Get back to main thread and do something with the GIF
        });
    });
}

- (UIImage *)getImage
{
    // Read question's 'Update 1' to see why I'm not using the
    // drawViewHierarchyInRect method
    UIGraphicsBeginImageContextWithOptions(self.containerView.bounds.size, NO, 1.0);
    [self.containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Shaves exported gif size considerably
    NSData *data = UIImageJPEGRepresentation(snapShot, 1.0);

    return [UIImage imageWithData:data];
}

