Category Archives: Tutorials

Quickshot: CoreData react when an object’s property changes

Core Data only supports a small subset of Objective-C’s data types. That’s fine – for most apps, the data you need to save can be broken down into smaller pieces of primitive data (that Apple supports).

Real properties and simulated properties

We often need to mix “supported” and “unsupported” data in a single class. There’s a couple of ways you can implement this – the most correct is to extend your CoreData class with a category that simulates a property using Associated Objects. The quicker way is to do the same, but instead of an associated object, simply write a short “setter” and “getter” (or, for read-only: only a “getter”) in the category:

@interface MyCoreDataClass : NSManagedObject
@property (nonatomic, retain) NSString *storedFilename;
@end

@interface MyCoreDataClass (NonStoredAdditions)
/** simulated, derived readonly property that CoreData doesn't know about */
- (BOOL)fileExists;
@end

@implementation MyCoreDataClass (NonStoredAdditions)
/** simulated, derived readonly property that CoreData doesn't know about */
- (BOOL)fileExists
{
    // ... complex code here to check for existence of the file
    // ... save the result into an Associated Object so we don't re-calc every time
    // ... MUST invalidate that cached value each time .storedFilename changes!

    // ... Hmm. How?
}
@end
Either way, you need to react to “the other variable changing”. The lame way of doing this is that every time you update the CoreData-backed fields, you also manually update the derived fields – and vice-versa. It’s lots of code, it’s error-prone, and I hate writing boilerplate (it’s bound to cause a bug sooner or later).

ObjectiveC’s listen-to-other-property: KVO (with shortcut)

The correct Objective-C response is to use KVO (Key-Value Observing), which is fiendishly over-complicated – but has a couple of shortcuts for the common cases:

-(id) init
{
    self = [super init];
    if( self )
    {
        [self addObserver:self forKeyPath:@"storedFilename" options:NSKeyValueObservingOptionNew context:self];
    }
    return self;
}

- (void)dealloc
{
    // REQUIRED: or your app will crash horribly at runtime, at random times:
    [self removeObserver:self forKeyPath:@"storedFilename" context:self];
    [super dealloc];
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
    if( [keyPath isEqualToString:@"storedFilename"] )
    {
        // react here by re-calculating the value of your derived property, and
        // storing it into the associated object
    }
}

Sadly, those shortcuts require you to put code in -init and -dealloc (otherwise you get random crashes, and some angry log messages). Argh! It’s a vicious circle of “CoreData doesn’t support X, so we do Y, but Y requires Z … which CoreData *also* doesn’t support”.

CoreData *does* support KVO listeners … just needs a tiny tweak

The correct way for Core Data is to use the “near equivalent” magic post-init methods, and the magic pre-dealloc method:

-(void)awakeFromFetch // called when a pre-saved object is re-loaded from the database
{
    [super awakeFromFetch];
    [self addObserverForFileChanges];
}

-(void)awakeFromInsert // called when the object is FIRST created
{
    [super awakeFromInsert];
    [self addObserverForFileChanges];
}

-(void) addObserverForFileChanges
{
    [self addObserver:self forKeyPath:@"storedFilename" options:NSKeyValueObservingOptionNew context:self];
}

- (void) willTurnIntoFault // thanks to Ilya in comments. Not "- (void)didTurnIntoFault"
{
    [super willTurnIntoFault];
    [self removeObserver:self forKeyPath:@"storedFilename" context:self];
}


2012: Using OpenGL on iPhone/iPad, and Apple’s GLKit

Apple launched a library – GLKit – that makes iPhones handle OpenGL properly.

Most of the people writing tutorials on the web don’t know how to use it, and so they ignore it. The few tutorials I found that mentioned GLKit were written by people who clearly didn’t read the docs, and they did some silly stuff (like re-writing texture-loading from scratch, when it’s a 2-line method call in GLKit. Apple has already done the hard work for you! Doh!)

After reading the docs (!) and a bunch of trial and error, we got it working quite nicely – and it’s a big time-saver in several areas, while sticking to standard OpenGL. Nice!

Things you need to know about GLKit

  1. Only works in iOS 5 and later
  2. Works on all hardware (assuming the hardware can run iOS 5 or later)
  3. OpenGL ES 1.1 has been removed/ignored; you can still use ES 1.1 features, via a simple compatibility layer – which isn’t documented (but we worked it out in the end)
  4. Apple provides a full Vector and Matrix library – all the core methods (cross-product, normalization, etc) are implemented for you
    • Apple’s classes appear to work fine. Please don’t re-implement your own cross-product just to prove you can get it subtly wrong 🙂
    • NB: if you use Apple’s Vector and Matrix classes, your code will be automatically compatible with everyone else on iOS who does so. This would be a good thing…
  5. Texture loading is massively simplified – just 1 method call to load arbitrary Image files as textures, and a couple of properties to set if you want to configure them
  6. Apple has finally deleted their HORRIBLY broken Xcode template for OpenGL projects (the one with the coloured square), and replaced it with one that works correctly (and is configured correctly)
    • The iOS 3 and iOS 4 Xcode templates would disable 3D by default, for no apparent reason (depth buffer disabled). Not a good template to start from!
    • … and they used a strange-shaped rectangle on screen – but stretched it to look like a square, and claimed it was a “square” … because the person at Apple who wrote the project had a major bug they didn’t understand, and just hid it under the carpet (instead of fixing it)
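As an illustration of point 5, texture loading with GLKit really is that short. A sketch (non-ARC; `path` is assumed to point at an image file in your bundle):

```objc
#import <GLKit/GLKit.h>

NSError* error = nil;
GLKTextureInfo* texture = [GLKTextureLoader textureWithContentsOfFile:path
                                                              options:nil
                                                                error:&error];
if( texture == nil )
    NSLog(@"Failed to load texture: %@", error);

glBindTexture( texture.target, texture.name ); // ready to render with
```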

Creating an ultra-simple OpenGL app using Xcode 4 and GLKit

Out of the box, Apple’s templates for Xcode 4 do NOT include a basic OpenGL project. Instead, you have a good quality project that’s customized to do OpenGL ES 2 shaders.

That’s great, but a huge amount of the code out there is pre-shader, and if you just want to update / port your iOS 3 apps – or if you don’t need shaders right now – you need to delete a big chunk out of the template. And you have to add back in a couple of small items that got left out.

…I was planning to include a sample project here, cut-down from Apple’s template, but Apple broke their templating system in Xcode 4 (removed lots of features that many developers made use of. No explanation given, no alternatives provided). Given how few people are using non-shaders / GL ES 1.1, I’m not sure it’s worth finishing and posting here.

If you *really* need / want it, please post in the comments, and I’ll dig it out and see how much I can make it work with current/latest Xcode (currently: 4.5 / 4.6)

box2D for iOS made easy: make it a static library

Box2D is a wonderful, FREE!, physics library. Without it, we probably wouldn’t have the likes of Angry Birds (since Angry Birds was built on top of it).

But … it’s a C++ library, and the official version doesn’t play nice with iOS (iPhone/iPad). You can include the full source into your project, but then you (might) run into Apple’s bugs where C++ sometimes fails to compile correctly in Xcode. And … you get several Warnings for the box2D source, where it doesn’t quite meet Xcode’s expectations. Most teams work on a “zero warnings allowed” basis, forcing you to branch the project :(.

Solution: convert box2D into a static library!

Please note: this tutorial is the MINIMUM set of steps we found that works around the Xcode bugs. They SHOULD work for all other C++ libraries too – this is a more careful, cautious approach than the “standard” way you create static libraries.

But … if you see an unnecessary step, or a mistake, please add a comment to this post and I’ll look into it.

1. Create a new Xcode Project

When we’re done, you’ll have two items (a file and a folder) which you can drag/drop into any xcode project, and they will Just Work ™ – every time, with no code changes needed.

To start with, create a new Xcode project, and choose a type of: “Cocoa Touch Static Library”

2. Add the Box2D source

This is where it gets slightly tricky: we are going to work around a major bug in Xcode, and a minor nuisance in the way box2D’s source code is written.

NB: box2d’s source is correct – it’s following the standards – it’s Apple that is wrong here. But … many library authors have given up, and just follow Apple’s “customized” version of the standards, because it makes life easier.

i. Download the source

First, download the box2d source code (if the direct download isn’t obvious, use the links on the box2d project page to find it)

ii. Find the correct set of files

Inside the zip file, you’ll find a lot of stuff:

…you want the Box2D sub-folder from the above screenshot. Copy the whole thing somewhere on your hard disk (you don’t want to change the original).

iii. Duplicate the headers

This step is critical: most libraries don’t need this, but we have to work around a bug in Xcode

  1. Create two new folders: “src” and “headers”
  2. Copy/paste the Box2D folder into “headers”, and move the original into “src” – yes, you really DO need two copies of the source!
  3. Move both folders (“src” and “headers”) into the folder where your new Xcode project lives. DO NOT ADD THEM TO YOUR PROJECT.

Finally … open the “headers” copy of Box2D in Finder … and “move to trash” every single file that is NOT a “.h” file.

NB: you *will* end up with two copies of every header file – one in the “headers” folder, and one in the “src” folder. That sounds strange, but it’s how we force Xcode to do what it’s supposed to do.

iv. Add to project

This step is critical: most libraries don’t need this, but we have to work around a bug in Xcode

  1. Drag/drop the “src” folder into your project
    1. CRITICAL: “Create groups for any added folders” MUST be checked
  2. Drag/drop the second Box2D folder (the one inside “headers”) into your project
    1. CRITICAL: “Create folder references for any added folders” MUST be checked

When you’re finished, it MUST look like this. If not – you did it wrong, and YOUR LIBRARY WILL NOT WORK

Note that the copy from “headers” has all the folders coloured blue, whereas the copy from “src” has all the folders coloured yellow. That is very important…

3. Fix the headers (part 2)

We’ve duplicated the headers, and added them to the project.

But now we have to work around ANOTHER bug in Xcode – one that is fatal to the box2D project.

Also … by default, Xcode does NOT make headers available to other projects. This is a bad “default” behaviour of Xcode – Xcode knows we’re making a static library, so it’s pretty obvious we’d want people to use it!

First, remove Xcode’s buggy, incorrect header-export:

  1. Open the build-settings for your new library project
  2. Select the “Build Phases” tab
  3. Open the “Copy Headers” phase
  4. …and delete all the headers in there

Next, add the headers CORRECTLY, and make them PUBLIC:

  1. Drag/drop the blue “Box2D” folder from the Project Navigator into the Public section
  2. …NB: you *must* drag the blue folder
  3. …NB: you *must* drag it – if you use the small “+” button, that is for adding headers, XCODE WILL NOT DO WHAT YOU TOLD IT TO (bug).

When you’re done, you should have exactly this:

5. Make the library “universal”

For Xcode 4.0, Apple removed a *critical* feature – their build notes said it was too confusing for developers, so they simply removed it. I think it’s more likely they decided they didn’t have time to fix all the bugs in their implementation (there were quite a few)…

Anyway, we’re going to add the feature back in.

Using this StackOverflow question and answer, convert your library from “Simulator only” / “Device only”, into a library that works on both simulators AND on devices, all in one file.


  1. In the Build Settings page you already have open in Xcode, click “Add Build Phase -> Run Script Phase” at bottom right
  2. Copy/paste the script from StackOverflow into the script area.
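For reference, the core of that script is a `lipo` merge of the device and simulator builds. A simplified sketch (the target name “Box2DStatic” and the exact paths are assumptions – use the full script from StackOverflow, which also builds the missing slice first):

```shell
# Merge the two single-architecture libraries into one universal library:
lipo -create \
    "${BUILD_DIR}/Release-iphoneos/libBox2DStatic.a" \
    "${BUILD_DIR}/Release-iphonesimulator/libBox2DStatic.a" \
    -output "${BUILD_DIR}/libBox2DStatic-universal.a"
```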


6. Build, and add to your game / app project

Hit Build, and you get a library.

However, there’s ANOTHER bug in Xcode: it will “hide” the library somewhere semi-random on your hard disk. The StackOverflow answer has step-by-step instructions for finding it (hint: look in the Build Results window in Xcode – the script prints the location to screen, so you can find what Xcode hides from you).

Now drag/drop the file (e.g. “libBox2dStatic.a”, if you called your box2d project “Box2DStatic”) into your main app project.

In Finder, in the same folder where you found the “.a” file, you’ll see a folder “usr” – also drag/drop this one into your project. (it contains all the header files that Box2D needs)

7. Configure your project to find the header files

By default, Xcode ignores incoming header files. This is poor interface design from Apple – most IDE’s would have asked you “are you adding these files to your project – or are they part of a library that you’re importing?” – but it means we have to manually tell Xcode that we just added library headers.

  1. IN YOUR MAIN GAME/APP: go to the Build Settings page
  2. Select the TARGET you’re building (not the PROJECT), and select “Build Settings” tab
  3. in the search box, type “header”
  4. Edit the “Header Search Paths” line, and add an entry (if it’s not there already): “$(SRCROOT)”
  5. Hit Enter/Return key (Xcode will often delete the new entry if you don’t do this!)
  6. Check the little tickbox to the left of the new entry – this is REQUIRED

When you’re done, double-click the “Header Search Paths” and check your entry is there (c.f. note above, about how Xcode often deletes new entries for no apparent reason). It should look like this:

(NB: the libxml2 line in the screenshot probably isn’t needed in your project – in this example, I used a project that already had some Header Search Paths)

8. Import Box2D, and start coding…

YOU MUST NOT #import BOX2D – it doesn’t support that usage. Instead, you must #include it – and you MUST use angle-brackets.

Here’s an example – note how the “#import” for Foundation is an import, but the Box2D is an include:
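For instance, at the top of a .mm file:

```objc
#import <Foundation/Foundation.h>   // Objective-C header: #import is correct here
#include <Box2D/Box2D.h>            // C++ header: MUST be #include, MUST be angle-brackets
```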

Finally: since box2d is written in C++, you MUST make your source file into Objective-C++. This is very easy – all you have to do is rename it from “.m” to “.mm”.

(if that doesn’t work, you can open the right-hand panel in Xcode, and in the “Identity and Type” section, set the File Type setting to “C++ Source File”)

Core Data Made Easy: Some Code + Practices for Beginners and Experts

Core Data is Apple’s answer to “Wow! It’s difficult to store objects in an SQL database!”. It has been extended, over time, to do a lot more than just that – but that’s still the core.

If you know what you’re doing – and you avoid the pit-traps along the way – it can be very good indeed. But we frequently see code written by iOS professionals that betrays a misunderstanding of what Apple intended, and misses out on some of the best features of Core Data.

Over time, we’ve coalesced some of our practices into re-useable code and techniques. All the code in this article is already up on GitHub, and I’ll be maintaining it periodically with improvements from our own projects.

(this post will be followed by a few others, or else extended over time)


NB … if you’re not interested in alternative approaches, skip ahead to the next section.

…re-write / extend Core Data?

Before we start, I should point out that if you want “a better Core Data” there’s several things out there. I’ve tried a few, but none of them have ever made it onto our launched apps. This is *even though* I’ve been quietly impressed by some of them.

Why not? Well … here’s the issues that have affected us (and may affect you too)

  1. Core Data – in fact, any Persistence Layer – is massively complex, it deliberately breaks the Objective-C/Cocoa platform, and it’s extremely dangerous to mess with. Some of the tech we’ve seen tries too hard to “fix” CoreData, and in the process has unexpected side-effects.
    • …when you’re dealing with USER DATA, that’s a terrifying prospect
    • Examples include: the Objective-C standard method “description” *corrupts your data* if you alter it in any way
    • Examples include: the Objective-C standard method “copy” *is hardcoded* to be unusable on CD objects – you get a nice error message at runtime from Apple if you try to use it, and your app crashes
    • …and other similar terrors. Unless you are 120% confident of your “CoreData++” library, don’t risk it.
  2. Core Data is a single standard that Apple beats developers over the head with. There’s only one version, and it’s very difficult to customize.
    • So: if you stick to standard CoreData method calls and behaviour, most developers can immediately understand your code without having to learn your proprietary setup
    • …which saves a lot of time and money on projects with multiple teams / companies involved
  3. For some projects / clients / partners, the “license” on code is a serious issue, and code that you or I would happily use gets “forbidden” by the legal teams.
    • For some clients, anything we use has to be written by us, or have a massively open license (no GPL, no LGPL – not even MIT/BSD, in some cases)
    • For a very narrow set of clients, anything we use has to be reviewed *by us* (or by the client) line by line, and we have to warranty it. i.e. the code needs to be extremely short and simple!
  4. Any code we write on top of this – can it be re-used in a “plain” CoreData project?
    • Often, you want to copy/paste snippets of code from one old project to a new one (assuming of course you – or same client – own the original code).
    • …if you’ve extended CoreData with new classes and methods, you might find this is very hard to do. (this has happened before on some iOS projects, with other libs we’ve used)

So, instead, we focus on making as few changes to CoreData as possible – and generally just sticking to Apple’s own concepts.

Making life easy, with minimal changes

We’re going to do three broad things:

  1. Clean up Apple’s code, and enable some Apple-provided features you probably didn’t know existed
  2. Very slightly extend Apple’s code (mostly: we’re going to make a start at adding some Blocks, since Apple still hasn’t!)
  3. Offer some good practices in how you write YOUR code

1. Replace “100 lines of code” with a simple, easy, encapsulated class

One new class: CoreDataStack.m

One of the best parts of Core Data is that the high-level architecture is neatly split into 4 distinct layers. Collectively, Apple terms these the Core Data “stack” (although that term never appears in code – only in docs). Unfortunately, since all those layers are needed before you can do anything, it requires 100 lines of code just to read a single item from Core Data.

Apple’s image + text showing the layers (may require login to Apple Dev network)

  1. Layer 1: file (on disk, or flash memory – Apple has NOT YET IMPLEMENTED a network version, or a remote DB one, although devs have been asking for this for many years. Unlikely to happen anytime soon)
  2. Layer 2: Persistent Object Store (the DB part)
  3. Layer 3: Persistence Store Co-ordinator (high-level management stuff)
  4. Layer 4: NSManagedObjectContext (this is what 99% of your app code talks to)

Apple has largely ignored this problem; their “solution” is that when you create a new project in Xcode, they dump 100 lines (well, not quite 100, but close) of crud into your AppDelegate class.

This is the wrong place to put that code – by Apple’s own rules, it should not go there. If your application is multi-threaded (90% of apps are) then it also MUST NOT go there (that’s almost guaranteed to cause data-loss bugs in the long run).

So, the first thing we did was to create a class – CoreDataStack – that encapsulates Apple’s 100 lines of boilerplate code:

GitHub link to CoreDataStack.m

You can drag/drop this into your projects (take the .h file too, of course) – it has NO DEPENDENCIES, except for CoreData.Framework (which is needed in all Core Data projects, by definition)

Using CoreDataStack

Instead of Apple’s large number of method calls, and large number of properties, in real-world projects you need just one property and just one method.

(NB: this assumes you’ve already used Apple’s GUI Editor to create a CoreDataModel file with one or more Entities and Attributes)

To start using Core Data:

/** one line complete init */
CoreDataStack* cdStack = [CoreDataStack coreDataStackWithModelName:@"MyModelName"];

/** the one property you need for all Core Data method calls */
NSManagedObjectContext* moc = cdStack.managedObjectContext;

…where MyModelName is the filename of the Model file (the thing you click on to get the GUI interface for editing Entities and Attributes)

That’s it! Really! Everything else can be inferred (so we do it automatically, in code).

For compatibility, all the other items you *might, theoretically* want to access (e.g. the PersistenceStoreCoordinator, etc) are presented as properties on CoreDataStack.

Why is this a separate class, and yet not a Singleton?

This is actually very important: you DO NOT WANT CoreDataStack to be a Singleton! (although many people try to write their apps that way – c.f. Apple’s default template and its abuse of AppDelegate.m!)


Because Core Data was designed to have *multiple simultaneous instances* of NSManagedObjectContext, NSPersistentStoreCoordinator, etc. If you limit yourself to one copy of each – as per Apple’s template – you disable many of the features of CoreData.

Worse, some of those features are *required* if you’re going to use CoreData correctly on a real-world project.

The nearest you can get to making this a singleton is … have one instance per xcdatamodel file (an earlier version of the code would even cache this for you, although we since removed that because the performance impact was invisible – not worth the complexity).

More on this (multiple CoreDataStack instances) in the next section…

Tips and tricks: Improving your own code

Use multiple CoreDataStack instances

Here’s a little secret: Core Data was designed for you to access MULTIPLE “models” at once, within a single app … but Apple’s default template for iPhone projects makes that impossible (unless you replace it).

Also, remember the most important rule of Core Data: Apple’s source code is NOT THREAD SAFE and WILL CORRUPT YOUR DATA at random intervals if your app is multithreaded in the “wrong” way [click for more info].

…but here’s another, bigger, secret: if you’re clever, using multiple models at once, you can *avoid* the need for thread-safe code – both your code and Apple’s code. This is the only excuse for Apple’s code being unsafe: you can avoid the need for safety.

So, with CoreDataStack, to use a second model in your app, all you do is:

/** one line complete init */
CoreDataStack* cdOtherStack = [CoreDataStack coreDataStackWithModelName:@"MyOtherModelName"];

/** the one property you need for all Core Data method calls */
NSManagedObjectContext* otherMoc = cdOtherStack.managedObjectContext;

So long as you only have one thread reading and writing to each CoreDataStack instance, you will avoid all the bugs caused by Apple’s unsafe code. In most multi-threaded apps, it’s easy to split your data into multiple separate models, and have each model locked to a particular thread.

Multiple models … what?

Say you’re writing an Email client, that has the following classes for Core Data:

  • Email.m : Subject, To, From, Body, isReadYet, DateReceived
  • Person.m : EmailAddress, Name

Your app does the following:

  • Shows a list of emails, that you can click on to read
  • Shows a list of people – tap a person’s name to send them an email, or tap an “Add” button to add a new person
  • Downloads emails in the background automatically
  • Syncs your contacts to an addressbook server in the background

You now have a multi-threading problem – two sets of background threads accessing Core Data. This *will corrupt your data*. Apple has never tried to make CoreData thread-safe.

Now you have two choices:

  • OPTION 1: learn how to write the bizarre code that lets CoreData function with multiple threads. Pray that none of your colleagues dares to edit any of your code – because if they do, there’s a high chance they’ll break it without realizing
  • OPTION 2: Instead of having ONE model, have TWO models. One model contains just “Email” and the other contains just “Person”. Each background thread is associated 1:1 with a separate CoreDataStack instance – and suddenly everything is Thread-Safe. Nothing to worry about!

NB: this works because of CoreData’s fundamental design: Apple’s code is *not thread-safe for a single NSManagedObjectContext, but it IS thread-safe for multiple separate NSManagedObjectContext’s in memory at once*

Experts only: Single CoreData model, multiple threads

Further, if you know what you’re doing with multi-threading, there are many situations where you need to have two copies in memory of the SAME object-model. You’ll be manually cross-synching (there lie dragons).

But to be thread-safe, you have to guarantee that the entire stack – NSPersistenceCoordinator etc – is separate for each thread. The hard way to do this is to manually manage it. The easy way is to init a separate CoreDataStack instance per thread – and this will automatically be thread-safe, because each CoreDataStack is coded to NOT share any data or references between instances.
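Using the CoreDataStack class from earlier, that “easy way” is one line per thread. A sketch (non-ARC; the queue and the model name are hypothetical):

```objc
dispatch_async( dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // This stack shares NOTHING with the stacks on other threads:
    CoreDataStack* backgroundStack = [CoreDataStack coreDataStackWithModelName:@"MyModelName"];

    // ... ALL reads and writes on this thread go through
    //     backgroundStack.managedObjectContext, and nothing else ...
});
```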

Good practices – applies to all CoreData projects

Don’t hard-code your class-names

Apple’s example source code for using CoreData is technically correct, but practically poor. It encourages a bad habit that – for most projects – is unnecessary and the cause of many bugs over time.

When you need to reference a CD object, Core Data is written so that you don’t need to have access to the Class of that object. In theory, you can load a Class (from CoreData’s database, saved by a different app) that doesn’t exist in your project. We’re getting into some weird and freaky stuff here.

Just in case – even though 90% of coders will never use that feature – Apple tells you to use NSString to instantiate your CoreData objects. This is bad for most projects:

  1. Apple’s refactor tools in Xcode are 10 years behind everyone else’s – they don’t support CoreData, and they ignore those strings – if you refactor a CoreData class, Xcode will break your project
  2. It’s very very easy to make a typo when writing that string. Xcode 3 would auto-complete the name for you, but Xcode 4 removed this feature

So, each time you need to create a new object in the CoreData database/store, instead of this:

Email* newEmail = [NSEntityDescription insertNewObjectForEntityForName:@"Email" inManagedObjectContext:cdStack.managedObjectContext];

do this:

Email* newEmail = [NSEntityDescription insertNewObjectForEntityForName:NSStringFromClass([Email class]) inManagedObjectContext:cdStack.managedObjectContext];

…using Xcode’s autocomplete, this is the same or fewer keystrokes, even though it results in more text. More importantly, the compiler will now double-check that class for you, and refuse to build if you typo it. You’re also safe(r) with refactoring.

Similarly, when you do a Fetch with CoreData, instead of passing the string-name of the class, use the same NSStringFromClass call as above, for the same benefits.
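e.g. a fetch for all Email objects, sketched with the same approach (non-ARC, hence the autorelease):

```objc
NSFetchRequest* request = [[[NSFetchRequest alloc] init] autorelease];
request.entity = [NSEntityDescription entityForName:NSStringFromClass([Email class])
                             inManagedObjectContext:cdStack.managedObjectContext];

NSError* error = nil;
NSArray* emails = [cdStack.managedObjectContext executeFetchRequest:request error:&error];
```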

Upgrading the CoreDataStack


Apple doesn’t like Exception Handlers and Assertions. Personally, I think 30 years of computer industry have proved them wrong there, but I’m willing to accept it.

Except for when a SAVE of CoreData fails; in this particular case, it is totally unacceptable for your app to silently ignore the error. Sadly, Apple’s default setup encourages you to do this.

How many times have you seen this in an app:

[self.managedObjectContext save:nil];


Sadly, I see it all the time. Because the alternative requires a minimum of 5 lines of code, including a double-pointer (that a lot of junior programmers and/or people who’ve never used C/C++ seem to feel uncomfortable with).
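For reference, here’s what those five lines look like – the double-pointer version that people skip:

```objc
NSError* error = nil;
if( ! [self.managedObjectContext save:&error] ) // pass the ADDRESS of the error pointer
{
    NSLog(@"Core Data save failed: %@ (userInfo: %@)", error, [error userInfo]);
}
```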

As a bonus, Apple’s code has some nasty behaviour with that save method. They have documented this (so I guess it’s a “feature” instead of a bug – but see what you think):

If managedObjectContext is nil, then “a save that fails” will ALWAYS seem to return “there was no error”.

This is one of the most painful things I’ve had to debug on CoreData projects. Many times.

So, we’re going to fix both of those. CoreDataStack.m has an optional method:

-(void) saveOrFail:(void(^)(NSError* errorOrNil)) blockFailedToSave

  1. Because it’s a block, it’s *just as easy to log the error* as it is to ignore it (instead of requiring you to write error-creating and error-checking code). Further, because it’s a block, it’s easy to nest attempted saves, logs, and failures.
  2. If your NSManagedObjectContext is nil … instead of doing what Apple does (silent failure; pretending the save succeeded) … we explicitly FAIL (c.f. the source code for the saveOrFail method).
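Typical usage of saveOrFail, assuming you just want the error logged – a sketch:

```objc
[cdStack saveOrFail:^(NSError* errorOrNil)
{
    NSLog(@"Core Data save FAILED: %@", errorOrNil);
}];
```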


CoreData corrupts data if used multi-threaded?

Until 2011, this wasn’t even documented, but … not only is CoreData “not thread-safe”, but if you merely allocate/init a CoreData context from a different thread you will cause it to quietly self-destruct later on – after it’s taken your precious data. It’s easy to understand once you know – but this is not an obvious side-effect, and I’ve seen it catch a few people.

e.g. one of the patterns coders previously used to handle multi-threading – because Apple hadn’t documented this requirement – involved Thread A creating the context, adding notification listeners, and then passing the context to the thread that would use it.

This works most of the time – I know, because I used this pattern back in 2009 – and I’d learnt it from someone else who’d been using it for a while themselves. But it also fails some of the time, unpredictably, with strange crashes deep in Apple’s code. Now we know better, of course.

My take-home: multi-threaded CoreData is to be avoided unless you *really* know what you’re doing with CoreData. Given that Apple has been very slow (years slow!) to document this aspect of CoreData, I don’t recommend it, unless you’re willing to spend a lot of time learning the “community wisdom” on the topic.

SVGKit (iOS) now supports more SVG’s – including most Wikipedia maps

Finally got the complicated viewBox / transform-matrix code working for SVGKit. Net result: the remaining maps from Wikipedia that used to fail catastrophically now render perfectly, e.g. their European map:

Why so hard?

For the curious…

This European map is specified at “180,000 units wide and 150,000 units high”, but … “Oh, by the way, we suggest you render it at 1800 pixels by 1500 pixels”.

This is a problem unique to scalable vector graphics (and part of what makes vector images so interesting). There is *no* “correct” size for a vector image – it can be rendered at *any* size, by definition.

Unlike a bitmap (i.e. the images used in 99% of situations today), a vector image never goes “blurry” or “pixellated”. If you want to view it at ultra-high res, you’re welcome.

And, because some code in the world still uses low-quality “floats” to store data (which are very inaccurate at high numbers), a lot of the world’s SVG files are specified using large integer numbers, and they assume you will “scale down” to your desired display size. Which is great. Except that (until today) SVGKit didn’t support that scaling, and tried to render everything literally, causing huge memory use and crashes.

NB: all this is currently in an experimental branch of SVGKit – we’re hoping to merge it into the master branch soon, but for now you’ll have to check out the experimental branch directly, or not at all – sorry!

CATiledLayer: how to use it, how it works, what it does

Our recent iPad app – London Unfurled – draws enormous images (90,000 pixels wide) very fast, with zooming and panning. We built it using a custom OpenGL rendering loop – we did NOT use CATiledLayer.

When I gave a couple of talks on this recently, several people wondered if “CATiledLayer would do it all for you?”. Ah … no. Try the app. You’ll quickly see that it’s much faster than CATL, even at CATL’s fastest.

It seems that CATL is a little slower, and uses more memory, than many of us expected. But it’s still very useful – e.g. we’re using it in a new game project right now. So … what’s going on? When should you use CATL, and when should you avoid it? What works, and what doesn’t?

Why all the confusion?

Three problems:

  1. Until iOS4, CATL routinely crashed people’s apps, with little or no explanation. It was documented behaviour in Apple’s rendering code (Apple has since changed this to be more programmer-friendly – no crashes), and quite easy to work around once you knew about it. But it created a lot of FUD and frustration: many experienced iOS programmers learnt to live without CATL, and have never used it in a shipping app
  2. There’s very little documentation for CATL. A total of 5 sentences for the class, and 5 sentences covering configuration. Even the most Frequently Asked Question about CATL is missing from the docs (“how do I get rid of the fade/flash effect?”).
  3. CATL appears to fill an obvious hole in Apple’s libraries: Apple hasn’t (yet) provided classes to handle lazy rendering and “automatic redraw” when zooming. CATL doesn’t necessarily fill those holes – but with the lack of docs, lots of programmers jump to some (reasonable) conclusions here
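For the record, the Frequently Asked Question in point 2 has a one-line answer: subclass CATiledLayer and override its +fadeDuration class method (the subclass name here is mine; +fadeDuration is a real CATiledLayer class method):

```objectivec
@interface NoFadeTiledLayer : CATiledLayer // name is mine; any subclass works
@end

@implementation NoFadeTiledLayer
/** CATiledLayer consults this class method for the per-tile fade-in time */
+(CFTimeInterval) fadeDuration
{
	return 0.0; // the default is 0.25 seconds per tile
}
@end
```

Use it from a UIView subclass by overriding +layerClass to return [NoFadeTiledLayer class].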

What do iOS programmers try to use CATiledLayer for?

So, let’s have a look at some common use cases, and some common misconceptions:

  1. Drawing images that are “too big” to place in a single UIImageView (e.g. anything over 2048×2048 pixels)
  2. Reducing the memory footprint of “huge” images, so that an infinitely large image can be rendered
  3. Rendering “huge” images very quickly, using automatic caching
  4. Adding “infinite zooming” to your application (e.g. as used in the Google Maps app)
  5. Making CGPath objects zoom correctly when you zoom in a UIScrollView (without CATL, they go blurry when you zoom)

What is CATiledLayer actually good for?


  • GOOD INTEGRATION with UIScrollView, automatically “Just Works” (almost; see below)
  • When tiles are cached, panning and zooming is FAST and SMOOTH
  • When some tiles are NOT cached, panning is JUDDERY, but zooming is SMOOTH
  • Uses MORE MEMORY than manually managing memory; on iOS, if you’re rendering truly huge images, and you want them to run fast … then you need more fine-grained control over the exact amount of RAM you’re using from second to second
  • Uses a SIMPLISTIC, OFTEN POOR caching algorithm: e.g. for any content that is not “UIImage / CGImageRef”, it will REDUCE rendering speed
  • Even for images, if an image is more than one tile in size, CATL will ADD A FLICKERING EFFECT TO THE RENDERING (like the tiles in Google Maps “flicking” into existence). This CANNOT BE REMOVED.
  • Additionally, by default, it will ANIMATE THE FLICKERING to make it less offensive – but more obvious. This CAN BE REMOVED.

So, in summary, when to use CATL?

  • If the “raw” content of your layer CHANGES RARELY OR NEVER
  • If your content can be rendered at MULTIPLE RESOLUTIONS (e.g. text, e.g. CAShapeLayer, e.g. CGPath)

If you need to change content, there are workarounds that work nicely (see below) – but they’re only workable if the changes are small and/or relatively infrequent (i.e. no more than a couple per second).

Making use of CATiledLayer…

…CATL does NOT automatically re-draw!

Every time you make ANY change to the contents of a CATL, you must manually call:

[(CATiledLayer*) YOUR_OBJECT setNeedsDisplay];

…effectively, you are really calling the private internal method:

[(CATiledLayer*) YOUR_OBJECT invalidateCache]; // just guessing. This?
[(CATiledLayer*) YOUR_OBJECT reGenerateTiles]; // ...or maybe this?

…but that’s not entirely obvious.

…with UIScrollView

For “perfect” integration, the CATL would use the UISV’s zoom settings to automatically determine its own zoom settings.

Fortunately, this is relatively easy to implement manually. There’s some excellent walk-through info on this at Things That Were Not Immediately Obvious To Me (NB: if you also read the “Part 1” of that page, don’t panic – it’s doing a LOT more work than you need to. Probably ignore it for now).
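A minimal sketch of that manual wiring, assuming a view controller that owns a scrollView and a tiledContentView (a UIView backed by a CATiledLayer) – both names are mine:

```objectivec
-(void) viewDidLoad
{
	[super viewDidLoad];

	CATiledLayer* tiledLayer = (CATiledLayer*) self.tiledContentView.layer;
	tiledLayer.levelsOfDetail = 4;     // how many zoomed-OUT levels it caches
	tiledLayer.levelsOfDetailBias = 4; // how many zoomed-IN levels it re-renders sharply

	self.scrollView.delegate = self;
	self.scrollView.minimumZoomScale = 0.25;
	self.scrollView.maximumZoomScale = 16.0;
}

/** UIScrollViewDelegate: the one method you MUST implement for zooming to work at all */
-(UIView*) viewForZoomingInScrollView:(UIScrollView*) scrollView
{
	return self.tiledContentView;
}
```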

…with CAShapeLayer

If you place a CAShapeL inside a UIScrollView, and zoom in … it all goes blurry. :(.

However, if you place CATiledL inside a UIScrollView, and write your CAShapeL’s into the CATiledL, and zoom in … you get high-resolution. Yay!

But … performance drops through the floor. Because CATiledL isn’t doing any bitmap caching.

Fortunately, you can flip the “shouldRasterize” property of CAShapeL to “TRUE” just before drawing it to the CATiledL, and it will output something that’s already a bitmap, so you get the best of both worlds.

NB: shouldRasterize can have unexpected consequences when using CA Animations – however, it’s a huge performance boost for some things, like drop shadows. So, if your rendering looks wrong when you use this trick, Google for the extensive tutorials and support questions on shouldRasterize…

Example code:

	-(void) drawRect:(CGRect) rect
	{
		CGContextRef context = UIGraphicsGetCurrentContext();

		for( CAShapeLayer* s in self.myShapes )
		{
			// NB: this "if" assumes that you manually created your CAShapeLayer with
			//    the correct "frame" such that it fits the embedded CGPath.
			// By default, Apple doesn't do this for you - your CAShapeLayer will
			//    have a frame of {{0,0} {0,0}}.
			// You can omit the "if" altogether - it will reduce performance a little,
			//    but the tile is going to be cached anyway, so it's not fatal
			if( CGRectIntersectsRect( rect, s.frame ) )
			{
				// shouldRasterize effectively causes the CATiledLayer to cache a bitmap
				//    instead of complete CGPath's, so rendering performance is noticeably
				//    faster, even with just 5-6 paths on screen!
				s.shouldRasterize = TRUE;

				CGContextTranslateCTM( context, s.frame.origin.x, s.frame.origin.y );
				[s renderInContext:context];
				// undo the translate, so the next shape isn't offset by this one
				CGContextTranslateCTM( context, -s.frame.origin.x, -s.frame.origin.y );
			}
		}
	}


…with small changes to contents

If you redraw a CALayer, Apple waits till you’ve drawn the whole thing, then moves it to the screen. It appears all in one go, maybe with a tiny delay.

If you change even a TINY part of a CATL – even if you tell the CATL that “only this small piece has changed” – then CATL updates the screen lots of times, once for each tile that is generated. This is slower than doing it all at once, and creates an artificial “flicker” on the screen.

(NB: to do “only a small change”, use [setNeedsDisplayInRect:] – I’ve tested, and this appears to only de-cache the tiles in that rect. It is noticeably faster than [setNeedsDisplay], when the rect is small)
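i.e. something like this (the view/ivar name is mine):

```objectivec
// Invalidate ONLY the tiles under the changed area (in the layer's own coordinates)
CGRect dirtyRect = CGRectMake( 512, 256, 64, 64 ); // hypothetical changed area
[(CATiledLayer*) self.tiledContentView.layer setNeedsDisplayInRect:dirtyRect];
```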

There is no way around this – CATL is just a little weak in its core algorithm. I believe Apple is introducing an artificial delay so that it can execute on a background thread without “using too much CPU time”.

So … instead, if you want the screen to update instantly with no flicker … you have to create an “overlay” layer above the CATL, and write your changes into that layer.

e.g. I recently made a game which showed all the countries of the world, and let you click a country to select it. When you select it, I wanted to change the fill-colour of the country. Implementation:

  • UIScrollView (requires ONE AND ONLY ONE subview, or it doesn’t work correctly)
    • UIView (with embedded CALayer) “container”
      • UIView (with embedded CATiledLayer) “all countries of the world”
      • UIView (with embedded CALayer) “OVERLAY for temporary changes to the CATL”
        • … if nothing selected, this layer is empty
        • … if a country is selected, I clone the country’s pixels, change the colours, and add them to this layer

This works very fast – when you select a country, there’s no flicker, and it appears to happen instantaneously.

Without this, the update is slow, and you can see the tiles getting rendered one by one.
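The hierarchy above can be sketched like this – all class and variable names are mine; “TiledMapView” is assumed to be a UIView subclass whose +layerClass returns [CATiledLayer class]:

```objectivec
CGRect mapFrame = CGRectMake( 0, 0, 4096, 4096 ); // hypothetical full-map size

UIScrollView* scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];

UIView* container = [[UIView alloc] initWithFrame:mapFrame];
[scrollView addSubview:container]; // the ONE AND ONLY ONE direct subview

TiledMapView* allCountries = [[TiledMapView alloc] initWithFrame:container.bounds];
[container addSubview:allCountries]; // slow-but-cached CATiledLayer content

UIView* overlay = [[UIView alloc] initWithFrame:container.bounds];
overlay.opaque = FALSE; // empty until a country is selected
[container addSubview:overlay]; // plain CALayer: updates instantly, no flicker

[self.view addSubview:scrollView];
```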

What does CATL actually do?

Through trial and error, we can see that CATL works something like this…


CATL layers *cannot have* CALayer sublayers.

Try it. Everything goes horribly wrong. Your sublayers might render, if you’re lucky (I saw this happen approx 1 time in 100 – I think it was a bug).

You can force them to render, by adding them to your tiles. Don’t do this: it *destroys* the performance of CATL.

If you want sublayers … you need to create a separate “container” CALayer, add it to the CATL’s *superlayer* … and then add your “sublayers” to that new “container”.

However, generally, it’s better to make a container UIView, and add it to the CATL’s UIView’s superview. Why? Because then you can still add sublayers (UIView.layer is always there), but you can also add things like UIButton too…


CATL – surprise, surprise! – works by keeping a list of Tile objects internally, and each time the rendering system asks it to render itself, it blits one or more tiles onto the screen to cover the dirty rectangle.

Unfortunately, Apple *does not* allow us access to the NSObject (or, possibly, the struct) that they use to represent individual “tiles”. They give us an approximation – when they are creating a new Tile, they send us a CGRect that is the exact frame of the tile to generate (i.e. an offset/origin, and a width/height).

All Tiles are the same size *in pixels*, but they end up different sizes when you start zooming (see below).

There is also some magic about the meaning of “size” when a view starts to zoom, more on that later.

You can change the tile-size (it’s a config option for CATL), but that resets the CATL, and deletes all existing tiles (it seems).
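If you do change it, it’s a single assignment (the view name is mine) – but expect the reset:

```objectivec
CATiledLayer* tiledLayer = (CATiledLayer*) self.tiledContentView.layer;
tiledLayer.tileSize = CGSizeMake( 512, 512 ); // NB: appears to reset the layer + drop cached tiles
```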

The process for “generating new tiles” used to be complex, but now it’s very very simple:

/*! Override "drawRect:".

 With CATL, the "rect" argument has a special definition: it is the exact *frame* of the Tile that you are
 generating - whatever you render will be cached as "TILE-N" (educated guess: no-one knows how Apple implements this internally :)).

 NOTE1: Apple hasn't documented this in the CATL class; you have to watch the WWDC videos, or use trial and error to discover it.
 NOTE2: because "rect" is a *frame*, it captures both the tile size AND the tile offset. e.g. rect might be: {{512,256}, {256,256}}
 */
-(void) drawRect:(CGRect) rect
{
	CGContextRef context = UIGraphicsGetCurrentContext();

	for( CALayer* l in self.myInternalLayers )
	{
		if( CGRectIntersectsRect( rect, l.frame ) )
		{
			CGContextTranslateCTM( context, l.frame.origin.x, l.frame.origin.y );
			[l renderInContext:context];
			CGContextTranslateCTM( context, -l.frame.origin.x, -l.frame.origin.y );
		}
		else
			;//DEBUG: NSLog(@"ignored non-intersecting tile");
	}
}

Now: here there IS a surprise. By default, CATL stores the tiles in a very low performance way – it stores them as rendering commands. I think most people assume it stores them as blit’able bitmaps – that would make sense: it’s fast, it’s efficient.

IF you’re rendering pure image data in the CATL, then de facto a Tile is “only slightly more memory” than a blit. If you’re rendering anything else, you should write your own code to convert your render commands to a bitmap, and render the bitmap to CATL. If you do this, you can often get literally 5x increase in render speed.

So, for instance, if you draw CGPath objects to a CATL, they will be stored *multiple times over* in memory, increasing your memory usage, and reducing your rendering speed. Yes: used naively, CATL can make rendering slower, and take more memory.
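A sketch of the workaround (variable names are mine): render the CGPath to a UIImage once, then blit the bitmap in your drawRect: instead:

```objectivec
// "path" = your CGPathRef; pre-render it to a bitmap ONCE, so the
// CATiledLayer caches pixels instead of re-playable render commands
CGRect pathBounds = CGPathGetBoundingBox( path );

UIGraphicsBeginImageContextWithOptions( pathBounds.size, NO, 0.0 );
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM( ctx, -pathBounds.origin.x, -pathBounds.origin.y );
CGContextAddPath( ctx, path );
CGContextFillPath( ctx );
UIImage* cachedBitmap = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// ...then, inside the tiled view's drawRect:, draw the bitmap instead:
[cachedBitmap drawInRect:pathBounds];
```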

Tile Caching

When CATL renders to screen, it only generates Tiles that it doesn’t already have inside its cache. When it hits some arbitrary limit, it deletes some existing tiles. Apple provides zero info on what the limit is, or which tiles get deleted first.

The cache is private, opaque, of “undocumented” size, with “undocumented” behaviour. This – if nothing else – makes CATL useless in many real-world cases. Opaque caches are evil.

Also, c.f. notes above on Tiles: this is a RENDER COMMAND cache, not a BITMAP cache. If you’re going to do render commands (i.e. 95% of rendering in a modern iOS app!), make sure you generate bitmaps on the fly, and render those to the CATL instead.

LOD (Levels of Detail)

CATL innately supports zooming: it has a special feature where you can insert it inside a UIScrollView, and it will intelligently redraw itself *at higher resolution* whenever the UIScrollView zooms in.

(NB: you can also achieve this behaviour manually, without a UIScrollView, by manually updating the low-level CALayer properties that tell a layer the size/area/zoom it should render with)

When you zoom (the complex process where you change the CALayer size, frame, contentScale, transform, etc), the CATL simply creates a whole NEW set of Tiles, and adds them to its cache. The old ones are NOT deleted (unless it runs out of space).

If you then zoom back out again, CATL will render very quickly, because it has the data cached. Probably.

i.e. the CATL cache looks something like this:

  • TILE1 — ZOOM = 1:1 — SIZE = 256×256 — POSITION = 0,0
  • TILE2 — ZOOM = 1:1 — SIZE = 256×256 — POSITION = 0,1
  • TILE3 — ZOOM = 1:1 — SIZE = 256×256 — POSITION = 1,0
  • TILE4 — ZOOM = 1:2 — SIZE = 256×256 — POSITION = 0,0
  • TILE5 — ZOOM = 1:2 — SIZE = 256×256 — POSITION = 0,1
  • TILE6 — ZOOM = 1:2 — SIZE = 256×256 — POSITION = 1,0


NB: what follows comes from trial and error and educated guesses. I’d be very happy for someone from Apple to correct this with what’s *actually* happening – but I think this is close enough for us to understand how to use CATL.

The process that CATL uses goes something like this:

  1. Look at the layer’s contentsScale (I think that’s the property it uses? Or … maybe it reads the affineTransform property instead?), and the layer’s “normal” bounds (widthxheight)
  2. Use the “scale” info to decide “how many on-screen pixels” a standard tile would cover
    • e.g. if you’ve zoomed-in by a factor of 1.5, and you’re using the default Tile size of 256×256, then the “on-screen” size of a tile is now 384×384
  3. WHILE the “on-screen pixel size” is greater than 2 x the default size, switch LOD level (i.e. use tiles that have smaller and smaller “layer.bounds” size)
  4. Look at the CGRect that the windowing system has told the CATL to “drawRect:” in
    • if you’ve zoomed in, this will be much smaller area than the layer’s normal bounds
    • if you’ve zoomed out, this will be much bigger area than the layer’s normal bounds
  5. Use that CGRect to calculate a list of tile-offsets that are needed to cover the VISIBLE area
    • e.g. for a VISIBLE frame of “{{100,0}, {500,256}}”, and 256×256 tiles, you’d need tiles at: {0,0}, {256,0}, {512,0}.
  6. For each tile that is in the cache, render it immediately *BY REPLAYING RENDER COMMANDS*
    • c.f. above, this can be SLOW. So slow that you see tiles appearing one-by-one on screen
  7. Display to screen
  8. While that’s appearing on screen, in a background thread, do:
    1. For each tile that is NOT in the cache, call the user-written “drawRect:” method to generate the tiles.
    2. Each time a tile completes, schedule an update to the mainthread that will OVERWRITE the screen contents with the new tile
      • i.e. some time after the CATL was rendered on screen (typically “half a second, up to several seconds”), while your main app is now running some other piece of code, small pieces of the CATL start magically appearing
    3. With the default CATL implementation, each new tile is animated in, using a “flash/fade” animation that takes 0.25 seconds per tile
      • For most apps, most of the time, this is painfully slow. Most people subclass CATL, and override the “+(CFTimeInterval) fadeDuration” method to return “0.0” instead.
      • c.f. above: even with a fadeDuration of 0.0, you can often STILL “see” the tiles appear on screen, because CATL uses an inefficient tile rendering / tile caching algorithm

Handling crashes and NSAssert gracefully in a Production App

NSAssert? What? Why? (skip if you’re familiar with this)

During development, it’s standard “best practice” to use NSAssert liberally, alerting you early if you have unexpected bugs in your code, for instance:

-(NSString*) processItem:(int) index
{
	NSAssert( index < [myArray count], @"Index is bigger than the number of items in the myArray array" );
	... // rest of the method goes here
}

Unit Tests are an even better approach, but there are situations – e.g. complex algorithms – where you want to perform sanity-checks in the middle of code.

NSAssert in Xcode (Apple’s default setup for new projects)

Apple has hooked up NSAssert so that in Debug builds (by default: anything on the simulator or on the device using Development mode) it stops your app, and immediately tells you what’s gone wrong.

When you do a Release build (by default: anything you send out as Ad-Hoc, or to the Apple App Store), Apple automatically replaces your NSAssert calls with blank lines – nothing happens. Otherwise your app would crash in many situations where it’s probably OK to keep running.

But that means you lose this info that could be very valuable in discovering if your app has stopped working, and how/why – e.g. if Apple changed something in an iOS update that now breaks your app.

Ideally, we want to do something different for Release builds.

NSAssertionHandler to the rescue

In older platforms, you often had to manually do a #define to change the assert() function to something different, in different builds. That was mildly annoying – complex macros are slightly harder to maintain than straight code – and not very configurable.

Fortunately, Apple uses an OOP approach via NSAssertionHandler. You can implement *multiple* subclasses of NSAssertionHandler, and switch between them at runtime.

First, create a subclass of NSAssertionHandler:

@interface AssertionHandlerLogAll : NSAssertionHandler
@end

@implementation AssertionHandlerLogAll

-(void)handleFailureInFunction:(NSString *)functionName file:(NSString *)fileName lineNumber:(NSInteger)line description:(NSString *)format, ...
{
	NSLog(@"[%@] Assertion failure: FUNCTION = (%@) in file = (%@) lineNumber = %i", [self class], functionName, fileName, line );
}

-(void)handleFailureInMethod:(SEL)selector object:(id)object file:(NSString *)fileName lineNumber:(NSInteger)line description:(NSString *)format, ...
{
	NSLog(@"[%@] Assertion failure: METHOD = (%@) for object = (%@) in file = (%@) lineNumber = %i", [self class], NSStringFromSelector(selector), object, fileName, line );
}

@end


…then, in your AppDelegate class (whichever class implements “UIApplicationDelegate”):

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
	// Override point for customization after application launch.
	NSLog(@"[%@] Setting a custom assertion handler", [self class] );
	NSAssertionHandler* customAssertionHandler = [[AssertionHandlerLogAll alloc] init];
	[[[NSThread currentThread] threadDictionary] setValue:customAssertionHandler forKey:NSAssertionHandlerKey];

	// NB: your windowing code goes here - e.g. self.window.rootViewController = self.viewController;

	return YES;
}

Assertions – upload to Flurry, perhaps?

But why limit yourself to just one assertion handler?

If you’re using Flurry (or Google Analytics etc), why not automatically log + upload each assertion to Flurry, and have them show up in your Dashboard? That way, you can see if ANY assertions are firing – but also get a count of:

  1. how many
  2. which hardware (is this only happening on an iPhone 3GS, for instance?)
  3. which iOS version (perhaps it’s due to a bug in iOS v 4.2, but not in iOS 4.3?)

Something like this:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
	// Override point for customization after application launch.

	/** start Flurry  */
	[FlurryAPI startSession:@"whatever your private Flurry Key is"];

	NSLog(@"[%@] Setting a custom assertion handler", [self class] );
	NSAssertionHandler* customAssertionHandler = [[AssertionHandlerSendToFlurry alloc] init];
	[[[NSThread currentThread] threadDictionary] setValue:customAssertionHandler forKey:NSAssertionHandlerKey];

	// NB: your windowing code goes here - e.g. self.window.rootViewController = self.viewController;

	return YES;
}

Xcode4: A script that creates / adds files to your project

This ought to be easy: it is the single most common function that a build system has to do.

So it’s rather depressing that Apple doesn’t support it. Every other build system (and IDE) I’ve ever used supports it out of the box. Apple for some unknown reason has designed Xcode4 to make this difficult. Strange.

The ultimate “fix” is just one line of code – but it’s a line of code that many people are afraid to write, because it seems like it would be fragile, and feels like it MUST be wrong. Most of this post is explaining why, in fact, it’s correct – and walking you through the other things we attempted before settling on this fix.

How you’d expect it to work

We’d expect: Step 1 – Add a “Run Script” phase

Apple has a simple process for adding scripts. Their script-management is very weak (it’s not been updated to modern standards in a long time), so you only have one option:

  1. Select the project itself, inside the project (it’s the thing with the blue icon)
  2. In the main window, select the target you’re building
  3. Click the Build Phases tab
  4. Hidden in the extreme bottom right corner of the screen (by a bad UI designer) is a button “Add Build Phase”, that lets you add a “Run Script” phase

OK, done. Wait … it’s asking us for “input files” and “output files”. What does that mean?

Well, whatever you were expecting is probably wrong: remember, this is a very weak build-system. It’s actually asking you:

  1. Every file you intend to read from, please tell me in advance, so I can cache your output if those files haven’t changed
  2. Every file you intend to write to, please tell me in advance, so I can cache the BUILD if your output hasn’t changed

(all modern build systems do the second feature automatically, and good ones have also been doing the first one automatically for over a decade)

Fine, so you fill those in. NB: even though you should be able to select input files from a Finder interface, Apple has disabled the Finder on this GUI, so you have to type them fully by hand.

What happens: The script runs, but the files vanish

If you use some debugging in your script, you can prove it’s run, by looking at the build log.

But the output files are NOT included in the build (even though we told Apple each file, by name).

Hmm. Well, maybe we need to use the only other option Apple provides us:

We’d expect: Step 2 – Add a “Copy Files” phase?

Actually, this should be automatic; we just told Xcode exactly what files we were creating, and where we were creating them.

But some badly-written build-systems are so dumb that you have to tell them what you’ve already told them. So, when step 1 above doesn’t work, we try step 2.

  1. Add a “Copy files” phase immediately after the “Run Script” phase
  2. Type in the name(s) of the file(s) generated in the “Run Script” phase
  3. …FAIL! Xcode4 is hardcoded to prevent you doing this

Yes, really. The “Add” button is grayed-out if you attempt to do this. Apple doesn’t want you to do it.

There is literally no way to take the output of a script and include it in your app.

Solving the problem with Xcode4

Instead, you have to write *into your script* the parts of Apple’s build system that they have barred you from accessing. Obviously, Apple is using “copy the output of a script” to take your compiled files and put the output into the app – but they won’t let you do that.

Here’s where it gets tricky: Apple’s documentation for Xcode4 is so misleading that I’d call it “incorrect”: it tells you to use the Copy Files phase described above (which we already know is impossible), and if you go digging in their build-system docs, it gives you various other places it suggests you manually copy your output files to.

But they don’t work, either because they’re missing key elements, or because Xcode4 ignores their content when making the build.

Pre-Solution: edit your attempts at Steps 1 and 2 above

Firstly, since Apple is ignoring the “output files” that we so carefully specified, and their docs say they will just re-run the script every time if the output files are blank … remove all your “output files” and “input files”. It’s the only safe way forwards – otherwise you’ll have to hand-maintain this error-prone dialog box forever.

Secondly, delete the Copy Files phase – Apple won’t let you use it. Give up, and move on.

Solution: Manually “insert” files into the final file

I tried everything, and scoured the Apple docs, the Apple developer guides, StackOverflow, and Google. I found two things that worked: an easy one that requires some typing, and an extremely long-winded and difficult one that hacks Xcode’s GUI to force it to do what it was supposed to in the first place.

Too much effort, too much to go wrong – and too hard to maintain. Instead, I went for the quick and easy method. Fortunately, there’s a pair of poorly documented Apple build-variables that together makes this easy and (almost) idiot-proof. At least, they do if you know they exist, and once you’ve guessed what they actually do (as opposed to what the docs suggest).

Add the following to the very end of your script:


…or just write your script so that it writes output directly to:


If you use script debugging, you’ll find that this deposits things directly inside the final .app file – and it works fine.
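For illustration only – the original snippet didn’t survive – here’s the shape of such a copy step, assuming the pair of build settings in question are TARGET_BUILD_DIR and UNLOCALIZED_RESOURCES_FOLDER_PATH (both are real Xcode build settings that, combined, point inside the built .app’s resources folder; I can’t guarantee they’re the exact pair the lost snippet used):

```shell
# Copy a generated file straight into the built .app bundle.
# "my-generated-file.plist" is a placeholder for whatever your script creates;
# the variables are set by Xcode when it runs the build phase.
cp "${DERIVED_FILE_DIR}/my-generated-file.plist" \
   "${TARGET_BUILD_DIR}/${UNLOCALIZED_RESOURCES_FOLDER_PATH}/"
```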

Mac OS X: custom background for your app-window

I found this neat little technique on the Parmanoir site, for drawing a custom image to the background of your Application Window.

Here’s a screenshot of the technique applied to Regular Expression Helper (currently in submission, waiting for Apple to approve it for the App Store):

Things of note:

  1. The author was inspired by Apple’s own apps – recently, Apple has been doing this more and more, even though I’m sure it’s against their own Human Interface Guidelines … so I’m assuming they’ll have no complaints about you doing this in the App Store
  2. This covers your ENTIRE background – it overwrites the default titlebar, overwrites the default bottom bar, everything
  3. The author uses method swizzling to make this work *without* creating custom classes, and *without* altering Apple’s own system classes. That’s very neat: you can comment out a single line of code to enable/disable the customization

For my last OS X app – Regular Expressions Helper – Brett made an excellent icon. As required by Apple’s App Store policies, we’ve got it in all sizes up to 512×512. So, I wanted to use that as the background to the app. This needed some tweaks.

Also, the Parmanoir site skipped over some minor points (all obvious when you read the source code they provided), but for future reference, here’s my changes:

Code requirements

You need to import objc/runtime.h for method swizzling to work – a little surprising, considering you’d expect this to be part of the base stuff pre-imported by NSObject et al. No matter.

Complication: Xcode4 auto-complete is buggy on the objc directory – it displays it as a file, and attempts to autocomplete the import incorrectly.

So, just remember to type it manually (or copy/paste this):

#import <objc/runtime.h> // Correct

Loading your application icons (.icns file) as a background image

If you load an NSImage directly, using the standard method from iOS:

	NSImage* backgroundImage = [NSImage imageNamed:@"image-name"]; // DONT DO THIS

…then you’ll discover a slightly irritating feature of OS X v10.6: it loads a 32×32 pixel version of the image *no matter how big the image is*. This is well documented, but it’s easy to overlook, especially coming from iOS, where “imageNamed” is the method you *always* use, to take advantage of its built-in caching.

Instead, if you simply use a different init method, you get the “maximal resolution” version of the image:

	NSImage* backgroundImage = [[NSImage alloc] initWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"Regular Expressions Icons" ofType:@"icns" ]];

(note: this is loading an .icns file – the one that Apple requires you to create, and requires you to put a 512×512 image into – rather than, say, a PNG file. That makes things easier to maintain: just one image file to update ;))

Copy/pasteable final custom drawRect method

For sheer convenience, here’s the default basic method for doing background-image rendering from icns, with the alpha scaled down so you can see the image more clearly:

-(void) customDrawRect:(NSRect) rect
{
	// Call original drawing method
	[self drawRectOriginal:rect];

	// Build clipping path : intersection of frame clip (bezier path with rounded corners) and rect argument
	NSRect windowRect = [[self window] frame];
	windowRect.origin = NSMakePoint(0, 0);

	// Draw background image (extend drawing rect : biggest rect dimension becomes rect's size)
	NSRect imageRect = windowRect;
	if (imageRect.size.width > imageRect.size.height)
	{
		imageRect.origin.y = -(imageRect.size.width-imageRect.size.height)/2;
		imageRect.size.height = imageRect.size.width;
	}
	else
	{
		imageRect.origin.x = -(imageRect.size.height-imageRect.size.width)/2;
		imageRect.size.width = imageRect.size.height;
	}

	NSImage* backgroundImage = [[NSImage alloc] initWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"Regular Expressions Icons" ofType:@"icns" ]];
	[backgroundImage drawInRect:imageRect fromRect:NSZeroRect operation:NSCompositeSourceAtop fraction:0.55];
}

iOS4: Take photos with live video preview using AVFoundation

I’m writing this because – as of April 2011 – Apple’s official documentation is badly wrong. Some of their source code won’t even compile (typos that are obvious if they’d checked them), and some of their instructions are hugely over-complicated and yet simply don’t work.

This is a step-by-step guide to taking photos with live image preview. It’s also a good starting point for doing much more advanced video and image capture on iOS 4.

What are we trying to do?

It’s very easy to write an app that takes photos. It’s quite a lot of code, but it’s been built-in to iOS/iPhone OS for a few years now – and it still works.

But … with iOS 4, the new “AV Foundation” library offers a much more powerful way of taking photos, which lets you put the camera view inside your own app. So, for instance, you can make an app that looks like this:


0. Requires a 3GS, iPod Touch 3, or better…

The entire AV Foundation library is not available on the oldest iPhone and iPod Touch devices. I believe this is because Apple is doing a lot of the work in hardware, making use of features that didn’t exist in the original iPhone chips, and the 3G chips.

Interestingly, the AV Foundation library *is* available on the Simulator – which suggests that Apple certainly *could* have implemented AV F for older phones, but they decided not to. It’s very useful that you can test most of your AV F app on the Simulator (so long as you copy/paste some videos into the Simulator to work with).

1. Apple doesn’t tell you the necessary Frameworks

You need *all* the following frameworks (all come with Xcode, but you have to manually add them to your project):

  1. CoreVideo
  2. CoreMedia
  3. AVFoundation (of course…)
  4. ImageIO
  5. QuartzCore (maybe)

How do we: get live video from camera straight onto the screen?

Create a new UIViewController, add its view to the screen (either in IB or through code – if you don’t know how to add a ViewController’s view, you need to do some much more basic iPhone tutorials first).

Add a UIView object to the NIB (or as a subview), and create a @property in your controller:

@property(nonatomic, retain) IBOutlet UIView *vImagePreview;

Connect the UIView to the outlet above in IB, or assign it directly if you’re using code instead of a NIB.

Then edit your UIViewController, and give it the following viewDidAppear method:

-(void) viewDidAppear:(BOOL)animated
{
	[super viewDidAppear:animated];

	AVCaptureSession *session = [[AVCaptureSession alloc] init];
	session.sessionPreset = AVCaptureSessionPresetMedium;

	CALayer *viewLayer = self.vImagePreview.layer;
	NSLog(@"viewLayer = %@", viewLayer);

	AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
	captureVideoPreviewLayer.frame = self.vImagePreview.bounds;
	[self.vImagePreview.layer addSublayer:captureVideoPreviewLayer];

	AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

	NSError *error = nil;
	AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
	if (!input) {
		// Handle the error appropriately.
		NSLog(@"ERROR: trying to open camera: %@", error);
	}
	[session addInput:input];

	[session startRunning];
}

Run your app on a device (NB: this will NOT run on the Simulator – Apple doesn’t support cameras on the Simulator (yet)), and … you should see the live camera view appear in your subview.


2. Apple’s example code for live-video doesn’t work

In the AVFoundation docs, Apple has a whole section on trying to do what we did above. Here’s a link: AV Foundation Programming Guide – Video Preview. But it doesn’t work.

UPDATE: c.f. Robert’s comment below. This method does work, you just have to use it in a different way.

“The method “imageFromSampleBuffer” does work when you send a sample buffer from “AVCaptureVideoDataOutput” which is “32BGRA”. You tried to send a sample buffer from “AVCaptureStillImageOutput” which is “AVVideoCodecJPEG”.”

(more details + source code in Robert’s comment at the end of this post)

If you look in the docs for AVCaptureVideoPreviewLayer, you’ll find a *different* source code example, which works without having to change codecs:

captureVideoPreviewLayer.frame = self.vImagePreview.bounds;
[self.vImagePreview.layer addSublayer:captureVideoPreviewLayer];

3. Apple’s image-capture docs are also wrong

In the AV Foundation docs, there’s also a section on how to get Images from the camera. This is mostly correct, and then at the last minute it goes horribly wrong.

Apple provides a link to another part of the docs, with the following source code:

    UIImage* image = imageFromSampleBuffer(imageSampleBuffer);

UIImage *imageFromSampleBuffer(CMSampleBufferRef sampleBuffer)
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row for the pixel buffer.
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height.
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space.
    static CGColorSpaceRef colorSpace = NULL;
    if (colorSpace == NULL) {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        if (colorSpace == NULL) {
            // Handle the error appropriately.
            return nil;
        }
    }

    // Get the base address of the pixel buffer.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

    // Create a Quartz direct-access data provider that uses data we supply.
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
    // Create a bitmap image from data supplied by the data provider.
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow,
        colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
        dataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);

    // Create and return an image object to represent the Quartz image.
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return image;
}

This code has never worked for me – it always returns an empty 0x0 image, which is useless. That’s 45 lines of useless code that everyone is required to re-implement in every app they write.

Or maybe not.

Instead, if you look at the WWDC videos, you find an alternate approach that takes just two lines of source code:

NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];

Even better … this actually works!

How do we: take a photo of what’s in the live video feed?

There are two halves to this. Obviously, we’ll need a button to capture a photo, and a UIImageView to display it. Less obviously, we’ll have to alter our existing camera-setup routine.

To make this work, we have to create an “output source” for the camera when we start it, and then later on when we want to take a photo we ask that “output” object to give us a single image.

Part 1: Add buttons and views and image-capture routine

So, create a new @property to hold a reference to our output object:

@property(nonatomic, retain) AVCaptureStillImageOutput *stillImageOutput;

Then make a UIImageView where we’ll display the captured photo. Add this to your NIB, or programmatically.

Hook it up to another @property, or assign it manually, e.g.:

@property(nonatomic, retain) IBOutlet UIImageView *vImage;

Finally, create a UIButton, so that you can take the photo.

Again, add it to your NIB (or programmatically to your screen), and hook it up to the following method:

-(IBAction) captureNow
{
	AVCaptureConnection *videoConnection = nil;
	for (AVCaptureConnection *connection in stillImageOutput.connections)
	{
		for (AVCaptureInputPort *port in [connection inputPorts])
		{
			if ([[port mediaType] isEqual:AVMediaTypeVideo])
			{
				videoConnection = connection;
				break;
			}
		}
		if (videoConnection) { break; }
	}

	NSLog(@"about to request a capture from: %@", stillImageOutput);
	[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
	 {
		 CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
		 if (exifAttachments)
		 {
			// Do something with the attachments.
			NSLog(@"attachments: %@", exifAttachments);
		 }
		 else
			NSLog(@"no attachments");

		NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
		UIImage *image = [[UIImage alloc] initWithData:imageData];

		self.vImage.image = image;
	 }];
}

Part 2: modify the camera-setup routine

Go back to the viewDidAppear method you created at the start of this post. The `[session startRunning]` call must REMAIN the final step, so insert the new code immediately above it. Here’s the new code to insert:

stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];

[session addOutput:stillImageOutput];

Run the app, and you should get something like the image I showed at the start, where the part on the left is a live-preview from the camera, and the part on the right updates each time you click the “take photo” button:
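For reference, here’s the whole viewDidAppear assembled in one place – the same code from earlier in the post, with the still-image output inserted just before `[session startRunning]`:

```objc
-(void) viewDidAppear:(BOOL)animated
{
	[super viewDidAppear:animated];

	AVCaptureSession *session = [[AVCaptureSession alloc] init];
	session.sessionPreset = AVCaptureSessionPresetMedium;

	// Live preview layer, sized to fill our preview view
	AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
	captureVideoPreviewLayer.frame = self.vImagePreview.bounds;
	[self.vImagePreview.layer addSublayer:captureVideoPreviewLayer];

	// Camera input
	AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
	NSError *error = nil;
	AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
	if (!input)
		NSLog(@"ERROR: trying to open camera: %@", error);
	[session addInput:input];

	// Still-image output, configured for JPEG
	stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
	NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
	[stillImageOutput setOutputSettings:outputSettings];
	[session addOutput:stillImageOutput];

	// MUST remain the last line
	[session startRunning];
}
```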