Apple’s C compiler (LLVM) doesn’t set float to 0.0 – why not?

With this piece of code, what gets output?

float aFloat;
NSLog(@"Default value of a float is zero, right? A float = %.3f", aFloat );

(Hint: USUALLY, you’ll get “0.000”. But not always… )

A common expectation

Apple’s C compiler doesn’t always initialize floats as you might expect.

The C standard treats float initialization differently depending on storage duration. Variables with static storage duration – globals, and locals declared “static” – are set to 0.0 *always*. Variables with automatic storage duration – ordinary locals – are left with indeterminate contents (random memory) – which on iOS, for instance, is normally but not always … 0.0.

(and the C++ equivalent is, of course, more complicated still … depends on the surrounding context of your variables, e.g. are they in a class? how’s that class getting constructed? etc)

e.g. I had to fix the following code, which was running fine 9 times in 10, but crashing/asserting occasionally:

float infitesimal; // In almost all languages, this = 0.0;
float smallPositiveFloat = 0.0001;

infitesimal += smallPositiveFloat;

NSAssert( infitesimal > 0.0, @"Not with Apple's LLVM version 3, it ain't (sometimes) ..." );

Apple should fix this anyway…

UPDATE: As pointed out by Nathan, Build won’t highlight this, but the Static Analyzer *will* (by default). This is great, but like many devs, we currently only run Xcode’s “Analyze…” every few days (and/or on the weekly build) – it’s too slow to run on every build, and it errors on things that you’re forced to do (e.g. it doesn’t understand async callbacks, and it doesn’t like dead code that’s sometimes needed to work around the Xcode debugger).

It doesn’t surprise me that Apple has left their C mode compiling this – it’s slightly annoying, but technically correct, and following standards is a very good thing.

But the question remains: why doesn’t Xcode flag this as a WARNING by default? That would still be standards-compliant, and yet reduce the bugginess of many iOS apps.

(I find it very hard to think of real situations today where an iOS or OSX app would actually want to use a float uninitialized – meanwhile, it’s begging for data corruption bugs, security holes, etc)

In the meantime, of course … whenever you see “float something;” without an ” = 0.0;” on the end … well, you know what to do.

A general problem with Objective C

A brief note on the general case: this isn’t just about floats.

The above behaviour has been removed from most modern languages – or at least reduced in extent (originally I wrote that it doesn’t happen in C++; as James points out below, it does, my mistake – although I’d still argue it happens less easily / accidentally). The tiny, rare benefit is outweighed by the large, common loss.

Unfortunately, since Objective-C is defined as an extension of C … we’re stuck with it.

Ironically, Apple’s choice of Obj-C (rather than – say – Obj-C++) sometimes has the unwanted effect of pushing programmers back to the dark ages, to the days when “the x-y position indicator” was still just a research project.

(Apple brought out their first consumer desktop with a mouse in 1984. C++ launched in 1983 … )

CATiledLayer: how to use it, how it works, what it does

Our recent iPad app – London Unfurled – draws enormous images (90,000 pixels wide) very fast, with zooming and panning. We built it using a custom OpenGL rendering loop – we did NOT use CATiledLayer.

When I gave a couple of talks on this recently, several people wondered if “CATiledLayer would do it all for you?”. Ah … no. Try the app. You’ll quickly see that it’s much faster than CATL, even at CATL’s fastest.

It seems that CATL is a little slower, and uses more memory, than many of us expected. But it’s still very useful – e.g. we’re using it in a new game project right now. So … what’s going on? When should you use CATL, and when should you avoid it? What works, and what doesn’t?

Why all the confusion?

Three problems:

  1. Until iOS 4, CATL routinely crashed people’s apps, with little or no explanation. This was documented behaviour in Apple’s rendering code (Apple has since changed it to be more programmer-friendly – no crashes), and quite easy to work around once you knew about it. But it created a lot of FUD and frustration: a lot of experienced iOS programmers have learnt to live without CATL, and have never used it in a shipping app.
  2. There’s very little documentation for CATL. A total of 5 sentences for the class, and 5 sentences covering configuration. Even the most Frequently Asked Question about CATL is missing from the docs (“how do I get rid of the fade/flash effect?”).
  3. CATL appears to fill an obvious hole in Apple’s libraries: Apple hasn’t (yet) provided classes to handle lazy rendering and “automatic redraw” when zooming. CATL doesn’t necessarily fill those holes – but with the lack of docs, lots of programmers jump to some (reasonable) conclusions here

What do iOS programmers try to use CATiledLayer for?

So, let’s have a look at some common use cases, and some common misconceptions:

  1. Drawing images that are “too big” to place in a single UIImageView (e.g. anything over 2048×2048 pixels)
  2. Reducing the memory footprint of “huge” images, so that an infinitely large image can be rendered
  3. Rendering “huge” images very quickly, using automatic caching
  4. Adding “infinite zooming” to your application (e.g. as used in the Google Maps app)
  5. Making CGPath objects zoom correctly when you zoom in a UIScrollView (without CATL, they go blurry when you zoom)

What is CATiledLayer actually good for?


  • GOOD INTEGRATION with UIScrollView, automatically “Just Works” (almost; see below)
  • When tiles are cached, panning and zooming is FAST and SMOOTH
  • When some tiles are NOT cached, panning is JUDDERY, but zooming is SMOOTH
  • Uses MORE MEMORY than manually managing memory; on iOS, if you’re rendering truly huge images, and you want them to run fast … then you need more fine-grained control over the exact amount of RAM you’re using from second to second
  • Uses a SIMPLISTIC, OFTEN POOR caching algorithm: e.g. for any content that is not “UIImage / CGImageRef”, it will REDUCE rendering speed
  • Even for images, if an image is more than one tile in size, CATL will ADD A FLICKERING EFFECT TO THE RENDERING (like the tiles in Google Maps “flicking” into existence). This CANNOT BE REMOVED.
  • Additionally, by default, it will ANIMATE THE FLICKERING to make it less offensive – but more obvious. This CAN BE REMOVED.
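In practice, removing that animation means subclassing CATiledLayer and overriding its documented “+fadeDuration” class method – a minimal sketch (the subclass name is mine):

```objc
#import <QuartzCore/QuartzCore.h>

/** Hypothetical subclass: kills the 0.25s-per-tile fade-in animation */
@interface NonFadingTiledLayer : CATiledLayer
@end

@implementation NonFadingTiledLayer

+(CFTimeInterval) fadeDuration
{
	return 0.0; // Apple's default is 0.25 seconds per tile
}

@end
```

…and in the UIView subclass that hosts the layer, override “+layerClass” to return [NonFadingTiledLayer class].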

So, in summary, when to use CATL?

  • If the “raw” content of your layer CHANGES RARELY OR NEVER
  • If your content can be rendered at MULTIPLE RESOLUTIONS (e.g. text, e.g. CAShapeLayer, e.g. CGPath)

If you need to change content, there are workarounds that work nicely (see below) – but they’re only workable if the changes are small and/or relatively infrequent (i.e. no more than a couple per second).

Making use of CATiledLayer…

…CATL does NOT automatically re-draw!

Every time you make ANY change to the contents of a CATL, you must manually call:

[(CATiledLayer*) YOUR_OBJECT setNeedsDisplay];

…effectively, you are really calling the private internal method:

[(CATiledLayer*) YOUR_OBJECT invalidateCache]; // just guessing. This?
[(CATiledLayer*) YOUR_OBJECT reGenerateTiles]; // ...or maybe this?

…but that’s not entirely obvious.

…with UIScrollView

For “perfect” integration, the CATL would use the UISV’s zoom settings to automatically determine its own zoom settings.

Fortunately, this is relatively easy to implement manually. There’s some excellent walk-through info on this at Things That Were Not Immediately Obvious To Me (NB: if you also read the “Part 1” of that page, don’t panic – it’s doing a LOT more work than you need to. Probably ignore it for now).
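As a hedged sketch of that manual wiring (the class name “TiledContentView” and the property names are invented for illustration): back a UIView with a CATiledLayer, configure its levels of detail, and hand that view to the UIScrollView as the zoomable view:

```objc
#import <QuartzCore/QuartzCore.h>

@interface TiledContentView : UIView
@end

@implementation TiledContentView

+(Class) layerClass { return [CATiledLayer class]; } // make UIKit back this view with a CATiledLayer

-(id) initWithFrame:(CGRect) frame
{
	if( (self = [super initWithFrame:frame]) )
	{
		CATiledLayer* tiledLayer = (CATiledLayer*) self.layer;
		tiledLayer.levelsOfDetail = 4;     // extra detail levels when zoomed OUT
		tiledLayer.levelsOfDetailBias = 4; // extra detail levels when zoomed IN (stops the blur)
	}
	return self;
}

@end

// ...then, in your UIScrollView delegate, tell UISV which view to scale:
//
// -(UIView *) viewForZoomingInScrollView:(UIScrollView *)scrollView
// {
//     return self.tiledContentView; // the TiledContentView instance above
// }
```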

…with CAShapeLayer

If you place a CAShapeL inside a UIScrollView, and zoom in … it all goes blurry. :(.

However, if you place CATiledL inside a UIScrollView, and write your CAShapeL’s into the CATiledL, and zoom in … you get high-resolution. Yay!

But … performance drops through the floor. Because CATiledL isn’t doing any bitmap caching.

Fortunately, you can flip the “shouldRasterize” property of CAShapeL to “TRUE” just before drawing it to the CATiledL, and it will output something that’s already a bitmap, so you get the best of both worlds.

NB: shouldRasterize can have unexpected consequences when using CA Animations – however, it’s a huge performance boost for some things, like drop shadows. So, if your rendering looks wrong when you use this trick, Google for the extensive tutorials and support questions on shouldRasterize…

Example code:

	CGContextRef context = UIGraphicsGetCurrentContext();
	
	for( CAShapeLayer* s in self.myShapes )
	{
		// NB: this "if" assumes that you manually created your CAShapeLayer with
		//    the correct "frame" such that it fits the embedded CGPath.
		// By default, Apple doesn't do this for you - your CAShapeLayer will
		//    have a frame of {{0,0},{0,0}}.
		// You can omit the "if" altogether - it will reduce performance a little,
		//    but the tile is going to be cached anyway, so it's not fatal
		if( CGRectIntersectsRect( rect, s.frame ) )
		{
			// this effectively causes the CATiledLayer to cache a bitmap instead
			// of complete CGPath's, so ... rendering performance is noticeably
			// faster, even with just 5-6 paths on screen!
			s.shouldRasterize = TRUE;
			
			CGContextSaveGState( context ); // don't let translations accumulate across the loop
			CGContextTranslateCTM( context, s.frame.origin.x, s.frame.origin.y );
			[s renderInContext:context];
			CGContextRestoreGState( context );
		}
	}


…with small changes to contents

If you redraw a CALayer, Apple waits till you’ve drawn the whole thing, then moves it to the screen. It appears all in one go, maybe with a tiny delay.

If you change even a TINY part of a CATL – even if you tell the CATL that “only this small piece has changed” – then CATL updates the screen lots of times, once for each tile that is generated. This is slower than doing it all in one go, and creates an artificial “flicker” on the screen.

(NB: to do “only a small change”, use [setNeedsDisplayInRect:] – I’ve tested, and this appears to only de-cache the tiles in that rect. It is noticeably faster than [setNeedsDisplay], when the rect is small)

There is no way around this – CATL is just a little weak in its core algorithm. I believe Apple is introducing an artificial delay so that it can execute on a background thread without “using too much CPU time”.

So … instead, if you want the screen to update instantly with no flicker … you have to create an “overlay” layer above the CATL, and write your changes into that layer.

e.g. I recently made a game which showed all the countries of the world, and let you click a country to select it. When you select it, I wanted to change the fill-colour of the country. Implementation:

  • UIScrollView (requires ONE AND ONLY ONE subview, or it doesn’t work correctly)
    • UIView (with embedded CALayer) “container”
      • UIView (with embedded CATiledLayer) “all countries of the world”
      • UIView (with embedded CALayer) “OVERLAY for temporary changes to the CATL”
        • … if nothing selected, this layer is empty
        • … if a country is selected, I clone the country’s pixels, change the colours, and add them to this layer

This works very fast – when you select a country, there’s no flicker, and it appears to happen instantaneously.

Without this, the update is slow, and you can see the tiles getting rendered one by one.
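Construction of that hierarchy might look like this (a sketch – “TiledMapView” is a hypothetical CATiledLayer-backed UIView subclass, and “mapBounds” is the full-size frame of the map content):

```objc
UIScrollView* scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];

UIView* container = [[UIView alloc] initWithFrame:mapBounds]; // the ONE subview the UISV zooms
[scrollView addSubview:container];

TiledMapView* allCountries = [[TiledMapView alloc] initWithFrame:mapBounds]; // CATiledLayer-backed
[container addSubview:allCountries];

UIView* overlay = [[UIView alloc] initWithFrame:mapBounds]; // plain CALayer-backed
overlay.userInteractionEnabled = NO; // let touches fall through to the map below
[container addSubview:overlay];      // sits on top: selection highlights draw here, flicker-free
```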

What does CATL actually do?

Through trial and error, we can see that CATL works something like this…


CATL layers *cannot have* CALayer sublayers.

Try it. Everything goes horribly wrong. Your sublayers might render, if you’re lucky (I saw this happen approx 1 time in 100 – I think it was a bug).

You can force them to render, by adding them to your tiles. Don’t do this: it *destroys* the performance of CATL.

If you want sublayers … you need to create a separate “container” CALayer, add it to the CATL’s *superlayer* … and then add your “sublayers” to that new “container”.

However, it’s generally better to make a container UIView instead, and add it to the superview of the CATL’s UIView. Why? Because then you can still add sublayers (UIView.layer is always there), but you can also add things like UIButton too…


CATL – surprise, surprise! – works by keeping a list of Tile objects internally, and each time the rendering system asks it to render itself, it blits one or more tiles onto the screen to cover the dirty rectangle.

Unfortunately, Apple *does not* allow us access to the NSObject (or, possibly, the struct) that they use to represent individual “tiles”. They give us an approximation – when they are creating a new Tile, they send us a CGRect that is the exact frame of the tile to generate (i.e. an offset/origin, and a width/height).

All Tiles are the same size *in pixels*, but they end up different sizes when you start zooming (see below).

There is also some magic about the meaning of “size” when a view starts to zoom, more on that later.

You can change the tile-size (it’s a config option for CATL), but that resets the CATL, and deletes all existing tiles (it seems).
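For reference, a sketch of changing it (tileSize is a documented CATiledLayer property):

```objc
CATiledLayer* tiledLayer = (CATiledLayer*) self.layer;

// NB: tileSize is specified in PIXELS, not points; the default is 256x256.
// Setting it appears to throw away any tiles CATL has already cached.
tiledLayer.tileSize = CGSizeMake( 512, 512 );
```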

The process for “generating new tiles” used to be complex, but now it’s very very simple:

/*! Override "drawRect:".

 With CATL, the "rect" argument has a special definition: it is the exact *frame* of the Tile that you are
 generating - whatever you render will be cached as "TILE-N" (educated guess: no-one knows how Apple implements this internally :)).

 NOTE1: Apple hasn't documented this in the CATL class; you have to watch the WWDC videos, or use trial and error to discover it.
 NOTE2: because "rect" is a *frame*, it captures both the tile size AND the tile offset. e.g. rect might be: {{512,256}, {256,256}}
 */
-(void) drawRect:(CGRect) rect
{
	CGContextRef context = UIGraphicsGetCurrentContext();
	
	for( CALayer* l in self.myInternalLayers )
	{
		if( CGRectIntersectsRect( rect, l.frame ) )
		{
			CGContextSaveGState( context );
			CGContextTranslateCTM( context, l.frame.origin.x, l.frame.origin.y );
			[l renderInContext:context];
			CGContextRestoreGState( context );
		}
		else
			;//DEBUG: NSLog(@"ignored non-intersecting tile");
	}
}

Now: here there IS a surprise. By default, CATL stores the tiles in a very low performance way – it stores them as rendering commands. I think most people assume it stores them as blit’able bitmaps – that would make sense: it’s fast, it’s efficient.

IF you’re rendering pure image data in the CATL, then in practice a Tile is “only slightly more memory” than a blittable bitmap. If you’re rendering anything else, you should write your own code to convert your render commands to a bitmap, and render the bitmap to CATL. If you do this, you can often get a 5x increase in render speed.

So, for instance, if you draw CGPath objects to a CATL, they will be stored *multiple times over* in memory, increasing your memory usage, and reducing your rendering speed. Yes: used naively, CATL can make rendering slower, and take more memory.
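A hedged sketch of the workaround: flatten each CGPath into a UIImage once, up front, so that CATL caches cheap image blits instead of replaying path commands (the helper name and the single-colour fill are my assumptions – adapt to your own stroke/fill needs):

```objc
// Pre-render a CGPath into a bitmap ONCE; draw the resulting UIImage
// inside drawRect: instead of the path itself
-(UIImage*) imageForPath:(CGPathRef) path size:(CGSize) size fillColor:(UIColor*) color
{
	UIGraphicsBeginImageContextWithOptions( size, NO /*transparent*/, 0.0 /*device scale*/ );
	CGContextRef context = UIGraphicsGetCurrentContext();
	
	CGContextAddPath( context, path );
	CGContextSetFillColorWithColor( context, color.CGColor );
	CGContextFillPath( context );
	
	UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
	UIGraphicsEndImageContext();
	return result;
}

// ...then in your drawRect: implementation, simply:
//    [cachedImage drawInRect:destinationRect];
```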

Tile Caching

When CATL renders to screen, it only generates Tiles that it doesn’t already have inside its cache. When it hits some arbitrary limit, it deletes some existing tiles. Apple provides zero info on what the limit is, or which tiles get deleted first.

The cache is private, opaque, of “undocumented” size, with “undocumented” behaviour. This – if nothing else – makes CATL useless in many real-world cases. Opaque caches are evil.

Also, c.f. notes above on Tiles: this is a RENDER COMMAND cache, not a BITMAP cache. If you’re going to do render commands (i.e. 95% of rendering in a modern iOS app!), make sure you generate bitmaps on the fly, and render those to the CATL instead.

LOD (Levels of Detail)

CATL innately supports zooming: it has a special feature where you can insert it inside a UIScrollView, and it will intelligently redraw itself *at higher resolution* whenever the UIScrollView zooms in.

(NB: you can also achieve this behaviour manually, without a UIScrollView, by manually updating the low-level CALayer properties that tell a layer the size/area/zoom it should render with)

When you zoom (the complex process where you change the CALayer size, frame, contentScale, transform, etc), the CATL simply creates a whole NEW set of Tiles, and adds them to its cache. The old ones are NOT deleted (unless it runs out of space).

If you then zoom back out again, CATL will render very quickly, because it has the data cached. Probably.

i.e. the CATL cache looks something like this:

  • TILE1 — ZOOM = 1:1 — SIZE = 256×256 — POSITION = 0,0
  • TILE2 — ZOOM = 1:1 — SIZE = 256×256 — POSITION = 0,1
  • TILE3 — ZOOM = 1:1 — SIZE = 256×256 — POSITION = 1,0
  • TILE4 — ZOOM = 1:2 — SIZE = 256×256 — POSITION = 0,0
  • TILE5 — ZOOM = 1:2 — SIZE = 256×256 — POSITION = 0,1
  • TILE6 — ZOOM = 1:2 — SIZE = 256×256 — POSITION = 1,0


NB: what follows comes from trial and error and educated guesses. I’d be very happy for someone from Apple to correct this with what’s *actually* happening – but I think this is close enough for us to understand how to use CATL.

The process that CATL uses goes something like this:

  1. Look at the layer’s contentsScale (I think that’s the property it uses? Or … maybe it reads the affineTransform property instead?), and the layer’s “normal” bounds (widthxheight)
  2. Use the “scale” info to decide “how many on-screen pixels” a standard tile would cover
    • e.g. if you’ve zoomed-in by a factor of 1.5, and you’re using the default Tile size of 256×256, then the “on-screen” size of a tile is now 384×384
  3. WHILE the “on-screen pixel size” is greater than 2 x the default size, switch LOD level (i.e. use tiles that have smaller and smaller “layer.bounds” size)
  4. Look at the CGRect that the windowing system has told the CATL to “drawRect:” in
    • if you’ve zoomed in, this will be much smaller area than the layer’s normal bounds
    • if you’ve zoomed out, this will be much bigger area than the layer’s normal bounds
  5. Use that CGRect to calculate a list of tile-offsets that are needed to cover the VISIBLE area
    • e.g. for a VISIBLE frame of “{{100,0}, {500,256}}”, and 256×256 tiles, you’d need tiles at: {0,0}, {256,0}, {512,0}.
  6. For each tile that is in the cache, render it immediately *BY REPLAYING RENDER COMMANDS*
    • c.f. above, this can be SLOW. So slow that you see tiles appearing one-by-one on screen
  7. Display to screen
  8. While that’s appearing on screen, in a background thread, do:
    1. For each tile that is NOT in the cache, call the user-written “drawRect:” method to generate the tiles.
    2. Each time a tile completes, schedule an update to the mainthread that will OVERWRITE the screen contents with the new tile
      • i.e. some time after the CATL was rendered on screen (typically “half a second, up to several seconds”), while your main app is now running some other piece of code, small pieces of the CATL start magically appearing
    3. With the default CATL implementation, each new tile is animated in, using a “flash/fade” animation that takes 0.25 seconds per tile
      • For most apps, most of the time, this is painfully slow. Most people subclass CATL, and override the “+(CFTimeInterval) fadeDuration” method to return “0.0” instead.
      • c.f. above: even with a fadeDuration of 0.0, you can often STILL “see” the tiles appear on screen, because CATL uses an inefficient tile rendering / tile caching algorithm

iOS project gets: “_objc_read_weak”, referenced from:

I got this recently from Xcode 4 – not the most obvious of error messages, and Google had zero results. So, to help anyone else who gets the same problem…

What’s happened?

Somehow, you’ve included a library in your project that was written for Mac OS X – not iPhone/iPad.

The problem seems to be (note: I’m not 100% sure of this) that the library is using OS X’s Garbage Collection directives. Often, the offending directive appears in just a few places in the project.

If so, I’m surprised this breaks Xcode builds – I’d have thought they’d have set iOS compilation to ignore this stuff.


iOS doesn’t (yet) support GC, and it looks like Apple intends that it never will do. If you’ve got access to the library source code, it seems you can usually just remove the “weak” directive, it’s there to provide a hint to GC, and isn’t actually needed.

Obviously, you don’t want to break the OS X code, so the typical workaround is to take the original line of source:

@property (nonatomic, readonly)  __weak  NSObject *thing;

and wrap it in an OS X only conditional compile:

#if TARGET_OS_IPHONE
@property (nonatomic, readonly) NSObject *thing;
#else
@property (nonatomic, readonly)  __weak  NSObject *thing;
#endif

If you don’t have the source to the affected library…

…then you’re stumped. Best I can suggest is to implement the _objc_read_weak function itself, and make it effectively do nothing – but I haven’t looked at the Apple docs to work out when/why/how it’s invoked.
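For completeness, a very hedged sketch of that last-resort stub – untested, and assuming the missing linker symbol “_objc_read_weak” corresponds to the C function objc_read_weak from the OS X GC runtime:

```objc
#import <objc/objc.h>

// VERY hedged - untested. With no GC on iOS, a "do nothing" read-weak
// can plausibly be just a plain pointer read, with no GC read-barrier:
id objc_read_weak(id *location)
{
	return *location;
}
```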

Loading SVG files on iPhone/iPad: SVGKit (not the javascript one!)

UPDATE SPRING 2012: The main SVG lib described below (SVGKit) now works a LOT better, and is only a short way from loading all mainstream SVG files.

NB: it also has a new URL (due to GitHub’s internals) –

I’ve got a bunch of SVG files – hand-drawn maps for a computer game – which I wanted to load onto iPad, and port the game. The files are standard, straight out of Inkscape (the most popular free vector-editing / SVG-editing program).

Unfortunately, Apple doesn’t provide any rendering-support for SVG, and the nearest they do provide – PDF rendering – is slow and uses too much memory. PDF isn’t easy to render, and it does a great job on accuracy, but even on OS X, Apple’s API is often too slow for real-world use.

So, a quick look around the open-source alternatives…


This is the library I found via googling. It sort-of works, and it was only designed for a very narrow use-case (so: fair enough) … but it has some pretty major problems, and it’s not really workable for general SVG loading:

  1. The project is badly organized – it has iOS-only files referenced in the OS X project, and vice versa
  2. You can’t do a simple “build this and use in another project” – there’s source bugs/typos that have to be fixed by hand first, mostly because of the bad project organization.
  3. The interface to your Cocoa / iOS app is weak – it’s not designed to integrate cleanly, and it makes you jump through hoops that most programmers would find makes life hard


While trying to fix SVGQuartzRenderer, I found SVGKit by accident. There are two projects with this name – one is a Javascript library, the other is an Objective-C library. They appear to have nothing in common except the name.


SVGKit worked pretty well out of the box – you can do a build direct for iPad from their sample project, and it loads up into an “SVG Browser” that lets you try a couple of example SVG’s included in the project.

TL;DR – do NOT download the main version, it’s out of date and buggy, and the original author has disappeared. But there are high-quality forks you can download instead that work very well (instructions below).

UPDATE SPRING 2012: the project now has a team of contributors and admins, and the “main version” should soon be up to production quality

SVGKit installation / getting started

Tragically, the install instructions on the front page of the github project are simply … wrong. There’s no way you can use it in your own projects with those instructions. The sample projects work, so it was a case of doing a line by line compare to work out what special magic settings were needed.

UPDATE SPRING 2012: All the instructions have been re-written (I updated a lot of them myself :)) – use the instructions on the page

Corrected instructions are:

  1. Drag/drop (or copy/paste) the “iOS” and “Core” folders into your Xcode project
  2. edit the Build Phases, and add to the Libraries phase: libxml2 (this gets Xcode4 to add it as a framework/dylib)
  3. edit “Header Search Paths” and add /usr/include/libxml2 (this gets Xcode4 to actually USE it when building)
  4. edit your Build Settings, and set “GCC_VERSION” to be: “”
  5. add “QuartzCore.framework” to your iOS project.

The only one of those that’s particularly unusual is changing the Compiler to LLVM 1. If you don’t … the project won’t compile.

SVGKit: basic SVG files … use Reklis’ fork

Then I started opening Inkscape-generated files in SVGKit. Sadly, most files made in Inkscape will – literally – crash SVGKit. There are major bugs in the parser: a combination of mis-reading the SVG spec, and some really sloppy C code (buffer overruns in the parser! Ouch).

Fortunately, the parser has been completely rewritten by reklis, whose fork now seems to be the “de facto” release of SVGKit. (Incidentally, I tried emailing the original author, who I think is the only one who can update the main project, but his website email link is broken :(. I’ve tweeted him too; hopefully he can pass the project on to reklis to maintain.)

UPDATE SPRING 2012: … the original author has now made it a shared project, with multiple admins.

…but there were still some major bugs. More than half of the Inkscape files wouldn’t render even vaguely correctly (although all the crashing bugs are fixed).

I delved into this, and it turned out to be two very small remaining bugs in the parser, which I was able to fix quite quickly, and merge back into reklis’ fork – if you grab his fork now, it should work pretty well.

SVGKit: …advanced SVG files

Unfortunately, the really complex / rich SVG files still fail in SVGKit. With my fixes, most of them render approximately correctly – e.g. I have a version of the famous Tiger.SVG which gets the main outline exactly correct, but loses all the internal colours and objects.

Others – such as the almost-as-famous Lion.SVG – fail completely, you just get a big blob of colourless trash on screen :(.

UPDATE: Lion.SVG now works OK. Colours are slightly wrong, but the complex image is rendering well – you actually get a lion now 🙂

However, all the Inkscape-authored files I’ve got are working fine, with all the tools (lines, curves, “freehand”) working as expected. That’s enough for most developer projects.

Guesses at fixing the remaining SVGKit bugs…

Just in case anyone reading this needs a perfect SVG renderer on iOS, and has the time to try and fix it, here’s some thoughts on what to look at next.

My guess is that the vast majority will render the correct *shape* (I’m confident those bugs are all fixed) – but there are some advanced curve shapes that no-one has implemented yet. I suspect that Lion.SVG is using them, hence why it’s such a big mess when it renders.

Separately, I think there’s an issue with z-order layering and/or the implementation of the “fill” command in SVG, that’s causing things like Tiger.SVG to be the right shape, but missing all their internal detail.

Hopefully, someone else can fix those soon. For now, I can say that everything I’ve got / made in Inkscape is working fine, which is good enough for me to get on with my game project.

UPDATE SPRING 2012: There is experimental support for CSS styles (which make the colours correct for every SVG file) – it’s not yet merged to the main fork (still needs testing).

Also, there is partial support for the “transform” attribute – currently only handles translation, not rotation. So, for instance, SVG files that have rotational symmetry (e.g. compass roses) fail badly. But this is coming soon…

Finally, there is partial support for SVG Text.

When all three of those have been tested and merged, SVGKit should be loading almost all SVG files correctly.

Handling crashes and NSAssert gracefully in a Production App

NSAssert? What? Why? (skip if you’re familiar with this)

During development, it’s standard “best practice” to use NSAssert liberally, alerting you early if you have unexpected bugs in your code, for instance:

-(NSString*) processItem:(int) index
{
	NSAssert( index < [myArray count], @"Index is bigger than number of items in the myArray array" );
	... // rest of the method goes here
}

Unit Tests are an even better approach, but there are situations – e.g. complex algorithms – where you want to perform sanity-checks in the middle of code.

NSAssert in Xcode (Apple’s default setup for new projects)

Apple has hooked up NSAssert so that in Debug builds (by default: anything on the simulator or on the device using Development mode) it stops your app, and immediately tells you what’s gone wrong.

When you do a Release build (by default: anything you send out as Ad-Hoc, or to the Apple App Store), Apple automatically replaces your NSAssert calls with blank lines – nothing happens. Otherwise your app would crash in many situations where it’s probably OK to keep running.

But that means you lose this info that could be very valuable in discovering if your app has stopped working, and how/why – e.g. if Apple changed something in an iOS update that now breaks your app.

Ideally, we want to do something different for Release builds.

NSAssertionHandler to the rescue

On older platforms, you often had to manually do a #define to change the assert() function to something different in different builds. That was mildly annoying – complex macros are slightly harder to maintain than straight code – and not very configurable.

Fortunately, Apple uses an OOP approach via NSAssertionHandler. You can implement *multiple* subclasses of NSAssertionHandler, and switch between them at runtime.

First, create a subclass of NSAssertionHandler:

@implementation AssertionHandlerLogAll

-(void)handleFailureInFunction:(NSString *)functionName file:(NSString *)fileName lineNumber:(NSInteger)line description:(NSString *)format, ...
{
	NSLog(@"[%@] Assertion failure: FUNCTION = (%@) in file = (%@) lineNumber = %ld", [self class], functionName, fileName, (long)line );
}

-(void)handleFailureInMethod:(SEL)selector object:(id)object file:(NSString *)fileName lineNumber:(NSInteger)line description:(NSString *)format, ...
{
	NSLog(@"[%@] Assertion failure: METHOD = (%@) for object = (%@) in file = (%@) lineNumber = %ld", [self class], NSStringFromSelector(selector), object, fileName, (long)line );
}

@end


…then, in your AppDelegate class (whichever class implements UIApplicationDelegate):

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
	// Override point for customization after application launch.
	NSLog(@"[%@] Setting a custom assertion handler", [self class] );
	NSAssertionHandler* customAssertionHandler = [[AssertionHandlerLogAll alloc] init];
	[[[NSThread currentThread] threadDictionary] setValue:customAssertionHandler forKey:NSAssertionHandlerKey];
	
	// NB: your windowing code goes here - e.g. self.window.rootViewController = self.viewController;
	
	return YES;
}

Assertions – upload to Flurry, perhaps?

But why limit yourself to just one assertion handler?

If you’re using Flurry (or Google Analytics etc), why not automatically log + upload each assertion to Flurry, and have them show up in your Dashboard? That way, you can see if ANY assertions are firing – but also get a count of:

  1. how many
  2. which hardware (is this only happening on an iPhone 3GS, for instance?)
  3. which iOS version (perhaps it’s due to a bug in iOS v 4.2, but not in iOS 4.3?)

Something like this:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
	// Override point for customization after application launch.
	
	/** start Flurry */
	[FlurryAPI startSession:@"whatever your private Flurry Key is"];
	
	NSLog(@"[%@] Setting a custom assertion handler", [self class] );
	NSAssertionHandler* customAssertionHandler = [[AssertionHandlerSendToFlurry alloc] init];
	[[[NSThread currentThread] threadDictionary] setValue:customAssertionHandler forKey:NSAssertionHandlerKey];
	
	// NB: your windowing code goes here - e.g. self.window.rootViewController = self.viewController;
	
	return YES;
}

Archiving old GIT projects on Beanstalk or Github

EDIT: re-written to make it much clearer what I’m trying to achieve, and why.

Why Archive? Why not Archive?

The pricing model for hosted git has settled down to:

  1. Pay per month
  2. …for an upper limit on the number of active repositories
  3. ……measured by simultaneously limiting the number of “projects” (separate git repositories) and “people” (user accounts that are allowed to access projects)

Their aim seems to be: charge for peak concurrent usage, rather than for total historical usage.

For instance: if you have 20 user accounts allowed, and you use all of them, then delete 10, you can create 10 new ones. The vendor will NOT delete all the history of the deleted accounts – they just won’t allow you to login as those users any more.

This is probably set up that way to make sure:

  1. their revenue scales with their costs – these days, with scalable hardware costs, that’s straightforward.
  2. their prices scale with the budget-size of their customers

Some SAAS vendors selling at the same kind of price level / model allow this same “disabling” function on whole projects, not just on people. That enables you to e.g.:

  1. Work on 1 new project day-to-day
  2. Have 10 “old” projects that are no longer active (previously shipped)
  3. Reserve the right to temporarily activate any ONE of the 10 – e.g. to enact a quick bugfix / maintenance release
  4. …while only paying for “2 simultaneous projects”

This works fine – the resource usage is closer to that of a company with only 2 active projects than it is to a company with 12 active projects, and the price you’re able to pay is too.

Unfortunately, at the moment neither Beanstalk nor Github offer this – although they’re both great git-hosting services.

Archive options

In practice, we need to support these use-cases:

  1. Old project that MIGHT still be around in a local repository needs small tweaks and a quick re-launch: typically only 1-3 files need to be edited.
  2. Very old project that definitely isn’t in a local repository any more: ditto
  3. New project needs to solve a problem that was previously solved in an old project: need read ONLY access to the full project to revise “how we fixed this last time”

Before signing up to a git host, I asked each of them how they coped with these use-cases (in some detail); each company responded with, essentially:

We don’t have any support for this. Best we can suggest: copy all old projects into a single repository

Archiving git projects via copy/paste

What happens when you try to do this?

Well, for a start, you can’t “just copy” the contents of one git project into another.

Git uses hidden directories to manage its source control, and is hugely reliant upon them. This causes a handful of problems relating to file identity (“is file X still file X?”), and copying between repositories is one of them.

Naively, you’d try to do this:

  1. Create a global “archive” repository where you will move *all* old projects (this was initially recommended to us by Beanstalk/Github)
  2. PULL the latest copy of the git repos you want to archive
  3. MOVE the root directory into your “archive” repository
  4. PUSH the “archive” repository to the git-host

In practice, what happens is:

  1. Every modern Operating System moves the .git hidden directory too
  2. …so you fail to do a checkin (and at this point: a lot of the current Git GUI clients will break in interesting ways; it’s a good test for a new client if you’re considering buying one)
  3. …so you fail to do the PUSH
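You can see the failure for yourself from the command line. This is a hypothetical demo (temp paths, throwaway repos) of what git actually records when a project is moved wholesale, .git and all, into the archive repository:

```shell
# Demo of why the naive MOVE fails: the moved project's hidden .git
# directory comes along, so the archive repo records it as an embedded
# repository (a "gitlink", mode 160000) instead of as real files.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q archive
git init -q old-project
cd old-project
echo "hello" > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "initial"
cd ..
mv old-project archive/       # every modern OS moves .git too
cd archive
git add . 2>/dev/null || true # recent git versions warn about the embedded repo here
git ls-files -s               # old-project is staged as mode 160000: its files were NOT added
```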

Copy/pasting by discarding the history

EDIT: if git-archive works for you, I’d use that instead. Everyone I know who’s used it has had at least some problems, so I’m leaving this section for now – but scroll down to the next section and check git-archive too.

On anything unix-based (i.e. linux + OS X) the simple path is to use the command-line (or “terminal” as OS X calls it)

  1. “cd [the root directory of your project you want to archive]”
  2. “cd ..”
  3. “cp -R [the directory of project to archive] [the root directory of the “archive” repository]/[name of the project you want to archive]”
  4. “cd [the root directory of the “archive” repository]/[name of the project you want to archive]”
  5. “chmod -R u+rw .git” (otherwise every write-protected file will prompt you for confirmation before it can be deleted)
  6. “alias rm=rm” (otherwise, if your shell aliases rm to “rm -i”, you’ll have to say “yes” to every individual file delete)
  7. “rm -R .git”

…then, in your git-client:

  1. COMMIT the “archive” repository

…then, in your git-host service:

  1. DELETE the old project

The key points here:

  1. You’re copying the repository, not moving it – so the original is unaffected (if you don’t have to delete it yet, you might as well leave it intact)
  2. You’re removing all git status from the files: it becomes a virgin archive
  3. DISADVANTAGE: you’re throwing away (deleting) all history for the old project.

Using Git Archive

Git archive isn’t perfect (archiving is a complex task, and from what I’ve seen git archive doesn’t cover every use-case). I’ve met a couple of people who’ve tried it and given up (e.g. because it didn’t support submodules), but it might work for you – worth a try.
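If you do try it, the basic invocation is a one-liner. This is a sketch with placeholder paths – it exports a snapshot of HEAD as a tarball, with no .git directory and no history (and, as noted, no submodule contents):

```shell
# Export the current HEAD of a repo as a self-contained tarball.
# Paths are placeholders - substitute your own.
cd "$HOME/code/my-old-project"
git archive --format=tar.gz --output="$HOME/archives/my-old-project.tar.gz" HEAD
```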

Other alternatives

In future, I’m going to try out some of the many other alternatives listed on the SO page linked above.

Xcode4: A script that creates / adds files to your project

This ought to be easy: it is the single most common function that a build system has to do.

So it’s rather depressing that Apple doesn’t support it. Every other build system (and IDE) I’ve ever used supports it out of the box. Apple for some unknown reason has designed Xcode4 to make this difficult. Strange.

The ultimate “fix” is just one line of code – but it’s a line of code that many people are afraid to write, because it seems like it would be fragile, and feels like it MUST be wrong. Most of this post is explaining why, in fact, it’s correct – and walking you through the other things we attempted before settling on this fix.

How you’d expect it to work

We’d expect: Step 1 – Add a “Run Script” phase

Apple has a simple process for adding scripts. Their script-management is very weak (it’s not been updated to modern standards in a long time), so you only have one option:

  1. Select the project itself, inside the project (it’s the thing with the blue icon)
  2. In the main window, select the target you’re building
  3. Click the Build Phases tab
  4. Hidden in the extreme bottom right corner of the screen (by a bad UI designer) is a button, “Add Build Phase”, that lets you add a “Run Script” phase

OK, done. Wait … it’s asking us for “input files” and “output files”. What does that mean?

Well, whatever you were expecting is probably wrong: remember, this is a very weak build-system. It’s actually asking you:

  1. Every file you intend to read from, please tell me in advance, so I can cache your output if those files haven’t changed
  2. Every file you intend to write to, please tell me in advance, so I can cache the BUILD if your output hasn’t changed

(all modern build systems do the second feature automatically, and good ones have also been doing the first one automatically for over a decade)

Fine, so you fill those in. NB: even though you should be able to select input files from a Finder interface, Apple has disabled the Finder on this GUI, so you have to type them fully by hand.

What happens: The script runs, but the files vanish

If you use some debugging in your script, you can prove it ran by looking at the build log.

But the output files are NOT included in the build (even though we told Apple each file, by name).

Hmm. Well, maybe we need to use the only other option Apple provides us:

We’d expect: Step 2 – Add a “Copy Files” phase?

Actually, this should be automatic; we just told Xcode exactly what files we were creating, and where we were creating them.

But some badly-written build-systems are so dumb that you have to tell them what you’ve already told them. So, when step 1 above doesn’t work, we try step 2.

  1. Add a “Copy files” phase immediately after the “Run Script” phase
  2. Type in the name(s) of the file(s) generated in the “Run Script” phase
  3. …FAIL! Xcode4 is hardcoded to prevent you doing this

Yes, really. The “Add” button is grayed-out if you attempt to do this. Apple doesn’t want you to do it.

There is literally no way to take the output of a script and include it in your app.

Solving the problem with Xcode4

Instead, you have to write *into your script* the parts of Apple’s build system that they have barred you from accessing. Obviously, Apple is using “copy the output of a script” to take your compiled files and put the output into the app – but they won’t let you do that.

Here’s where it gets tricky: Apple’s documentation for Xcode4 is so misleading that I’d call it “incorrect”: it tells you to use the Copy Files phase described above (which we already know is impossible), and if you go digging in their build-system docs, it gives you various other places it suggests you manually copy your output files to.

But they don’t work, either because they’re missing key elements, or because Xcode4 ignores their content when making the build.

Pre-Solution: edit your attempts at Steps 1 and 2 above

Firstly, since Apple is ignoring the “output files” that we so carefully specified, and their docs say they will just re-run the script every time if the output files are blank … remove all your “output files” and “input files”. It’s the only safe way forwards – otherwise you’ll have to hand-maintain this error-prone dialog box forever.

Secondly, delete the Copy Files phase – Apple won’t let you use it. Give up, and move on.

Solution: Manually “insert” files into the final file

I tried everything, and scoured the Apple docs, the Apple developer guides, StackOverflow, and Google. I found two things that worked: an easy one that requires some typing, and an extremely long-winded and difficult one that hacks Xcode’s GUI to force it to do what it was supposed to in the first place.

Too much effort, too much to go wrong – and too hard to maintain. Instead, I went for the quick and easy method. Fortunately, there’s a pair of poorly documented Apple build-variables that together make this easy and (almost) idiot-proof. At least, they do if you know they exist, and once you’ve guessed what they actually do (as opposed to what the docs suggest).

Add the following to the very end of your script (or get your tool to do it for you):


…or just write your script so that it writes output directly to:


If you use script debugging, you’ll find that this deposits things directly inside the built product – and it works fine.
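A sketch of this approach, assuming (my best guess – verify against your own build log) that the two build-variables in question are TARGET_BUILD_DIR and UNLOCALIZED_RESOURCES_FOLDER_PATH, both of which Xcode defines and which together point at the Resources folder inside the .app being built. The generated filename is a placeholder:

```shell
# Sketch (assumption): copy the script's generated file into the Resources
# folder of the .app being built. TARGET_BUILD_DIR and
# UNLOCALIZED_RESOURCES_FOLDER_PATH are set by Xcode during the build;
# "MyGeneratedFile.plist" is a placeholder for your script's output.
cp "MyGeneratedFile.plist" "${TARGET_BUILD_DIR}/${UNLOCALIZED_RESOURCES_FOLDER_PATH}/"
```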

Simple Android template for new Game using an Entity System

If you’re working with Android, you quickly find that Google forgot to include some core things in the OS. Getting a “Hello World” application to run on your phone requires many hundreds of lines of code, 90% of which you’ll never change from application to application (i.e. it should have been built-in to the OS).

NB: you can do a fake “Hello World” on Android in literally 10 lines of code – but it’s not a real app. It only exists to pretend that Android is correctly configured by default – i.e. it’s a marketing hack :).

If you’ve been using the free Entity System libraries, note that the Java versions need to be manually integrated into each new Android project. The Objective-C versions don’t need any integration – the default templates for iPhone/iPad projects work fine – it’s just the Java/Android ones that need work.

So, here’s a pre-made template you can use for starting new Android games / Entity-system projects. It starts up and draws a starfield, so you can confirm that the render-loop is running, and animation is working. It also shows that auto-rotate is configured and running (by default, Google doesn’t provide this):

NB: the code provided works fine – but the Activity-integration could probably be done a lot better. I just ripped it out of an old project, so I’m sure it works – but it could be a lot more elegant.


The Entity Systems libs are constantly being updated, so the project above does NOT include any particular version – you have to download the lib you want separately. Install instructions are in the README (which also shows up as the main body of that webpage above).

NB: if you don’t know what Entity Systems are, none of this is of help to you! Go have a look at Entity Systems are the future of MMOG development – Part 1.

(it’s a coding / design technique for making computer games faster / easier to write and maintain)