Concurrency in iOS

There are many articles on concurrency in iOS and Objective-C, but I think this is a great one.  However, I recommend skipping the discussion of threads and going straight to GCD and NSOperationQueue.  There's really no reason to manage your own threads anymore (unless you're porting existing code to iOS), unless of course you like debugging hard-to-find threading issues in your app!

One further caveat: the section that discusses managing shared resources could use a little improvement.  It is correct that, traditionally, one uses a lock to manage access to a shared resource.  However, there is a much better way to do this in iOS: GCD and a serial queue.  Consider the following code example:

@interface MyController ()
@property (nonatomic, assign) dispatch_queue_t sharedResourceQueue;
@property (nonatomic, assign) NSUInteger sharedCounter;
@end

@implementation MyController

- (id)init {
    if ((self = [super init])) {
        _sharedResourceQueue = dispatch_queue_create("com.myidentifier.MyCoolApp.sharedResourceQueue", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)dealloc {
    dispatch_release(_sharedResourceQueue);
    [super dealloc]; // dispatch_release() implies pre-ARC code, so super's dealloc must be called
}

- (void)accessSharedResourceSynchronously {
    dispatch_sync(self.sharedResourceQueue, ^{
        // safely access my shared resource and block the calling thread
        self.sharedCounter++;
    });
}

- (void)accessSharedResourceAsynchronously {
    dispatch_async(self.sharedResourceQueue, ^{
        // safely access my shared resource but don't block the calling thread
        self.sharedCounter++;
    });
}

@end

In the above example I'm creating a serial dispatch queue and managing access to the shared resource using dispatch_sync() and dispatch_async(). Because a serial dispatch queue guarantees that blocks submitted to it execute one at a time, in order, I can ensure that the shared resource is never accessed simultaneously (as long as I don't bypass the queue elsewhere in my class).
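The same queue can also protect reads. Here's a sketch of a hypothetical getter you could add to MyController: it hops onto the serial queue with dispatch_sync() and captures the value in a __block variable, so the read can never interleave with an in-flight increment.

```objc
// Hypothetical addition to MyController: read the counter safely by
// running the read on the same serial queue that guards all writes.
- (NSUInteger)currentCounterValue {
    __block NSUInteger value;
    dispatch_sync(self.sharedResourceQueue, ^{
        value = self.sharedCounter;
    });
    return value;
}
```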

Why is this better than using @synchronized or NSLock? A contended lock requires a trap into the kernel, which causes a context switch and reduces your app's performance. A dispatch_async() onto a serial queue is handled in user space in the common case and doesn't carry that overhead, so you get the behavior of a lock at a much lower cost.

Of course, you still need to watch out for deadlocks (in particular, calling dispatch_sync() on the queue you are already running on), but that's a pitfall with managing access to any shared resource with concurrency.
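To make the pitfall concrete, here's a minimal sketch of the classic GCD mistake (the method name is hypothetical): a dispatch_sync() that targets the serial queue it is already running on will never return.

```objc
// Hypothetical method on MyController illustrating the deadlock trap.
- (void)deadlockExample {
    dispatch_sync(self.sharedResourceQueue, ^{
        // DEADLOCK: this inner dispatch_sync() waits for the serial queue
        // to become free, but the queue is busy running this outer block,
        // so neither block can ever make progress.
        dispatch_sync(self.sharedResourceQueue, ^{
            self.sharedCounter++;
        });
    });
}
```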

There is even more great advice on using dispatch queues in this follow-up article, including a great pattern for multiple readers and a single writer.  My example is a trivial one, but on a concurrent queue, dispatch_barrier_async() lets reads proceed in parallel while each write waits for in-flight reads to finish and then runs exclusively, which can greatly improve performance.
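As a sketch of that reader/writer pattern (a hypothetical variant of MyController, not code from the article): the queue must be created with DISPATCH_QUEUE_CONCURRENT for the barrier to add anything over a plain serial queue.

```objc
// Hypothetical reader/writer variant. The queue would be created in -init with:
//   _sharedResourceQueue = dispatch_queue_create("com.myidentifier.MyCoolApp.rwQueue",
//                                                DISPATCH_QUEUE_CONCURRENT);

- (NSUInteger)readCounter {
    __block NSUInteger value;
    // Multiple readers can run this block concurrently with each other.
    dispatch_sync(self.sharedResourceQueue, ^{
        value = self.sharedCounter;
    });
    return value;
}

- (void)incrementCounter {
    // The barrier block waits for in-flight readers to finish, runs alone,
    // and only then lets new readers back onto the queue.
    dispatch_barrier_async(self.sharedResourceQueue, ^{
        self.sharedCounter++;
    });
}
```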