
Effective Objective-C 2.0: Item 41: Prefer Dispatch Queues to Locks for Synchronization




Sometimes in Objective-C, you will come across code that you’re having trouble with because it’s being accessed from multiple threads. This situation usually calls for the
application of some sort of synchronization through the use of locks. Before GCD, there were two ways to achieve this, the first being the built-in synchronization block:


- (void)synchronizedMethod {
    @synchronized(self) {
        // Safe
    }
}

This construct automatically creates a lock based on the given object and waits on that lock until it executes the code contained in the block. At the end of the code block, the lock is released. In the example, the object being synchronized against is self. This construct is often a good choice, as it ensures that each instance of the object can run its own synchronizedMethod independently. However, overuse of @synchronized(self) can lead to inefficient code, as each synchronized block will execute serially across all such blocks. If you overuse synchronization against self, you can end up with code waiting unnecessarily on a lock held by unrelated code.
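
For instance, consider a sketch in which two unrelated methods on the same object both synchronize against self (doSlowWorkOnListA and doQuickWorkOnListB are hypothetical helpers, not from the book):

```objc
// Sketch: both methods take the same implicit lock (self), so a thread
// blocked in methodA holds the lock that methodB needs. The two methods
// serialize even though they touch entirely separate data.
- (void)methodA {
    @synchronized(self) {
        [self doSlowWorkOnListA];   // hypothetical helper
    }
}

- (void)methodB {
    @synchronized(self) {
        [self doQuickWorkOnListB];  // hypothetical helper; waits on methodA
    }
}
```

Giving each piece of state its own synchronization object (or, as this item argues, its own queue) avoids this false contention.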

The other approach is to use the NSLock object directly:


_lock = [[NSLock alloc] init];

- (void)synchronizedMethod {
    [_lock lock];
    // Safe
    [_lock unlock];
}

Recursive locks are also available through NSRecursiveLock, allowing one thread to take out the same lock multiple times without causing a deadlock.
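
As a brief sketch of why that matters: a method that takes a lock and then calls itself would deadlock with a plain NSLock, because the second -lock call waits forever for the first to release. NSRecursiveLock permits the relock (the _items ivar here is hypothetical):

```objc
// Sketch: recursive locking. With NSLock, the nested -lock call below
// would deadlock; NSRecursiveLock lets the same thread relock freely,
// as long as each -lock is balanced by an -unlock.
_recursiveLock = [[NSRecursiveLock alloc] init];

- (void)processItemsFromIndex:(NSUInteger)index {
    [_recursiveLock lock];
    if (index < _items.count) {
        // Recursive call relocks the same lock on the same thread.
        [self processItemsFromIndex:index + 1];
    }
    [_recursiveLock unlock];
}
```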

Both of these approaches are fine but come with their own drawbacks. For example, synchronization blocks can suffer from deadlock under extreme circumstances and are not necessarily efficient. Direct use of locks can be troublesome when it comes to deadlocks.

The alternative is to use GCD, which can provide locking in a much simpler and more efficient manner. Properties are a good example of where developers find the need to put synchronization, known as making the property atomic. This can be achieved through use of the atomic property attribute (see Item 6). Or, if the accessors need to be written manually, the following is often seen:


- (NSString*)someString {
    @synchronized(self) {
        return _someString;
    }
}

- (void)setSomeString:(NSString*)someString {
    @synchronized(self) {
        _someString = someString;
    }
}

Recall that @synchronized(self) is dangerous if overused, because all such blocks are synchronized with respect to one another. If multiple properties do that, each will be synchronized with respect to all the others, which is probably not what you want. All you really want is for access to each property to be synchronized individually.

As an aside, you should be aware that although this goes some way to ensuring thread safety, it does not ensure absolute thread safety of the object. Rather, access to the property is atomic. You are guaranteed to get valid results when using the property, but if you call the getter multiple times from the same thread, you may not necessarily get the same result each time. Other threads may have written to the property between accesses.
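
As a sketch of that distinction: each individual read below returns a valid object, yet the two reads can still disagree:

```objc
// Sketch: each getter call is individually atomic, so a and b are both
// valid objects, but another thread may replace the value between the
// two calls, so a and b are not necessarily the same string.
NSString *a = [object someString];
// <-- another thread may run [object setSomeString:@"changed"] here
NSString *b = [object someString];
// a and b are each valid, but possibly unequal.
```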

A simple and effective alternative to synchronization blocks or lock objects is to use a serial
synchronization queue. Dispatching reads and writes onto the same queue ensures synchronization.
Doing so looks like this:


_syncQueue =
    dispatch_queue_create("com.effectiveobjectivec.syncQueue", NULL);

- (NSString*)someString {
    __block NSString *localSomeString;
    dispatch_sync(_syncQueue, ^{
        localSomeString = _someString;
    });
    return localSomeString;
}

- (void)setSomeString:(NSString*)someString {
    dispatch_sync(_syncQueue, ^{
        _someString = someString;
    });
}

The idea behind this pattern is that all access to the property is synchronized because the GCD queue that both the setter and the getter run on is a serial queue. Apart from the __block syntax in the getter, required to allow the block to set the variable (see Item 37), this approach is much neater. All the locking is handled down in GCD, which has been implemented at a very low level and carries many optimizations. Thus, you don't have to worry about that side of things and can instead focus on writing your accessor code.

However, we can go one step further. The setter does not have to be synchronous. The block that sets
the instance variable does not need to return anything to the setter method. This means that you can change the setter method to look like this:


- (void)setSomeString:(NSString*)someString {
    dispatch_async(_syncQueue, ^{
        _someString = someString;
    });
}

The simple change from synchronous to asynchronous dispatch provides the benefit that the setter is fast from the caller's perspective, while reading and writing are still executed serially with respect to each other. One downside, though, is that if you were to benchmark this, you might find that it's slower: with asynchronous dispatch, the block has to be copied. If the time taken to perform the copy is significant compared to the time the block takes to execute, it will be slower. So in our simple example, it's likely to be slower. However, the approach is still good to understand as a potential candidate if the block that is being dispatched performs much heavier tasks.
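
One rough way to check this on your own types (a sketch only; CFAbsoluteTimeGetCurrent is coarse, repeated runs give steadier numbers, and object and syncQueue here stand in for your instance and its queue):

```objc
// Hypothetical micro-benchmark sketch: time many setter calls, then
// drain the queue with an empty dispatch_sync before stopping the
// clock so that queued asynchronous blocks are included in the total.
CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
for (NSUInteger i = 0; i < 100000; i++) {
    [object setSomeString:@"value"]; // setter under test
}
dispatch_sync(syncQueue, ^{});       // wait for all pending blocks
NSLog(@"elapsed: %f s", CFAbsoluteTimeGetCurrent() - start);
```

Run it once with the dispatch_sync setter and once with the dispatch_async setter, and compare.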

Another way to make this approach even faster is to take advantage of the fact that the getters
can run concurrently with one another but not with the setter. This is where the GCD approach comes into its own. The following cannot be easily done with synchronization blocks or locks. Instead of using a serial queue, consider what would happen if
you used a concurrent queue:


_syncQueue =
    dispatch_queue_create("com.effectiveobjectivec.syncQueue",
                          DISPATCH_QUEUE_CONCURRENT);

- (NSString*)someString {
    __block NSString *localSomeString;
    dispatch_sync(_syncQueue, ^{
        localSomeString = _someString;
    });
    return localSomeString;
}

- (void)setSomeString:(NSString*)someString {
    dispatch_async(_syncQueue, ^{
        _someString = someString;
    });
}

As it stands, that code would not work for synchronization. All reads and writes are executed on the same queue, but because that queue is concurrent, reads and writes can all happen at the same time. This is exactly what we were trying to prevent in the first place! However, a simple GCD feature called a barrier can solve this. The functions for enqueuing barrier blocks are as follows:


void dispatch_barrier_async(dispatch_queue_t queue,
                            dispatch_block_t block);
void dispatch_barrier_sync(dispatch_queue_t queue,
                           dispatch_block_t block);

A barrier is executed exclusively with respect to all other blocks on its queue. Barriers are relevant only on concurrent queues, since all blocks on a serial queue are already executed exclusively with respect to one another. When a queue is being processed and the next block is a barrier block, the queue waits for all currently running blocks to finish and then executes the barrier block alone. When the barrier block finishes executing, processing of the queue continues as normal. Note that barriers take effect only on concurrent queues you create yourself with dispatch_queue_create; on a global concurrent queue, the barrier functions behave like their non-barrier counterparts.

Barriers can be used with the property example in the setter. If the setter uses a barrier block, reads of the property will still execute concurrently, but writes will execute exclusively. Figure 6.3 illustrates the queue with many reads and a single write queued.



Figure 6.3 Concurrent queue with reads as normal blocks and writes as barrier blocks. Reads are executed concurrently; writes are executed exclusively, as they are barriers.

The code to achieve this is simple:


_syncQueue =
    dispatch_queue_create("com.effectiveobjectivec.syncQueue",
                          DISPATCH_QUEUE_CONCURRENT);

- (NSString*)someString {
    __block NSString *localSomeString;
    dispatch_sync(_syncQueue, ^{
        localSomeString = _someString;
    });
    return localSomeString;
}

- (void)setSomeString:(NSString*)someString {
    dispatch_barrier_async(_syncQueue, ^{
        _someString = someString;
    });
}

If you were to benchmark this, you would certainly find it quicker than using a serial queue. Note that you could also
use a synchronous barrier in the setter, which may be more efficient for the same reason as explained before. It would be prudent to benchmark each approach and
choose the one that is best for your specific scenario.


Things to Remember


Dispatch queues can be used to provide synchronization semantics and offer a simpler alternative to @synchronized blocks or NSLock objects.

Mixing synchronous and asynchronous dispatches can provide the same synchronized behavior as with normal locking, but without blocking the calling thread in the asynchronous dispatches.

Concurrent queues and barrier blocks can be used to make synchronized behavior more efficient.