
Locking at an appropriate granularity

If multiple threads are waiting for the same resource (the cashier at the checkout), then if any thread holds the lock for longer than necessary, it will increase the total time spent waiting (don’t wait until you’ve reached the checkout to start looking for the cranberry sauce). Where possible, lock a mutex only while actually accessing the shared data; try to do any processing of the data outside the lock. In particular, don’t do any really time-consuming activities like file I/O while holding a lock. File I/O is typically hundreds (if not thousands) of times slower than reading or writing the same volume of data from memory. So unless the lock is really intended to protect access to the file, performing I/O while holding the lock will delay other threads unnecessarily (because they’ll block while waiting to acquire the lock), potentially eliminating any performance gain from the use of multiple threads.
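As a minimal sketch of that last point (the names log_mutex, pending_lines, and the file name app.log are illustrative, not from the text), the mutex is held only long enough to move the shared data into a local buffer; the slow file write then happens with the lock already released:

#include <fstream>
#include <mutex>
#include <string>
#include <vector>

std::mutex log_mutex;                    // protects pending_lines, not the file
std::vector<std::string> pending_lines;  // filled by other threads

void flush_log()
{
    std::vector<std::string> local;
    {
        std::lock_guard<std::mutex> guard(log_mutex);
        local.swap(pending_lines);       // grab the shared data quickly
    }                                    // lock released here
    std::ofstream out("app.log", std::ios::app);
    for (auto const& line : local)       // slow file I/O done without the lock
        out << line << '\n';
}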

std::unique_lock works well in this situation, because you can call unlock() when the code no longer needs access to the shared data and then call lock() again if access is required later in the code:

void get_and_process_data()
{
    std::unique_lock<std::mutex> my_lock(the_mutex);
    some_class data_to_process=get_next_data_chunk();
    my_lock.unlock();                       // B: Don't need mutex locked across call to process()
    result_type result=process(data_to_process);
    my_lock.lock();                         // C: Relock mutex to write result
    write_result(data_to_process,result);
}

You don’t need the mutex locked across the call to process(), so you manually unlock it before the call (B) and then lock it again afterward (C).
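Note that std::lock_guard wouldn’t work in this example, because it has no unlock() member; being able to release and reacquire the mutex partway through the function is exactly the flexibility std::unique_lock adds.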

Hopefully it’s obvious that if you have one mutex protecting an entire data structure, not only is there likely to be more contention for the lock, but also the potential for reducing the time that the lock is held is less. More of the operation steps will require a lock on the same mutex, so the lock must be held longer. This double whammy of a cost is thus also a double incentive to move toward finer-grained locking wherever possible.
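As a rough sketch of what finer-grained locking can look like (the account class and its members are made up for illustration), each independently accessed piece of data gets its own mutex, so an operation on one piece never waits for an operation on the other:

#include <mutex>
#include <string>

class account
{
    std::mutex name_mutex;     // guards name only
    std::string name;
    std::mutex balance_mutex;  // guards balance only
    double balance = 0.0;
public:
    void set_name(std::string const& new_name)
    {
        std::lock_guard<std::mutex> guard(name_mutex);    // doesn't block deposits
        name = new_name;
    }
    void deposit(double amount)
    {
        std::lock_guard<std::mutex> guard(balance_mutex); // doesn't block renames
        balance += amount;
    }
};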

As this example shows, locking at an appropriate granularity isn’t only about the amount of data locked; it’s also about how long the lock is held and what operations are performed while the lock is held. In general, a lock should be held for only the minimum possible time needed to perform the required operations. This also means that time-consuming operations such as acquiring another lock (even if you know it won’t deadlock) or waiting for I/O to complete shouldn’t be done while holding a lock unless absolutely necessary.
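As a hedged sketch of the point about acquiring another lock (the class Y and its members are illustrative, not from the text), rather than holding both objects’ mutexes for the whole comparison, each value is copied under its own lock in turn, so neither mutex is held while the other is acquired:

#include <mutex>

class Y
{
    int some_detail;
    mutable std::mutex m;

    int get_detail() const
    {
        std::lock_guard<std::mutex> lock(m);    // held only long enough to copy the value
        return some_detail;
    }
public:
    explicit Y(int sd) : some_detail(sd) {}

    friend bool operator==(Y const& lhs, Y const& rhs)
    {
        if (&lhs == &rhs)
            return true;
        int const lhs_value = lhs.get_detail(); // lhs.m is released before rhs.m is taken
        int const rhs_value = rhs.get_detail();
        return lhs_value == rhs_value;          // compare the copies with no lock held
    }
};

The trade-off is that the two values are read at different moments, so the result reflects a state that may never have existed at any single instant; whether that is acceptable depends on the required semantics.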