Locking at an appropriate granularity
2016-01-10 05:49
If multiple threads are waiting for the same resource (the cashier at the checkout), then if any thread holds the lock for longer than necessary, it will increase the total time spent waiting (don't wait until you've reached the checkout to start looking for the cranberry sauce). Where possible, lock a mutex only while actually accessing the shared data; try to do any processing of the data outside the lock. In particular, don't do any really time-consuming activities like file I/O while holding a lock. File I/O is typically hundreds (if not thousands) of times slower than reading or writing the same volume of data from memory. So unless the lock is really intended to protect access to the file, performing I/O while holding the lock will delay other threads unnecessarily (because they'll block while waiting to acquire the lock), potentially eliminating any performance gain from the use of multiple threads.
std::unique_lock works well in this situation, because you can call unlock() when the code no longer needs access to the shared data and then call lock() again if access is required later in the code:
void get_and_process_data()
{
    std::unique_lock<std::mutex> my_lock(the_mutex);
    some_class data_to_process = get_next_data_chunk();
    my_lock.unlock();    // don't need the mutex locked across the call to process()
    result_type result = process(data_to_process);
    my_lock.lock();      // relock the mutex to write the result
    write_result(data_to_process, result);
}
You don't need the mutex locked across the call to process(), so you manually unlock it before the call and then lock it again afterward.
Hopefully it's obvious that if you have one mutex protecting an entire data structure, not only is there likely to be more contention for the lock, but also the potential for reducing the time that the lock is held is less. More of the operation steps will require a lock on the same mutex, so the lock must be held longer. This double whammy of a cost is thus also a double incentive to move toward finer-grained locking wherever possible.
As this example shows, locking at an appropriate granularity isn't only about the amount of data locked; it's also about how long the lock is held and what operations are performed while the lock is held. In general, a lock should be held for only the minimum possible time needed to perform the required operations. This also means that time-consuming operations such as acquiring another lock (even if you know it won't deadlock) or waiting for I/O to complete shouldn't be done while holding a lock unless absolutely necessary.