
Caffe Official Tutorial: Nets, Layers, and Blobs (a breakdown of the Caffe model)

2015-08-19 16:40
Official tutorial page: http://caffe.berkeleyvision.org/tutorial/net_layer_blob.html

Overview:

Caffe's computational model is a layered framework: from bottom to top, data moves from the input through the network to the loss. Data and gradients flow through the net in the forward and backward passes.

Blobs carry the data communicated between connected layers.

The solver configures the model and drives its optimization.

一、Blob storage and communication

For example, in a 4D blob, the value at index (n, k, h, w) is physically located at index ((n * K + k) * H + h) *
W + w.

All four indices n, k, h, and w are zero-based. The offset is built by repeatedly multiplying by the extent of the next dimension and adding the next index, so w varies fastest and n slowest (row-major layout).

Number / N is the batch size of the data. Batch processing achieves better throughput for communication and device processing. For an ImageNet training batch of 256 images N = 256.

Channel / K is the feature dimension e.g. for RGB images K = 3.



二、Implementation Details

A Blob stores two chunks of memory: data (the values passed along the net) and diff (the gradients computed during backpropagation).

Blob contents can be synchronized between CPU and GPU on demand.

const Dtype* cpu_data() const;
Dtype* mutable_cpu_data();

The const accessor never changes the values; use the mutable_ version whenever you intend to write, so the blob knows which side (CPU or GPU) holds the latest copy.


If you want to check out when a Blob will copy data, here is an illustrative example:
// Assuming that data are on the CPU initially, and we have a blob.
const Dtype* foo;
Dtype* bar;
foo = blob.gpu_data(); // data copied cpu->gpu.
foo = blob.cpu_data(); // no data copied since both have up-to-date contents.
bar = blob.mutable_gpu_data(); // no data copied.
// ... some operations ...
bar = blob.mutable_gpu_data(); // no data copied when we are still on GPU.
foo = blob.cpu_data(); // data copied gpu->cpu, since the gpu side has modified the data
foo = blob.gpu_data(); // no data copied since both have up-to-date contents
bar = blob.mutable_cpu_data(); // still no data copied.
bar = blob.mutable_gpu_data(); // data copied cpu->gpu.
bar = blob.mutable_cpu_data(); // data copied gpu->cpu.