caffe.proto Source Code Analysis
2017-08-01 10:38
1. What is a protocol buffer
2. Several important data types in caffe.proto
3. The caffe.proto source
When studying the Caffe source code, starting with caffe.proto is the wise choice. To be clear, I am not the creator of what follows, just the porter.
Original post: http://blog.csdn.net/qq_16055159/article/details/45115359
Introduction
To read the Caffe source code, I believe the first thing to look at is caffe.proto.
It lives under …\src\caffe\proto; the same folder also holds a .pb.cc file and a .pb.h file, both of which are generated by compiling caffe.proto.
caffe.proto defines a large number of structured data types, including:
BlobProto, Datum, FillerParameter, NetParameter, SolverParameter, SolverState, LayerParameter, ConcatParameter, ConvolutionParameter, DataParameter, DropoutParameter, HDF5DataParameter, HDF5OutputParameter, ImageDataParameter, InfogainLossParameter, InnerProductParameter, LRNParameter, MemoryDataParameter, PoolingParameter, PowerParameter, WindowDataParameter, V0LayerParameter
Main text
1. What is a protocol buffer
The following is excerpted from "Google Protocol Buffer 的使用和原理" (Using Google Protocol Buffer and How It Works). Another excellent post that I strongly recommend is "Protocol Buffer技术详解(C++实例)" (Protocol Buffer in Depth, with C++ Examples).
Overview
What is Google Protocol Buffer? If you search the web, you will probably find an introduction like this:
Google Protocol Buffer (Protobuf for short) is Google's in-house mixed-language data standard; at the time of writing, more than 48,162 message formats and more than 12,183 .proto files were already in use. They are used in RPC systems and in persistent data storage systems.
Protocol Buffers is a lightweight, efficient format for storing structured data and can be used to serialize structured data. It is well suited to data storage and to RPC data exchange: a language-neutral, platform-neutral, extensible serialization format for communication protocols, data storage, and more. APIs are currently provided for C++, Java, and Python.
Perhaps, like me, you still did not quite know what Protobuf is after reading such introductions the first time; in that case a simple example should help.
A simple example
Installing Google Protocol Buffer
The Protobuf source code can be downloaded from http://code.google.com/p/protobuf/downloads/list; unpack it, build it, and install it, and you are ready to go.
The installation steps are as follows:
tar -xzf protobuf-2.1.0.tar.gz
cd protobuf-2.1.0
./configure --prefix=$INSTALL_DIR
make
make check
make install
Description of the example
I am going to use Protobuf and C++ to develop a very simple example program.
The program consists of two parts: the first is called Writer and the second Reader.
Writer writes some structured data to a file on disk; Reader reads the structured data back from that file and prints it to the screen.
The structured data used for the demonstration is HelloWorld, which contains two basic pieces of data:
ID, an integer; Str, a string
Writing the .proto file
First we write a proto file that defines the structured data our program will handle; in Protobuf terminology, a piece of structured data is called a Message. A proto file looks much like a data definition in Java or C. Listing 1 shows the proto file for this example.
Listing 1. The proto file
package lm;
message helloworld
{
  required int32 id = 1;   // ID
  required string str = 2; // str
  optional int32 opt = 3;  // optional field
}
A good habit is to take the proto file's name seriously, for example by adopting the naming convention
packageName.MessageName.proto
In the example above, the package is named lm, and it defines a message helloworld with three members: id of type int32, str of type string, and opt of type int32. opt is an optional member, i.e. a message may omit it.
Compiling the .proto file
Once the proto file is written, the Protobuf compiler can compile it into a target language; in this example we will use C++.
Assuming your proto file is stored under $SRC_DIR and you also want the generated files placed there, you can use the following command:
protoc -I=$SRC_DIR --cpp_out=$DST_DIR $SRC_DIR/lm.helloworld.proto
The command generates two files:
lm.helloworld.pb.h, the header that declares the generated C++ class
lm.helloworld.pb.cc, the implementation of that C++ class
The generated header defines a C++ class helloworld, which the Writer and Reader below use to operate on the message. Methods are provided for things such as assigning to the message's members and serializing the message.
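For instance, here is a small sketch of my own (based on the standard proto2 generated API, not taken from the original article) exercising the generated accessors, including the has_opt() test that exists because opt is optional:

#include <iostream>
#include "lm.helloworld.pb.h"

void Demo()
{
  lm::helloworld msg;
  msg.set_id(101);      // set the required int32 field
  msg.set_str("hello"); // set the required string field

  // `opt` is optional, so the generated class also provides has_opt().
  if (!msg.has_opt())
    msg.set_opt(42);

  std::cout << msg.id() << " " << msg.str() << " " << msg.opt() << std::endl;
}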
Writing the Writer and the Reader
As described above, the Writer writes a piece of structured data to disk so that someone else can read it. If we did not use Protobuf, there would be plenty of alternatives. One possible approach is to convert the data to a string and write the string to disk; the conversion can be done with sprintf(), which is very simple: the number 123 becomes the string "123".
That seems fine, but on closer inspection this approach puts a heavy burden on whoever writes the Reader: the Reader's author must understand the Writer's details. For example, "123" could be the single number 123, but it could also be the three numbers 1, 2, and 3, and so on. So the Writer would also have to define some delimiter character so the Reader can parse correctly, and the delimiter might in turn cause other problems. In the end we find that even a simple HelloWorld needs a lot of message-format handling code.
With Protobuf, the application no longer has to worry about such details.
Using Protobuf, the Writer's job is simple. The structured data to be handled is described by the .proto file, and after the compilation step of the previous section, that data structure corresponds to a C++ class, defined in lm.helloworld.pb.h. For our example the class is lm::helloworld.
The Writer just includes that header and can then use the class.
In the Writer code, the structured data to be written to disk is represented by an object of class lm::helloworld, which provides a set of get/set functions for reading and modifying the data members, or fields, of the structured data.
When we need to save the structured data to disk, class lm::helloworld already provides a method that turns the complex data into a byte sequence, which we can then write to disk.
For a program that wants to read the data back, it only needs to call the corresponding deserialization method of lm::helloworld to turn that byte sequence back into structured data. This is similar to our earlier "123" idea, except that Protobuf has thought things through far more thoroughly than our crude string conversion, so we can safely leave this kind of work to Protobuf.
Listing 2 shows the Writer's main code. Simple, isn't it?
Listing 2. The Writer's main code
#include "lm.helloworld.pb.h" … int main(void) { lm::helloworld msg1; msg1.set_id(101); msg1.set_str(“hello”); // Write the new address book back to disk. fstream output("./log", ios::out | ios::trunc | ios::binary); if (!msg1.SerializeToOstream(&output)) { cerr << "Failed to write msg." << endl; return -1; } return 0; }
msg1 is an object of class helloworld; set_id() sets the value of id, and SerializeToOstream serializes the object and writes it to an fstream.
Listing 3 shows the Reader's main code.
Listing 3. The Reader
#include "lm.helloworld.pb.h" … void ListMsg(const lm::helloworld & msg) { cout << msg.id() << endl; cout << msg.str() << endl; } int main(int argc, char* argv[]) { lm::helloworld msg1; { fstream input("./log", ios::in | ios::binary); if (!msg1.ParseFromIstream(&input)) { cerr << "Failed to parse address book." << endl; return -1; } } ListMsg(msg1); … }
Similarly, the Reader declares msg1, an object of class helloworld, and uses ParseFromIstream to read and deserialize from an fstream. Then ListMsg uses the get methods to read the message's contents and print them.
Running the programs
Running Writer and then Reader gives:
>writer
>reader
101
Hello
The Reader reads the serialized information from the file log and prints it to the screen. All of the example code in this article can be downloaded from the original article's attachment, so you can try it for yourself.
The example itself is trivial, but with small modifications it becomes something genuinely useful: replace the disk file with a network socket, for instance, and you have network-based data exchange. Storage and exchange are exactly the domains where Protobuf is most effective.
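To make the socket idea concrete, here is a minimal sketch of my own (not from the original article). The generated class also provides SerializeToString/ParseFromString, which produce and consume an in-memory byte buffer that could be handed to send()/recv():

#include <string>
#include "lm.helloworld.pb.h"

// Encode a message into raw bytes suitable for a socket write.
std::string EncodeForWire(const lm::helloworld& msg)
{
  std::string wire;
  msg.SerializeToString(&wire); // the buffer may contain '\0' bytes
  return wire;
}

// Decode bytes received from the peer back into a message.
bool DecodeFromWire(const std::string& wire, lm::helloworld* msg)
{
  return msg->ParseFromString(wire);
}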
2. Several important data types in caffe.proto
Having read the protocol buffer introduction above, you can see that everything in caffe.pb.cc is generated from caffe.proto: it is essentially the standard set of operations on these data structures (classes), such as:
void CopyFrom();
void MergeFrom();
void Clear();
bool IsInitialized() const;
int ByteSize() const;
bool MergePartialFromCodedStream();
void SerializeWithCachedSizes() const;
SerializeWithCachedSizesToArray() const;
int GetCachedSize() const;
void SharedCtor();
void SharedDtor();
void SetCachedSize() const;
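As a quick illustration (my own sketch, not part of the original post), the generated caffe::BlobProto can be driven through exactly these standard methods:

#include <iostream>
#include "caffe.pb.h" // generated from caffe.proto

int main()
{
  caffe::BlobProto a;
  a.set_num(1);
  a.set_channels(3);
  a.add_data(0.5f); // append to the repeated `data` field

  caffe::BlobProto b;
  b.CopyFrom(a);                          // deep-copies every field
  std::cout << b.channels() << std::endl; // prints 3
  std::cout << a.ByteSize() << std::endl; // size of the serialized form
  a.Clear();                              // resets all fields to their defaults
  return 0;
}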
<0> BlobProto
message BlobProto { // blob attributes plus the blob's payload (data/diff)
  optional int32 num = 1 [default = 0];
  optional int32 channels = 2 [default = 0];
  optional int32 height = 3 [default = 0];
  optional int32 width = 4 [default = 0];
  repeated float data = 5 [packed = true];
  repeated float diff = 6 [packed = true];
}
<1> Datum
message Datum {
  optional int32 channels = 1;
  optional int32 height = 2;
  optional int32 width = 3;
  optional bytes data = 4;       // the actual image data, stored as bytes
  optional int32 label = 5;
  repeated float float_data = 6; // a Datum can also hold float data
}
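A sketch of how a Datum is typically filled (my own; it assumes a 28x28 single-channel image held in a std::string of raw pixel bytes):

#include <string>
#include "caffe.pb.h"

caffe::Datum MakeDatum(const std::string& pixels /* 28*28 raw bytes */, int label)
{
  caffe::Datum datum;
  datum.set_channels(1);
  datum.set_height(28);
  datum.set_width(28);
  datum.set_data(pixels); // raw image bytes go into the `bytes` field
  datum.set_label(label);
  return datum;
}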
<2> LayerParameter
message LayerParameter {
  repeated string bottom = 2; // names of the input blobs (string)
  repeated string top = 3;    // names of the output blobs (string)
  optional string name = 4;   // the layer name
  enum LayerType {            // the layer types (an enum, as in C++)
    NONE = 0;
    ACCURACY = 1;
    BNLL = 2;
    CONCAT = 3;
    CONVOLUTION = 4;
    DATA = 5;
    DROPOUT = 6;
    EUCLIDEAN_LOSS = 7;
    ELTWISE_PRODUCT = 25;
    FLATTEN = 8;
    HDF5_DATA = 9;
    HDF5_OUTPUT = 10;
    HINGE_LOSS = 28;
    IM2COL = 11;
    IMAGE_DATA = 12;
    INFOGAIN_LOSS = 13;
    INNER_PRODUCT = 14;
    LRN = 15;
    MEMORY_DATA = 29;
    MULTINOMIAL_LOGISTIC_LOSS = 16;
    POOLING = 17;
    POWER = 26;
    RELU = 18;
    SIGMOID = 19;
    SIGMOID_CROSS_ENTROPY_LOSS = 27;
    SOFTMAX = 20;
    SOFTMAX_LOSS = 21;
    SPLIT = 22;
    TANH = 23;
    WINDOW_DATA = 24;
  }
  optional LayerType type = 5;  // the layer type
  repeated BlobProto blobs = 6; // the numeric parameters of the layer
  // learning rates (repeated): to set a learning rate for one blob you must
  // set it for all blobs
  repeated float blobs_lr = 7;
  repeated float weight_decay = 8; // weight decay (repeated)
  // parameters specific to particular layer types (optional)
  optional ConcatParameter concat_param = 9;
  optional ConvolutionParameter convolution_param = 10;
  optional DataParameter data_param = 11;
  optional DropoutParameter dropout_param = 12;
  optional HDF5DataParameter hdf5_data_param = 13;
  optional HDF5OutputParameter hdf5_output_param = 14;
  optional ImageDataParameter image_data_param = 15;
  optional InfogainLossParameter infogain_loss_param = 16;
  optional InnerProductParameter inner_product_param = 17;
  optional LRNParameter lrn_param = 18;
  optional MemoryDataParameter memory_data_param = 22;
  optional PoolingParameter pooling_param = 19;
  optional PowerParameter power_param = 21;
  optional WindowDataParameter window_data_param = 20;
  optional V0LayerParameter layer = 1;
}
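As a sketch (my own; note that in this old schema the layer type is the LayerType enum, not a string), a convolution layer could be described programmatically like this:

#include "caffe.pb.h"

caffe::LayerParameter MakeConvLayer()
{
  caffe::LayerParameter layer;
  layer.set_name("conv1");
  layer.set_type(caffe::LayerParameter::CONVOLUTION); // enum value from LayerType
  layer.add_bottom("data"); // input blob name
  layer.add_top("conv1");   // output blob name
  layer.mutable_convolution_param()->set_num_output(96);
  return layer;
}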
<3> NetParameter
message NetParameter {
  optional string name = 1;           // the network's name
  repeated LayerParameter layers = 2; // "repeated" works like an array
  repeated string input = 3;          // names of the input-layer blobs
  // dimensions of the input blobs; there should be 4 * #input of them
  repeated int32 input_dim = 4;
  // Whether the network forces backward for every layer. If false, the net
  // structure and the learning rates decide whether backward is carried out.
  optional bool force_backward = 5 [default = false];
}
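A .prototxt network definition is nothing but the text format of this message. Here is a sketch of my own (not from the original post) parsing one with protobuf's TextFormat:

#include <string>
#include <google/protobuf/text_format.h>
#include "caffe.pb.h"

int main()
{
  const std::string prototxt =
      "name: 'TinyNet'\n"
      "input: 'data'\n"
      "input_dim: 1\n"
      "input_dim: 3\n"
      "input_dim: 32\n"
      "input_dim: 32\n";

  caffe::NetParameter net;
  if (!google::protobuf::TextFormat::ParseFromString(prototxt, &net))
    return -1;
  // net.name() == "TinyNet"; net.input_dim_size() == 4
  return 0;
}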
<4> SolverParameter
message SolverParameter {
  optional string train_net = 1; // proto file for the training net
  optional string test_net = 2;  // proto file for the test net
  optional int32 test_iter = 3 [default = 0];     // number of iterations per test pass
  optional int32 test_interval = 4 [default = 0]; // number of training iterations between two tests
  optional bool test_compute_loss = 19 [default = false];
  optional float base_lr = 5;    // the base learning rate
  optional int32 display = 6;    // number of iterations between two displays
  optional int32 max_iter = 7;   // the maximum number of iterations
  optional string lr_policy = 8; // the learning-rate decay policy
  optional float gamma = 9;      // a parameter of the gradient-descent schedule
  optional float power = 10;     // a parameter used to compute the learning rate
  optional float momentum = 11;  // momentum
  optional float weight_decay = 12; // weight decay
  optional int32 stepsize = 13;  // the step size of the learning-rate decay
  optional int32 snapshot = 14 [default = 0]; // the snapshot interval
  optional string snapshot_prefix = 15;       // the prefix for snapshot files
  optional bool snapshot_diff = 16 [default = false]; // whether to snapshot the diffs
  enum SolverMode {
    CPU = 0;
    GPU = 1;
  }
  optional SolverMode solver_mode = 17 [default = GPU]; // the solver mode, GPU by default
  optional int32 device_id = 18 [default = 0]; // the GPU ID
  optional int64 random_seed = 20 [default = -1]; // the random seed
  // very handy for debugging: prints the data of the forward pass and the
  // backward pass, so you can inspect the net when something goes wrong
  optional bool debug_info = 23 [default = false];
}
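Likewise, solver.prototxt is the text form of SolverParameter. Caffe wraps the parsing in helpers; a sketch of my own, assuming the BVLC layout where ReadProtoFromTextFileOrDie is declared in caffe/util/io.hpp:

#include "caffe/util/io.hpp"
#include "caffe.pb.h"

caffe::SolverParameter LoadSolver(const char* path)
{
  caffe::SolverParameter solver;
  // Aborts with a readable error if the file does not parse.
  caffe::ReadProtoFromTextFileOrDie(path, &solver);
  return solver;
}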
3. The caffe.proto source
The following is reproduced from http://blog.csdn.net/langb2014/article/details/50395466.

//////////////////
// Annotated caffe.proto
// caffe version: MS-caffe-master, github, 2016.8.20
// caffe version: BVLC-caffe-master, github, 2016.8.20
//////////////////

syntax = "proto2";

package caffe;

// Blob shape {specifies the shape/dimensions of a 4-D Blob}
message BlobShape {
  // The shape is defined as Num x Channel x Height x Width because caffe wraps
  // high-dimensional data in nested containers, i.e. vector<vector<...>>.
  repeated int64 dim = 1 [packed = true];
}

// Data block {shape, data, diff}
message BlobProto {
  optional BlobShape shape = 7;
  repeated float data = 5 [packed = true];
  repeated float diff = 6 [packed = true];
  repeated double double_data = 8 [packed = true];
  repeated double double_diff = 9 [packed = true];
  // 4D shape -- deprecated; use "BlobShape shape" instead:
  optional int32 num = 1 [default = 0]; // samples
  optional int32 channels = 2 [default = 0];
  optional int32 height = 3 [default = 0];
  optional int32 width = 4 [default = 0];
}

// Stores multiple BlobProto instances under their indices, for easy reference.
message BlobProtoVector {
  repeated BlobProto blobs = 1;
}

// Data {C, H, W, data (uchar or float), label} -- an image sample
message Datum {
  optional int32 channels = 1;
  optional int32 height = 2;
  optional int32 width = 3;
  // the actual image data, in bytes
  optional bytes data = 4;
  optional int32 label = 5;
  // Optionally, the datum could also hold float data.
  repeated float float_data = 6;
  // If true data contains an encoded image that need to be decoded
  optional bool encoded = 7 [default = false];
}

// Filler parameters {type (constant | uniform | gaussian), ...}
message FillerParameter {
  // The filler type.
  optional string type = 1 [default = 'constant'];
  optional float value = 2 [default = 0]; // the value in constant filler
  optional float min = 3 [default = 0];   // the min value in uniform filler
  optional float max = 4 [default = 1];   // the max value in uniform filler
  optional float mean = 5 [default = 0];  // the mean value in Gaussian filler
  optional float std = 6 [default = 1];   // the std value in Gaussian filler
  // A given input should produce non-zero output after multiplication with the
  // weights; the default -1 means do not sparsify the Gaussian filler.
  optional int32 sparse = 7 [default = -1];
  // Normalize the filler variance by fan_in, fan_out, or their average.
  // Applies to 'xavier' and 'msra' fillers.
  enum VarianceNorm {
    FAN_IN = 0;
    FAN_OUT = 1;
    AVERAGE = 2;
  }
  optional VarianceNorm variance_norm = 8 [default = FAN_IN];
}

// Network parameters {name, inputs, input shapes, force_backward, NetState,
// debug_info, layers}
message NetParameter {
  optional string name = 1; // consider giving the network a name
  // DEPRECATED. See InputParameter. The input blobs of the network.
  repeated string input = 3;
  // DEPRECATED. See InputParameter. The shape of the input blobs.
  repeated BlobShape input_shape = 8;
  // The 4D input shape -- deprecated; use "input_shape" instead.
  // If used, 4 values (Num x Cha x H x W) must be given per input blob, so
  // input_dim is repeated 4 times per input.
  repeated int32 input_dim = 4;
  // Whether the network forces every layer to carry out backward.
  // If false, whether backward is needed is determined automatically from the
  // net structure and the learning rates.
  optional bool force_backward = 5 [default = false];
  // The current "state" of the network, consisting of "phase", "level" and
  // "stage". Some layers set a phase attribute so that they are skipped in
  // certain states of the running network.
  optional NetState state = 6;
  // Print debugging information when running Net::Forward/Backward/Update;
  // default false.
  optional bool debug_info = 7 [default = false];
  // The layers that make up the net. Each layer's connectivity and behavior
  // are configured through a LayerParameter.
  repeated LayerParameter layer = 100; // ID 100 so layers are printed last.
  // DEPRECATED: use 'layer' instead.
  repeated V1LayerParameter layers = 2;
}

// NOTE
// Update the next available ID when you add a new SolverParameter field.
// SolverParameter next available ID: 41 (last added: type)

// Solver parameters {nets, schedules, snapshots, mode, ...}
message SolverParameter {
  //////////////////////////////////////////////////////////////////////////////
  // Specifying the train and test networks
  //
  // Exactly one train net must be specified using one of the following fields:
  //     train_net_param, train_net, net_param, net
  // One or more test nets may be specified using any of the following fields:
  //     test_net_param, test_net, net_param, net
  // If more than one test net field is specified (e.g., both net and
  // test_net are specified), they will be evaluated in the field order given
  // above: (1) test_net_param, (2) test_net, (3) net_param/net.
  // A test_iter must be specified for each test_net.
  // A test_level and/or a test_stage may also be specified for each test_net.
  //////////////////////////////////////////////////////////////////////////////

  // Proto filename for the train net, possibly combined with one or more
  // test nets.
  optional string net = 24;
  // Inline train net param, possibly combined with one or more test nets.
  optional NetParameter net_param = 25;
  optional string train_net = 1; // Proto filename for the train net.
  repeated string test_net = 2;  // Proto filenames for the test nets.
  optional NetParameter train_net_param = 21; // Inline train net params.
  repeated NetParameter test_net_param = 22;  // Inline test net params.

  // The states for the train/test nets. Must be unspecified or
  // specified once per net.
  //
  // By default, all states will have solver = true;
  // train_state will have phase = TRAIN,
  // and all test_state's will have phase = TEST.
  // Other defaults are set according to the NetState defaults.
  optional NetState train_state = 26;
  repeated NetState test_state = 27;

  // Number of batches per test pass; set it so that one test pass covers the
  // whole test set: test_iter = totalTestSamples / batchSize.
  repeated int32 test_iter = 3;
  // Number of training iterations between two test passes; set it so that the
  // training between two tests covers the whole training set:
  // test_interval = totalTrainSamples / batchSize.
  // Training test_interval batches and then testing test_iter batches makes
  // one epoch; a sensible setting covers all training and test samples per epoch.
  optional int32 test_interval = 4 [default = 0];
  // By default, do not compute the loss during testing.
  optional bool test_compute_loss = 19 [default = false];
  // If true, run a test pass before the first training iteration, to make
  // sure memory is sufficient and to print the initial value of the loss.
  optional bool test_initialization = 32 [default = true];
  optional float base_lr = 5; // The base learning rate
  // How many batches between two printouts of progress info; 0 prints nothing.
  optional int32 display = 6;
  // Display the loss averaged over the last average_loss iterations
  optional int32 average_loss = 33 [default = 1];
  optional int32 max_iter = 7; // the maximum number of training iterations
  // accumulate gradients over `iter_size` x `batch_size` instances
  optional int32 iter_size = 36 [default = 1];

  // The learning rate decay policy (7 kinds). The currently implemented
  // learning rate policies are as follows:
  //    - fixed: always return base_lr.
  //    - step: return base_lr * gamma ^ (floor(iter / step))
  //    - exp: return base_lr * gamma ^ iter
  //    - inv: return base_lr * (1 + gamma * iter) ^ (- power)
  //    - multistep: similar to step but allows non uniform steps defined by
  //      stepvalue
  //    - poly: the effective learning rate follows a polynomial decay, to be
  //      zero by the max_iter.
  //      return base_lr * (1 - iter/max_iter) ^ (power)
  //    - sigmoid: the effective learning rate follows a sigmoid decay
  //      return base_lr * (1 / (1 + exp(-gamma * (iter - stepsize))))
  //
  // In the above, base_lr, max_iter, gamma, step, stepvalue and power are
  // defined in the solver.prototxt file, and iter is the current iteration.
  optional string lr_policy = 8;  // the learning-rate schedule
  optional float gamma = 9;       // The parameter to compute the learning rate.
  optional float power = 10;      // The parameter to compute the learning rate.
  optional float momentum = 11;   // The momentum value.
  optional float weight_decay = 12; // The weight-decay coefficient.
  // The type of regularization controlled by weight_decay: the L1 or L2 norm;
  // L2 by default.
  optional string regularization_type = 29 [default = "L2"];
  // the step size of the learning rate under the "step" policy
  optional int32 stepsize = 13;
  // the step values under the "multistep" policy
  repeated int32 stepvalue = 34;
  // Set clip_gradients to a value >= 0 to clip parameter gradients whenever
  // their actual L2 norm exceeds it.
  optional float clip_gradients = 35 [default = -1];
  // The snapshot interval: how many iterations between saving the model and
  // the solver state.
  optional int32 snapshot = 14 [default = 0];
  optional string snapshot_prefix = 15; // The prefix for the snapshot.
  // Whether to snapshot the diffs as well; helps debugging, but the final
  // protocol buffer will be much larger.
  optional bool snapshot_diff = 16 [default = false];
  // snapshot storage format {hdf5, binaryproto (default)}
  enum SnapshotFormat {
    HDF5 = 0;
    BINARYPROTO = 1;
  }
  optional SnapshotFormat snapshot_format = 37 [default = BINARYPROTO];
  // the mode solver will use: 0 for CPU and 1 for GPU. Use GPU in default.
  enum SolverMode {
    CPU = 0;
    GPU = 1;
  }
  // solver mode {GPU (with device_id), CPU}
  optional SolverMode solver_mode = 17 [default = GPU];
  optional int32 device_id = 18 [default = 0];
  // The random seed. A non-negative value makes the Solver initialize caffe
  // with it, producing repeatable random numbers, which makes experiments easy
  // to reproduce; the default -1 means seed from the system clock.
  optional int64 random_seed = 20 [default = -1];
  // solver type = SGD (default)
  optional string type = 40 [default = "SGD"];
  // numerical stability for RMSProp, AdaGrad and AdaDelta and Adam
  optional float delta = 31 [default = 1e-8];
  // parameters for the Adam solver
  optional float momentum2 = 39 [default = 0.999];
  // RMSProp decay value
  // MeanSquare(t) = rms_decay*MeanSquare(t-1) + (1-rms_decay)*SquareGradient(t)
  optional float rms_decay = 38;
  // If true, print information about the state of the net, which can help
  // with debugging problems.
  optional bool debug_info = 23 [default = false];
  // If false, do not save a snapshot after training finishes.
  optional bool snapshot_after_train = 28 [default = true];
  // DEPRECATED: old solver enum types, use string instead
  enum SolverType {
    SGD = 0;
    NESTEROV = 1;
    ADAGRAD = 2;
    RMSPROP = 3;
    ADADELTA = 4;
    ADAM = 5;
  }
  // DEPRECATED: use type instead of solver_type
  optional SolverType solver_type = 30 [default = SGD];
}

// A message for snapshotting the solver state
message SolverState {
  optional int32 iter = 1;         // The current iteration
  optional string learned_net = 2; // The file that stores the learned net.
  repeated BlobProto history = 3;  // The history for sgd solvers
  optional int32 current_step = 4 [default = 0]; // The current step for learning rate
}

enum Phase {
  TRAIN = 0;
  TEST = 1;
}

// NetState {phase, level, stage}
message NetState {
  optional Phase phase = 1 [default = TEST];
  optional int32 level = 2 [default = 0];
  repeated string stage = 3;
}

// Net state rules {phases, levels, stages}
message NetStateRule {
  // Set the phase (TRAIN or TEST) that the NetState must have to meet this rule.
  optional Phase phase = 1;
  // Set the minimum and maximum levels in which the layer should be used.
  // Leave undefined to meet the rule regardless of level.
  optional int32 min_level = 2;
  optional int32 max_level = 3;
  // Customizable sets of stages to include or exclude.
  // The net must have ALL of the specified stages and NONE of the specified
  // "not_stage"s to meet the rule.
  // (Use multiple NetStateRules to specify conjunctions of stages.)
  repeated string stage = 4;
  repeated string not_stage = 5;
}

// Specifies training parameters (multipliers on global learning constants,
// and the name and other settings used for weight sharing).
message ParamSpec {
  // The name of the parameter blobs -- used for sharing parameters among
  // layers; there is no need to set it otherwise.
  optional string name = 1;
  // Whether shared weights must have the same shape or merely the same count;
  // same shape by default.
  optional DimCheckMode share_mode = 2;
  enum DimCheckMode {
    // STRICT (default): the shapes (num, channels, height, width) all match.
    STRICT = 0;
    // PERMISSIVE: only the count matches.
    PERMISSIVE = 1;
  }
  // The multiplier on the global learning rate for this parameter.
  optional float lr_mult = 3 [default = 1.0];
  // The multiplier on the global weight decay for this parameter.
  optional float decay_mult = 4 [default = 1.0];
}

// NOTE:
// When you add a new field to LayerParameter, update the next available ID.
// For example, the smooth_l1_loss_param field was added recently, so it was
// given the layer-specific ID 149.
// Layer parameters {name, type, bottoms, tops, phase, loss weights,
// global multipliers, ...}
message LayerParameter {
  optional string name = 1;   // the layer name
  optional string type = 2;   // the layer type
  repeated string bottom = 3; // the name of each bottom (input) blob
  repeated string top = 4;    // the name of each top (output) blob
  // The train / test phase for computation (the runtime state).
  optional Phase phase = 10;
  // The weighting coefficient of each top blob in the objective loss;
  // each layer defaults to 0 or 1 per top blob.
  repeated float loss_weight = 5;
  // Specifies training parameters (e.g. the multiplier lr_mult on the global
  // learning rate).
  repeated ParamSpec param = 6;
  // The blobs containing the numeric parameters of the layer.
  repeated BlobProto blobs = 7;
  // Specifies whether to backpropagate to each bottom. If unspecified,
  // Caffe will automatically infer whether each input needs backpropagation
  // to compute parameter gradients. If set to true for some inputs,
  // backpropagation to those inputs is forced; if set false for some inputs,
  // backpropagation to those inputs is skipped.
  //
  // The size must be either 0 or equal to the number of bottoms.
  repeated bool propagate_down = 11;
  // Rules control whether a layer is included in the network, based on the
  // current NetState. A non-zero number of rules may be specified for either
  // include or exclude, but not both. If no include or exclude rules are
  // given, the layer is always included.
  repeated NetStateRule include = 8;
  repeated NetStateRule exclude = 9;
  // Parameters for data pre-processing.
  optional TransformationParameter transform_param = 100;
  // Parameters shared by loss layers.
  optional LossParameter loss_param = 101;
  // Layer type-specific parameters.
  //
  // Note: certain layers may have more than one computational engine
  // for their implementation. These layers include an Engine type and
  // engine parameter for selecting the implementation.
  // The default for the engine is set by the ENGINE switch at compile-time.
  optional AccuracyParameter accuracy_param = 102;
  optional ArgMaxParameter argmax_param = 103;
  optional BatchNormParameter batch_norm_param = 139;
  optional BiasParameter bias_param = 141;
  optional ConcatParameter concat_param = 104;
  optional ContrastiveLossParameter contrastive_loss_param = 105;
  optional ConvolutionParameter convolution_param = 106;
  optional CropParameter crop_param = 144;
  optional DataParameter data_param = 107;
  optional DropoutParameter dropout_param = 108;
  optional DummyDataParameter dummy_data_param = 109;
  optional EltwiseParameter eltwise_param = 110;
  optional ELUParameter elu_param = 140;
  optional EmbedParameter embed_param = 137;
  optional ExpParameter exp_param = 111;
  optional FlattenParameter flatten_param = 135;
  optional HDF5DataParameter hdf5_data_param = 112;
  optional HDF5OutputParameter hdf5_output_param = 113;
  optional HingeLossParameter hinge_loss_param = 114;
  optional ImageDataParameter image_data_param = 115;
  optional InfogainLossParameter infogain_loss_param = 116;
  optional InnerProductParameter inner_product_param = 117;
  optional InputParameter input_param = 143;
  optional LogParameter log_param = 134;
  optional LRNParameter lrn_param = 118;
  optional MemoryDataParameter memory_data_param = 119;
  optional MVNParameter mvn_param = 120;
  optional ParameterParameter parameter_param = 145;
  optional PoolingParameter pooling_param = 121;
  optional PowerParameter power_param = 122;
  optional PReLUParameter prelu_param = 131;
  optional PythonParameter python_param = 130;
  optional RecurrentParameter recurrent_param = 146;
  optional ReductionParameter reduction_param = 136;
  optional ReLUParameter relu_param = 123;
  optional ReshapeParameter reshape_param = 133;
  optional ROIPoolingParameter roi_pooling_param = 147;
  optional ScaleParameter scale_param = 142;
  optional SigmoidParameter sigmoid_param = 124;
  optional SmoothL1LossParameter smooth_l1_loss_param = 148;
  optional SoftmaxParameter softmax_param = 125;
  optional SPPParameter spp_param = 132;
  optional SliceParameter slice_param = 126;
  optional TanHParameter tanh_param = 127;
  optional ThresholdParameter threshold_param = 128;
  optional TileParameter tile_param = 138;
  optional WindowDataParameter window_data_param = 129;
  optional MILDataParameter mil_data_param = 0x004d4944; // "MID"
  optional MILParameter mil_param = 0x004d494c;          // "MIL"
}

// Parameters for transforming (pre-processing) the data layer's data
message TransformationParameter {
  // Pre-process the data, e.g. simple scaling or mean subtraction.
  optional float scale = 1 [default = 1];
  // Specify if we want to randomly mirror data.
  optional bool mirror = 2 [default = false];
  // Specify if we would like to randomly crop an image.
  optional uint32 crop_size = 3 [default = 0];
  // Specify either a mean file or mean values, but not both; the mean is
  // subtracted from the corresponding channel.
  optional string mean_file = 4;
  repeated float mean_value = 5;
  // Force the image to be 3-channel color.
  optional bool force_color = 6 [default = false];
  // Force the image to be converted to grayscale.
  optional bool force_gray = 7 [default = false];
}

// Loss layer parameters
message LossParameter {
  // If specified, ignore instances with the given label.
  optional int32 ignore_label = 1;
  // How to normalize the loss of a loss layer, across batches, spatial
  // dimensions (H*W), or other dimensions.
  // Currently only implemented in the SoftmaxWithLoss layer.
  enum NormalizationMode {
    // Normalize by batchSize x spatialDim; any configured ignore label will
    // NOT be ignored when computing the normalization factor.
    FULL = 0;
    // Normalize by the total number of output locations (batchSize x H x W)
    // that do not take the ignore label. If the ignore label is not set, this
    // behaves like FULL.
    VALID = 1;
    // Divide by the batch size.
    BATCH_SIZE = 2;
    // Do not normalize the loss.
    NONE = 3;
  }
  optional NormalizationMode normalization = 3 [default = VALID];
  // Deprecated -- superseded by "normalization" above, and ignored if
  // "normalization" is specified; if it is not, setting this to false
  // normalizes by the batch size.
  optional bool normalize = 2;
}
// Messages that store parameters used by individual layer types follow, in
// alphabetical order.

message AccuracyParameter {
  // When computing accuracy, count as correct by comparing the true label to
  // the top k scoring classes. By default, only compare to the top scoring
  // class (i.e. argmax). (Top-k accuracy.)
  optional uint32 top_k = 1 [default = 1];
  // The "label" axis of the prediction blob, whose argmax corresponds to the
  // predicted label -- may be negative to index from the end (e.g., -1 for the
  // last axis). For example, if axis == 1 and the predictions are
  // (N x C x H x W), the label blob is expected to contain N*H*W ground truth
  // labels with integer values in {0, 1, ..., C-1}.
  optional int32 axis = 2 [default = 1];
  // If specified, ignore instances with the given label.
  optional int32 ignore_label = 3;
}

// ArgMax parameters: take the maximum over the predicted labels
message ArgMaxParameter {
  // If true produce pairs (argmax, maxval)
  optional bool out_max_val = 1 [default = false];
  optional uint32 top_k = 2 [default = 1];
  // The axis along which to maximise -- may be negative to index from the
  // end (e.g., -1 for the last axis).
  // By default ArgMaxLayer maximizes over the flattened trailing dimensions
  // for each index of the first / num dimension.
  optional int32 axis = 3;
}

// Concat parameters
message ConcatParameter {
  // The axis along which to concatenate -- may be negative to index from the
  // end (e.g., -1 for the last axis). Other axes must have the
  // same dimension for all the bottom blobs.
  // By default, ConcatLayer concatenates blobs along the "channels" axis (1).
  optional int32 axis = 2 [default = 1];
  // DEPRECATED: alias for "axis" -- does not support negative indexing.
  optional uint32 concat_dim = 1 [default = 1];
}

// Batch-normalization parameters, from the batch-norm paper
message BatchNormParameter {
  // If false, accumulate global mean/variance values via a moving average. If
  // true, use those accumulated values instead of computing mean/variance
  // across the batch.
  optional bool use_global_stats = 1;
  // How much does the moving average decay each iteration?
  optional float moving_average_fraction = 2 [default = .999];
  // Small value to add to the variance estimate so that we don't divide by
  // zero.
  optional float eps = 3 [default = 1e-5];
}

// Bias parameters
message BiasParameter {
  // The first axis of bottom[0] (the first input Blob) along which to apply
  // bottom[1] (the second input Blob). May be negative to index from the end
  // (e.g., -1 for the last axis).
  //
  // For example, if bottom[0] is 4D with shape 100x3x40x60, the output
  // top[0] will have the same shape, and bottom[1] may have any of the
  // following shapes (for the given value of axis):
  //    (axis == 0 == -4) 100; 100x3; 100x3x40; 100x3x40x60
  //    (axis == 1 == -3) 3; 3x40; 3x40x60
  //    (axis == 2 == -2) 40; 40x60
  //    (axis == 3 == -1) 60
  // Furthermore, bottom[1] may have the empty shape (regardless of the value
  // of "axis") -- a scalar bias.
  optional int32 axis = 1 [default = 1];
  // (num_axes is ignored unless just one bottom is given and the bias is
  // a learned parameter of the layer. Otherwise, num_axes is determined by the
  // number of axes by the second bottom.)
  // The number of axes of the input (bottom[0]) covered by the bias
  // parameter, or -1 to cover all axes of bottom[0] starting from `axis`.
  // Set num_axes := 0, to add a zero-axis Blob: a scalar.
  optional int32 num_axes = 2 [default = 1];
  // (filler is ignored unless just one bottom is given and the bias is
  // a learned parameter of the layer.)
  // The initialization for the learned bias parameter.
  // Default is the zero (0) initialization, resulting in the BiasLayer
  // initially performing the identity operation.
  optional FillerParameter filler = 3;
}

// Contrastive-loss parameters
message ContrastiveLossParameter {
  // margin for dissimilar pair
  optional float margin = 1 [default = 1.0];
  // The first implementation of this cost did not exactly match the cost of
  // Hadsell et al 2006 -- using (margin - d^2) instead of (margin - d)^2.
  // legacy_version = false (the default) uses (margin - d)^2 as proposed in the
  // Hadsell paper. New models should probably use this version.
  // legacy_version = true uses (margin - d^2). This is kept to support /
  // reproduce existing models and results
  optional bool legacy_version = 2 [default = false];
}

// Convolution parameters
message ConvolutionParameter {
  optional uint32 num_output = 1; // The number of outputs for the layer
  optional bool bias_term = 2 [default = true]; // whether to have bias terms
  // Pad, kernel size, and stride are all given as a single value for equal
  // dimensions in all spatial dimensions, or once per spatial dimension.
  repeated uint32 pad = 3;         // The padding size; defaults to 0
  repeated uint32 kernel_size = 4; // The kernel size
  repeated uint32 stride = 6;      // The stride; defaults to 1
  // Factor used to dilate the kernel, (implicitly) zero-filling the resulting
  // holes. (Kernel dilation is sometimes referred to by its use in the
  // algorithme à trous from Holschneider et al. 1987.)
  repeated uint32 dilation = 18; // The dilation; defaults to 1
  // For 2D convolution only, the *_h and *_w versions may also be used to
  // specify both spatial dimensions.
  optional uint32 pad_h = 9 [default = 0];  // The padding height (2D only)
  optional uint32 pad_w = 10 [default = 0]; // The padding width (2D only)
  optional uint32 kernel_h = 11; // The kernel height (2D only)
  optional uint32 kernel_w = 12; // The kernel width (2D only)
  optional uint32 stride_h = 13; // The stride height (2D only)
  optional uint32 stride_w = 14; // The stride width (2D only)
  optional uint32 group = 5 [default = 1]; // The group size for group conv
  optional FillerParameter weight_filler = 7; // The filler for the weight
  optional FillerParameter bias_filler = 8;   // The filler for the bias
  enum Engine {
    DEFAULT = 0; // CPU
    CAFFE = 1;   // GPU-CUDA
    CUDNN = 2;   // GPU-CUDA-CUDNN
  }
  optional Engine engine = 15 [default = DEFAULT];
  // The axis to interpret as "channels" when performing convolution.
  // Preceding dimensions are treated as independent inputs;
  // succeeding dimensions are treated as "spatial".
  // With (N, C, H, W) inputs, and axis == 1 (the default), we perform
  // N independent 2D convolutions, sliding C-channel (or (C/g)-channels, for
  // groups g>1) filters across the spatial axes (H, W) of the input.
  // With (N, C, D, H, W) inputs, and axis == 1, we perform
  // N independent 3D convolutions, sliding (C/g)-channels
  // filters across the spatial axes (D, H, W) of the input.
  optional int32 axis = 16 [default = 1];
  // Whether to force use of the general ND convolution, even if a specific
  // implementation for blobs of the appropriate number of spatial dimensions
  // is available. (Currently, there is only a 2D-specific convolution
  // implementation; for input blobs with num_axes != 2, this option is
  // ignored and the ND implementation will be used.)
  optional bool force_nd_im2col = 17 [default = false];
}

// Crop parameters
message CropParameter {
  // To crop, elements of the first bottom are selected to fit the dimensions
  // of the second, reference bottom. The crop is configured by
  //    - the crop `axis` to pick the dimensions for cropping
  //    - the crop `offset` to set the shift for all/each dimension
  // to align the cropped bottom with the reference bottom.
  // All dimensions up to but excluding `axis` are preserved, while
  // the dimensions including and trailing `axis` are cropped.
  // If only one `offset` is set, then all dimensions are offset by this amount.
  // Otherwise, the number of offsets must equal the number of cropped axes to
  // shift the crop in each dimension accordingly.
  // Note: standard dimensions are N,C,H,W so the default is a spatial crop,
  // and `axis` may be negative to index from the end (e.g., -1 for the last
  // axis).
  optional int32 axis = 1 [default = 2];
  repeated uint32 offset = 2;
}

// Data parameters
message DataParameter {
  enum DB {
    LEVELDB = 0;
    LMDB = 1;
  }
  // Specify the data source.
  optional string source = 1;
  // Specify the batch size.
  optional uint32 batch_size = 4;
  // The rand_skip variable is for the data layer to skip a few data points
  // to avoid all asynchronous sgd clients to start at the same point. The skip
  // point would be set as rand_skip * rand(0,1). Note that rand_skip should not
  // be larger than the number of keys in the database.
  // DEPRECATED. Each solver accesses a different subset of the database.
  optional uint32 rand_skip = 7 [default = 0];
  optional DB backend = 8 [default = LEVELDB];
  // DEPRECATED. See TransformationParameter. For data pre-processing, we can do
  // simple scaling and subtracting the data mean, if provided. Note that the
  // mean subtraction is always carried out before scaling.
  optional float scale = 2 [default = 1];
  optional string mean_file = 3;
  // DEPRECATED. See TransformationParameter. Specify if we would like to randomly
  // crop an image.
  optional uint32 crop_size = 5 [default = 0];
  // DEPRECATED. See TransformationParameter. Specify if we want to randomly mirror
  // data.
  optional bool mirror = 6 [default = false];
  // Force the encoded image to have 3 color channels
  optional bool force_encoded_color = 9 [default = false];
  // Prefetch queue (Number of batches to prefetch to host memory, increase if
  // data access bandwidth varies).
  optional uint32 prefetch = 10 [default = 4];
}

// Dropout parameters
message DropoutParameter {
  optional float dropout_ratio = 1 [default = 0.5]; // dropout ratio
  optional bool scale_train = 2 [default = true];   // scale train or test phase
}

// DummyDataLayer fills any number of arbitrarily shaped blobs with random
// (or constant) data generated by "Fillers" (see "message FillerParameter").
message DummyDataParameter {
  // This layer produces N >= 1 top blobs. DummyDataParameter must specify 1 or N
  // shape fields, and 0, 1 or N data_fillers.
  //
  // If 0 data_fillers are specified, ConstantFiller with a value of 0 is used.
  // If 1 data_filler is specified, it is applied to all top blobs. If N are
  // specified, the ith is applied to the ith top blob.
  repeated FillerParameter data_filler = 1;
  repeated BlobShape shape = 6;
  // 4D dimensions -- deprecated. Use "shape" instead.
  repeated uint32 num = 2;
  repeated uint32 channels = 3;
  repeated uint32 height = 4;
  repeated uint32 width = 5;
}

message EltwiseParameter {
  enum EltwiseOp {
    PROD = 0;
    SUM = 1;
    MAX = 2;
  }
  optional EltwiseOp operation = 1 [default = SUM]; // element-wise operation
  repeated float coeff = 2; // blob-wise coefficient for SUM operation
  // Whether to use an asymptotically slower (for >2 inputs) but stabler method
  // of computing the gradient for the PROD operation. (No effect for SUM op.)
  optional bool stable_prod_grad = 3 [default = true];
}

// Message that stores parameters used by ELULayer
message ELUParameter {
  // Described in:
  // Clevert, D.-A., Unterthiner, T., & Hochreiter, S. (2015). Fast and Accurate
  // Deep Network Learning by Exponential Linear Units (ELUs). arXiv
  optional float alpha = 1 [default = 1];
}

// Message that stores parameters used by EmbedLayer
message EmbedParameter {
  optional uint32 num_output = 1; // The number of outputs for the layer
  // The input is given as integers to be interpreted as one-hot
  // vector indices with dimension num_input. Hence num_input should be
  // 1 greater than the maximum possible input value.
  optional uint32 input_dim = 2;
  optional bool bias_term = 3 [default = true]; // Whether to use a bias term
  optional FillerParameter weight_filler = 4; // The filler for the weight
  optional FillerParameter bias_filler = 5;   // The filler for the bias
}

// Message that stores parameters used by ExpLayer
message ExpParameter {
  // ExpLayer computes outputs y = base ^ (shift + scale * x), for base > 0.
  // Or if base is set to the default (-1), base is set to e,
  // so y = exp(shift + scale * x).
  optional float base = 1 [default = -1.0];
  optional float scale = 2 [default = 1.0];
  optional float shift = 3 [default = 0.0];
}

/// Message that stores parameters used by FlattenLayer
message FlattenParameter {
  // The first axis to flatten: all preceding axes are retained in the output.
  // May be negative to index from the end (e.g., -1 for the last axis).
  optional int32 axis = 1 [default = 1];
  // The last axis to flatten: all following axes are retained in the output.
  // May be negative to index from the end (e.g., the default -1 for the last
  // axis).
  optional int32 end_axis = 2 [default = -1];
}

// Message that stores parameters used by HDF5DataLayer
message HDF5DataParameter {
  // Specify the data source.
  optional string source = 1;
  // Specify the batch size.
  optional uint32 batch_size = 2;
  // Specify whether to shuffle the data.
  // If shuffle == true, the ordering of the HDF5 files is shuffled,
  // and the ordering of data within any given HDF5 file is shuffled,
  // but data between different files are not interleaved; all of a file's
  // data are output (in a random order) before moving onto another file.
  optional bool shuffle = 3 [default = false];
}

message HDF5OutputParameter {
  optional string file_name = 1;
}

message HingeLossParameter {
  enum Norm {
    L1 = 1;
    L2 = 2;
  }
  // Specify the Norm to use L1 or L2
  optional Norm norm = 1 [default = L1];
}

// Image-dataset parameters
message ImageDataParameter {
  // Specify the data source file.
  optional string source = 1;
  // Specify the batch size.
  optional uint32 batch_size = 4 [default = 1];
  // The rand_skip variable is for the data layer to skip a few data points
  // to avoid all asynchronous sgd clients to start at the same point. The skip
  // point would be set as rand_skip * rand(0,1). Note that rand_skip should not
  // be larger than the number of keys in the database.
  optional uint32 rand_skip = 7 [default = 0];
  // Whether or not ImageLayer should shuffle the list of files at every epoch;
  // default false.
  optional bool shuffle = 8 [default = false];
  // It will also resize images if new_height or new_width are not zero.
  optional uint32 new_height = 9 [default = 0];
  optional uint32 new_width = 10 [default = 0];
  // Specify if the images are color or gray
  optional bool is_color = 11 [default = true];
  // DEPRECATED. See TransformationParameter. For data pre-processing, we can do
  // simple scaling and subtracting the data mean, if provided. Note that the
  // mean subtraction is always carried out before scaling.
  optional float scale = 2 [default = 1];
  optional string mean_file = 3;
  // DEPRECATED. See TransformationParameter. Specify if we would like to randomly
  // crop an image.
  optional uint32 crop_size = 5 [default = 0];
  // DEPRECATED. See TransformationParameter. Specify if we want to randomly mirror
  // data.
  optional bool mirror = 6 [default = false];
  optional string root_folder = 12 [default = ""];
}

// Infogain-loss parameters
message InfogainLossParameter {
  // Specify the infogain matrix source.
  optional string source = 1;
}

// Inner-product parameters
message InnerProductParameter {
  optional uint32 num_output = 1; // The number of outputs for the layer
  optional bool bias_term = 2 [default = true]; // whether to have bias terms
  optional FillerParameter weight_filler = 3; // The filler for the weight
  optional FillerParameter bias_filler = 4;   // The filler for the bias
  // The first axis to be lumped into a single inner product computation;
  // all preceding axes are retained in the output.
  // May be negative to index from the end (e.g., -1 for the last axis).
  optional int32 axis = 5 [default = 1];
  // Specify whether to transpose the weight matrix or not.
  // If transpose == true, any operations will be performed on the transpose
  // of the weight matrix. The weight matrix itself is not going to be transposed
  // but rather the transfer flag of operations will be toggled accordingly.
  optional bool transpose = 6 [default = false];
}

// Input parameters
message InputParameter {
  // This layer produces N >= 1 top blob(s) to be assigned manually.
  // Define N shapes to set a shape for each top.
  // Define 1 shape to set the same shape for every top.
  // Define no shape to defer to reshaping manually.
  // (See, e.g., the single shape specified in
  // .\models\bvlc_reference_caffenet\deploy.prototxt.)
  repeated BlobShape shape = 1;
}

// LogLayer parameters
message LogParameter {
  // LogLayer computes outputs y = log_base(shift + scale * x), for base > 0.
  // Or if base is set to the default (-1), base is set to e,
  // so y = ln(shift + scale * x) = log_e(shift + scale * x)
  optional float base = 1 [default = -1.0];
  optional float scale = 2 [default = 1.0];
  optional float shift = 3 [default = 0.0];
}

// LRNLayer parameters
message LRNParameter {
  optional uint32 local_size = 1 [default = 5];
  optional float alpha = 2 [default = 1.];
  optional float beta = 3 [default = 0.75];
  enum NormRegion {
    ACROSS_CHANNELS = 0;
    WITHIN_CHANNEL = 1;
  }
  optional NormRegion norm_region = 4 [default = ACROSS_CHANNELS];
  optional float k = 5 [default = 1.];
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 6 [default = DEFAULT];
}

// Memory-data parameters
message MemoryDataParameter {
  optional uint32 batch_size = 1;
  optional uint32 channels = 2;
  optional uint32 height = 3;
  optional uint32 width = 4;
}

// MVN (mean-variance normalization) parameters {mean, variance, across channels}
message MVNParameter {
  // This parameter can be set to false to normalize mean only
  optional bool normalize_variance = 1 [default = true];
  // This parameter can be set to true to perform DNN-like MVN, i.e. normalize
  // across channels; by default, normalization is done within each spatial
  // position only.
  optional bool across_channels = 2 [default = false];
  // Epsilon for not dividing by zero while normalizing variance
  optional float eps = 3 [default = 1e-9];
}

message ParameterParameter {
  optional BlobShape shape = 1;
}

// Pooling parameters
message PoolingParameter {
  enum PoolMethod {
    MAX = 0;
    AVE = 1;
    STOCHASTIC = 2;
  }
  optional PoolMethod pool = 1 [default = MAX]; // The pooling method
  // Pad, kernel size, and stride are all given as a single value for equal
  // dimensions in height and width or as Y, X pairs.
  optional uint32 pad = 4 [default = 0];    // The padding size (equal in Y, X)
  optional uint32 pad_h = 9 [default = 0];  // The padding height
  optional uint32 pad_w = 10 [default = 0]; // The padding width
  optional uint32 kernel_size = 2; // The kernel size (square)
  optional uint32 kernel_h = 5;    // The kernel height
  optional uint32 kernel_w = 6;    // The kernel width
  optional uint32 stride = 3 [default = 1]; // The stride (equal in Y, X)
  optional uint32 stride_h = 7;    // The stride height
  optional uint32 stride_w = 8;    // The stride width
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 11 [default = DEFAULT];
  // If global_pooling then it will pool over the size of the bottom by doing
  // kernel_h = bottom->height and kernel_w = bottom->width
  optional bool global_pooling = 12 [default = false];
}

message PowerParameter {
  // PowerLayer computes outputs y = (shift + scale * x) ^ power.
  optional float power = 1 [default = 1.0];
  optional float scale = 2 [default = 1.0];
  optional float shift = 3 [default = 0.0];
}

// Python parameters
message PythonParameter {
  optional string module = 1;
  optional string layer = 2;
  // This value is set to the attribute `param_str` of the `PythonLayer` object
  // in Python before calling the `setup()` method. This could be a number,
  // string, dictionary in Python dict format, JSON, etc. You may parse this
  // string in `setup` method and use it in `forward` and `backward`.
  optional string param_str = 3 [default = ''];
  // Whether this PythonLayer is shared among worker solvers during data parallelism.
  // If true, each worker solver sequentially run forward from this layer.
  // This value should be set true if you are using it as a data layer.
  optional bool share_in_parallel = 4 [default = false];
}

// RecurrentLayer parameters
message RecurrentParameter {
  // The dimension of the output representation; must be non-zero.
  optional uint32 num_output = 1 [default = 0];
  optional FillerParameter weight_filler = 2; // the weight filler
  optional FillerParameter bias_filler = 3;   // the bias filler
  // Whether to enable displaying debug_info in the unrolled recurrent net.
  optional bool debug_info = 4 [default = false];
  // Whether to add as additional inputs (bottoms) the initial hidden state
  // blobs, and add as additional outputs (tops) the final timestep hidden state
  // blobs. The number of additional bottom/top blobs required depends on the
  // recurrent architecture -- e.g., 1 for RNNs, 2 for LSTMs.
  optional bool expose_hidden = 5 [default = false];
}

// ReductionLayer parameters
message ReductionParameter {
  enum ReductionOp {
    SUM = 1;
    ASUM = 2;
    SUMSQ = 3;
    MEAN = 4;
  }
  optional ReductionOp operation = 1 [default = SUM]; // reduction operation
  // The first axis to reduce to a scalar -- may be negative to index from the
  // end (e.g., -1 for the last axis).
  // (Currently, only reduction along ALL "tail" axes is supported; reduction
  // of axis M through N, where N < num_axes - 1, is unsupported.)
  // Suppose we have an n-axis bottom Blob with shape:
  //     (d0, d1, d2, ..., d(m-1), dm, d(m+1), ..., d(n-1)).
  // If axis == m, the output Blob will have shape
  //     (d0, d1, d2, ..., d(m-1)),
  // and the ReductionOp operation is performed (d0 * d1 * d2 * ... * d(m-1))
  // times, each including (dm * d(m+1) * ... * d(n-1)) individual data.
  // If axis == 0 (the default), the output Blob always has the empty shape
  // (count 1), performing reduction across the entire input --
  // often useful for creating new loss functions.
  optional int32 axis = 2 [default = 0];
  optional float coeff = 3 [default = 1.0]; // coefficient for output
}

// ReLULayer parameters
message ReLUParameter {
  // Allowing a non-zero slope for negative inputs can speed up optimization.
  // Described in:
  // Maas, A. L., Hannun, A. Y., & Ng, A. Y. (2013). Rectifier nonlinearities
  // improve neural network acoustic models. In ICML Workshop on Deep Learning
  // for Audio, Speech, and Language Processing.
  optional float negative_slope = 1 [default = 0];
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 2 [default = DEFAULT];
}

message ReshapeParameter {
  // Specify the output dimensions. If some of the dimensions are set to 0,
  // the corresponding dimension from the bottom layer is used (unchanged).
  // Exactly one dimension may be set to -1, in which case its value is
  // inferred from the count of the bottom blob and the remaining dimensions.
  // For example, suppose we want to reshape a 2D blob "input" with shape 2 x 8:
  //
  //     layer {
  //       type: "Reshape" bottom: "input" top: "output"
  //       reshape_param { ... }
  //     }
  //
  // If "input" is 2D with shape 2 x 8, then the following reshape_param
  // specifications are all equivalent, producing a 3D blob "output" with shape
  // 2 x 2 x 4:
  //
  //     reshape_param { shape { dim:  2  dim: 2  dim:  4 } }
  //     reshape_param { shape { dim:  0  dim: 2  dim:  4 } }
  //     reshape_param { shape { dim:  0  dim: 2  dim: -1 } }
  //     reshape_param { shape { dim:  0  dim:-1  dim:  4 } }
  //
  optional BlobShape shape = 1;
  // axis and num_axes control the portion of the bottom blob's shape that are
  // replaced by (included in) the reshape. By default (axis == 0 and
  // num_axes == -1), the entire bottom blob shape is included in the reshape,
  // and hence the shape field must specify the entire output shape.
  //
  // axis may be non-zero to retain some portion of the beginning of the input
  // shape (and may be negative to index from the end; e.g., -1 to begin the
  // reshape after the last axis, including nothing in the reshape,
  // -2 to include only the last axis, etc.).
  //
  // For example, suppose "input" is a 2D blob with shape 2 x 8.
  // Then the following ReshapeLayer specifications are all equivalent,
  // producing a blob "output" with shape 2 x 2 x 4:
  //
  //     reshape_param { shape { dim: 2  dim: 2  dim: 4 } }
  //     reshape_param { shape { dim: 2  dim: 4 } axis:  1 }
  //     reshape_param { shape { dim: 2  dim: 4 } axis: -3 }
  //
  // num_axes specifies the extent of the reshape.
  // If num_axes >= 0 (and axis >= 0), the reshape will be performed only on
  // input axes in the range [axis, axis+num_axes].
  // num_axes may also be -1, the default, to include all remaining axes
  // (starting from axis).
  //
  // For example, suppose "input" is a 2D blob with shape 2 x 8.
  // Then the following ReshapeLayer specifications are equivalent,
  // producing a blob "output" with shape 1 x 2 x 8.
  //
  //     reshape_param { shape { dim: 1  dim: 2  dim: 8 } }
  //     reshape_param { shape { dim: 1  dim: 2 } num_axes: 1 }
  //     reshape_param { shape { dim: 1 } num_axes: 0 }
  //
  // On the other hand, these would produce output blob shape 2 x 1 x 8:
  //
  //     reshape_param { shape { dim: 2  dim: 1  dim: 8 } }
  //     reshape_param { shape { dim: 1 } axis: 1 num_axes: 0 }
  //
  optional int32 axis = 2 [default = 0];
  optional int32 num_axes = 3 [default = -1];
}

// ROIPoolingLayer parameters
message ROIPoolingParameter {
  // Pad, kernel size, and stride are all given as a single value for equal
  // dimensions in height and width or as Y, X pairs.
  optional uint32 pooled_h = 1 [default = 0]; // The pooled output height
  optional uint32 pooled_w = 2 [default = 0]; // The pooled output width
  // Multiplicative spatial scale factor to translate ROI coords from their
  // input scale to the scale used when pooling
  optional float spatial_scale = 3 [default = 1];
}

// Scale parameters
message ScaleParameter {
  // The first axis of bottom[0] (the first input Blob) along which to apply
  // bottom[1] (the second input Blob). May be negative to index from the end
  // (e.g., -1 for the last axis). Note that bottom[1] may have a different
  // (smaller) shape than bottom[0].
  // For example, if bottom[0] is 4D with shape 100x3x40x60, the output
  // top[0] will have the same shape, and bottom[1] may have any of the
  // following shapes (for the given value of axis):
  //    (axis == 0 == -4) 100; 100x3; 100x3x40; 100x3x40x60
  //    (axis == 1 == -3) 3; 3x40; 3x40x60
  //    (axis == 2 == -2) 40; 40x60
  //    (axis == 3 == -1) 60
  // Furthermore, bottom[1] may have the empty shape (regardless of the value of
  // "axis") -- a scalar multiplier.
  optional int32 axis = 1 [default = 1];
  // (num_axes is ignored unless just one bottom is given and the scale is
  // a learned parameter of the layer. Otherwise, num_axes is determined by the
  // number of axes by the second bottom.)
  // The number of axes of the input (bottom[0]) covered by the scale
  // parameter, or -1 to cover all axes of bottom[0] starting from `axis`.
  // Set num_axes := 0, to multiply with a zero-axis Blob: a scalar.
  optional int32 num_axes = 2 [default = 1];
  // (filler is ignored unless just one bottom is given and the scale is
  // a learned parameter of the layer.)
  // The initialization for the learned scale parameter.
  // Default is the unit (1) initialization, resulting in the ScaleLayer
  // initially performing the identity operation.
  optional FillerParameter filler = 3;
  // Whether to also learn a bias (equivalent to a ScaleLayer+BiasLayer, but
  // may be more efficient). Initialized with bias_filler (defaults to 0).
  optional bool bias_term = 4 [default = false];
  optional FillerParameter bias_filler = 5;
}

// Sigmoid parameters
message SigmoidParameter {
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 1 [default = DEFAULT];
}

// Slice parameters
message SliceParameter {
  // The axis along which to slice -- may be negative to index from the end
  // (e.g., -1 for the last axis).
  // By default, SliceLayer slices blobs along the "channels" axis (1).
  optional int32 axis = 3 [default = 1];
  repeated uint32 slice_point = 2;
  // DEPRECATED: alias for "axis" -- does not support negative indexing.
  optional uint32 slice_dim = 1 [default = 1];
}

// SmoothL1Loss parameters
message SmoothL1LossParameter {
  // SmoothL1Loss(x) =
  //     0.5 * (sigma * x) ** 2    -- if x < 1.0 / sigma / sigma
  //     |x| - 0.5 / sigma / sigma -- otherwise
  optional float sigma = 1 [default = 1];
}

// Parameters for SoftmaxLayer and SoftmaxWithLossLayer
message SoftmaxParameter {
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 1 [default = DEFAULT];
  // The axis along which to perform the softmax -- may be negative to index
  // from the end (e.g., -1 for the last axis).
  // Any other axes will be evaluated as independent softmaxes.
  // E.g. the class axis, when predicting labels or computing the per-class
  // log loss.
  optional int32 axis = 2 [default = 1];
}

// TanH parameters
message TanHParameter {
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 1 [default = DEFAULT];
}

// TileLayer parameters
message TileParameter {
  // The index of the axis to tile.
  optional int32 axis = 1 [default = 1];
  // The number of copies (tiles) of the blob to output.
// Parameters for SmoothL1LossLayer
message SmoothL1LossParameter {
  // SmoothL1Loss(x) =
  //   0.5 * (sigma * x) ** 2    -- if |x| < 1.0 / sigma / sigma
  //   |x| - 0.5 / sigma / sigma -- otherwise
  optional float sigma = 1 [default = 1];
}

// Parameters for SoftmaxLayer, SoftmaxWithLossLayer
message SoftmaxParameter {
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 1 [default = DEFAULT];

  // The axis along which to perform the softmax -- may be negative to index
  // from the end (e.g., -1 for the last axis).
  // Any other axes will be evaluated as independent softmaxes.
  // In other words, the softmax is computed over this axis (typically the
  // class axis), independently for every index of the remaining axes.
  optional int32 axis = 2 [default = 1];
}

// Parameters for TanHLayer
message TanHParameter {
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 1 [default = DEFAULT];
}

// Parameters for TileLayer
message TileParameter {
  // The index of the axis to tile.
  optional int32 axis = 1 [default = 1];

  // The number of copies (tiles) of the blob to output.
  optional int32 tiles = 2;
}

// Parameters for ThresholdLayer
message ThresholdParameter {
  optional float threshold = 1 [default = 0]; // Strictly positive values
}

// Parameters for MILLayer
message MILParameter {
  enum MILType {
    MAX = 0;
    NOR = 1;
  }
  optional MILType type = 1 [default = MAX]; // The MIL method
}

// Window data parameters, used for object detection or segmentation
message WindowDataParameter {
  // Specify the data source.
  optional string source = 1;
  // For data pre-processing: simple scaling and mean subtraction.
  // Note that mean subtraction is always carried out before scaling.
  optional float scale = 2 [default = 1];
  optional string mean_file = 3;
  // Specify the batch size.
  optional uint32 batch_size = 4;
  // Specify if we would like to randomly crop an image.
  optional uint32 crop_size = 5 [default = 0];
  // Specify if we want to randomly mirror data.
  optional bool mirror = 6 [default = false];
  // Foreground (object) overlap threshold
  optional float fg_threshold = 7 [default = 0.5];
  // Background (non-object) overlap threshold
  optional float bg_threshold = 8 [default = 0.5];
  // Fraction of batch that should be foreground objects
  optional float fg_fraction = 9 [default = 0.25];
  // Amount of contextual padding to add around a window
  // (used only by the window_data_layer)
  optional uint32 context_pad = 10 [default = 0];
  // Mode for cropping out a detection window
  //   warp: cropped window is warped to a fixed size and aspect ratio
  //   square: the tightest square around the window is cropped
  optional string crop_mode = 11 [default = "warp"];
  // cache_images: will load all images (the cropped windows) in memory for
  // faster access
  optional bool cache_images = 12 [default = false];
  // append root_folder to locate images
  optional string root_folder = 13 [default = ""];
}

// Parameters for MILDataLayer
message MILDataParameter {
  // Specify the data source.
  optional string source = 1;
  // Number of scales for each image
  optional uint32 num_scales = 2 [default = 1];
  // Side length ratio between neighbouring scales
  optional float scale_factor = 6 [default = 1];
  // Number of channels in the image
  optional uint32 channels = 4 [default = 3];
  // Specify the number of images per batch
  optional uint32 images_per_batch = 3;
  // Specify the number of classes
  optional uint32 n_classes = 5;
  // specify the box_dir and label_dir
  optional string label_file = 7;
  // Root directory which contains all the images
  optional string root_dir = 11;
  // Extension for the file
  optional string ext = 12;
  // To randomize or not
  optional bool randomize = 13 [default = true];
}

// Parameters for SPPLayer, introduced in the SPPNet paper
message SPPParameter {
  // The pooling method used at each pyramid level: max / average / stochastic
  enum PoolMethod {
    MAX = 0;
    AVE = 1;
    STOCHASTIC = 2;
  }
  optional uint32 pyramid_height = 1; // The pyramid height
  optional PoolMethod pool = 2 [default = MAX]; // The pooling method
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 6 [default = DEFAULT];
}
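To make pyramid_height concrete: a pyramid of height 3 pools each feature map into 1x1, 2x2, and 4x4 grids, i.e. 21 bins per channel, regardless of the input's spatial size. A minimal sketch, with hypothetical layer and blob names:

layer {
  name: "spp"          # hypothetical name
  type: "SPP"
  bottom: "conv5"      # hypothetical blob
  top: "pool5"
  spp_param {
    pyramid_height: 3  # 1x1 + 2x2 + 4x4 = 21 bins per channel
    pool: MAX
  }
}

This fixed-length output is what lets SPPNet accept input images of arbitrary size.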
// DEPRECATED: use LayerParameter instead. "V1" refers to version 1 of the
// layer parameter format.
message V1LayerParameter {
  repeated string bottom = 2; // the input blob names
  repeated string top = 3; // the output blob names
  optional string name = 4; // the layer name
  repeated NetStateRule include = 32; // run-time net state rule: include
  repeated NetStateRule exclude = 33; // run-time net state rule: exclude
  enum LayerType { // the layer type
    NONE = 0;
    ABSVAL = 35;
    ACCURACY = 1;
    ARGMAX = 30;
    BNLL = 2;
    CONCAT = 3;
    CONTRASTIVE_LOSS = 37;
    CONVOLUTION = 4;
    DATA = 5;
    DECONVOLUTION = 39;
    DROPOUT = 6;
    DUMMY_DATA = 32;
    EUCLIDEAN_LOSS = 7;
    ELTWISE = 25;
    EXP = 38;
    FLATTEN = 8;
    HDF5_DATA = 9;
    HDF5_OUTPUT = 10;
    HINGE_LOSS = 28;
    IM2COL = 11;
    IMAGE_DATA = 12;
    INFOGAIN_LOSS = 13;
    INNER_PRODUCT = 14;
    LRN = 15;
    MEMORY_DATA = 29;
    MULTINOMIAL_LOGISTIC_LOSS = 16;
    MVN = 34;
    POOLING = 17;
    POWER = 26;
    RELU = 18;
    SIGMOID = 19;
    SIGMOID_CROSS_ENTROPY_LOSS = 27;
    SILENCE = 36;
    SOFTMAX = 20;
    SOFTMAX_LOSS = 21;
    SPLIT = 22;
    SLICE = 33;
    TANH = 23;
    WINDOW_DATA = 24;
    THRESHOLD = 31;
  }
  optional LayerType type = 5;
  repeated BlobProto blobs = 6;
  repeated string param = 1001;
  repeated DimCheckMode blob_share_mode = 1002;
  enum DimCheckMode {
    STRICT = 0;
    PERMISSIVE = 1;
  }
  repeated float blobs_lr = 7;
  repeated float weight_decay = 8;
  repeated float loss_weight = 35;
  optional AccuracyParameter accuracy_param = 27;
  optional ArgMaxParameter argmax_param = 23;
  optional ConcatParameter concat_param = 9;
  optional ContrastiveLossParameter contrastive_loss_param = 40;
  optional ConvolutionParameter convolution_param = 10;
  optional DataParameter data_param = 11;
  optional DropoutParameter dropout_param = 12;
  optional DummyDataParameter dummy_data_param = 26;
  optional EltwiseParameter eltwise_param = 24;
  optional ExpParameter exp_param = 41;
  optional HDF5DataParameter hdf5_data_param = 13;
  optional HDF5OutputParameter hdf5_output_param = 14;
  optional HingeLossParameter hinge_loss_param = 29;
  optional ImageDataParameter image_data_param = 15;
  optional InfogainLossParameter infogain_loss_param = 16;
  optional InnerProductParameter inner_product_param = 17;
  optional LRNParameter lrn_param = 18;
  optional MemoryDataParameter memory_data_param = 22;
  optional MVNParameter mvn_param = 34;
  optional PoolingParameter pooling_param = 19;
  optional PowerParameter power_param = 21;
  optional ReLUParameter relu_param = 30;
  optional SigmoidParameter sigmoid_param = 38;
  optional SoftmaxParameter softmax_param = 39;
  optional SliceParameter slice_param = 31;
  optional TanHParameter tanh_param = 37;
  optional ThresholdParameter threshold_param = 25;
  optional WindowDataParameter window_data_param = 20;
  optional TransformationParameter transform_param = 36;
  optional LossParameter loss_param = 42;
  optional V0LayerParameter layer = 1;
}

// DEPRECATED: V0LayerParameter is the old way of specifying layer parameters
// in Caffe. We keep this message type around for legacy support.
message V0LayerParameter {
  optional string name = 1; // the layer name
  optional string type = 2; // the string to specify the layer type

  // Parameters to specify layers with inner products.
  optional uint32 num_output = 3; // The number of outputs for the layer
  optional bool biasterm = 4 [default = true]; // whether to have bias terms
  optional FillerParameter weight_filler = 5; // The filler for the weight
  optional FillerParameter bias_filler = 6; // The filler for the bias

  optional uint32 pad = 7 [default = 0]; // The padding size
  optional uint32 kernelsize = 8; // The kernel size
  optional uint32 group = 9 [default = 1]; // The group size for group conv
  optional uint32 stride = 10 [default = 1]; // The stride
  enum PoolMethod {
    MAX = 0;
    AVE = 1;
    STOCHASTIC = 2;
  }
  optional PoolMethod pool = 11 [default = MAX]; // The pooling method
  optional float dropout_ratio = 12 [default = 0.5]; // dropout ratio

  optional uint32 local_size = 13 [default = 5]; // for local response norm
  optional float alpha = 14 [default = 1.]; // for local response norm
  optional float beta = 15 [default = 0.75]; // for local response norm
  optional float k = 22 [default = 1.];

  // For data layers, specify the data source
  optional string source = 16;
  // For data pre-processing, we can do simple scaling and subtracting the
  // data mean, if provided. Note that the mean subtraction is always carried
  // out before scaling.
  optional float scale = 17 [default = 1];
  optional string meanfile = 18;
  // For data layers, specify the batch size.
  optional uint32 batchsize = 19;
  // For data layers, specify if we would like to randomly crop an image.
  optional uint32 cropsize = 20 [default = 0];
  // For data layers, specify if we want to randomly mirror data.
  optional bool mirror = 21 [default = false];

  // The blobs containing the numeric parameters of the layer
  repeated BlobProto blobs = 50;
  // The ratio that is multiplied on the global learning rate. If you want to
  // set the learning ratio for one blob, you need to set it for all blobs.
  repeated float blobs_lr = 51;
  // The weight decay that is multiplied on the global weight decay.
  repeated float weight_decay = 52;

  // The rand_skip variable is for the data layer to skip a few data points
  // to avoid all asynchronous sgd clients to start at the same point. The skip
  // point would be set as rand_skip * rand(0,1). Note that rand_skip should not
  // be larger than the number of keys in the database.
  optional uint32 rand_skip = 53 [default = 0];

  // Fields related to detection (det_*)
  // foreground (object) overlap threshold
  optional float det_fg_threshold = 54 [default = 0.5];
  // background (non-object) overlap threshold
  optional float det_bg_threshold = 55 [default = 0.5];
  // Fraction of batch that should be foreground objects
  optional float det_fg_fraction = 56 [default = 0.25];

  // optional bool OBSOLETE_can_clobber = 57 [default = true];

  // Amount of contextual padding to add around a window
  // (used only by the window_data_layer)
  optional uint32 det_context_pad = 58 [default = 0];

  // Mode for cropping out a detection window
  //   warp: cropped window is warped to a fixed size and aspect ratio
  //   square: the tightest square around the window is cropped
  optional string det_crop_mode = 59 [default = "warp"];

  // For ReshapeLayer, one needs to specify the new dimensions.
  optional int32 new_num = 60 [default = 0];
  optional int32 new_channels = 61 [default = 0];
  optional int32 new_height = 62 [default = 0];
  optional int32 new_width = 63 [default = 0];

  // Whether or not ImageLayer should shuffle the list of files at every epoch.
  // It will also resize images if new_height or new_width are not zero.
  optional bool shuffle_images = 64 [default = false];

  // For ConcatLayer, one needs to specify the dimension for concatenation, and
  // the other dimensions must be the same for all the bottom blobs.
  // By default it will concatenate blobs along the channels dimension.
  optional uint32 concat_dim = 65 [default = 1];

  optional HDF5OutputParameter hdf5_output_param = 1001;
}

// Parameters for PReLULayer, from the paper cited below
message PReLUParameter {
  // Parametric ReLU described in K. He et al., "Delving Deep into Rectifiers:
  // Surpassing Human-Level Performance on ImageNet Classification", 2015.

  // Initial value of a_i. Default is a_i = 0.25 for all i.
  optional FillerParameter filler = 1;
  // Whether or not slope parameters are shared across channels.
  optional bool channel_shared = 2 [default = false];
}
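As a closing illustration, here is a minimal PReLULayer sketch with per-channel slopes initialized to the paper's default of 0.25; the layer and blob names are hypothetical:

layer {
  name: "prelu1"     # hypothetical name
  type: "PReLU"
  bottom: "conv1"    # hypothetical blob
  top: "conv1"       # in-place activation
  prelu_param {
    filler { type: "constant" value: 0.25 }  # initial slope a_i
    channel_shared: false                    # one learned slope per channel
  }
}

Setting channel_shared: true would instead learn a single slope for the entire blob.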