DeepLearning (Caffe-based) Hands-on Projects (8): Modifying the Caffe Source Code, Starting with Adding a Loss (Layer) Function

2017-05-27 19:03

Tags: Caffe hands-on projects, LossLayer, modifying source code

Category: Caffe Hands-on Projects (Basic) (9)
Copyright notice: This is the blogger's original article and may not be reposted without permission.

Contents:

Step 1: Add the corresponding LayerParameter message in caffe.proto
Step 2: Add the layer's declaration under ./include/caffe/layers/
Step 3: Add an "absolute-value sum" template function in ./src/caffe/util/math_functions.cpp
Step 4: Add the layer's CPU/GPU implementation files under ./src/caffe/layers/
Step 5: Open caffe with VS and compile; once it builds, congratulations, your skill has gone up a rank, haha

After a month or so of crawling around in Caffe, I have been itching to try modifying its source code. A loss layer is a fairly self-contained layer, and a new one can be written by imitating the examples Caffe already provides, so the difficulty is a little lower. Caffe comes with ten loss layers (contrastive, euclidean, hinge, multinomial_logistic, sigmoid_cross_entropy, smooth_L1, smooth_L1_ohem, softmax, softmax_ohem, infogain).

For details, see: http://blog.csdn.net/sihailongwang/article/details/72657637

For the meaning of the formulas, I recommend: http://blog.csdn.net/u012177034/article/details/52144325

Next, it is time to add a new loss (layer) function of my own. I plan to add: Absolute loss.
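To make the target concrete: writing N for the batch size (bottom[0]->num()) and dis for the scale parameter defined in Step 1 below, the loss implemented here and its gradient are (my own summary, not part of the original post):

\[
L = \frac{\mathit{dis}}{N} \sum_i \lvert x_i - y_i \rvert,
\qquad
\frac{\partial L}{\partial x_i} = \frac{\mathit{dis}}{N}\,\operatorname{sign}(x_i - y_i)
\]

The sign function is what separates this from the Euclidean loss, whose gradient is the difference itself; the Backward implementations below follow this formula.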

Step 1: Add the corresponding LayerParameter message in caffe.proto

Inside message LayerParameter, add a field for the new layer's parameters (the field ID just needs to be one that is not already in use; I chose 151):

optional AbsoluteLossParameter absolute_loss_param = 151;


Then define the parameter message itself in caffe.proto:

message AbsoluteLossParameter {
  optional float dis = 1 [default = 1];
}
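Once the proto changes are in, the parameter can be set from a network definition. Here is a minimal sketch of how the finished layer would be declared in a prototxt; the bottom names and the dis value are made up for illustration:

layer {
  name: "loss"
  type: "AbsoluteLoss"
  bottom: "fc_out"
  bottom: "label"
  top: "loss"
  absolute_loss_param {
    dis: 1
  }
}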


Step 2: Add the layer's declaration (absolute_loss_layer.hpp) under ./include/caffe/layers/


#ifndef CAFFE_ABSOLUTE_LOSS_LAYER_HPP_
#define CAFFE_ABSOLUTE_LOSS_LAYER_HPP_

#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/proto/caffe.pb.h"

#include "caffe/layers/loss_layer.hpp"

namespace caffe {

template <typename Dtype>
class AbsoluteLossLayer : public LossLayer<Dtype> {
 public:
  explicit AbsoluteLossLayer(const LayerParameter& param)
      : LossLayer<Dtype>(param), dis_() {}
  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);

  virtual inline const char* type() const { return "AbsoluteLoss"; }

  // This layer can compute a gradient with respect to either input,
  // so allow force_backward on both bottoms.
  virtual inline bool AllowForceBackward(const int bottom_index) const {
    return true;
  }

 protected:
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);

  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);
  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);

  Blob<Dtype> dis_;  // element-wise difference bottom[0] - bottom[1]
};

}  // namespace caffe

#endif  // CAFFE_ABSOLUTE_LOSS_LAYER_HPP_


Step 3: Add an "absolute-value sum" template function in ./src/caffe/util/math_functions.cpp

Note: AbsoluteLoss needs to sum absolute values, so an "absolute value sum" template function has to be added to math_functions.cpp (and while doing this, I was delighted to discover BLAS and CBLAS).

/********************************************************************************************************************
TIPS: a quick primer on BLAS/CBLAS
Basic Linear Algebra Subprograms is a library of basic linear-algebra routines: a large collection of ready-made, high-performance programs for vector and matrix computation. It is written in Fortran; CBLAS is the C interface to BLAS, provided so that C/C++ programs can use it conveniently. Full list: http://www.netlib.org/blas/
********************************************************************************************************************/


//--------------------------add------------------------------------------
// caffe_cpu_asum already exists in stock math_functions.cpp and wraps the
// CBLAS "sum of absolute values" routines; it is shown here for context:
template <>
float caffe_cpu_asum<float>(const int n, const float* x) {
  return cblas_sasum(n, x, 1);   // sum of absolute values
}

template <>
double caffe_cpu_asum<double>(const int n, const double* x) {
  return cblas_dasum(n, x, 1);   // sum of absolute values
}

// What actually gets added is this template, which simply forwards to
// caffe_cpu_asum, plus its explicit instantiations for float and double:
template <typename Dtype>
Dtype caffe_cpu_abs_sum(const int n, const Dtype* x) {
  return caffe_cpu_asum(n, x);
}

template float caffe_cpu_abs_sum<float>(const int n, const float* x);
template double caffe_cpu_abs_sum<double>(const int n, const double* x);
//-------------------------add-------------------------------------------
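One thing to watch: the layer's .cpp can only call caffe_cpu_abs_sum if it is also declared in the math-functions header, which this post does not show. A minimal sketch of the declaration, assuming it is placed next to the existing caffe_cpu_asum declaration in ./include/caffe/util/math_functions.hpp:

// ./include/caffe/util/math_functions.hpp
// Returns the sum of absolute values of the n elements of x.
template <typename Dtype>
Dtype caffe_cpu_abs_sum(const int n, const Dtype* x);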


Step 4: Add the layer's CPU/GPU implementation files under ./src/caffe/layers/

CPU version (absolute_loss_layer.cpp):


#include <vector>

#include "caffe/layers/absolute_loss_layer.hpp"
#include "caffe/util/math_functions.hpp"

namespace caffe {

template <typename Dtype>
void AbsoluteLossLayer<Dtype>::Reshape(
    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  LossLayer<Dtype>::Reshape(bottom, top);  // defined in LossLayer
  CHECK_EQ(bottom[0]->count(1), bottom[1]->count(1))  // inputs must match
      << "Inputs must have the same dimension.";
  // The Blob dis_ holds the element-wise difference of the two bottoms,
  // so it gets the same shape as the bottoms.
  dis_.ReshapeLike(*bottom[0]);
}

template <typename Dtype>
void AbsoluteLossLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  int count = bottom[0]->count();  // total number of elements in the blob
  caffe_sub(
      count,
      bottom[0]->cpu_data(),
      bottom[1]->cpu_data(),
      dis_.mutable_cpu_data());    // dis_ = bottom[0] - bottom[1]
  Dtype loss_param = this->layer_param_.absolute_loss_param().dis();
  Dtype abs_sum = caffe_cpu_abs_sum(count, dis_.cpu_data());
  Dtype loss = loss_param * abs_sum / bottom[0]->num();
  top[0]->mutable_cpu_data()[0] = loss;
}

template <typename Dtype>
void AbsoluteLossLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  for (int i = 0; i < 2; ++i) {
    if (propagate_down[i]) {
      // propagate_down is 0 for a label bottom, so it is skipped.
      // d|x - y|/dx = sign(x - y), so the gradient is the sign of dis_,
      // scaled by the dis parameter, the top diff, and 1/num.
      const Dtype sign = (i == 0) ? 1 : -1;
      const Dtype alpha = sign * this->layer_param_.absolute_loss_param().dis()
          * top[0]->cpu_diff()[0] / bottom[i]->num();
      caffe_cpu_sign(bottom[i]->count(), dis_.cpu_data(),
          bottom[i]->mutable_cpu_diff());          // diff = sign(dis_)
      caffe_scal(bottom[i]->count(), alpha,
          bottom[i]->mutable_cpu_diff());          // diff *= alpha
    }
  }
}

#ifdef CPU_ONLY
STUB_GPU(AbsoluteLossLayer);
#endif

INSTANTIATE_CLASS(AbsoluteLossLayer);
REGISTER_LAYER_CLASS(AbsoluteLoss);

}  // namespace caffe

GPU version (absolute_loss_layer.cu):


#include <vector>

#include "caffe/layers/absolute_loss_layer.hpp"
#include "caffe/util/math_functions.hpp"

namespace caffe {

template <typename Dtype>
void AbsoluteLossLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  int count = bottom[0]->count();  // total number of elements in the blob
  caffe_gpu_sub(
      count,
      bottom[0]->gpu_data(),
      bottom[1]->gpu_data(),
      dis_.mutable_gpu_data());    // dis_ = bottom[0] - bottom[1]
  Dtype loss_param = this->layer_param_.absolute_loss_param().dis();
  Dtype abs_sum;
  caffe_gpu_asum(count, dis_.gpu_data(), &abs_sum);
  Dtype loss = loss_param * abs_sum / bottom[0]->num();
  top[0]->mutable_cpu_data()[0] = loss;
}


[plain]
view plain
copy

print?

template <typename Dtype>
void AbsoluteLossLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  for (int i = 0; i < 2; ++i) {
    if (propagate_down[i]) {
      // Same as the CPU version: the gradient is sign(dis_) scaled by alpha.
      const Dtype sign = (i == 0) ? 1 : -1;
      const Dtype alpha = sign * this->layer_param_.absolute_loss_param().dis()
          * top[0]->cpu_diff()[0] / bottom[i]->num();
      caffe_gpu_sign(bottom[i]->count(), dis_.gpu_data(),
          bottom[i]->mutable_gpu_diff());          // diff = sign(dis_)
      caffe_gpu_scal(bottom[i]->count(), alpha,
          bottom[i]->mutable_gpu_diff());          // diff *= alpha
    }
  }
}

INSTANTIATE_LAYER_GPU_FUNCS(AbsoluteLossLayer);

}  // namespace caffe

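Before building, it is worth checking the gradients numerically. The original post includes no test, so the following is only a sketch modeled on Caffe's existing loss-layer tests (the file name test_absolute_loss_layer.cpp, the blob shapes, and the checker tolerances are my own choices). Note that |x| is not differentiable at 0, so the numeric check can be noisy if a difference lands exactly there:

#include <vector>

#include "gtest/gtest.h"

#include "caffe/blob.hpp"
#include "caffe/filler.hpp"
#include "caffe/layers/absolute_loss_layer.hpp"

#include "caffe/test/test_caffe_main.hpp"
#include "caffe/test/test_gradient_check_util.hpp"

namespace caffe {

template <typename TypeParam>
class AbsoluteLossLayerTest : public MultiDeviceTest<TypeParam> {
  typedef typename TypeParam::Dtype Dtype;

 protected:
  AbsoluteLossLayerTest()
      : blob_bottom_data_(new Blob<Dtype>(10, 5, 1, 1)),
        blob_bottom_label_(new Blob<Dtype>(10, 5, 1, 1)),
        blob_top_loss_(new Blob<Dtype>()) {
    // Fill both bottoms with random Gaussian values.
    FillerParameter filler_param;
    GaussianFiller<Dtype> filler(filler_param);
    filler.Fill(this->blob_bottom_data_);
    blob_bottom_vec_.push_back(blob_bottom_data_);
    filler.Fill(this->blob_bottom_label_);
    blob_bottom_vec_.push_back(blob_bottom_label_);
    blob_top_vec_.push_back(blob_top_loss_);
  }
  virtual ~AbsoluteLossLayerTest() {
    delete blob_bottom_data_;
    delete blob_bottom_label_;
    delete blob_top_loss_;
  }

  Blob<Dtype>* const blob_bottom_data_;
  Blob<Dtype>* const blob_bottom_label_;
  Blob<Dtype>* const blob_top_loss_;
  vector<Blob<Dtype>*> blob_bottom_vec_;
  vector<Blob<Dtype>*> blob_top_vec_;
};

TYPED_TEST_CASE(AbsoluteLossLayerTest, TestDtypesAndDevices);

TYPED_TEST(AbsoluteLossLayerTest, TestGradient) {
  typedef typename TypeParam::Dtype Dtype;
  LayerParameter layer_param;
  AbsoluteLossLayer<Dtype> layer(layer_param);
  layer.SetUp(this->blob_bottom_vec_, this->blob_top_vec_);
  // Numerically compare dL/d(bottom) against Backward_cpu/Backward_gpu.
  GradientChecker<Dtype> checker(1e-2, 1e-2, 1701);
  checker.CheckGradientExhaustive(&layer, this->blob_bottom_vec_,
      this->blob_top_vec_, -1);
}

}  // namespace caffe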


Step 5: Open caffe with VS and compile; once it builds, congratulations, your skill has gone up a rank, haha

1. Modify two files under ../windows/libcaffe: libcaffe.vcxproj and libcaffe.vcxproj.filters

Add to libcaffe.vcxproj:


<ClCompile Include="..\..\src\caffe\layers\absolute_loss_layer.cpp" />
<ClInclude Include="..\..\include\caffe\layers\absolute_loss_layer.hpp" />
<CudaCompile Include="..\..\src\caffe\layers\absolute_loss_layer.cu" />

Add to libcaffe.vcxproj.filters:

<ClInclude Include="..\..\include\caffe\layers\absolute_loss_layer.hpp">
  <Filter>include\layers</Filter>
</ClInclude>
<CudaCompile Include="..\..\src\caffe\layers\absolute_loss_layer.cu">
  <Filter>cu\layers</Filter>
</CudaCompile>
<ClCompile Include="..\..\src\caffe\layers\absolute_loss_layer.cpp">
  <Filter>src\layers</Filter>
</ClCompile>
2. Open caffe.sln and rebuild the solution. Once it compiles, congratulations, your skill has risen another level!


