
[caffe]extract feature

2016-06-08 18:27
When I first started looking at Caffe, I kept wondering what the difference is between train_val.prototxt and deploy.prototxt. Today I needed to extract features with someone else's model; the site (https://staff.fnwi.uva.nl/p.s.m.mettes/index.html#data) provides only the caffemodel and a deploy.prototxt, and I spent a whole afternoon figuring out how to actually do the extraction…

deploy.prototxt

name: "CaffeNet"
input: "data"
input_shape {
  dim: 10
  dim: 3
  dim: 227
  dim: 227
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}


See, deploy.prototxt has no data layer! So how do you feed images in???

From what I found online, train_val.prototxt is used for training, while deploy.prototxt is used for feature extraction / classification.
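The difference is visible right at the top of the files: deploy.prototxt only declares the input shape (as shown above), while train_val.prototxt declares an explicit data layer that reads from a database and applies preprocessing. For comparison, the data layer in the reference CaffeNet train_val.prototxt looks roughly like this (paths are the stock ILSVRC12 example paths; yours will differ):

```
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param {
    mirror: true
    crop_size: 227
    mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
  }
  data_param {
    source: "examples/imagenet/ilsvrc12_train_lmdb"
    batch_size: 256
    backend: LMDB
  }
}
```

So with deploy.prototxt, the preprocessing and the image loading have to happen on our side, outside the net.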

1. C++

Caffe's official feature-extraction example uses train_val.prototxt:


http://caffe.berkeleyvision.org/gathered/examples/feature_extraction.html

Following that example, I added a data layer to deploy.prototxt, but it failed with errors…

It seems most people use Python + deploy.prototxt to extract features instead.

2. Python

Tutorials:

http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/01-learning-lenet.ipynb

http://stackoverflow.com/questions/32379878/cheat-sheet-for-caffe-pycaffe

http://christopher5106.github.io/deep/learning/2015/09/04/Deep-learning-tutorial-on-Caffe-Technology.html

Code:

http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb

This is for a single image:

# Load an image (that comes with Caffe) and perform the preprocessing we've set up.
image = caffe.io.load_image(caffe_root + 'examples/images/cat.jpg')
transformed_image = transformer.preprocess('data', image)
# Copy the image data into the memory allocated for the net.
net.blobs['data'].data[...] = transformed_image
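The `transformer` here is a `caffe.io.Transformer` configured earlier in the notebook; its preprocessing boils down to a handful of NumPy operations. A minimal sketch of the same steps (scale to 0-255, RGB→BGR, HWC→CHW, mean subtraction), assuming the image comes from `caffe.io.load_image` as a float RGB array in [0, 1] — `preprocess` is just an illustrative helper name:

```python
import numpy as np

def preprocess(image, mean_bgr):
    """Replicate typical CaffeNet input preprocessing.

    image:    H x W x 3 float array, RGB, values in [0, 1]
              (the format caffe.io.load_image returns)
    mean_bgr: length-3 array of per-channel means in BGR order
    """
    img = image * 255.0               # scale to the 0-255 range CaffeNet expects
    img = img[:, :, ::-1]             # RGB -> BGR (Caffe models are trained on BGR)
    img = img.transpose(2, 0, 1)      # H x W x C -> C x H x W
    img = img - mean_bgr.reshape(3, 1, 1)  # subtract per-channel mean
    return img
```

The resulting C x H x W array is what gets copied into `net.blobs['data'].data`.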


I modified it slightly so it reads a txt file and extracts features for multiple images:

https://github.com/apsvvfb/caffe_extract_features/tree/master

image = caffe.io.load_image(caffe_root + 'examples/images/cat.jpg')
transformed_image = transformer.preprocess('data', image)
net.blobs['data'].data[0, ...] = transformed_image
image = caffe.io.load_image(caffe_root + 'examples/images/fish-bike.jpg')
transformed_image = transformer.preprocess('data', image)
net.blobs['data'].data[1, ...] = transformed_image


Finally, when reading out the data:

# The first-layer output for the first image: conv1 (rectified responses of the filters, first 36 only)
feat = net.blobs['conv1'].data[0, :36]
# The first-layer output for the second image
feat = net.blobs['conv1'].data[1, :36]
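Note that deploy.prototxt above fixes the batch dimension at 10, so filling slots 0, 1, … as in my script only works up to the batch size; a longer image list has to be processed in chunks. The chunking itself is plain list slicing (a sketch; `iterate_batches` is just a helper name I made up):

```python
def iterate_batches(items, batch_size):
    """Yield successive slices of at most batch_size items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]
```

Each yielded chunk would then be preprocessed into `net.blobs['data'].data[0:len(chunk)]` before calling `net.forward()`.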


2. This code uses caffe.Classifier(); I've seen it said that people now use the forward() function instead.
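The forward()-based approach means loading the net directly with caffe.Net instead of the caffe.Classifier wrapper. A rough sketch of how that looks (untested here, since it needs a Caffe installation; the file names are placeholders):

```python
import caffe

caffe.set_mode_cpu()
net = caffe.Net('deploy.prototxt',    # model definition
                'model.caffemodel',   # trained weights
                caffe.TEST)           # inference mode

# ... fill net.blobs['data'].data[...] with preprocessed images as above ...

out = net.forward()                   # run the whole net
feat = net.blobs['conv1'].data       # then read any intermediate blob
```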

explanation: http://www.marekrei.com/blog/transforming-images-to-feature-vectors/

github: https://gist.github.com/marekrei/7adc87d2c4fde941cea6

http://blog.csdn.net/guoyilin/article/details/42886365