Caffe Learning: Defining Networks with pycaffe
2015-09-04 21:02
Defining a network with pycaffe.
Reference: Learning LeNet
Import the libraries:
import caffe
from caffe import layers as L
from caffe import params as P
Defining a Net with pycaffe:
n = caffe.NetSpec()
Defining a Data layer:
n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb,
                         transform_param=dict(scale=1. / 255), ntop=2)
# Generated prototxt:
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  transform_param {
    scale: 0.00392156862745
  }
  data_param {
    source: "mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
Defining a Convolution layer:
n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))
# Generated prototxt:
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 20
    kernel_size: 5
    weight_filler {
      type: "xavier"
    }
  }
}
Defining a Pooling layer:
n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
# Generated prototxt:
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
Defining an InnerProduct layer:
n.ip1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))
# Generated prototxt:
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
  }
}
Defining a ReLU layer (with in_place=True the ReLU writes its output back into its bottom blob, so bottom and top are both "ip1"):
n.relu1 = L.ReLU(n.ip1, in_place=True)
# Generated prototxt:
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
Defining a SoftmaxWithLoss layer:
n.loss = L.SoftmaxWithLoss(n.ip2, n.label)
# Generated prototxt:
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
Defining the MNIST (LeNet) network:
import caffe
from caffe import layers as L
from caffe import params as P

def lenet(lmdb, batch_size):
    # LeNet-style network: two conv/pool stages followed by two fully connected layers
    n = caffe.NetSpec()
    # Data layer reading (image, label) pairs from an LMDB, scaling pixel values to [0, 1]
    n.data, n.label = L.Data(batch_size=batch_size,
                             backend=P.Data.LMDB, source=lmdb,
                             transform_param=dict(scale=1. / 255), ntop=2)
    n.conv1 = L.Convolution(n.data, kernel_size=5,
                            num_output=20, weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2,
                        stride=2, pool=P.Pooling.MAX)
    n.conv2 = L.Convolution(n.pool1, kernel_size=5,
                            num_output=50, weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2,
                        stride=2, pool=P.Pooling.MAX)
    n.ip1 = L.InnerProduct(n.pool2, num_output=500,
                           weight_filler=dict(type='xavier'))
    n.relu1 = L.ReLU(n.ip1, in_place=True)
    # 10 outputs, one per MNIST digit class
    n.ip2 = L.InnerProduct(n.relu1, num_output=10,
                           weight_filler=dict(type='xavier'))
    n.loss = L.SoftmaxWithLoss(n.ip2, n.label)
    return n.to_proto()
Saving the network definitions:
with open('mnist/lenet_auto_train.prototxt', 'w') as f:
    f.write(str(lenet('mnist/mnist_train_lmdb', 64)))
with open('mnist/lenet_auto_test.prototxt', 'w') as f:
    f.write(str(lenet('mnist/mnist_test_lmdb', 100)))
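As a quick sanity check (a minimal sketch, not part of the original tutorial), the generated files can be parsed back with Caffe's protobuf definitions; the paths below assume the files written above:
from caffe.proto import caffe_pb2
from google.protobuf import text_format

net_param = caffe_pb2.NetParameter()
with open('mnist/lenet_auto_train.prototxt') as f:
    text_format.Merge(f.read(), net_param)   # raises an error if the prototxt is malformed
print([layer.name for layer in net_param.layer])
# Expected: ['data', 'conv1', 'pool1', 'conv2', 'pool2', 'ip1', 'relu1', 'ip2', 'loss']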
The resulting lenet_auto_train.prototxt is shown below (lenet_auto_test.prototxt is similar):
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  transform_param {
    scale: 0.00392156862745
  }
  data_param {
    source: "mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 20
    kernel_size: 5
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  convolution_param {
    num_output: 50
    kernel_size: 5
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
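To confirm that the generated definition actually instantiates, the net can be loaded directly (a minimal sketch, assuming Caffe was built with pycaffe and that mnist/mnist_train_lmdb exists, since the Data layer opens it at construction time):
net = caffe.Net('mnist/lenet_auto_train.prototxt', caffe.TRAIN)
for name, blob in net.blobs.items():
    print(name, blob.data.shape)   # e.g. data (64, 1, 28, 28), conv1 (64, 20, 24, 24), ...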