《TensorFlow实战》 (TensorFlow in Action): Implementing a Complex Convolutional Network (Part 5)
2018-03-17 23:08
I. Basic Concepts
1. Regularization
Definitions of L1 and L2 regularization: L1 regularization is the sum of the absolute values of the elements of the weight vector w, usually written ||w||_1.
L2 regularization is based on the sum of the squares of the elements of w; the L2 norm ||w||_2 is the square root of that sum, while the Ridge-regression penalty uses the squared norm ||w||_2^2 (hence the square in Ridge's regularization term).
Effects of L1 and L2 regularization:
L1 regularization produces a sparse weight matrix, i.e. a sparse model, which can be used for feature selection: the weights of most useless features are driven to 0.
L2 regularization prevents overfitting by keeping the weights small and relatively evenly spread across features; to some extent, L1 regularization also helps prevent overfitting.
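As a concrete illustration (a minimal standalone sketch, not part of the book's script), both penalties can be written directly in TensorFlow 1.x. Note that tf.nn.l2_loss(w) computes sum(w**2) / 2, i.e. half the squared L2 norm, not the norm itself:

import tensorflow as tf

w = tf.constant([[1.0, -2.0], [0.0, 3.0]])

l1_penalty = tf.reduce_sum(tf.abs(w))           # ||w||_1 = 1 + 2 + 0 + 3 = 6
l2_penalty = tf.nn.l2_loss(w)                   # sum(w**2) / 2 = 14 / 2 = 7
l2_norm = tf.sqrt(tf.reduce_sum(tf.square(w)))  # ||w||_2 = sqrt(14)

with tf.Session() as sess:
    print(sess.run([l1_penalty, l2_penalty, l2_norm]))  # [6.0, 7.0, 3.7416575]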
2. LRN (Local Response Normalization)
The LRN layer mimics the "lateral inhibition" mechanism of biological neural systems: it creates competition among the activities of neighboring neurons, making the larger responses relatively larger while suppressing neurons with weaker responses, which improves the model's generalization ability. Local response normalization comes in two modes: cross-channel normalization, where the local region spans adjacent channels and has no spatial extent; and within-channel normalization, where the local region extends spatially but each channel is handled independently.
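For reference, here is a minimal sketch (made-up shapes, not from the book) of the cross-channel tf.nn.lrn call used in the model code below, with the same AlexNet-style parameter values:

import numpy as np
import tensorflow as tf

# Fake feature map: batch 1, 4x4 spatial, 8 channels
x = tf.constant(np.random.rand(1, 4, 4, 8).astype(np.float32))

# Each activation a[i] is divided by
# (bias + alpha * sum of a[j]^2 over depth_radius channels on each side) ** beta
y = tf.nn.lrn(x, depth_radius=4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)

with tf.Session() as sess:
    print(sess.run(y).shape)  # (1, 4, 4, 8) -- LRN does not change the shape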
II. TensorFlow Implementation
1. Importing the libraries
batch_size is the size of each training batch; max_steps is the number of training steps.
# cifar10 and cifar10_input are the CIFAR-10 tutorial modules from the
# TensorFlow models repository
import cifar10, cifar10_input
import tensorflow as tf
import numpy as np
import time

max_steps = 3000
batch_size = 128
data_dir = '/tmp/cifar10_data/cifar-10-batches-bin'
2. Defining the initialization and loss functions
wl controls the magnitude of the L2 loss: tf.nn.l2_loss computes the L2 loss of the weights, and multiplying it by wl gives the weight loss. tf.nn.sparse_softmax_cross_entropy_with_logits computes the softmax and the cross-entropy loss in a single call.
total_loss is the final loss, combining the cross-entropy loss with all the weight losses.
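In formula form (a sketch of what loss() below assembles; the factor 1/2 comes from tf.nn.l2_loss):

total_loss = (1/N) * Σ_i CE(logits_i, label_i) + Σ_j wl_j * (1/2) * ||W_j||_2^2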
def variable_with_weight_loss(shape, stddev, wl):
    # Truncated-normal initialization; if wl is given, attach an L2 weight
    # loss to the 'losses' collection.
    var = tf.Variable(tf.truncated_normal(shape, stddev=stddev))
    if wl is not None:
        weight_loss = tf.multiply(tf.nn.l2_loss(var), wl, name='weight_loss')
        tf.add_to_collection('losses', weight_loss)
    return var

def loss(logits, labels):
    # Softmax + cross entropy in one call, averaged over the batch.
    labels = tf.cast(labels, tf.int64)
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits, labels=labels, name='cross_entropy_per_example')
    cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
    tf.add_to_collection('losses', cross_entropy_mean)
    # Sum the cross-entropy loss and all collected weight losses.
    return tf.add_n(tf.get_collection('losses'), name='total_loss')
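As a quick check (a hypothetical snippet, not part of the book's script), each call to variable_with_weight_loss with wl set appends one weight-loss tensor to the 'losses' collection, which loss() then sums via tf.add_n:

w = variable_with_weight_loss(shape=[3, 3], stddev=0.1, wl=0.004)
print(tf.get_collection('losses'))  # one 'weight_loss' tensor so far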
3. Defining the model structure
cifar10.maybe_download_and_extract() downloads and extracts the dataset. tf.nn.in_top_k reports whether the highest-scoring output matches the label, i.e. top-1 accuracy.
tf.train.start_queue_runners() starts the input queue threads (16 of them) to speed up data loading.
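A tiny sketch (made-up values, not from the book) of what tf.nn.in_top_k returns: one boolean per example, True when the true label is among the top k logits:

import tensorflow as tf

logits = tf.constant([[0.1, 0.8, 0.1],   # predicts class 1
                      [0.7, 0.2, 0.1]])  # predicts class 0
labels = tf.constant([1, 2])             # second example is misclassified

hits = tf.nn.in_top_k(logits, labels, 1)
with tf.Session() as sess:
    print(sess.run(hits))  # [ True False]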
cifar10.maybe_download_and_extract()

images_train, labels_train = cifar10_input.distorted_inputs(
    data_dir=data_dir, batch_size=batch_size)
images_test, labels_test = cifar10_input.inputs(
    eval_data=True, data_dir=data_dir, batch_size=batch_size)
#images_train, labels_train = cifar10.distorted_inputs()
#images_test, labels_test = cifar10.inputs(eval_data=True)

image_holder = tf.placeholder(tf.float32, [batch_size, 24, 24, 3])
label_holder = tf.placeholder(tf.int32, [batch_size])
#logits = inference(image_holder)

# conv1: 5x5 convolution, 3 -> 64 channels, then 3x3/2 max pooling and LRN
weight1 = variable_with_weight_loss(shape=[5, 5, 3, 64], stddev=5e-2, wl=0.0)
kernel1 = tf.nn.conv2d(image_holder, weight1, [1, 1, 1, 1], padding='SAME')
bias1 = tf.Variable(tf.constant(0.0, shape=[64]))
conv1 = tf.nn.relu(tf.nn.bias_add(kernel1, bias1))
pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                       padding='SAME')
norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)

# conv2: 5x5 convolution, 64 -> 64 channels; LRN before pooling this time
weight2 = variable_with_weight_loss(shape=[5, 5, 64, 64], stddev=5e-2, wl=0.0)
kernel2 = tf.nn.conv2d(norm1, weight2, [1, 1, 1, 1], padding='SAME')
bias2 = tf.Variable(tf.constant(0.1, shape=[64]))
conv2 = tf.nn.relu(tf.nn.bias_add(kernel2, bias2))
norm2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
pool2 = tf.nn.max_pool(norm2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                       padding='SAME')

# Flatten: two stride-2 pools reduce 24x24 to 6x6, so dim = 6*6*64 = 2304
reshape = tf.reshape(pool2, [batch_size, -1])
dim = reshape.get_shape()[1].value

# Two fully connected layers, each with an L2 weight loss (wl=0.004)
weight3 = variable_with_weight_loss(shape=[dim, 384], stddev=0.04, wl=0.004)
bias3 = tf.Variable(tf.constant(0.1, shape=[384]))
local3 = tf.nn.relu(tf.matmul(reshape, weight3) + bias3)

weight4 = variable_with_weight_loss(shape=[384, 192], stddev=0.04, wl=0.004)
bias4 = tf.Variable(tf.constant(0.1, shape=[192]))
local4 = tf.nn.relu(tf.matmul(local3, weight4) + bias4)

# Output layer: raw 10-class logits (softmax is applied inside the loss)
weight5 = variable_with_weight_loss(shape=[192, 10], stddev=1 / 192.0, wl=0.0)
bias5 = tf.Variable(tf.constant(0.0, shape=[10]))
logits = tf.add(tf.matmul(local4, weight5), bias5)

loss = loss(logits, label_holder)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)  # ~0.72 final precision
top_k_op = tf.nn.in_top_k(logits, label_holder, 1)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
tf.train.start_queue_runners()
4. Training and testing the model
The first sess.run produces one batch of training data; the second sess.run computes the loss and performs one optimization step.
feed_dict feeds the batch into the placeholders.
for step in range(max_steps):
    start_time = time.time()
    image_batch, label_batch = sess.run([images_train, labels_train])
    _, loss_value = sess.run([train_op, loss],
                             feed_dict={image_holder: image_batch,
                                        label_holder: label_batch})
    duration = time.time() - start_time
    if step % 10 == 0:
        examples_per_sec = batch_size / duration
        sec_per_batch = float(duration)
        format_str = ('step %d, loss = %.2f (%.1f examples/sec; %.3f sec/batch)')
        print(format_str % (step, loss_value, examples_per_sec, sec_per_batch))

# Evaluate top-1 precision on the test set
import math
num_examples = 10000
num_iter = int(math.ceil(num_examples / batch_size))
true_count = 0
total_sample_count = num_iter * batch_size
step = 0
while step < num_iter:
    image_batch, label_batch = sess.run([images_test, labels_test])
    predictions = sess.run([top_k_op],
                           feed_dict={image_holder: image_batch,
                                      label_holder: label_batch})
    true_count += np.sum(predictions)
    step += 1

# Under Python 2, cast to float to avoid integer division.
precision = true_count / total_sample_count
print('precision @ 1 = %.3f' % precision)