CIFAR-10 Code Walkthrough: cifar10.py
First, import the libraries and define the various flags and constants:
```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import re
import sys
import tarfile

from six.moves import urllib
import tensorflow as tf

import cifar10_input

FLAGS = tf.app.flags.FLAGS

# Basic model parameters.
tf.app.flags.DEFINE_integer('batch_size', 128,
                            """Number of images to process in a batch.""")
tf.app.flags.DEFINE_string('data_dir', '/tmp/cifar10_data',
                           """Path to the CIFAR-10 data directory.""")
tf.app.flags.DEFINE_boolean('use_fp16', False,
                            """Train the model using fp16.""")

# Global constants describing the CIFAR-10 data set.
IMAGE_SIZE = cifar10_input.IMAGE_SIZE
NUM_CLASSES = cifar10_input.NUM_CLASSES
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = cifar10_input.NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN
NUM_EXAMPLES_PER_EPOCH_FOR_EVAL = cifar10_input.NUM_EXAMPLES_PER_EPOCH_FOR_EVAL

# Constants describing the training process.
MOVING_AVERAGE_DECAY = 0.9999     # The decay to use for the moving average.
NUM_EPOCHS_PER_DECAY = 350.0      # Epochs after which learning rate decays.
LEARNING_RATE_DECAY_FACTOR = 0.1  # Learning rate decay factor.
INITIAL_LEARNING_RATE = 0.1       # Initial learning rate.

# If a model is trained with multiple GPUs, prefix all Op names with tower_name
# to differentiate the operations. Note that this prefix is removed from the
# names of the summaries when visualizing a model.
TOWER_NAME = 'tower'

DATA_URL = 'http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz'
```
Next, a summary helper that records a histogram and a scalar. Note that the scalar records the fraction of zeros in x, which measures the sparsity of x.
```python
def _activation_summary(x):
  """Helper to create summaries for activations.

  Creates a summary that provides a histogram of activations.
  Creates a summary that measures the sparsity of activations.

  Args:
    x: Tensor
  Returns:
    nothing
  """
  # Remove 'tower_[0-9]/' from the name in case this is a multi-GPU training
  # session. This helps the clarity of presentation on tensorboard.
  tensor_name = re.sub('%s_[0-9]*/' % TOWER_NAME, '', x.op.name)
  tf.summary.histogram(tensor_name + '/activations', x)
  tf.summary.scalar(tensor_name + '/sparsity', tf.nn.zero_fraction(x))
```
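As a quick illustration of the sparsity scalar (a toy sketch, not part of cifar10.py), tf.nn.zero_fraction returns the fraction of entries that are exactly zero; for ReLU activations that is the fraction of "dead" outputs:

```python
import tensorflow as tf

# Half of the entries are zero, so zero_fraction evaluates to 0.5.
x = tf.constant([0.0, 2.0, 0.0, 4.0])
frac = tf.nn.zero_fraction(x)

with tf.Session() as sess:
  print(sess.run(frac))  # 0.5
```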
Next, a helper that creates variables stored on cpu:0. The initializer argument specifies how the variable is initialized, e.g. initializer=tf.constant_initializer(0.0) or tf.truncated_normal_initializer(stddev, dtype).
```python
def _variable_on_cpu(name, shape, initializer):
  """Helper to create a Variable stored on CPU memory.

  Args:
    name: name of the variable
    shape: list of ints
    initializer: initializer for Variable

  Returns:
    Variable Tensor
  """
  with tf.device('/cpu:0'):
    dtype = tf.float16 if FLAGS.use_fp16 else tf.float32
    var = tf.get_variable(name, shape, initializer=initializer, dtype=dtype)
  return var
```
Another variable helper. Unlike the one above, it only creates variables drawn from a truncated normal distribution with the specified standard deviation. In addition, if you want weight decay on the variable, pass the wd argument; the resulting weight decay term is added to the 'losses' collection via tf.add_to_collection.
```python
def _variable_with_weight_decay(name, shape, stddev, wd):
  """Helper to create an initialized Variable with weight decay.

  Note that the Variable is initialized with a truncated normal distribution.
  A weight decay is added only if one is specified.

  Args:
    name: name of the variable
    shape: list of ints
    stddev: standard deviation of a truncated Gaussian
    wd: add L2Loss weight decay multiplied by this float. If None, weight
        decay is not added for this Variable.

  Returns:
    Variable Tensor
  """
  dtype = tf.float16 if FLAGS.use_fp16 else tf.float32
  var = _variable_on_cpu(
      name,
      shape,
      tf.truncated_normal_initializer(stddev=stddev, dtype=dtype))
  # Compute the weight decay term and add it to the 'losses' collection.
  if wd is not None:
    weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
    tf.add_to_collection('losses', weight_decay)
  return var
```
distorted_inputs calls cifar10_input.distorted_inputs to generate distorted images, augmenting the training set.
```python
def distorted_inputs():
  """Construct distorted input for CIFAR training using the Reader ops.

  Returns:
    images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
    labels: Labels. 1D tensor of [batch_size] size.

  Raises:
    ValueError: If no data_dir
  """
  if not FLAGS.data_dir:
    raise ValueError('Please supply a data_dir')
  data_dir = os.path.join(FLAGS.data_dir, 'cifar-10-batches-bin')
  images, labels = cifar10_input.distorted_inputs(data_dir=data_dir,
                                                  batch_size=FLAGS.batch_size)
  if FLAGS.use_fp16:
    images = tf.cast(images, tf.float16)
    labels = tf.cast(labels, tf.float16)
  return images, labels
```
If distorted images are not wanted, call cifar10_input.inputs instead; this is used for evaluation rather than training.
```python
def inputs(eval_data):
  """Construct input for CIFAR evaluation using the Reader ops.

  Args:
    eval_data: bool, indicating if one should use the train or eval data set.

  Returns:
    images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
    labels: Labels. 1D tensor of [batch_size] size.

  Raises:
    ValueError: If no data_dir
  """
  if not FLAGS.data_dir:
    raise ValueError('Please supply a data_dir')
  data_dir = os.path.join(FLAGS.data_dir, 'cifar-10-batches-bin')
  images, labels = cifar10_input.inputs(eval_data=eval_data,
                                        data_dir=data_dir,
                                        batch_size=FLAGS.batch_size)
  if FLAGS.use_fp16:
    images = tf.cast(images, tf.float16)
    labels = tf.cast(labels, tf.float16)
  return images, labels
```
Now build the network model. The first layer is a convolution built with tf.nn.conv2d:
```python
def inference(images):
  """Build the CIFAR-10 model.

  Args:
    images: Images returned from distorted_inputs() or inputs().

  Returns:
    Logits.
  """
  # We instantiate all variables using tf.get_variable() instead of
  # tf.Variable() in order to share variables across multiple GPU training
  # runs. If we only ran this model on a single GPU, we could simplify this
  # function by replacing all instances of tf.get_variable() with tf.Variable().
  #
  # conv1
  with tf.variable_scope('conv1') as scope:
    kernel = _variable_with_weight_decay('weights',
                                         shape=[5, 5, 3, 64],
                                         stddev=5e-2,
                                         wd=0.0)
    conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
    biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
    pre_activation = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(pre_activation, name=scope.name)
    _activation_summary(conv1)
```
Pooling and normalization:
```python
  # pool1
  pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                         padding='SAME', name='pool1')
  # norm1
  norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                    name='norm1')
```
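For intuition, tf.nn.lrn (local response normalization) divides each activation by a power of a sum of squares over neighboring channels. A rough NumPy re-implementation sketch based on the formula in the TensorFlow docs (illustrative only; tf.nn.lrn is the authoritative version):

```python
import numpy as np

def lrn(x, depth_radius=4, bias=1.0, alpha=0.001 / 9.0, beta=0.75):
  # x: [batch, height, width, channels]; parameters mirror the tf.nn.lrn
  # call above. Each channel is normalized by its 2*depth_radius+1 neighbors.
  out = np.empty_like(x)
  channels = x.shape[-1]
  for c in range(channels):
    lo, hi = max(0, c - depth_radius), min(channels, c + depth_radius + 1)
    sqr_sum = (x[..., lo:hi] ** 2).sum(axis=-1)
    out[..., c] = x[..., c] / (bias + alpha * sqr_sum) ** beta
  return out
```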
The layers below are stacked in the same way. One difference: the fully connected layers local3 and local4 apply weight decay (wd=0.004) to their weights, while the convolution kernels use wd=0.0. (With the default 24x24 crops from cifar10_input, pool2 has shape 6x6x64, so the flattened dimension feeding local3 is 2304.)
```python
  # conv2
  with tf.variable_scope('conv2') as scope:
    kernel = _variable_with_weight_decay('weights',
                                         shape=[5, 5, 64, 64],
                                         stddev=5e-2,
                                         wd=0.0)
    conv = tf.nn.conv2d(norm1, kernel, [1, 1, 1, 1], padding='SAME')
    biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.1))
    pre_activation = tf.nn.bias_add(conv, biases)
    conv2 = tf.nn.relu(pre_activation, name=scope.name)
    _activation_summary(conv2)

  # norm2
  norm2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                    name='norm2')
  # pool2
  pool2 = tf.nn.max_pool(norm2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                         padding='SAME', name='pool2')

  # local3
  with tf.variable_scope('local3') as scope:
    # Move everything into depth so we can perform a single matrix multiply.
    reshape = tf.reshape(pool2, [FLAGS.batch_size, -1])
    dim = reshape.get_shape()[1].value
    weights = _variable_with_weight_decay('weights', shape=[dim, 384],
                                          stddev=0.04, wd=0.004)
    biases = _variable_on_cpu('biases', [384], tf.constant_initializer(0.1))
    local3 = tf.nn.relu(tf.matmul(reshape, weights) + biases, name=scope.name)
    _activation_summary(local3)

  # local4
  with tf.variable_scope('local4') as scope:
    weights = _variable_with_weight_decay('weights', shape=[384, 192],
                                          stddev=0.04, wd=0.004)
    biases = _variable_on_cpu('biases', [192], tf.constant_initializer(0.1))
    local4 = tf.nn.relu(tf.matmul(local3, weights) + biases, name=scope.name)
    _activation_summary(local4)
```
Finally, the output (softmax) layer. No softmax is applied here because the loss function below uses tf.nn.sparse_softmax_cross_entropy_with_logits, which computes the softmax internally; inference therefore returns the raw tensor that would otherwise feed a softmax layer (the logits).
```python
  with tf.variable_scope('softmax_linear') as scope:
    weights = _variable_with_weight_decay('weights', [192, NUM_CLASSES],
                                          stddev=1/192.0, wd=0.0)
    biases = _variable_on_cpu('biases', [NUM_CLASSES],
                              tf.constant_initializer(0.0))
    softmax_linear = tf.add(tf.matmul(local4, weights), biases,
                            name=scope.name)
    _activation_summary(softmax_linear)

  return softmax_linear
```
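If class probabilities are ever needed (e.g. when inspecting predictions), a softmax can be applied explicitly to the logits that inference() returns. A one-line sketch, not part of cifar10.py, assuming images from distorted_inputs() or inputs():

```python
logits = inference(images)
probs = tf.nn.softmax(logits)  # explicit softmax, only needed outside the loss
```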
Next, the function that computes the loss. Cross entropy is the data term, and the weight decay terms collected from the parameters are added on top.
```python
def loss(logits, labels):
  """Add L2Loss to all the trainable variables.

  Add summary for "Loss" and "Loss/avg".

  Args:
    logits: Logits from inference().
    labels: Labels from distorted_inputs or inputs(). 1-D tensor
            of shape [batch_size]

  Returns:
    Loss tensor of type float.
  """
  # Calculate the average cross entropy loss across the batch.
  labels = tf.cast(labels, tf.int64)
  cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
      labels=labels, logits=logits, name='cross_entropy_per_example')
  cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
  tf.add_to_collection('losses', cross_entropy_mean)

  # The total loss is defined as the cross entropy loss plus all of the weight
  # decay terms (L2 loss).
  return tf.add_n(tf.get_collection('losses'), name='total_loss')
```
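Per example, this loss is simply -log softmax(logits)[label]. A minimal NumPy sketch of the per-example computation (illustrative; sparse_softmax_cross_entropy_with_logits is the fused, numerically careful version):

```python
import numpy as np

def sparse_softmax_xent(logits, label):
  # Cross entropy for a single example with an integer class label.
  z = logits - logits.max()                  # subtract max for stability
  log_softmax = z - np.log(np.exp(z).sum())
  return -log_softmax[label]
```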
The next function records the losses: it applies an exponential moving average to everything in tf.get_collection('losses') together with total_loss, and records both the raw and the smoothed values. It returns the op that updates these averages.
```python
def _add_loss_summaries(total_loss):
  """Add summaries for losses in CIFAR-10 model.

  Generates moving average for all losses and associated summaries for
  visualizing the performance of the network.

  Args:
    total_loss: Total loss from loss().

  Returns:
    loss_averages_op: op for generating moving averages of losses.
  """
  # Compute the moving average of all individual losses and the total loss.
  loss_averages = tf.train.ExponentialMovingAverage(0.9, name='avg')
  losses = tf.get_collection('losses')
  loss_averages_op = loss_averages.apply(losses + [total_loss])

  # Attach a scalar summary to all individual losses and the total loss; do
  # the same for the averaged version of the losses.
  for l in losses + [total_loss]:
    # Name each loss as '(raw)' and name the moving average version of the
    # loss as the original loss name.
    tf.summary.scalar(l.op.name + ' (raw)', l)
    tf.summary.scalar(l.op.name, loss_averages.average(l))

  return loss_averages_op
```
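The update rule that ExponentialMovingAverage applies to each tracked value is, per the TensorFlow docs, the following (a minimal sketch):

```python
def ema_update(shadow, value, decay=0.9):
  # shadow <- decay * shadow + (1 - decay) * value.
  # When num_updates is supplied (as in train() below, via global_step),
  # TF additionally caps decay at (1 + num_updates) / (10 + num_updates)
  # so the average warms up quickly early in training.
  return decay * shadow + (1.0 - decay) * value
```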
The following function trains the model.
```python
def train(total_loss, global_step):
  """Train CIFAR-10 model.

  Create an optimizer and apply to all trainable variables. Add moving
  average for all trainable variables.

  Args:
    total_loss: Total loss from loss().
    global_step: Integer Variable counting the number of training steps
      processed.

  Returns:
    train_op: op for training.
  """
  # Variables that affect learning rate.
  num_batches_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN / FLAGS.batch_size
  decay_steps = int(num_batches_per_epoch * NUM_EPOCHS_PER_DECAY)
```
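Plugging in the defaults (assuming NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 50000, as defined in cifar10_input): num_batches_per_epoch = 50000 / 128 = 390.625, so decay_steps = int(390.625 * 350) = 136718; the learning rate therefore drops roughly once every 137k steps.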
tf.train.exponential_decay decays the learning rate over time.
```python
  # Decay the learning rate exponentially based on the number of steps.
  lr = tf.train.exponential_decay(INITIAL_LEARNING_RATE,
                                  global_step,
                                  decay_steps,
                                  LEARNING_RATE_DECAY_FACTOR,
                                  staircase=True)
  tf.summary.scalar('learning_rate', lr)

  # Generate moving averages of all losses and associated summaries.
  loss_averages_op = _add_loss_summaries(total_loss)
```
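For intuition, with staircase=True the schedule amounts to the following re-implementation sketch (defaults mirror the constants above; decay_steps assumes the 136718 computed earlier):

```python
def staircase_decay(step, initial_lr=0.1, decay_factor=0.1,
                    decay_steps=136718):
  # Integer division makes the exponent a step function, so the rate drops
  # by decay_factor once every decay_steps steps.
  return initial_lr * decay_factor ** (step // decay_steps)
```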
tf.control_dependencies sets up control dependencies: the ops created under the 'with' block can only run after every op and variable in the dependency list has finished executing. The resulting order here is: compute (and record) the losses, compute the gradients, then apply them to update the parameters.
```python
  # Compute gradients.
  with tf.control_dependencies([loss_averages_op]):
    opt = tf.train.GradientDescentOptimizer(lr)
    grads = opt.compute_gradients(total_loss)

  # Apply gradients.
  apply_gradient_op = opt.apply_gradients(grads, global_step=global_step)
```
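A toy sketch of the control-dependency mechanism (not from cifar10.py; names are illustrative):

```python
import tensorflow as tf

counter = tf.Variable(0, name='counter')
increment = tf.assign_add(counter, 1)

with tf.control_dependencies([increment]):
  # Creating the read inside the block guarantees `increment` runs first
  # whenever `read` is fetched.
  read = tf.identity(counter)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  print(sess.run(read))  # 1
```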
Record histograms of the variables and gradients, and create shadow variables that store the moving-averaged values of all trainable variables (variable_averages.apply).
```python
  # Add histograms for trainable variables.
  for var in tf.trainable_variables():
    tf.summary.histogram(var.op.name, var)

  # Add histograms for gradients.
  for grad, var in grads:
    if grad is not None:
      tf.summary.histogram(var.op.name + '/gradients', grad)

  # Track the moving averages of all trainable variables.
  variable_averages = tf.train.ExponentialMovingAverage(
      MOVING_AVERAGE_DECAY, global_step)
  variables_averages_op = variable_averages.apply(tf.trainable_variables())
```
tf.no_op is an op that does nothing by itself, but because of the control dependencies above it can be used to drive the flow: running it as train_op triggers the gradient computation and the parameter updates, even though the op itself performs no work. The function finally returns this op, which depends on (apply_gradient_op, variables_averages_op), as the training op.
```python
  with tf.control_dependencies([apply_gradient_op, variables_averages_op]):
    train_op = tf.no_op(name='train')

  return train_op
```
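For context, a minimal sketch of how these pieces fit together; cifar10_train.py does essentially this (details simplified):

```python
import tensorflow as tf
import cifar10

with tf.Graph().as_default():
  global_step = tf.Variable(0, trainable=False)

  # Input pipeline -> logits -> total loss -> training op.
  images, labels = cifar10.distorted_inputs()
  logits = cifar10.inference(images)
  total_loss = cifar10.loss(logits, labels)
  train_op = cifar10.train(total_loss, global_step)
  # A Session (or MonitoredTrainingSession) would then run train_op in a loop.
```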