Debugging TensorFlow models
2017-08-15 15:09
The symbolic nature of TensorFlow makes it relatively more difficult to debug TensorFlow code than regular Python code. Here we introduce a number of tools included with TensorFlow that make debugging much easier.

Probably the most common error one can make when using TensorFlow is passing Tensors of the wrong shape to ops. Many TensorFlow ops can operate on tensors of different ranks and shapes. This can be convenient when using the API, but may lead to extra headaches when things go wrong.
For example, consider the tf.matmul op, it can multiply two matrices:
```python
a = tf.random_uniform([2, 3])
b = tf.random_uniform([3, 4])
c = tf.matmul(a, b)  # c is a tensor of shape [2, 4]
```
But the same function also does batch matrix multiplication:
```python
a = tf.random_uniform([10, 2, 3])
b = tf.random_uniform([10, 3, 4])
c = tf.matmul(a, b)  # c is a tensor of shape [10, 2, 4]
```
Another example, which we discussed before in the broadcasting section, is the add operation, which supports broadcasting:
```python
a = tf.constant([[1.], [2.]])
b = tf.constant([1., 2.])
c = a + b  # c is a tensor of shape [2, 2]
```
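TensorFlow follows the same broadcasting rules as NumPy, so the shape arithmetic above can be checked with NumPy alone:

```python
import numpy as np

a = np.array([[1.], [2.]])  # shape (2, 1)
b = np.array([1., 2.])      # shape (2,)
c = a + b                   # b broadcasts to (1, 2), then both to (2, 2)

print(c.shape)  # (2, 2)
print(c)        # [[2. 3.]
                #  [3. 4.]]
```

If you expected an element-wise add of two length-2 vectors, the silent broadcast to a 2x2 matrix is exactly the kind of surprise the assertions below help catch.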
Validating your tensors with tf.assert* ops
One way to reduce the chance of unwanted behavior is to explicitly verify the rank or shape of intermediate tensors with tf.assert* ops:

```python
a = tf.constant([[1.], [2.]])
b = tf.constant([1., 2.])
check_a = tf.assert_rank(a, 1)  # This will raise an InvalidArgumentError exception
check_b = tf.assert_rank(b, 1)
with tf.control_dependencies([check_a, check_b]):
    c = a + b  # c is a tensor of shape [2, 2]
```
Remember that assertion nodes, like other operations, are part of the graph, and if not evaluated would get pruned during Session.run(). So make sure to create explicit dependencies on assertion ops to force TensorFlow to execute them.
You can also use assertions to validate the value of tensors at runtime:
```python
check_pos = tf.assert_positive(a)
```
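Like rank assertions, a value assertion only runs if something depends on it. A minimal sketch of gating a downstream op on the check (written against tf.compat.v1 so the graph-style API used in this post also runs under TensorFlow 2; the message text is just an example):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.constant([1., 2., 3.])
check_pos = tf.assert_positive(a, message='a must be positive')

# The control dependency forces the assertion to run before the log is taken.
with tf.control_dependencies([check_pos]):
    b = tf.log(a)

with tf.Session() as sess:
    print(sess.run(b))  # the assertion passed, so this prints log(a)
```

With a negative entry in `a`, evaluating `b` would raise an InvalidArgumentError instead of silently producing NaNs.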
See the official docs for a full list of assertion ops.
Logging tensor values with tf.Print
Another useful built-in function for debugging is tf.Print, which logs the given tensors to standard error:

```python
input_copy = tf.Print(input, tensors_to_print_list)
```
Note that tf.Print returns a copy of its first argument as its output. One way to force tf.Print to run is to pass its output to another op that gets executed. For example, if we want to print the values of tensors a and b before adding them, we could do something like this:
```python
a = ...
b = ...
a = tf.Print(a, [a, b])
c = a + b
```
Alternatively we could manually define a control dependency.
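The control-dependency variant might look like this (again a sketch against tf.compat.v1; note that tf.Print is deprecated in favor of tf.print in TensorFlow 2):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.constant([1., 2.])
b = tf.constant([3., 4.])

# tf.Print returns a copy of `a`; here we only use it as a control input.
print_op = tf.Print(a, [a, b], message='a, b = ')
with tf.control_dependencies([print_op]):
    c = a + b  # evaluating c now also logs a and b to standard error

with tf.Session() as sess:
    print(sess.run(c))  # [4. 6.]
```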
Check your gradients with tf.test.compute_gradient_error
Not all operations in TensorFlow come with gradients, and it's easy to unintentionally build graphs for which TensorFlow cannot compute the gradients.

Let's look at an example:
```python
import tensorflow as tf

def non_differentiable_entropy(logits):
    probs = tf.nn.softmax(logits)
    return tf.nn.softmax_cross_entropy_with_logits(labels=probs, logits=logits)

w = tf.get_variable('w', shape=[5])
y = -non_differentiable_entropy(w)

opt = tf.train.AdamOptimizer()
train_op = opt.minimize(y)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for i in range(10000):
    sess.run(train_op)

print(sess.run(tf.nn.softmax(w)))
```
We are using tf.nn.softmax_cross_entropy_with_logits to define the entropy of a categorical distribution. We then use the Adam optimizer to find the weights with maximum entropy. If you have taken a course on information theory, you would know that the uniform distribution has maximum entropy. So you would expect the result to be [0.2, 0.2, 0.2, 0.2, 0.2]. But if you run this, you may get unexpected results like this:
[ 0.34081486 0.24287023 0.23465775 0.08935683 0.09230034]
It turns out tf.nn.softmax_cross_entropy_with_logits has undefined gradients with respect to labels! But how could we have spotted this if we didn't know?
Fortunately for us, TensorFlow comes with a numerical differentiator that can be used to find symbolic gradient errors. Let's see how we can use it:
```python
with tf.Session():
    diff = tf.test.compute_gradient_error(w, [5], y, [])
    print(diff)
```
If you run this, you would see that the difference between the numerical and symbolic gradients is pretty high (0.06 to 0.1 in my tries).
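The idea behind tf.test.compute_gradient_error is simple: compare the analytic gradient against a finite-difference estimate. A minimal NumPy sketch of that check (the helper name here is illustrative, not part of TensorFlow):

```python
import numpy as np

def numerical_grad(f, x, eps=1e-5):
    """Central-difference estimate of the gradient of a scalar-valued f at x."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step.flat[i] = eps
        grad.flat[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

# For f(x) = sum(x ** 2) the analytic gradient is 2 * x.
x = np.array([1.0, -2.0, 3.0])
analytic = 2 * x
numeric = numerical_grad(lambda v: np.sum(v ** 2), x)
print(np.max(np.abs(analytic - numeric)))  # close to zero: the gradients agree
```

When the symbolic gradient is wrong or undefined, this difference stays large no matter how small eps gets, which is exactly the signal we saw above.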
Now let's fix our function with a differentiable version of the entropy and check again:
```python
import tensorflow as tf
import numpy as np

def entropy(logits, dim=-1):
    probs = tf.nn.softmax(logits, dim)
    nplogp = probs * (tf.reduce_logsumexp(logits, dim, keep_dims=True) - logits)
    return tf.reduce_sum(nplogp, dim)

w = tf.get_variable('w', shape=[5])
y = -entropy(w)

print(w.get_shape())
print(y.get_shape())

with tf.Session() as sess:
    diff = tf.test.compute_gradient_error(w, [5], y, [])
    print(diff)
```
The difference should be ~0.0001 which looks much better.
Now if you run the optimizer again with the correct version, you will see that the final weights are:
[ 0.2 0.2 0.2 0.2 0.2]
which are exactly what we wanted.
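As a quick sanity check on this result, the entropy of the uniform distribution over 5 outcomes is log(5) ≈ 1.609, and any skewed distribution (like the buggy output above) scores lower:

```python
import numpy as np

def entropy(probs):
    return -np.sum(probs * np.log(probs))

uniform = np.full(5, 0.2)
skewed = np.array([0.34, 0.24, 0.23, 0.09, 0.10])  # roughly the buggy result

print(entropy(uniform))  # 1.6094... == log(5)
print(entropy(skewed))   # ~1.49, lower as expected
```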
TensorFlow summaries and tfdbg (the TensorFlow debugger) are other tools that can be used for debugging. Please refer to the official docs to learn more.