
Learning TensorFlow: a summary of commonly used functions

2018-03-31 11:59

Ⅰ.class tf.train.Optimizer

 

Base class for optimizers. This class defines the API for adding ops that train a model. You will rarely use this class directly; instead you will use one of its subclasses, such as
GradientDescentOptimizer
AdagradOptimizer
MomentumOptimizer
and so on.
Below, GradientDescentOptimizer is covered in some detail, including several of its member functions; for the other classes only the constructor is described, because the remaining functions are much the same from class to class.

 

Ⅱ.class tf.train.GradientDescentOptimizer

 

This class implements the gradient descent optimizer. (Consistent with the theory, its constructor needs nothing more than a learning rate.)

 

__init__(learning_rate, use_locking=False, name='GradientDescent')

 

Purpose: creates a gradient descent optimizer object.
Arguments:
learning_rate: A Tensor or a floating point value; the learning rate to use.
use_locking: If True, use locks for the update operations.
name: Optional name for the operations; defaults to 'GradientDescent'.

 

compute_gradients(loss, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)

 

Purpose: computes the gradients of the loss with respect to the variables in var_list. Returns a list of (gradient, variable) pairs, where each gradient is the gradient for the corresponding variable. This is the first half of minimize().
Arguments:
loss: The value to be minimized.
var_list: Defaults to the variables collected under GraphKeys.TRAINABLE_VARIABLES.
gate_gradients: How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
aggregation_method: Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod.
colocate_gradients_with_ops: If True, try colocating gradients with the corresponding op.
grad_loss: Optional. A Tensor holding the gradient computed for loss.

 

apply_gradients(grads_and_vars, global_step=None, name=None)

 

Purpose: applies the gradients to the variables, i.e. performs the gradient descent update on them. This is the second half of minimize(). Returns an op that applies the gradients.
Arguments:
grads_and_vars: The list of (gradient, variable) pairs returned by compute_gradients().
global_step: Optional Variable to increment by one after the variables have been updated.
name: Optional name for the operation.

 

get_name()

 

minimize(loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)

 

Purpose: a very commonly used function.
Minimizes loss by updating var_list; it is simply compute_gradients() followed by apply_gradients().

 

Ⅲ.class tf.train.AdadeltaOptimizer

 

An optimizer that implements the Adadelta algorithm; it can be regarded as an improved version of the Adagrad algorithm.

 

Constructor:
tf.train.AdadeltaOptimizer.__init__(learning_rate=0.001, rho=0.95, epsilon=1e-08, use_locking=False, name='Adadelta')

 

Purpose: constructs an optimizer that uses the Adadelta algorithm.
Arguments:
learning_rate: A Tensor or a floating point value; the learning rate.
rho: A Tensor or a floating point value; the decay rate.
epsilon: A Tensor or a floating point value; a constant epsilon used to better condition the gradient update.
use_locking: If True, use locks for the update operations.
name: Optional name for the operations; defaults to 'Adadelta'.
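
A minimal sketch of constructing this optimizer; the variable x and the (x - 3)^2 loss are just the toy objective used later in this post.

import tensorflow as tf

x = tf.Variable(tf.truncated_normal([1]), name="x")   # toy variable to optimize
loss = tf.pow(x - 3, 2)                               # toy objective, (x - 3)^2
optimizer = tf.train.AdadeltaOptimizer(learning_rate=0.001, rho=0.95, epsilon=1e-08)
train_step = optimizer.minimize(loss)                 # one Adadelta update per run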

 

tf.square(x, name=None)

Computes the square of x element-wise, i.e. y = x * x = x^2.
Args:
x: A Tensor or SparseTensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
name: A name for the operation (optional).
Returns:
A Tensor or SparseTensor with the same type as x.
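
A minimal sketch of the element-wise behaviour; the values are chosen only for illustration.

import tensorflow as tf

x = tf.constant([2.0, -3.0, 4.0])
y = tf.square(x)
with tf.Session() as sess:
    print(sess.run(y))   # [  4.   9.  16.]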

The tf.square entry above is by HabileBadger (source: 简书, http://www.jianshu.com/p/7a201b0814e3).

tf.split(split_dim, num_split, value, name='split')

Explanation: splits a large tensor into smaller tensors. The first argument is the dimension along which to split, and the second is the number of pieces. In the example below, a [5, 30] tensor is split along dimension 1 (the columns) into three pieces, producing three [5, 10] tensors. Example:
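
A sketch of the [5, 30] case just described, assuming the pre-1.0 tf.split(split_dim, num_split, value) argument order shown above.

import tensorflow as tf

value = tf.zeros([5, 30])
# split along dimension 1 into 3 equal pieces
split0, split1, split2 = tf.split(1, 3, value)
print(split0.get_shape())   # (5, 10)
print(split1.get_shape())   # (5, 10)
print(split2.get_shape())   # (5, 10)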



 

tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Explanation: takes the mean of a tensor. The second argument is the dimension along which to average: in the second example below, averaging along dimension 0 (collapsing the rows) gives [1.5, 1.5]; in the third, averaging along dimension 1 (collapsing the columns) gives [1, 2]. Example:
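
A sketch matching the results quoted above; the input [[1., 1.], [2., 2.]] is inferred from those results.

import tensorflow as tf

x = tf.constant([[1., 1.], [2., 2.]])
with tf.Session() as sess:
    print(sess.run(tf.reduce_mean(x)))                        # 1.5 (mean over all elements)
    print(sess.run(tf.reduce_mean(x, reduction_indices=0)))   # [ 1.5  1.5]
    print(sess.run(tf.reduce_mean(x, reduction_indices=1)))   # [ 1.  2.]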

 

tf.reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None)

Explanation: sums a tensor. The second argument is the dimension along which to sum: in the second example below, summing along dimension 0 (collapsing the rows) gives [2, 2, 2]; in the third, summing along dimension 1 (collapsing the columns) gives [3, 3]. If the third argument keep_dims is True, the reduced dimension is kept with length 1, as in the fourth example. The fifth example shows that you can sum over more than one dimension at once. Example:
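
A sketch matching the results quoted above; the input [[1, 1, 1], [1, 1, 1]] is inferred from those results.

import tensorflow as tf

x = tf.constant([[1, 1, 1], [1, 1, 1]])
with tf.Session() as sess:
    print(sess.run(tf.reduce_sum(x)))                                        # 6
    print(sess.run(tf.reduce_sum(x, reduction_indices=0)))                   # [2 2 2]
    print(sess.run(tf.reduce_sum(x, reduction_indices=1)))                   # [3 3]
    print(sess.run(tf.reduce_sum(x, reduction_indices=1, keep_dims=True)))   # [[3] [3]]
    print(sess.run(tf.reduce_sum(x, reduction_indices=[0, 1])))              # 6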

 

tf.reshape(tensor, shape, name=None)

Explanation: reshapes a tensor into new dimensions; -1 means that dimension's size is computed automatically. Example:
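
A minimal sketch; the values are chosen only for illustration.

import tensorflow as tf

t = tf.constant([1, 2, 3, 4, 5, 6])
with tf.Session() as sess:
    print(sess.run(tf.reshape(t, [2, 3])))    # [[1 2 3] [4 5 6]]
    print(sess.run(tf.reshape(t, [3, -1])))   # [[1 2] [3 4] [5 6]]  (-1 is inferred as 2)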

tf.matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None)

Explanation: multiplies the matrices a and b. If a matrix needs to be transposed first, set the corresponding transpose_* option to True; if a matrix contains many zeros, set the corresponding *_is_sparse option to True to switch to a more efficient algorithm. Example:
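
A minimal sketch of a [2, 3] by [3, 2] product, with values chosen only for illustration.

import tensorflow as tf

a = tf.constant([[1., 2., 3.], [4., 5., 6.]])        # shape [2, 3]
b = tf.constant([[7., 8.], [9., 10.], [11., 12.]])   # shape [3, 2]
c = tf.matmul(a, b)                                  # shape [2, 2]
with tf.Session() as sess:
    print(sess.run(c))   # [[  58.   64.] [ 139.  154.]]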

tf.argmin(input, dimension, name=None)

Explanation: returns the index of the smallest value along the given dimension; indexing starts at 0.
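
A minimal sketch with values chosen only for illustration.

import tensorflow as tf

x = tf.constant([[4, 1, 9], [2, 7, 0]])
with tf.Session() as sess:
    print(sess.run(tf.argmin(x, 0)))   # [1 0 1]  (index of the smallest value in each column)
    print(sess.run(tf.argmin(x, 1)))   # [1 2]    (index of the smallest value in each row)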

tf.mul(x, y, name=None)

Explanation: element-wise multiplication of two tensors; the two tensors must have the same shape.
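
A minimal sketch, using the pre-1.0 name tf.mul (renamed tf.multiply in TensorFlow 1.0); values chosen only for illustration.

import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[5., 6.], [7., 8.]])
with tf.Session() as sess:
    print(sess.run(tf.mul(a, b)))   # [[  5.  12.] [ 21.  32.]]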

tf.one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None)

Explanation: produces a one-hot tensor of the given depth. indices says which position in each row receives on_value; an index of -1 gives a row that is entirely off_value. For example, with depth 3, indices [[0, 2], [1, -1]] produce [[[1, 0, 0], [0, 0, 1]], [[0, 1, 0], [0, 0, 0]]]. Example:
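
A sketch reproducing the indices/depth example above.

import tensorflow as tf

indices = [[0, 2], [1, -1]]
one_hot = tf.one_hot(indices, depth=3, on_value=1, off_value=0)
with tf.Session() as sess:
    print(sess.run(one_hot))
    # [[[1 0 0]
    #   [0 0 1]]
    #  [[0 1 0]
    #   [0 0 0]]]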

 

tf.cast(x, dtype, name=None)

Explanation: casts a tensor to another type. Example:
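
A minimal sketch (casting from float to int truncates toward zero); values chosen only for illustration.

import tensorflow as tf

x = tf.constant([1.8, 2.2])
with tf.Session() as sess:
    print(sess.run(tf.cast(x, tf.int32)))   # [1 2]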

 

tf.Variable.eval(session=None)

Explanation: returns the value of a Variable by evaluating it in a session. Example:
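
A minimal sketch.

import tensorflow as tf

v = tf.Variable([1.0, 2.0])
with tf.Session() as sess:
    v.initializer.run()
    print(v.eval())   # [ 1.  2.]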

 

tf.Variable.assign(value, use_locking=False)

Explanation: replaces the variable's value with value; this function can be used to implement double DQN (copying weights from one network into another).

tf.Variable.assign_sub(delta, use_locking=False)

Explanation: subtracts delta from the variable and stores the result back into it.
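
A minimal sketch covering both assign and assign_sub.

import tensorflow as tf

v = tf.Variable(10.0)
with tf.Session() as sess:
    v.initializer.run()
    sess.run(v.assign(5.0))       # v becomes 5.0
    sess.run(v.assign_sub(2.0))   # v becomes 5.0 - 2.0 = 3.0
    print(v.eval())               # 3.0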

tf.train.Saver.save(sess, save_path, global_step=None, latest_filename=None, meta_graph_suffix='meta', write_meta_graph=True)

Explanation: saves the training parameters held in sess. Usually you shorten this up front by writing saver = tf.train.Saver(), after which you can call save directly through saver. Example:
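
A minimal sketch; the variable and the path './model.ckpt' are chosen only for illustration.

import tensorflow as tf

w = tf.Variable(tf.truncated_normal([2, 2]), name="w")   # a toy variable to save
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training steps would go here ...
    save_path = saver.save(sess, './model.ckpt', global_step=1000)
    print(save_path)   # e.g. ./model.ckpt-1000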


 

tf.train.Saver.restore(sess, save_path)

Explanation: restores the previously saved data into the network in sess. save_path is typically the path of the most recent save, which can be looked up with latest_checkpoint() or get_checkpoint_state().

tf.train.get_checkpoint_state(checkpoint_dir, latest_filename=None)

Explanation: checks whether the folder checkpoint_dir contains a checkpoint file; if it does, returns the checkpoint's information. Example:
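
A minimal sketch of restoring from the most recent checkpoint; the directory './checkpoints' and the variable are chosen only for illustration.

import tensorflow as tf

w = tf.Variable(tf.truncated_normal([2, 2]), name="w")   # must match the saved graph
saver = tf.train.Saver()
with tf.Session() as sess:
    ckpt = tf.train.get_checkpoint_state('./checkpoints')
    if ckpt and ckpt.model_checkpoint_path:
        saver.restore(sess, ckpt.model_checkpoint_path)   # restores w from the file
        print(ckpt.model_checkpoint_path)
    else:
        print('no checkpoint found in ./checkpoints')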

 

tf.trainable_variables()

Explanation: returns all variables created with trainable=True, as a list. Example:

Take a network as an illustration: v is a list, so to inspect an individual variable you need to index into it. The variables come in pairs, with the weight first and the bias second. For example:
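
A sketch with a small two-layer network invented here for illustration (the variable names are hypothetical).

import tensorflow as tf

w1 = tf.Variable(tf.truncated_normal([4, 8]), name="layer1/weight")
b1 = tf.Variable(tf.zeros([8]), name="layer1/bias")
w2 = tf.Variable(tf.truncated_normal([8, 2]), name="layer2/weight")
b2 = tf.Variable(tf.zeros([2]), name="layer2/bias")
step = tf.Variable(0, trainable=False, name="step")   # trainable=False, so not returned

v = tf.trainable_variables()
for var in v:
    print(var.name)
# layer1/weight:0, layer1/bias:0, layer2/weight:0, layer2/bias:0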
 



tf.concat(concat_dim, values, name='concat')

Explanation: concatenates tensors along one dimension; dim=0 concatenates along the rows, dim=1 along the columns. Example:

t1 = [[1, 2, 3], [4, 5, 6]]
t2 = [[7, 8, 9], [10, 11, 12]]
tf.concat(0, [t1, t2]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
tf.concat(1, [t1, t2]) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]

# tensor t3 with shape [2, 3]
# tensor t4 with shape [2, 3]
tf.shape(tf.concat(0, [t3, t4])) ==> [4, 3]
tf.shape(tf.concat(1, [t3, t4])) ==> [2, 6]

Optimizer

 
This part of the post explores using TensorFlow's optimizers to solve optimization problems.
In [1]:
import tensorflow as tf
Define the objective function loss = (x - 3)^2 and find the value of x at which goal is smallest: In [2]:
# x = tf.placeholder(tf.float32)
x = tf.Variable(tf.truncated_normal([1]), name="x")
goal = tf.pow(x-3,2, name="goal")
In [3]:
with tf.Session() as sess:
    x.initializer.run()
    print x.eval()
    print goal.eval()
 
[-0.15094033]
[ 9.92842579]
Solve the problem with a gradient descent optimizer.

1. Using minimize()

In [4]:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train_step = optimizer.minimize(goal)
In [5]:
def train():
    with tf.Session() as sess:
        x.initializer.run()
        for i in range(10):
            print "x: ", x.eval()
            train_step.run()
            print "goal: ", goal.eval()
train()
 
x:  [ 0.0178078]
goal:  [ 5.69182014]
x:  [ 0.61424625]
goal:  [ 3.64276576]
x:  [ 1.09139693]
goal:  [ 2.33137012]
x:  [ 1.47311759]
goal:  [ 1.49207664]
x:  [ 1.77849412]
goal:  [ 0.95492917]
x:  [ 2.0227952]
goal:  [ 0.61115462]
x:  [ 2.21823621]
goal:  [ 0.39113891]
x:  [ 2.37458897]
goal:  [ 0.25032887]
x:  [ 2.49967122]
goal:  [ 0.16021053]
x:  [ 2.59973693]
goal:  [ 0.10253474]
 

What about maximization?

Simple: just wrap the objective in tf.negative. (Writing -1 * instead appears to make no difference.) In [6]:
y = tf.Variable(tf.truncated_normal([1]))
max_goal = tf.sin(y)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
# train_step = optimizer.minimize(tf.negative(max_goal))
train_step = optimizer.minimize(-1 * max_goal)
In [7]:
with tf.Session() as sess:
    y.initializer.run()
    for i in range(10):
        print "y: ", y.eval()
        train_step.run()
        print "max_goal: ", max_goal.eval()
 
y:  [ 0.44078666]
max_goal:  [ 0.50659275]
y:  [ 0.5312283]
max_goal:  [ 0.57895529]
y:  [ 0.61744684]
max_goal:  [ 0.64343935]
y:  [ 0.69898278]
max_goal:  [ 0.70009637]
y:  [ 0.77553248]
max_goal:  [ 0.7492556]
y:  [ 0.8469373]
max_goal:  [ 0.79144251]
y:  [ 0.91316539]
max_goal:  [ 0.82730311]
y:  [ 0.97428977]
max_goal:  [ 0.85753846]
y:  [ 1.03046536]
max_goal:  [ 0.88285518]
y:  [ 1.08190739]
max_goal:  [ 0.90393031]
 

2. minimize() = compute_gradients() + apply_gradients()

Split it into two steps: computing the gradients and applying them. In [8]:
# compute_gradients returns a list of (gradient, variable) pairs
gra_and_var = optimizer.compute_gradients(goal)
train_step = optimizer.apply_gradients(gra_and_var)
train()
 
x:  [ 0.40001234]
goal:  [ 4.32635927]
x:  [ 0.92000991]
goal:  [ 2.7688694]
x:  [ 1.33600795]
goal:  [ 1.77207625]
x:  [ 1.66880643]
goal:  [ 1.13412893]
x:  [ 1.93504512]
goal:  [ 0.72584224]
x:  [ 2.14803624]
goal:  [ 0.46453902]
x:  [ 2.31842899]
goal:  [ 0.29730505]
x:  [ 2.45474315]
goal:  [ 0.19027515]
x:  [ 2.56379461]
goal:  [ 0.12177601]
x:  [ 2.65103579]
goal:  [ 0.07793671]
 

3. Going further

clip_by_global_norm: clipping gradient values

Used to deal with exploding gradients. Exploding and vanishing gradients share the same root cause: the chain rule multiplies many factors together during backpropagation, so gradients can grow or shrink exponentially. To avoid exploding gradients, the gradients are clipped. In [9]:
gradients, variables = zip(*optimizer.compute_gradients(goal))
gradients, _ = tf.clip_by_global_norm(gradients, 1.25)
train_step = optimizer.apply_gradients(zip(gradients, variables))
train()
 
x:  [-0.76665598]
goal:  [ 13.26165771]
x:  [-0.64165598]
goal:  [ 12.36686897]
x:  [-0.51665598]
goal:  [ 11.50333118]
x:  [-0.39165598]
goal:  [ 10.67104053]
x:  [-0.26665598]
goal:  [ 9.87000275]
x:  [-0.14165597]
goal:  [ 9.10021305]
x:  [-0.01665596]
goal:  [ 8.36167431]
x:  [ 0.10834403]
goal:  [ 7.65438461]
x:  [ 0.23334403]
goal:  [ 6.97834587]
x:  [ 0.35834405]
goal:  [ 6.33355713]
 

exponential_decay: adding learning rate decay

In [10]:
# global_step tracks which batch we are on
global_step = tf.Variable(0)
learning_rate = tf.train.exponential_decay(
    3.0, global_step, 3, 0.3, staircase=True)
optimizer2 = tf.train.GradientDescentOptimizer(learning_rate)
gradients, variables = zip(*optimizer2.compute_gradients(goal))
gradients, _ = tf.clip_by_global_norm(gradients, 1.25)
train_step = optimizer2.apply_gradients(zip(gradients, variables),
                                        global_step=global_step)
with tf.Session() as sess:
    global_step.initializer.run()
    x.initializer.run()
    for i in range(10):
        print "x: ", x.eval()
        train_step.run()
        print "goal: ", goal.eval()
 
x:  [-0.92852646]
goal:  [ 0.03187185]
x:  [ 2.82147312]
goal:  [ 0.79679614]
x:  [ 3.89263439]
goal:  [ 8.16453552]
x:  [ 0.14263475]
goal:  [ 3.00108886]
x:  [ 1.26763487]
goal:  [ 0.3688924]
x:  [ 2.39263487]
goal:  [ 0.23609133]
x:  [ 3.4858923]
goal:  [ 0.04995694]
x:  [ 3.2235105]
goal:  [ 0.01057091]
x:  [ 3.10281491]
goal:  [ 0.0022368]
x:  [ 3.04729486]
goal:  [ 0.00157078]