
Neural Networks and Deep Learning Study Notes: Optimizing Neural Networks (2)

2017-10-12 11:01
This post is one of the programming assignments from Andrew Ng's course.

Dependencies and Preferences

import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets

from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *

%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'


The code for the custom helper modules is given in the appendix at the end of the post.

Gradient Descent

First, let's review the code for plain gradient descent:

def update_parameters_with_gd(parameters, grads, learning_rate):
    """
    Update parameters using one step of gradient descent
    """

    L = len(parameters) // 2  # number of layers in the network

    # Update rule for each parameter
    for l in range(L):
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]

    return parameters
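A quick sanity check of the update rule with toy values (the shapes and numbers below are made up purely for illustration):

# Toy one-layer parameters and gradients (hypothetical values, illustration only)
parameters = {"W1": np.array([[1.0, 2.0]]), "b1": np.array([[0.5]])}
grads      = {"dW1": np.array([[0.1, -0.2]]), "db1": np.array([[0.05]])}
parameters = update_parameters_with_gd(parameters, grads, learning_rate=0.1)
print(parameters["W1"])   # [[0.99 2.02]]
print(parameters["b1"])   # [[0.495]]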


The parameters of a neural network are usually optimized by iterative gradient-descent updates. In the code above, every iteration of gradient descent consumes gradients computed over the entire training set. Foreseeably, when the number of samples is very large there is no way to load that much data at once (or loading it becomes very slow).

Mini-batch Gradient Descent

Principle

To make gradient descent feasible on very large training sets, here is a method that splits the samples before descending. Its basic idea:

Partition the samples into groups of a fixed size, and replace plain gradient descent's pass over the entire training set in each iteration with batch-by-batch passes over these groups.

Let's walk through the concrete implementation.

Shuffle

Randomly permute the columns of the sample matrices X and Y in sync, scrambling the original order of the samples:
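A minimal sketch of the synchronized column shuffle (toy shapes, for illustration only):

# Shuffle the columns of X and Y with the same random permutation (toy example)
m = 5
X = np.arange(10).reshape(2, m)        # 2 features, 5 samples
Y = np.array([[0, 1, 0, 1, 1]])        # labels, one per column
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]         # columns reordered...
shuffled_Y = Y[:, permutation]         # ...and the labels follow the same order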



Partition

Choose a batch size and partition the shuffled X and Y. Note that the number of samples is not necessarily divisible by the chosen batch size, so the last group may well be smaller than the rest. For example, 103 samples with a batch size of 64 yield one full mini-batch of 64 samples and a final mini-batch of 39:



Implementation

def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    mini_batch_size -- size of the mini-batches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """

    np.random.seed(seed)
    m = X.shape[1]                  # number of training examples
    mini_batches = []

    # Step 1: Shuffle the columns of X and Y with the same permutation
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((1, m))

    # Step 2: Partition
    num_complete_minibatches = math.floor(m / mini_batch_size)  # number of mini-batches that reach full size
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X[:, k * mini_batch_size : (k+1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k+1) * mini_batch_size]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    # Handle the last, possibly smaller, mini-batch
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size :]
        mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size :]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    return mini_batches
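A quick check of the partition sizes with toy data (103 random samples and a batch size of 64, so we expect one full batch plus one batch of 39):

X = np.random.randn(2, 103)                    # 2 features, 103 samples
Y = (np.random.randn(1, 103) > 0).astype(int)  # binary labels, shape (1, 103)
mini_batches = random_mini_batches(X, Y, mini_batch_size=64, seed=1)
print(len(mini_batches))         # 2
print(mini_batches[0][0].shape)  # (2, 64)
print(mini_batches[1][0].shape)  # (2, 39)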


Note: for efficiency reasons rooted in computer architecture, mini_batch_size is best set to a power of 2 (e.g. 64, 128, 256, 512).

Expected Performance

First, recall what plain gradient descent does: it runs forward propagation on the entire training set to compute the cost, then backpropagation to compute the gradient, and that gradient reliably points toward the optimum of the problem space (or at least toward a local optimum):



If the training set is split and each descent step uses only part of the samples, then the gradient from each backward pass only points toward the optimum for that subset, and a subset obviously cannot represent the whole set. The result is oscillation. We can therefore predict that mini-batch gradient descent needs more iterations to converge than plain gradient descent; this is a classic trade of time for space:



In particular, when mini_batch_size = 1, each descent step uses a single sample and the algorithm becomes stochastic gradient descent: oscillation is larger and convergence slower, but each iteration only has to keep one sample in memory. At the other extreme, when mini_batch_size = m, the algorithm degenerates into plain (batch) gradient descent: oscillation is small and convergence fast, but each iteration has to keep the entire training set in memory.
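For comparison, a sketch of the loop structure at the mini_batch_size = 1 extreme, written in terms of the helper functions used in this assignment (a sketch only, not code from the assignment; num_iterations, m, parameters and learning_rate are assumed to be defined):

# Stochastic gradient descent: one example per parameter update (sketch)
for i in range(num_iterations):
    for j in range(m):
        # forward/backward pass on a single column of X and Y
        a3, caches = forward_propagation(X[:, j:j+1], parameters)
        cost = compute_cost(a3, Y[:, j:j+1])
        grads = backward_propagation(X[:, j:j+1], Y[:, j:j+1], caches)
        parameters = update_parameters_with_gd(parameters, grads, learning_rate)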

Gradient Descent with Momentum

Principle

We saw above that the oscillation of mini-batch gradient descent slows down convergence; if the oscillation can be reduced, convergence speeds up. Below is a method that damps the oscillation of mini-batch gradient descent. Its basic idea is as follows:

Anyone who has programmed a microcontroller may recognize a simple data-smoothing technique, sometimes called an aging filter; it is just an exponentially weighted moving average, with the expression:

value_cur = β · value_prev + (1 − β) · data_cur

Here β is the smoothing (aging) coefficient, value_cur is the current output, value_prev is the previous output, and data_cur is the current data point. As the expression shows, the filter does not take each data point at full weight: a new point enters with weight (1 − β), and its contribution keeps shrinking on every subsequent step, which smooths out the effect of sudden jumps in the data.
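A small demonstration of this smoothing on a noisy signal (the signal below is made up; any noisy sequence would do):

# Exponentially weighted moving average of a noisy signal (illustration only)
beta = 0.9
data = np.sin(np.linspace(0, 3, 50)) + 0.3 * np.random.randn(50)  # noisy input
value = 0.0                                                       # zero initial value
smoothed = []
for d in data:
    value = beta * value + (1 - beta) * d
    smoothed.append(value)
# `smoothed` follows `data` with the spikes damped; note it starts near 0
# because of the zero initialization (more on this below)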

For a parameter θ in gradient descent, the update rule becomes (for each layer l):

v_dθ[l] = β · v_dθ[l] + (1 − β) · dθ[l]

θ[l] = θ[l] − α · v_dθ[l]

Implementation

Initialization:

def initialize_velocity(parameters):
    """
    Returns:
    v -- python dictionary containing the current velocity.
                v['dW' + str(l)] = velocity of dWl
                v['db' + str(l)] = velocity of dbl
    """

    L = len(parameters) // 2
    v = {}

    # Initialize
    for l in range(L):
        v["dW" + str(l+1)] = np.zeros((parameters['W' + str(l+1)].shape[0], parameters['W' + str(l+1)].shape[1]))
        v["db" + str(l+1)] = np.zeros((parameters['b' + str(l+1)].shape[0], parameters['b' + str(l+1)].shape[1]))

    return v


Note: v is initialized to zero here, which has a drawback: no matter how the data is distributed, the smoothed curve always starts from the origin, which biases its early portion. For example, smoothing a constant signal of 1 with β = 0.9 gives only 0.1 after the first step. The bias gradually fades as iterations accumulate (Adam, below, corrects it explicitly).

Parameter update:

def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
    """
    Update parameters using Momentum

    Arguments:
    v -- python dictionary containing the current velocity:
                    v['dW' + str(l)] = ...
                    v['db' + str(l)] = ...
    beta -- the momentum hyperparameter, scalar

    Returns:
    v -- python dictionary containing your updated velocities
    """

    L = len(parameters) // 2

    # Momentum update for each parameter
    for l in range(L):
        # Smooth the gradients, then step in the smoothed direction
        v["dW" + str(l+1)] = beta * v["dW" + str(l+1)] + (1 - beta) * grads["dW" + str(l+1)]
        v["db" + str(l+1)] = beta * v["db" + str(l+1)] + (1 - beta) * grads["db" + str(l+1)]
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v["db" + str(l+1)]

    return parameters, v
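A toy call, reusing the made-up one-layer parameters from the gradient-descent example above:

parameters = {"W1": np.array([[1.0, 2.0]]), "b1": np.array([[0.5]])}
grads      = {"dW1": np.array([[0.1, -0.2]]), "db1": np.array([[0.05]])}
v = initialize_velocity(parameters)
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta=0.9, learning_rate=0.1)
print(v["dW1"])          # [[ 0.01 -0.02]] -- (1 - beta) * dW1, since v started at 0
print(parameters["W1"])  # [[0.999 2.002]]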


In particular, when the hyperparameter β = 0 the smoothing is disabled and the algorithm reduces to plain (mini-batch) gradient descent.

Expected Performance

The larger the smoothing coefficient β, the more weight the algorithm gives to past gradients and the less influence the current gradient has; with β = 0.9 the velocity is effectively an average over roughly the last 1/(1 − β) = 10 gradients. β = 0.9 is the commonly used value.

In effect, momentum keeps the direction accumulated from previous gradients and only adjusts it with the gradient computed from the new data, which damps the oscillation of the descent.

Adam

Adam is one of the most effective optimization algorithms for training neural networks; it combines RMSProp with momentum.

Principle

First, the update rule. For a parameter θ in gradient descent (for each layer l):

v_dθ[l] = β1 · v_dθ[l] + (1 − β1) · ∂J/∂θ[l]

v_corrected_dθ[l] = v_dθ[l] / (1 − β1^t)

s_dθ[l] = β2 · s_dθ[l] + (1 − β2) · (∂J/∂θ[l])²

s_corrected_dθ[l] = s_dθ[l] / (1 − β2^t)

θ[l] = θ[l] − α · v_corrected_dθ[l] / (√(s_corrected_dθ[l]) + ε)

Here t is the Adam iteration counter (the number of update steps taken so far), and ε is a tiny constant added to avoid division by zero.

The quantities with the corrected superscript compensate for the bias that the zero initialization introduces into the exponentially weighted averages.
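A numeric illustration of the bias correction: feed a constant gradient of 1 into the moving average and the corrected value recovers the true magnitude immediately (made-up numbers, illustration only):

beta1, v = 0.9, 0.0
for t in range(1, 4):
    v = beta1 * v + (1 - beta1) * 1.0   # raw moving average, biased toward 0
    v_corrected = v / (1 - beta1 ** t)  # bias-corrected estimate
    print(t, round(v, 4), round(v_corrected, 4))
# t=1: v=0.1    corrected=1.0
# t=2: v=0.19   corrected=1.0
# t=3: v=0.271  corrected=1.0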

Implementation

Initialization:

def initialize_adam(parameters):
    """
    Initializes v and s as two python dictionaries with:
                - keys: "dW1", "db1", ..., "dWL", "dbL"
                - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

    Returns:
    v -- python dictionary that will contain the exponentially weighted average of the gradient.
                    v["dW" + str(l)] = ...
                    v["db" + str(l)] = ...
    s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
                    s["dW" + str(l)] = ...
                    s["db" + str(l)] = ...
    """

    L = len(parameters) // 2
    v = {}
    s = {}

    # Initialize v, s
    for l in range(L):
        v["dW" + str(l+1)] = np.zeros((parameters["W" + str(l+1)].shape[0], parameters["W" + str(l+1)].shape[1]))
        v["db" + str(l+1)] = np.zeros((parameters["b" + str(l+1)].shape[0], parameters["b" + str(l+1)].shape[1]))
        s["dW" + str(l+1)] = np.zeros((parameters["W" + str(l+1)].shape[0], parameters["W" + str(l+1)].shape[1]))
        s["db" + str(l+1)] = np.zeros((parameters["b" + str(l+1)].shape[0], parameters["b" + str(l+1)].shape[1]))

    return v, s


Parameter update:

def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
                                beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
    """
    Update parameters using Adam

    Arguments:
    v -- Adam variable, moving average of the first gradient, python dictionary
    s -- Adam variable, moving average of the squared gradient, python dictionary
    """

    L = len(parameters) // 2
    v_corrected = {}
    s_corrected = {}

    # Perform Adam update on all parameters
    for l in range(L):
        # Moving average of the gradients
        v["dW" + str(l+1)] = beta1 * v["dW" + str(l+1)] + (1 - beta1) * grads["dW" + str(l+1)]
        v["db" + str(l+1)] = beta1 * v["db" + str(l+1)] + (1 - beta1) * grads["db" + str(l+1)]

        # Compute bias-corrected first moment estimate
        v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)] / (1 - beta1 ** t)
        v_corrected["db" + str(l+1)] = v["db" + str(l+1)] / (1 - beta1 ** t)

        # Moving average of the squared gradients
        s["dW" + str(l+1)] = beta2 * s["dW" + str(l+1)] + (1 - beta2) * (grads["dW" + str(l+1)] ** 2)
        s["db" + str(l+1)] = beta2 * s["db" + str(l+1)] + (1 - beta2) * (grads["db" + str(l+1)] ** 2)

        # Compute bias-corrected second raw moment estimate
        s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)] / (1 - beta2 ** t)
        s_corrected["db" + str(l+1)] = s["db" + str(l+1)] / (1 - beta2 ** t)

        # Update parameters
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v_corrected["dW" + str(l+1)] / (np.sqrt(s_corrected["dW" + str(l+1)]) + epsilon)
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v_corrected["db" + str(l+1)] / (np.sqrt(s_corrected["db" + str(l+1)]) + epsilon)

    return parameters, v, s
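A toy call with the same made-up parameters as before. Note how at t = 1 the bias corrections cancel exactly, so each weight moves by roughly the learning rate times the sign of its gradient, regardless of the gradient's magnitude:

parameters = {"W1": np.array([[1.0, 2.0]]), "b1": np.array([[0.5]])}
grads      = {"dW1": np.array([[0.1, -0.2]]), "db1": np.array([[0.05]])}
v, s = initialize_adam(parameters)
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t=1, learning_rate=0.01)
print(parameters["W1"])  # ~[[0.99 2.01]] -- each weight moved by ~learning_rate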


Expected Performance

(See the comparison in the next section: on this task Adam converges much faster than plain mini-batch gradient descent and momentum.)

Models Under Different Optimizers

Test Data

train_X, train_Y = load_dataset()




Integrating the Optimization Methods

def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
          beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
    """
    3-layer neural network model which can be run in different optimizer modes.

    Arguments:
    mini_batch_size -- the size of a mini batch
    beta -- Momentum hyperparameter
    beta1 -- Exponential decay hyperparameter for the past gradients estimates
    beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
    epsilon -- hyperparameter preventing division by zero in Adam updates
    num_epochs -- number of epochs
    print_cost -- True to print the cost every 1000 epochs
    """

    L = len(layers_dims)
    costs = []                       # to keep track of the cost
    t = 0                            # counter required for the Adam update
    seed = 10

    parameters = initialize_parameters(layers_dims)

    if optimizer == "gd":
        pass  # no extra state required for gradient descent
    elif optimizer == "momentum":
        v = initialize_velocity(parameters)
    elif optimizer == "adam":
        v, s = initialize_adam(parameters)

    for i in range(num_epochs):
        # Increment the seed so the dataset is reshuffled differently after each epoch
        seed = seed + 1
        minibatches = random_mini_batches(X, Y, mini_batch_size, seed)

        for minibatch in minibatches:
            (minibatch_X, minibatch_Y) = minibatch

            a3, caches = forward_propagation(minibatch_X, parameters)

            cost = compute_cost(a3, minibatch_Y)

            grads = backward_propagation(minibatch_X, minibatch_Y, caches)

            if optimizer == "gd":
                parameters = update_parameters_with_gd(parameters, grads, learning_rate)
            elif optimizer == "momentum":
                parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
            elif optimizer == "adam":
                t = t + 1  # Adam counter
                parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
                                                               t, learning_rate, beta1, beta2, epsilon)

        # Print the cost every 1000 epochs
        if print_cost and i % 1000 == 0:
            print("Cost after epoch %i: %f" % (i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('epochs (per 100)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    return parameters


Mini-batch Gradient Descent

Training the Model

# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")

predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)


Performance

Cost curve:



Decision boundary:



Mini-batch Gradient Descent with Momentum

Training the Model

# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")

predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)


Performance

Cost curve:



Decision boundary:



(Why does this look no different from the version without momentum? Presumably because the dataset is small, the effect of momentum here is negligible; see the summary below.)

Mini-batch Gradient Descent with Adam

Training the Model

# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")

predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)


Performance

Cost curve:



Decision boundary:



Summary

Gradient descent with momentum usually helps, but with a dataset this small its effect is barely noticeable.

Adam, on the other hand, clearly outperforms both mini-batch gradient descent and momentum. If the number of epochs in the examples above were increased, all three methods would eventually reach good performance, but Adam converges much faster.

Advantages of Adam:

Relatively low memory requirements (though higher than plain gradient descent and gradient descent with momentum), and it can be combined with mini-batch gradient descent.

It usually performs well even with only minimal tuning of the hyperparameters (the learning rate α aside).