

Keras Learning Notes (1): optimizers

All of the optimizers below derive from the common base class:

keras.optimizers.Optimizer()


SGD (Stochastic Gradient Descent)

Stochastic gradient descent, with support for momentum, decay, and Nesterov momentum.

keras.optimizers.SGD(lr=0.01, momentum=0.0, decay=0.0, nesterov=False)


Parameters

lr: float >= 0. Learning rate.

momentum: float >= 0. Momentum for the parameter updates.

decay: float >= 0. Learning rate decay over each update.

nesterov: boolean. Whether to apply Nesterov momentum.
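
As a minimal usage sketch (the two-layer model, its sizes, and the loss function are illustrative assumptions, not part of the Keras docs; the optimizer arguments follow the signature above, which uses the older lr-style API):

from keras.models import Sequential
from keras.layers.core import Dense
from keras.optimizers import SGD

# Illustrative toy network: layer sizes and input_dim are made up for the example.
model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))
model.add(Dense(10, activation='softmax'))

# SGD with momentum, per-update decay, and Nesterov acceleration, as documented above.
sgd = SGD(lr=0.01, momentum=0.9, decay=1e-6, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)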

RMSprop

RMSProp optimizer.

It is recommended to leave the parameters of this optimizer at their default values.

This optimizer is usually a good choice for recurrent neural networks.

keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-06)


Parameters

lr: float >= 0. Learning rate.

rho: float >= 0.

epsilon: float >= 0. Fuzz factor.
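
Because the defaults are the recommended settings, you can either pass the optimizer by its string name or construct an explicit instance. A sketch, reusing the illustrative model built in the SGD example above:

from keras.optimizers import RMSprop

# Pass the optimizer by name to use all default parameters...
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

# ...or create an instance explicitly, e.g. to keep a handle on it.
rmsprop = RMSprop(lr=0.001, rho=0.9, epsilon=1e-06)
model.compile(loss='categorical_crossentropy', optimizer=rmsprop)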

Adagrad

Adagrad optimizer.

It is recommended to leave the parameters of this optimizer at their default values.

keras.optimizers.Adagrad(lr=0.01, epsilon=1e-06)


Parameters

lr: float >= 0. Learning rate.

epsilon: float >= 0.
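
For intuition, here is a minimal NumPy sketch of the standard Adagrad update rule (a conceptual illustration, not the Keras implementation): squared gradients are accumulated per parameter, so frequently updated parameters receive progressively smaller steps.

import numpy as np

def adagrad_step(params, grads, accum, lr=0.01, epsilon=1e-06):
    # Accumulate the squared gradient for every parameter.
    accum += grads ** 2
    # Scale each step by 1/sqrt(accumulated squared gradients): an adaptive,
    # per-parameter learning rate that shrinks as updates accumulate.
    params -= lr * grads / (np.sqrt(accum) + epsilon)
    return params, accum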

Adadelta

Adadelta optimizer.

It is recommended to leave the parameters of this optimizer at their default values.


keras.optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=1e-06)


Parameters

lr: float >= 0. Learning rate. It is recommended to leave it at the default value.

rho: float >= 0.

epsilon: float >= 0. Fuzz factor.
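
A usage sketch (again reusing the illustrative model from the SGD example), keeping the recommended defaults:

from keras.optimizers import Adadelta

# Recommended defaults; unlike most optimizers, lr stays at 1.0 because the
# ratio of accumulated update/gradient magnitudes already scales each step.
adadelta = Adadelta(lr=1.0, rho=0.95, epsilon=1e-06)
model.compile(loss='categorical_crossentropy', optimizer=adadelta)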

Adam

Adam optimizer.

Default parameters follow those provided in the original paper.

keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)


Parameters

lr: float >= 0. Learning rate.

beta_1/beta_2: floats, 0 < beta < 1. Generally close to 1.

epsilon: float >= 0. Fuzz factor.
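
A usage sketch with the paper defaults (the model is the illustrative one from the SGD example); in practice only lr is commonly tuned, while beta_1 and beta_2 are usually left as-is:

from keras.optimizers import Adam

# Paper defaults; epsilon is only a small constant for numerical stability.
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model.compile(loss='categorical_crossentropy', optimizer=adam)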

Adamax

Adamax optimizer, from Section 7 of the Adam paper. It is a variant of Adam based on the infinity norm.

Default parameters follow those provided in the paper.

keras.optimizers.Adamax(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08)


Parameters

lr: float >= 0. Learning rate.

beta_1/beta_2: floats, 0 < beta < 1. Generally close to 1.

epsilon: float >= 0. Fuzz factor.
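
The interface mirrors Adam; the main visible difference is the larger default learning rate (0.002 instead of 0.001). A sketch, reusing the illustrative model from above:

from keras.optimizers import Adamax

# Same constructor arguments as Adam, but with lr defaulting to 0.002.
adamax = Adamax(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model.compile(loss='categorical_crossentropy', optimizer=adamax)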
Tags: Keras optimizers