[Keras] Study Notes (1): optimizers
2016-04-07 13:28
keras.optimizers.Optimizer()
This is the abstract base class that all of the optimizers below inherit from. An optimizer is passed to model.compile(), either as an instance or by its string name.
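To make the usage pattern concrete, here is a minimal sketch (the two-layer toy model and the loss are made-up assumptions for illustration; only the compile-time optimizer argument is the point):

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

# A toy model, just so there is something to compile.
model = Sequential()
model.add(Dense(10, input_dim=20, activation='softmax'))

# Option 1: pass an optimizer instance, which lets you set its parameters.
model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.01))

# Option 2: pass the optimizer by name; its default parameters are used.
model.compile(loss='categorical_crossentropy', optimizer='sgd')

The sketches for the individual optimizers below reuse this toy model.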
SGD(Stochastic gradient descent)
Stochastic gradient descent, with support for momentum, learning rate decay, and Nesterov momentum.
keras.optimizers.SGD(lr=0.01, momentum=0.0, decay=0.0, nesterov=False)
Parameters
lr: float >= 0. Learning rate.
momentum: float >= 0. Parameter update momentum.
decay: float >= 0. Learning rate decay over each update.
nesterov: boolean. Whether to apply Nesterov momentum.
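For example, a sketch with momentum and Nesterov enabled (the values 0.9 and 1e-6 are illustrative assumptions, not recommendations; model is the toy model from the first sketch):

from keras.optimizers import SGD

# SGD with a momentum term, per-update learning rate decay,
# and Nesterov momentum enabled.
sgd = SGD(lr=0.01, momentum=0.9, decay=1e-6, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)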
RMSprop
RMSProp optimizer.
It is recommended to leave the parameters of this optimizer at their default values.
This optimizer is usually a good choice for recurrent neural networks.
keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-06)
Parameters
lr: float >= 0. Learning rate.
rho: float >= 0.
epsilon: float >= 0. Fuzz factor.
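A minimal sketch (reusing the toy model from the first example; since the defaults are recommended, they are left untouched here):

from keras.optimizers import RMSprop

# Parameters are left at their recommended defaults:
# lr=0.001, rho=0.9, epsilon=1e-06.
rmsprop = RMSprop()
model.compile(loss='categorical_crossentropy', optimizer=rmsprop)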
Adagrad
Adagrad optimizer.
It is recommended to leave the parameters of this optimizer at their default values.
keras.optimizers.Adagrad(lr=0.01, epsilon=1e-06)
Parameters
lr: float >= 0. Learning rate.
epsilon: float >= 0.
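A minimal sketch (again reusing the toy model; defaults left as recommended):

from keras.optimizers import Adagrad

# Defaults: lr=0.01, epsilon=1e-06.
adagrad = Adagrad()
model.compile(loss='categorical_crossentropy', optimizer=adagrad)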
Adadelta
Adadelta optimizer.
It is recommended to leave the parameters of this optimizer at their default values.
This optimizer is usually a good choice for recurrent neural networks.
keras.optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=1e-06)
Parameters
lr: float >= 0. Learning rate. It is recommended to leave it at the default value.
rho: float >= 0.
epsilon: float >= 0. Fuzz factor.
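A minimal sketch (reusing the toy model; lr stays at its default of 1.0, since Adadelta adapts the effective step size itself):

from keras.optimizers import Adadelta

# Defaults: lr=1.0, rho=0.95, epsilon=1e-06.
adadelta = Adadelta()
model.compile(loss='categorical_crossentropy', optimizer=adadelta)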
Adam
Adam optimizer.
Default parameters follow those provided in the original paper.
keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
Parameters
lr: float >= 0. Learning rate.
beta_1/beta_2: floats, 0 < beta < 1. Generally close to 1.
epsilon: float >= 0. Fuzz factor.
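A minimal sketch (reusing the toy model; the values shown are simply the paper's defaults, written out for reference):

from keras.optimizers import Adam

# Defaults follow the original Adam paper.
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model.compile(loss='categorical_crossentropy', optimizer=adam)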
Adamax
Adamax optimizer, from Section 7 of the Adam paper. It is a variant of Adam based on the infinity norm.
Default parameters follow those provided in the paper.
keras.optimizers.Adamax(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
Parameters
lr: float >= 0. Learning rate.
beta_1/beta_2: floats, 0 < beta < 1. Generally close to 1.
epsilon: float >= 0. Fuzz factor.
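A minimal sketch (reusing the toy model; note the Adamax default learning rate is 0.002, unlike Adam's 0.001):

from keras.optimizers import Adamax

# Defaults follow the paper; lr defaults to 0.002 for Adamax.
adamax = Adamax(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model.compile(loss='categorical_crossentropy', optimizer=adamax)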