keras Embedding layer
2017-07-06 11:46
keras.layers.embeddings.Embedding(input_dim,
output_dim, embeddings_initializer='uniform',
embeddings_regularizer=None,
activity_regularizer=None,
embeddings_constraint=None,
mask_zero=False,
input_length=None)
Turns positive integers (indexes) into dense vectors of fixed size. e.g. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]
This layer can only be used as the first layer in a model.
Example
model = Sequential()
model.add(Embedding(1000, 64, input_length=10))
# the model will take as input an integer matrix of size (batch, input_length).
# the largest integer (i.e. word index) in the input should be
# no larger than 999 (vocabulary size).
# now model.output_shape == (None, 10, 64), where None is the batch dimension.

input_array = np.random.randint(1000, size=(32, 10))

model.compile('rmsprop', 'mse')
output_array = model.predict(input_array)
assert output_array.shape == (32, 10, 64)
Arguments
input_dim: int > 0. Size of the vocabulary, i.e. maximum integer index + 1.
output_dim: int >= 0. Dimension of the dense embedding.
embeddings_initializer: Initializer for the embeddings matrix (see initializers).
embeddings_regularizer: Regularizer function applied to the embeddings matrix (see regularizer).
embeddings_constraint: Constraint function applied to the embeddings matrix (see constraints).
mask_zero: Whether or not the input value 0 is a special "padding" value that should be masked out. This is useful when using recurrent layers which may take variable-length input. If this is True, then all subsequent layers in the model need to support masking, or an exception will be raised. If mask_zero is set to True, as a consequence, index 0 cannot be used in the vocabulary (input_dim should equal size of vocabulary + 1).
input_length: Length of input sequences, when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed).
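Conceptually, the layer is a table lookup: each integer index selects one row of a learned (input_dim, output_dim) weight matrix, and mask_zero=True additionally flags positions holding index 0 as padding. A minimal plain-NumPy sketch of that behaviour (this is an illustration, not the Keras implementation; the weight matrix here is random rather than learned):

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, output_dim = 1000, 64                # vocabulary size, embedding size
W = rng.normal(size=(input_dim, output_dim))    # stand-in for the learned embeddings matrix

x = np.array([[5, 42, 0, 0],                    # one padded sequence (0 = padding index)
              [7,  7, 9, 3]])                   # shape: (batch_size=2, sequence_length=4)

out = W[x]            # row lookup -> shape (2, 4, output_dim)
mask = x != 0         # the boolean mask that mask_zero=True would propagate

print(out.shape)      # (2, 4, 64)
print(mask)
```

Note how the padding positions still receive an embedding row (row 0 of W); mask_zero only tells downstream masking-aware layers to ignore them, which is why index 0 must not be used as a real vocabulary entry.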
Input shape
2D tensor with shape:
(batch_size, sequence_length).
Output shape
3D tensor with shape:
(batch_size, sequence_length, output_dim).
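The shape contract above can be checked with the same lookup sketch, using the numbers from the docstring example (batch 32, sequence length 10, embedding size 64; the random matrix stands in for trained weights). It also shows why Flatten/Dense need a constant input_length: the flattened width is sequence_length * output_dim, which must be known to size the Dense weight matrix.

```python
import numpy as np

batch_size, sequence_length, output_dim = 32, 10, 64
x = np.random.randint(1000, size=(batch_size, sequence_length))  # 2D integer input
W = np.random.rand(1000, output_dim)                             # stand-in embeddings matrix

out = W[x]   # 3D output: (batch_size, sequence_length, output_dim)

# A Flatten then Dense stack can only be built because sequence_length is fixed:
flat = out.reshape(batch_size, sequence_length * output_dim)     # (32, 640)
```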
References
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks