【ML】ICML2015_Unsupervised Learning of Video Representations using LSTMs
2016-03-21 23:46
Unsupervised Learning of Video Representations using LSTMs
Note: these are learning notes on an LSTM-based architecture for unsupervised learning of video representations.
(For more unsupervised-learning topics, see:
Learning Temporal Embeddings for Complex Video Analysis
**Unsupervised Learning of Visual Representations using Videos**
Unsupervised Visual Representation Learning by Context Prediction)
Link: http://arxiv.org/abs/1502.04681
Motivation:
- Understanding temporal sequences is important for solving many video-related problems, so the temporal structure of videos can serve as a supervisory signal for unsupervised learning.
Proposed models:
In this paper, the authors propose three LSTM-based models:
1) LSTM Autoencoder Model:
![](http://images2015.cnblogs.com/blog/814681/201603/814681-20160321230806995-1607440738.png)
This model is composed of two parts: an encoder and a decoder.
The encoder reads a sequence of frames as input, and the learned representation produced by the encoder is copied to the decoder as its initial state. The decoder then reconstructs the input frames in reverse order.
(This is the unconditional version; a conditional version additionally receives the decoder's last generated output as input, shown as the dashed boxes in the figure.)
Intuition: reconstruction requires the network to capture information about the appearance of objects and the background, which is exactly the information we would like the representation to contain.
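The encoder-decoder structure above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the LSTM cell, all weight shapes, initialisations, and the 5-frame toy "video" are assumptions made for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM step. W maps the concatenation [x; h] to the four gate pre-activations."""
    z = W @ np.concatenate([x, h])
    n = h.size
    i = sigmoid(z[:n])         # input gate
    f = sigmoid(z[n:2*n])      # forget gate
    o = sigmoid(z[2*n:3*n])    # output gate
    g = np.tanh(z[3*n:])       # candidate cell update
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
frame_dim, hidden = 16, 8      # illustrative sizes
W_enc = rng.normal(scale=0.1, size=(4 * hidden, frame_dim + hidden))
W_dec = rng.normal(scale=0.1, size=(4 * hidden, frame_dim + hidden))
W_out = rng.normal(scale=0.1, size=(frame_dim, hidden))  # hidden -> frame

frames = rng.normal(size=(5, frame_dim))  # a toy 5-frame "video"

# Encoder: run over the input frames; the final (h, c) is the learned representation.
h = c = np.zeros(hidden)
for x in frames:
    h, c = lstm_step(x, h, c, W_enc)

# Decoder (unconditional version): state is copied from the encoder, input is zero.
# The reconstruction target is the input sequence in reverse order.
h_d, c_d = h.copy(), c.copy()
recon = []
for _ in range(len(frames)):
    h_d, c_d = lstm_step(np.zeros(frame_dim), h_d, c_d, W_dec)
    recon.append(W_out @ h_d)
recon = np.stack(recon)

loss = np.mean((recon - frames[::-1]) ** 2)  # compare against reversed input
print(recon.shape, float(loss))
```

With trained weights this loss would drive the encoder state to summarise appearance; here the weights are random, so only the data flow is meaningful.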
2) LSTM Future Predictor Model:
![](http://images2015.cnblogs.com/blog/814681/201603/814681-20160321233447198-1001467699.png)
This model is similar to the one above; the main difference is the output: this model predicts the frames that come immediately after the input sequence. It likewise comes in conditional and unconditional versions, as described above.
Intuition: In order to predict the next few frames correctly, the model needs information about which objects are present and how they are moving so that the motion can be extrapolated.
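The conditional/unconditional distinction can be made concrete with a small sketch. A plain tanh RNN cell stands in for the LSTM to keep it short; all shapes and weights are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rnn_step(x, h, W):
    # Simplified recurrent cell standing in for the LSTM.
    return np.tanh(W @ np.concatenate([x, h]))

rng = np.random.default_rng(1)
frame_dim, hidden = 16, 8
W_enc = rng.normal(scale=0.1, size=(hidden, frame_dim + hidden))
W_dec = rng.normal(scale=0.1, size=(hidden, frame_dim + hidden))
W_out = rng.normal(scale=0.1, size=(frame_dim, hidden))

frames = rng.normal(size=(5, frame_dim))

h = np.zeros(hidden)
for x in frames:                      # encode the observed frames
    h = rnn_step(x, h, W_enc)

def predict(h, steps, conditional):
    preds, x = [], np.zeros(frame_dim)
    for _ in range(steps):
        h = rnn_step(x, h, W_dec)
        y = W_out @ h                 # predicted next frame
        preds.append(y)
        if conditional:               # feed the prediction back as the next input
            x = y
    return np.stack(preds)

future_uncond = predict(h.copy(), 3, conditional=False)
future_cond = predict(h.copy(), 3, conditional=True)
print(future_uncond.shape, future_cond.shape)
```

The two decoders agree on the first predicted frame (same state, same zero input) and diverge afterwards, which is exactly the conditional/unconditional difference described above.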
3) A Composite Model:
![](http://images2015.cnblogs.com/blog/814681/201603/814681-20160321234032964-102561825.png)
This model combines "input reconstruction" and "future prediction" to form a more powerful model. The two modules share the same encoder, which encodes the input sequence into a feature vector that is copied to both decoders.
Intuition: the shared encoder learns representations that contain not only the static appearance of objects and the background, but also dynamic information such as which objects are moving and how.
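The composite training objective can be sketched as one shared encoder state driving both decoders, with the two losses summed. Again a tanh cell stands in for the LSTM, and all weights, shapes, and the past/future split are illustrative assumptions.

```python
import numpy as np

def step(x, h, W):
    # Simplified recurrent cell standing in for the LSTM.
    return np.tanh(W @ np.concatenate([x, h]))

rng = np.random.default_rng(2)
frame_dim, hidden = 16, 8
W_enc = rng.normal(scale=0.1, size=(hidden, frame_dim + hidden))
W_rec = rng.normal(scale=0.1, size=(hidden, frame_dim + hidden))  # reconstruction decoder
W_fut = rng.normal(scale=0.1, size=(hidden, frame_dim + hidden))  # future-prediction decoder
W_out = rng.normal(scale=0.1, size=(frame_dim, hidden))

video = rng.normal(size=(8, frame_dim))
past, future = video[:5], video[5:]          # observed frames / frames to predict

h = np.zeros(hidden)
for x in past:                               # single shared encoder
    h = step(x, h, W_enc)

def decode(h, W, steps):
    # Unconditional decoding: zero input, state copied from the encoder.
    outs, x = [], np.zeros(frame_dim)
    for _ in range(steps):
        h = step(x, h, W)
        outs.append(W_out @ h)
    return np.stack(outs)

recon = decode(h.copy(), W_rec, len(past))    # targets: past frames, reversed
pred = decode(h.copy(), W_fut, len(future))   # targets: future frames

loss = np.mean((recon - past[::-1]) ** 2) + np.mean((pred - future) ** 2)
print(float(loss))
```

Because both loss terms backpropagate into the same encoder weights, the representation is pushed to hold appearance (for reconstruction) and motion (for prediction) at once.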