A collection of neural style transfer GitHub repos and websites
2016-08-30 15:29
2186 views
https://github.com/jcjohnson/neural-style https://github.com/yusuketomoto/chainer-fast-neuralstyle https://github.com/manuelruder/artistic-videos https://github.com/DmitryUlyanov/texture_nets https://github.com/mbartoli/neural-animation https://github.com/zerolocker/neural-style https://github.com/AbdullahAlfaraj/neural-style-website https://github.com/DylanAlloy/NeuralStyle-WebApp https://github.com/OlavHN/fast-neural-style https://github.com/chuanli11/CNNMRF https://github.com/Teaonly/easyStyle https://github.com/searchXiaoLai/paintmaster https://github.com/naman14/neural-style-android https://github.com/DmitryUlyanov/fast-neural-doodle https://github.com/layumi/2016_Artist_Style https://github.com/alireza-a/neural-style-webapp https://github.com/layumi/2015_Face_Detection https://github.com/hashbangCoder/Real-Time-Style-Transfer
VGG_ILSVRC_19_layers_deploy.prototxt
https://github.com/genekogan/CubistMirror https://github.com/gafr/chainer-fast-neuralstyle-models https://github.com/awentzonline/keras-rtst https://github.com/suquark/neural-style-visualizer https://github.com/anishathalye/neural-style https://github.com/larspars/neural-style-video https://github.com/titu1994/Neural-Style-Transfer https://github.com/andersbll/neural_artistic_style
Neural-Style movie
https://github.com/zhaw/neural_style https://github.com/Explee/neural_style
Performance result
The non-quantized TensorFlow CPU implementation is roughly 703x slower than the AWS GPU one.
Infrastructure: Time
TensorFlow GPU (K420, AWS): ~0.0329s (batching), 0.026s (looping)
TensorFlow macOS: 4.13s (batching), 3.31s (looping)
TensorFlow macOS simulator: 5.86s (float32), 16.9s (quantized; this is odd)
TensorFlow iPhone 6s CPU: 18.30s (float32)
Image size: 600x600
batching: all images are processed in parallel
looping: one image at a time
quantized: uses the TensorFlow quantization system (it shouldn't be slower; the graph probably needs cleaning up)
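The batching-versus-looping distinction in the table above can be illustrated with a toy NumPy sketch (this is an illustration of the idea, not the benchmark code; the "network" here is a single made-up dense layer):

```python
import numpy as np

# Toy "network": one dense layer applied to flattened 600x600 images.
rng = np.random.default_rng(0)
weights = rng.standard_normal((600 * 600, 16))
images = rng.standard_normal((4, 600 * 600))  # four flattened images

# looping: process one image at a time (one small matmul per image)
looped = np.stack([img @ weights for img in images])

# batching: all images in a single large matrix multiply
batched = images @ weights

# Results are identical; batching just amortizes per-call overhead,
# which matters far more on a GPU than on a CPU.
assert np.allclose(looped, batched)
```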
https://github.com/SaMnCo/docker-neuraltalk2
Place instances_train-val2014.zip in the same folder. You can download it from http://mscoco.org/dataset/#download
Then run the following commands:
Deep learning algorithm paints smooth-moving works of art
http://newatlas.com/neural-network-videos/44580/ (PDF screenshot)
A collection of stylized-video resources curated by an artist on Twitter:
http://www.genekogan.com/works/style-transfer.html (resource collection)
https://gist.github.com/genekogan/d61c8010d470e1dbe15d (production workflow)
http://www.kylemcdonald.net/stylestudies/ (pick a suitable style first; don't waste GPU time pointlessly)
https://m.youtube.com/watch?utm_campaign=buffer&utm_source=twitter.com&v=BuuNjnjpqFI&utm_medium=social&utm_content=buffer5220e http://mt.sohu.com/20160816/n464537105.shtml http://blog.josephmisiti.com/making-neural-art https://github.com/mtyka/neural_artistic_style
Tested on 7 phones: 3 iPhones and 4 Androids. The fastest Android took about 9 seconds; the slowest iPhone about 15 seconds.
https://github.com/rupeshs/neuralstyler https://github.com/ryankiros/neural-storyteller https://github.com/gafr/chainer-fast-neuralstyle-models/issues/5 https://github.com/SergeyMorugin/ostagram https://developer.apple.com/reference/accelerate/1912851-bnns https://github.com/collinhundley/Swift-AI/issues/50 http://arxiv.org/abs/1603.08155 https://github.com/jcjohnson/neural-style/issues/313 https://github.com/yusuketomoto/chainer-fast-neuralstyle/issues/1#issuecomment-228944706
markz-nyc commented 23 hours ago
Prisma already made the neural style into offline mode, how iPhone can get such good result in few seconds without using gpu?
DylanAlloy commented 23 hours ago • edited
Because it's based on C code probably rather than Python.
https://github.com/awentzonline/keras-rtst.git https://www.researchgate.net/publication/301836893_Perceptual_Losses_for_Real-Time_Style_Transfer_and_Super-Resolution https://vimeo.com/167910860 http://www.creativeai.net/posts/EyoFYu2ZDv6T3zdYi/real-time-style-transfer-with-keras https://www.reddit.com/r/MachineLearning/comments/4cj1jj/160308155_perceptual_losses_for_realtime_style/ http://www.le.com/ptv/vplay/22792152.html https://plus.google.com/+ResearchatGoogle/posts/KxetFFXpPTJ http://www.academia.edu/254458/Algorithm_for_Real-Time_Style_Transfer_for_Human_Motion http://www.slideshare.net/yusuketomoto/realtime-style-transfer-63669036
http://blog.csdn.net/lebula/article/details/51896836
Paper notes: Perceptual Losses for Real-Time Style Transfer and Super-Resolution [in progress]
1. transformation: image to image
2. perceptual losses:
PSNR is a per-pixel loss; a high value does not necessarily mean the image looks good. It is widely used mainly because it is cheap to compute.
Applied to image super-resolution.
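To make the per-pixel point concrete, here is a minimal NumPy sketch of the standard PSNR computation (the function name and test images are mine, not from the paper):

```python
import numpy as np

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio: a purely per-pixel similarity measure."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# A constant brightness shift changes every pixel equally: the image still
# "looks the same", yet PSNR drops to a modest value. PSNR sees pixels,
# not perception.
a = np.zeros((4, 4))
b = a + 10  # uniform offset of 10 gray levels
print(round(psnr(a, b), 2))  # prints 28.13
```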
Four losses are defined:
1. feature reconstruction loss: L2 distance between feature maps
2. style reconstruction loss: distance between Gram matrices
3. per-pixel loss
4. total variation (TV) regularization
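The style and TV terms above can be sketched in NumPy (shapes and function names are my own; I assume C x H x W feature maps, which is a simplification of the paper's batched formulation):

```python
import numpy as np

def gram_matrix(feat):
    """Style representation: C x C Gram matrix of a C x H x W feature map."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_gen, feat_style):
    """Style reconstruction loss: squared Frobenius distance between Gram matrices."""
    return np.sum((gram_matrix(feat_gen) - gram_matrix(feat_style)) ** 2)

def tv_loss(img):
    """Total variation regularizer: penalizes differences between neighboring pixels."""
    dh = np.sum((img[:, 1:, :] - img[:, :-1, :]) ** 2)
    dw = np.sum((img[:, :, 1:] - img[:, :, :-1]) ** 2)
    return dh + dw

feat = np.random.rand(8, 16, 16)
print(style_loss(feat, feat))  # identical features -> style loss is 0.0
```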
http://www.genekogan.com/works/style-transfer.html http://humanmotion.ict.ac.cn/papers/2015P1_StyleTransfer/details.htm http://dl.acm.org/citation.cfm?id=2766999 https://www.versioneye.com/python/keras-rtst/0.0.1
pip install https://pypi.python.org/packages/source/k/keras-rtst/keras-rtst-0.0.1.tar.gz
https://blogs.nvidia.com/blog/2016/05/25/deep-learning-paints-videos/ http://www.meyumer.com/pdfs/SpectralStyleTransfer.pdf http://blogs.scientificamerican.com/sa-visual/neural-networks-for-artists/ http://dmlc.ml/mxnet/2016/06/20/end-to-end-neural-style.html https://github.com/dmlc/mxnet/tree/master/example/neural-style http://www.nerdcore.de/2016/07/10/prisma-style-transfer-kommt-fuer-android-und-die-technik-hinter-der-app/ http://link.springer.com/article/10.1007/s11554-016-0612-0 http://sssslide.com/www.slideshare.net/yusuketomoto/realtime-style-transfer-63669036 http://www.trancefish.de/blog/show/design/Prima+ist+auch+zum+Filmemachen+geeignet/ http://petapixel.com/2016/07/20/prisma-app-turns-standard-timelapse-incredible-moving-painting/ http://petapixel.com/2016/08/03/artisto-app-prisma-video-turns-videos-van-goghs/ https://blog.my.com/artisto-app-for-video-processing-when-your-videos-turn-into-paintings-that-come-to-life/ https://www.engadget.com/2016/08/03/artisto-prisma-for-videos/
style transfer in real-time
https://github.com/6o6o/chainer-fast-neuralstyle.git
#!/bin/bash
# Force-resize every JPEG in the current directory to exactly 256x256 (aspect ratio ignored)
for name in ./*.jpg; do convert -resize 256x256\! "$name" "$name"; done
# Extract every frame of the input video as numbered PNGs
ffmpeg -i input.flv out%d.png
# Or extract only one frame per second
ffmpeg -i input.flv -vf fps=1 out%d.png
# Re-encode the stylized frames into a 30 fps H.264 video
#ffmpeg -framerate 30 -i img%03d.png -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4
# Mux the JPEG frames directly as a video (stream copy; -pix_fmt has no effect with -codec copy)
ffmpeg -framerate 30 -i input%03d.jpg -codec copy output.mkv