
ROS Robotics Projects (5): Deep Learning

2017-08-17 10:26
This is Chapter 7 of the book, which covers integrating deep learning algorithms with ROS for tasks such as object recognition.
First, some reference material:
1 How to teach yourself deep learning without detours: https://www.leiphone.com/news/201611/cWf2B23wdy6XLa21.html
2 Eight major open-source deep learning frameworks: https://www.leiphone.com/news/201608/5kCJ4Vim3wMjpBPU.html



This section briefly introduces four frameworks; the book covers TensorFlow's configuration and basic usage in detail.
1. TensorFlow
References:
1 Using ROS with TensorFlow: http://blog.exbot.net/archives/3074
2 Tensorflow_in_ROS:https://github.com/shunchan0677/Tensorflow_in_ROS
3 Cherry-Autonomous-Racecar:https://github.com/DJTobias/Cherry-Autonomous-Racecar

4 jetson-car:https://github.com/dat-ai/jetson-car 
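
To show how these pieces fit together, here is a minimal sketch of a ROS node that feeds camera images into a TensorFlow 1.x frozen graph, in the spirit of the Tensorflow_in_ROS example above but not taken from it; the model path, topic names, and tensor names ('input:0', 'softmax:0') are assumptions for illustration only:

#!/usr/bin/env python
# Hedged sketch: classify /camera/image_raw frames with a frozen TensorFlow graph.
# The model path and the tensor names 'input:0' / 'softmax:0' are assumptions.
import rospy
import numpy as np
import tensorflow as tf
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from std_msgs.msg import String

class TFClassifier(object):
    def __init__(self):
        graph_def = tf.GraphDef()
        with tf.gfile.FastGFile('/tmp/frozen_graph.pb', 'rb') as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
        self.sess = tf.Session()
        self.bridge = CvBridge()
        self.pub = rospy.Publisher('/tf_result', String, queue_size=1)
        rospy.Subscriber('/camera/image_raw', Image, self.on_image, queue_size=1)

    def on_image(self, msg):
        # Convert the ROS image to a numpy array and run one inference step.
        img = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
        probs = self.sess.run('softmax:0', feed_dict={'input:0': np.expand_dims(img, 0)})
        self.pub.publish(String(data=str(np.argmax(probs))))

if __name__ == '__main__':
    rospy.init_node('tf_classifier')
    TFClassifier()
    rospy.spin()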

2. Theano
References:
1 A ROS package that performs semantic segmentation on the NYU dataset using the Theano framework: https://github.com/amanrajdce/ROS-package-for-Semantic-Segmentation

3. Torch
References:
1 torch-ros:https://github.com/Xamla/torch-ros
2 torch-moveit:https://github.com/Xamla/torch-moveit

3 torch-swarm:https://github.com/RobbieHolland/SwarmbotGazebo-DQN



4. Caffe
An earlier post on this blog covered similar usage notes and may also be worth consulting.

References:
1 ros-caffe:https://github.com/tzutalin/ros_caffe

2 web-caffe-ros:https://github.com/ykoga-kyutech/caffe_web

Finally, here are three interesting open-source projects that combine ROS with deep learning:
----https://github.com/yao62995/AS_6Dof_Arm

A robot arm built with ROS & MoveIt!, used to train deep reinforcement learning algorithms.

Gazebo demo

Real-environment demo

URDF description files

The arm description files are located in the as_arm_description/urdf directory:
as_arm.xacro - arm description file
camera.xacro - camera and camera-mount description file
sink.xacro - item bin description file

Launch commands:

Start the Gazebo simulation environment:
roslaunch as_arm_gazebo as_arm_bringup.launch

Start the MoveIt! demo:
roslaunch as_arm_moveit_config demo.launch

Start the grasp generator:
roslaunch as_arm_gazebo grasp_generator_server.launch

View the camera image:
rosrun image_view image_view image:=/camera/image_raw

Command a single joint to a target angle:
rostopic pub -1 /rrbot/joint1_position_controller/command std_msgs/Float64 "data: 1.5"

Get the cube positions:
rostopic echo -n 1 /gazebo/cubes

Get the joint positions:
rostopic echo -n 1 /as_arm/joint_states

Get the world coordinates of a link (e.g. the end effector):
rosrun tf tf_echo /world /grasp_frame_link

Collision detection

Collision detection covers both self-collision and environment-collision. The relevant files are:
Service definition: as_arm_description/srv/CheckCollisionValid.srv
Service node (must be built in a catkin workspace): as_arm_control/src/check_collision.cpp
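
As a hedged illustration of how such a service might be called from Python; the service name and the request/response field names ('joint_positions', 'valid') below are assumptions, so check CheckCollisionValid.srv and check_collision.cpp for the actual interface:

#!/usr/bin/env python
# Hedged sketch: query the collision-check service before executing a motion.
# '/check_collision_valid', 'joint_positions' and 'valid' are assumed names.
import rospy
from as_arm_description.srv import CheckCollisionValid

rospy.init_node('collision_check_client')
rospy.wait_for_service('/check_collision_valid')
check = rospy.ServiceProxy('/check_collision_valid', CheckCollisionValid)
resp = check(joint_positions=[0.0, 0.2834, -0.9736, -1.4648, 0.0])
rospy.loginfo('collision-free: %s', resp.valid)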

Running in simulation:

The control scripts are located in the as_arm_control/test/ directory:
Pick-and-place script: pick_and_place.py
MoveIt! motion planning: test_planner.py

Set the Gazebo joint angles:
rostopic pub -1 /as_arm/joint1_position_controller/command std_msgs/Float64 "data: 0"
rostopic pub -1 /as_arm/joint2_position_controller/command std_msgs/Float64 "data: 0.2834"
rostopic pub -1 /as_arm/joint3_position_controller/command std_msgs/Float64 "data: -0.9736"
rostopic pub -1 /as_arm/joint4_position_controller/command std_msgs/Float64 "data: -1.4648"
rostopic pub -1 /as_arm/joint5_position_controller/command std_msgs/Float64 "data: 0"
rostopic pub -1 /as_arm/joint6_position_controller/command std_msgs/Float64 "data: -0.015"
rostopic pub -1 /as_arm/joint7_position_controller/command std_msgs/Float64 "data: 0.015"

Set the Gazebo cube pose:
rostopic pub -1 /gazebo/set_link_state gazebo_msgs/LinkState "{link_name: cube1, pose: {position: {x: -0.2, y: 0, z: 1.0}, orientation: {x: 0,y: 0, z: 0, w: 1.0}}, twist: {}, reference_frame: world}"

Running on the real robot:

The Arduino sketch is at as_arm_real/data/servo_v4.0.ino.
Start the real-robot node to drive the physical arm:
roslaunch as_arm_real servo_bringup_real.launch

After starting the Gazebo simulation, run the control scripts in as_arm_real/scripts/:
Synchronized random motion of the real and simulated arm: random_run_drive.py
Synchronized motion of the real and simulated gripper: run_gripper_driver.py
Pick-and-place script (uses the OMPL IK solver): pick_and_place.py

Deep reinforcement learning training:

The training scripts are located in the as_arm_control/scripts/ directory:
Gazebo simulation control and state acquisition: simulate_state.py
DDPG algorithm (TensorFlow implementation): ddpg.py
Simulation environment agent interface: asm_env.py
Training script: learning.py
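
The rough shape of the training loop implied by this file layout might look like the following; the class and method names (ArmEnv, DDPG, choose_action, perceive) are assumptions for illustration, not the repository's actual API:

# Hedged sketch of how learning.py might tie the pieces together.
from asm_env import ArmEnv   # simulation agent interface (assumed class name)
from ddpg import DDPG        # TensorFlow DDPG implementation (assumed class name)

env = ArmEnv()
agent = DDPG(state_dim=env.state_dim, action_dim=env.action_dim)

for episode in range(10000):
    state = env.reset()
    for step in range(200):
        action = agent.choose_action(state)           # actor network + exploration noise
        next_state, reward, done = env.step(action)   # reward = exp(-gamma * dist(cube, gripper))
        agent.perceive(state, action, reward, next_state, done)  # replay buffer + training step
        state = next_state
        if done:
            break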

Changelog 2016-11-24

Changed the actor network's output layer: the actor currently outputs movement angles for the 5 joints; added one more output indicating whether the cube is within the gripper's grasping range.
Changed the actor output range to integers in [-4, 4].

Changed the reward function to reward = exp(-γ * dist(cube, gripper)).
Adjusted the camera field of view and added cameras to form a stereo pair, improving distance perception while keeping the cube from being occluded by the arm.

Tuned the parameters of the OU (Ornstein-Uhlenbeck) noise generator to avoid overfitting and explore more of the motion space.
Training stages:
Stage 1: one cube, fixed initial cube position, gripper starts at PreGrasp.
Stage 2: one cube, variable initial cube position, gripper starts at PreGrasp.
Stage 3: multiple cubes, variable initial cube positions, gripper starts at UpRight.

Changelog 2016-11-17

Collision handling: when a collision occurs, each joint randomly samples an angle within the (-4, 4) range and the result is collision-checked; the action is executed only once no collision is detected.

DDPG actor output handling: the output layer is enlarged to size action_dim * 3 and reshaped to (action_dim, 3); an arg_max over the last axis yields 5 integers in [0, 2], from which 1 is subtracted to obtain integers in [-1, 1] that are used as the arm's output action (see the sketch after this changelog).

Arm and gripper are handled separately: the 5 arm joints and the 2 gripper joints are treated independently, and only the 5 arm joints are controlled while training the network. When the distance between gripper_frame and cube_pose falls below a minimum threshold, the goal is considered reached; the grasp is executed and the cube is attached to the gripper.

To address the arm oscillating while executing actions in Gazebo:
Tuned the joint PID parameters for fast, smooth motion.
Adjusted the links' mass and inertia properties to reduce inertia.

To address Gazebo joint commands being dropped from the topic queue: increased the joint-command queue size and adjusted the training rate to match the rate at which joint commands are executed.

Gazebo/ROS/MoveIt! interaction: Gazebo streams back camera images, MoveIt! checks collisions, and ROS coordinates the communication, covering control and state information for the end effector, cube, arm_joints, and gripper_joints.
Joint-related topics: "/joint_states" sets the joint angles in RViz, "as_arm/joint_states" reads the current joint angles, and "as_arm/joints_position_controller/command" sets the joint angles in Gazebo.
Cube-related topics: "/gazebo/cubes" reads the cube poses, and "/gazebo/set_link_state" sets a cube pose.
The cube is created in MoveIt! with scene.add_box() and kept in sync with the cube in Gazebo.

Adjusted the camera field of view and placement.
Change the actor network's output layer (TODO): the actor currently outputs movement angles for the 5 joints; add one more output indicating whether the cube is within the gripper's grasping range.
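
A minimal numpy sketch of the actor-output discretization referenced in the 2016-11-17 notes; the values are stand-ins, not repository code:

# Hedged sketch: a length action_dim * 3 actor output is reshaped to
# (action_dim, 3); arg_max picks integers in [0, 2] and subtracting 1
# maps them to per-joint actions in [-1, 1].
import numpy as np

action_dim = 5
raw_output = np.random.randn(action_dim * 3)   # stand-in for the actor's raw output
scores = raw_output.reshape(action_dim, 3)     # one row of 3 scores per joint
discrete = np.argmax(scores, axis=1)           # integers in [0, 2]
action = discrete - 1                          # integers in [-1, 1], one per arm joint
print(action)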

----https://github.com/AbhiRP/Autonomous-Robot-Navigation-using-Deep-Learning-Vision-Landmark-Framework

Autonomous Robot Navigation using the TensorFlow Inception V3 Image Recognition Engine and the Robot Operating System (ROS)

Autonomous-Robot-Navigation-using-Deep-Learning-Vision-Landmark-Framework

Abstract:

Robot navigation requires specific techniques for guiding a mobile robot to a desired destination. In general, a desired path is required in an environment described by different terrain and a set of distinct objects, such as obstacles and particular landmarks. In this project, a new approach to autonomous navigation is presented that uses machine learning techniques, specifically a Convolutional Neural Network, to identify markers from images, together with the Robot Operating System and an Object Position Discovery system to navigate towards those markers.

Hardware:

Yujin Robot Kobuki TurtleBot 2 http://kobuki.yujinrobot.com/
Asus Xtion PRO RGB-D Camera https://www.asus.com/3D-Sensor/Xtion_PRO/
ODROID-XU4 Octa Core ARM Microcomputer http://www.hardkernel.com/main/main.php

Software:

Ubuntu 14.04 Trusty Tahr http://releases.ubuntu.com/14.04/
Robot Operating System (ROS) Indigo http://wiki.ros.org/indigo/Installation
TensorFlow https://www.tensorflow.org/install/
ImageNet Dataset http://www.image-net.org/

ROS Package:

openni2_launch http://wiki.ros.org/openni2_launch
depthimage_to_laserscan http://wiki.ros.org/depthimage_to_laserscan

Documentation:

This project is documented as follows:
Project Overview
Algorithm
Experimental Setup
Project Implementation
Experimental Results

Video of this project is available at https://youtu.be/UOC4PvreTG4

----https://github.com/elggem/tensorflow_node

A TensorFlow-based ROS node for evaluating deep learning algorithms.

tensorflow_node

This is a TensorFlow-based framework for evaluating deep learning algorithms and streaming internal belief states out via ROS. It aims to be a flexible implementation that can be modified and inspected at runtime on live streaming data. Eventually it will be used in conjunction with the OpenCog framework for integrated Artificial General Intelligence.
This code is under heavy development and used for research purposes, so handle with care!

Documentation

You can find documentation on the wiki tab. There are references for the network architecture and some high-level descriptions of how it works.

Participate

I've put todos and remaining tasks in the projects tab on GitHub. Feel free to collaborate or contact me if you have any suggestions!

I want to run it!

Clone the repo into your catkin workspace, build it, and run:
roslaunch tensorflow_node mnist.launch
TF summaries are written to outputs/summaries (if enabled in the config file) and can be inspected with:
rosrun tensorflow_node tensorboard

----