
Rossum -- The Secret of Autonomous Motion

2013-06-26 15:47
Our Rossum robot is designed to move autonomously. At the lowest level, the Arduino side does nothing but execute: it accepts both manual and autonomous motion-control commands and acts as a pure slave. On the host side, an ardros node receives the robot's control commands and relays them to the Arduino, while also accepting manual control from a remote operator.

Below is an overview of the main design problems:

1. PID Speed Control

Anyone who has tuned a small robot's speed knows that, because of manufacturing variation between motors, two drive motors given the same PWM excitation will show different speeds when measured with encoders. For a wheeled robot this is a serious problem, so a mechanism is needed to continuously adjust each motor's PWM value. There are two common approaches: first, use MATLAB to fit, for each wheel separately, the relationship between the PWM excitation (8-bit in my case) and the resulting speed; second, build a closed-loop feedback controller that automatically adjusts the PWM value from the difference between the commanded and measured speed.

The first approach is tailored to a specific pair of motors. After fitting a large amount of data the result can be very good, and the motor speed can be controlled accurately. The downside is the heavy workload, and the fitting must be redone whenever the motors are replaced.
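For reference, the first approach boils down to a per-wheel calibration table: measure (or fit) the speed produced at several PWM values, then invert the fit by interpolation at run time. Below is a minimal sketch in plain C++; the table values, `CalPoint`, and `PwmForSpeed` are invented for illustration, not measured data:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical calibration point: measured wheel speed (m/s) produced by a
// given 8-bit PWM value. All numbers below are invented for illustration.
struct CalPoint { float speed; std::uint8_t pwm; };

// Example fitted table for one wheel, sorted by speed.
const CalPoint kLeftWheel[] = {
    {0.00f, 0}, {0.10f, 60}, {0.25f, 120}, {0.40f, 180}, {0.55f, 255}
};

// Invert the fit at run time: linearly interpolate the PWM value needed for
// a target speed, clamping outside the calibrated range.
std::uint8_t PwmForSpeed(const CalPoint* table, std::size_t n, float target)
{
    if (target <= table[0].speed) return table[0].pwm;
    if (target >= table[n - 1].speed) return table[n - 1].pwm;
    for (std::size_t i = 1; i < n; ++i) {
        if (target <= table[i].speed) {
            float t = (target - table[i - 1].speed) / (table[i].speed - table[i - 1].speed);
            return (std::uint8_t)(table[i - 1].pwm + t * (table[i].pwm - table[i - 1].pwm) + 0.5f); // round
        }
    }
    return table[n - 1].pwm;
}
```

Each wheel gets its own table, which is exactly why the method stops working the moment the motors are swapped.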

I chose the second approach. The figure shows a simple PI speed-control loop; its structure is simple, and calibrating its parameters is also straightforward.



The PI algorithm is implemented as follows:

#ifndef SpeedController_h
#define SpeedController_h

#include "OdometricLocalizer.h"
#include "RobotParams.h"
#include "TimeInfo.h"
#include "WProgram.h"

class SpeedController
{
private:
  float _PParam, _IParam, _DParam;
  float _CommandTimeout;
  long _MillisecsLastCommand;
  OdometricLocalizer* _pOdometricLocalizer;

  RobotParams* _pRobotParams;
  TimeInfo* _pTimeInfo;

  //float _LastLeftError, _LastRightError, _LastTurnError;
  float _LeftErrorIntegral, _RightErrorIntegral, _TurnErrorIntegral;

  float _MaxVelocityErrorIntegral;
  float _MaxTurnErrorIntegral;

  float ClipIntegralContribution(float integralContribution)
  {
    if (integralContribution > 1.0)
    {
      return 1.0;
    }
    else if (integralContribution < -1.0)
    {
      return -1.0;
    }
    return integralContribution; // fix: the in-range case previously fell off the end without returning
  }

  void Reset()
  {
    //_LastLeftError = 0.0;
    //_LastRightError = 0.0;
    _LeftErrorIntegral = 0.0;
    _RightErrorIntegral = 0.0;
    _TurnErrorIntegral = 0.0;
    LeftError = 0.0;
    RightError = 0.0;
    TurnError = 0.0;
    NormalizedLeftCV = 0.0;
    NormalizedRightCV = 0.0;

    CommandedVelocity = 0.0;
    CommandedAngularVelocity = 0.0;
  }

public:
  bool IsInitialized;

  float VelocityPParam;
  float VelocityIParam;
  //float VelocityDParam;

  float TurnPParam;
  float TurnIParam;

  float CommandedVelocity;
  float CommandedAngularVelocity;

  float LeftError;
  float RightError;
  float TurnError;

  // normalized control values for the left and right motor
  float NormalizedLeftCV;
  float NormalizedRightCV;

  SpeedController(OdometricLocalizer* pOdometricLocalizer, RobotParams* pRobotParams, TimeInfo* pTimeInfo)
  {
    _pOdometricLocalizer = pOdometricLocalizer;

    _pRobotParams = pRobotParams;
    _pTimeInfo = pTimeInfo;

    IsInitialized = false;

    VelocityPParam = 0.0;
    VelocityIParam = 0.0;

    TurnPParam = 0.0;
    TurnIParam = 0.0;

    CommandedVelocity = 0.0;
    CommandedAngularVelocity = 0.0;
  }

  void Initialize(float velocityPParam, float velocityIParam, float turnPParam, float turnIParam, float commandTimeout)
  {
    VelocityPParam = velocityPParam;
    VelocityIParam = velocityIParam;

    TurnPParam = turnPParam;
    TurnIParam = turnIParam;

    // Avoiding runaway integral error contributions
    _MaxVelocityErrorIntegral = 0.5 / VelocityIParam;
    _MaxTurnErrorIntegral = 0.5 / TurnIParam;

    _CommandTimeout = commandTimeout;
    //_CommandTimeout = 1.0;
    _MillisecsLastCommand = millis();

    IsInitialized = true;
  }

  void CommandVelocity(float commandedVelocity, float commandedAngularVelocity)
  {
    if (commandedAngularVelocity < 0.5 && commandedAngularVelocity > -0.5)
    {
      CommandedVelocity = commandedVelocity;
      CommandedAngularVelocity = 0;
    }
    else
    {
      CommandedVelocity = 0;
      CommandedAngularVelocity = commandedAngularVelocity;
    }
    _MillisecsLastCommand = millis();
  }

  void Update(bool batteryVoltageIsTooLow)
  {
    if (batteryVoltageIsTooLow)
    {
      // we need to stop the motors and stop accumulating errors
      Reset();
      return;
    }

    // How long has it been since we received the last command?
    float secondsSinceLastCommand = (millis() - _MillisecsLastCommand) / 1000.0;
    if (secondsSinceLastCommand > _CommandTimeout)
    {
      // we need to stop the motors and stop accumulating errors
      Reset();
      return;
    }

    if (CommandedVelocity == 0 && CommandedAngularVelocity == 0)
    {
      Reset();
      return;
    }

    float angularVelocityOffset = 0.5 * CommandedAngularVelocity * _pRobotParams->TrackWidth;

    float expectedLeftSpeed = CommandedVelocity - angularVelocityOffset;
    float expectedRightSpeed = CommandedVelocity + angularVelocityOffset;

    bool expectedLeftSpeedReverse = false;
    bool expectedRightSpeedReverse = false;

    expectedLeftSpeed = constrain(expectedLeftSpeed, -0.5, +0.5);
    expectedRightSpeed = constrain(expectedRightSpeed, -0.5, +0.5);

    if (expectedLeftSpeed < 0)
      expectedLeftSpeedReverse = true;
    if (expectedRightSpeed < 0)
      expectedRightSpeedReverse = true;

    LeftError = abs(expectedLeftSpeed) - abs(_pOdometricLocalizer->VLeft);
    RightError = abs(expectedRightSpeed) - abs(_pOdometricLocalizer->VRight);
    TurnError = LeftError - RightError; // if > 0 then we need to steer more to the left (and vice versa)

    _LeftErrorIntegral += LeftError;
    _RightErrorIntegral += RightError;
    _TurnErrorIntegral += TurnError;

    // Avoiding runaway integral error contributions
    _LeftErrorIntegral = constrain(_LeftErrorIntegral, -_MaxVelocityErrorIntegral, +_MaxVelocityErrorIntegral);
    _RightErrorIntegral = constrain(_RightErrorIntegral, -_MaxVelocityErrorIntegral, +_MaxVelocityErrorIntegral);
    _TurnErrorIntegral = constrain(_TurnErrorIntegral, -_MaxTurnErrorIntegral, +_MaxTurnErrorIntegral);

    //float leftErrorDifferential = (LeftError - _LastLeftError);
    //float rightErrorDifferential = (RightError - _LastRightError);

    NormalizedLeftCV = VelocityPParam * LeftError + VelocityIParam * _LeftErrorIntegral
      + (TurnPParam * TurnError + TurnIParam * _TurnErrorIntegral);
    //NormalizedLeftCV = expectedLeftSpeed;
    NormalizedLeftCV = constrain(NormalizedLeftCV, -0.5, +0.5);

    NormalizedRightCV = VelocityPParam * RightError + VelocityIParam * _RightErrorIntegral
      - (TurnPParam * TurnError + TurnIParam * _TurnErrorIntegral);
    //NormalizedRightCV = expectedRightSpeed;
    NormalizedRightCV = constrain(NormalizedRightCV, -0.5, +0.5);

    if (expectedLeftSpeedReverse)
    {
      NormalizedLeftCV = -NormalizedLeftCV;
    }
    if (expectedRightSpeedReverse)
    {
      NormalizedRightCV = -NormalizedRightCV;
    }

    //_LastLeftError = LeftError;
    //_LastRightError = RightError;
  }
};

#endif
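To see how the PI law behaves in isolation, the following standalone sketch (plain C++, no Arduino headers) applies the same update, cv = P·error + I·integral with a clamped integral, to a toy first-order motor model. The motor response and the gain values are invented for illustration; only the update and clamping rules mirror the class above:

```cpp
#include <cmath>

// Standalone PI loop: the first-order motor model and the gains are invented
// for illustration; the update law and the 0.5/I integral clamp mirror
// SpeedController::Update() and Initialize() above.
float RunPiLoop(float setpoint, float pGain, float iGain, int steps)
{
    float speed = 0.0f;                      // measured wheel speed
    float integral = 0.0f;                   // accumulated error
    const float maxIntegral = 0.5f / iGain;  // same clamp rule as Initialize()
    for (int i = 0; i < steps; ++i) {
        float error = setpoint - speed;
        integral += error;
        if (integral > maxIntegral)  integral = maxIntegral;
        if (integral < -maxIntegral) integral = -maxIntegral;
        float cv = pGain * error + iGain * integral;  // normalized control value
        if (cv > 0.5f)  cv = 0.5f;
        if (cv < -0.5f) cv = -0.5f;
        speed += 0.5f * (cv - speed);  // toy motor: speed relaxes toward cv
    }
    return speed;
}
```

With, say, P = 0.4 and I = 0.1 the simulated speed settles at the setpoint: once the error has vanished, the integral term alone supplies the steady-state control effort, which is exactly why a pure P controller would always leave a residual error here.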


2. Dead Reckoning from Odometry Data

Odometry is one of the two core inputs for Rossum's autonomous motion; the other is the range data from the Kinect acting as a simulated laser rangefinder.

First, 16-line encoders (16 counts per revolution) sample pulses to capture the motion of the left and right wheels. Incremental odometry then yields the robot's pose in the current coordinate frame: its 2D coordinates plus its heading angle.

Incremental odometry determines the robot's current pose from the data collected by the encoders together with a known starting pose. As an incremental dead-reckoning method, each new position is computed from the previous one, so every error that occurs accumulates. However, while the robot performs simultaneous localization and mapping and autonomous navigation, matching against the map generated from real-time range data lets the algorithm automatically correct part of the odometry error and keep tracking the robot's pose. We therefore use the encoders only for a rough initial pose estimate, which reduces system complexity and cost to some degree.

According to classical kinematics, the pose of a robot with a differential-drive system follows from the wheel speeds v_L(t) and v_R(t):

v(t) = (v_L(t) + v_R(t)) / 2

ω(t) = (v_L(t) - v_R(t)) / WHEEL_BASE

θ(t) = ∫ ω(t) dt

x(t) = ∫ v(t) · sin θ(t) dt

y(t) = ∫ v(t) · cos θ(t) dt

Integrating these directly is complicated and computationally heavy, so we discretize them and compute the robot's pose in the current coordinate frame numerically.
The simplified, discretized dead-reckoning equations are:

(1) distance = (left_encoder + right_encoder) /2.0;

(2) theta = (left_encoder - right_encoder) /WHEEL_BASE;

(3) X_position = distance * sin(theta);

(4) Y_position = distance * cos(theta);

where WHEEL_BASE is the distance between the two wheels (the track width), and left_encoder and right_encoder denote the distance traveled by each wheel during one update interval.
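Formulas (1)-(4) can be sanity-checked with a few lines of plain C++. `Pose2D` and `DeadReckonStep` are names invented for this sketch; the inputs are per-update wheel travel distances in meters:

```cpp
#include <cmath>

// Sanity check for formulas (1)-(4): left/right are the distances traveled
// by each wheel during one update (meters), wheelBase is the track width.
struct Pose2D { double x, y, theta; };

Pose2D DeadReckonStep(double left, double right, double wheelBase)
{
    double distance = (left + right) / 2.0;     // (1) mean travel of both wheels
    double theta = (left - right) / wheelBase;  // (2) heading, radians
    Pose2D p;
    p.x = distance * std::sin(theta);           // (3)
    p.y = distance * std::cos(theta);           // (4)
    p.theta = theta;
    return p;
}
```

Driving both wheels the same distance gives theta = 0 and pure forward motion along y, matching the intuition behind (3) and (4); a longer left travel turns the robot toward the right, giving a positive theta in this convention.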

The core algorithm is implemented as follows:

void Update(long leftEncoderCounts, long rightEncoderCounts)
{
  long deltaLeft = leftEncoderCounts - _PreviousLeftEncoderCounts;
  long deltaRight = rightEncoderCounts - _PreviousRightEncoderCounts;

  VLeft = deltaLeft * _pRobotParams->DistancePerCount / _pTimeInfo->SecondsSinceLastUpdate;
  VRight = deltaRight * _pRobotParams->DistancePerCount / _pTimeInfo->SecondsSinceLastUpdate;

  double deltaDistance = 0.5 * (double)(deltaLeft + deltaRight) * _pRobotParams->DistancePerCount;
  double deltaX = deltaDistance * (double)cos(Heading);
  double deltaY = deltaDistance * (double)sin(Heading);

  double deltaHeading = (double)(deltaRight - deltaLeft) * _pRobotParams->RadiansPerCount;

  X += deltaX;
  Y += deltaY;

  // keep the heading in the range (-PI, PI]
  Heading += deltaHeading;
  if (Heading > PI)
  {
    Heading -= TwoPI;
  }
  else if (Heading <= -PI)
  {
    Heading += TwoPI;
  }

  V = deltaDistance / _pTimeInfo->SecondsSinceLastUpdate;
  Omega = deltaHeading / _pTimeInfo->SecondsSinceLastUpdate;

  _PreviousLeftEncoderCounts = leftEncoderCounts;
  _PreviousRightEncoderCounts = rightEncoderCounts;
}


3. SLAM -- Map Building

The gmapping package implements the core SLAM algorithm. Once the odometry from the wheel control system described above and the laser-scan data simulated from the Kinect are published, the slam_gmapping node subscribes to them and builds the map. A detailed configuration tutorial is available on the ROS wiki.

4. Navigation -- Autonomous Navigation

Autonomous navigation is based on the map and AMCL: using the map built above together with the Kinect's simulated laser-scan data, the robot estimates its approximate pose; once a goal is selected on the map, it plans a path and drives to the specified location. A detailed configuration tutorial is available on the ROS wiki.

Some related material:

http://www.seattlerobotics.org/encoder/200108/using_a_pid.html

http://ttuadvancedrobotics.wikidot.com/odometry

http://openslam.org/gmapping.html

http://www.ros.org/wiki/amcl