
MEC4127F: Introduction to Robotics

Chapter 8: Control design for wheeled robots


8.1 Introduction

This chapter will focus on introductory nonlinear control methods that relate to a differentially-steered mobile robot. Generalised concepts will also be covered that apply to any type of rigid-body platform, which will be useful in later chapters when tackling quadcopter control design.
The motion control problem for wheeled robots often considers only the kinematic model, ignoring the underlying dynamics of the platform. In this way, the translational and angular velocities of the robot in {B} are treated as the virtualised controller commands. The assumption here is that a low-level controller exists that commands wheel torques and can regulate the translational and angular velocities with sufficient bandwidth and precision. In other words, the wheel speed control is sufficiently fast, relative to the mobility of the wheeled robot, that it can be assumed instantaneous.

8.2 Body-frame position error

In order to incorporate feedback control on any mobile robot, we first need to define the error that will drive our control loop. In the case of position control we require a position error. Because we are assuming that we can directly control translational and angular velocity in {B}, it makes sense to define our position error in {B} — where the control action is directly taking place.

8.2.1 Position error in SE(3)

With reference to Figure 8.1, we will start by describing the current position of our robot in {W} as $^W\mathbf{p}_B$, which describes, as before, the location of the {B} frame origin with respect to {W}.
Figure 8.1: Representation of three key frames for control design: {B}, {D}, and {W}.
The desired reference is indicated by {D}, which describes how we ultimately want our robot to be configured. The desired position therefore follows as $^W\mathbf{p}_D$.
The position error in {W} is then given by
$$^W\mathbf{e}_p = {^W\mathbf{p}_D} - {^W\mathbf{p}_B}.$$
We unfortunately cannot directly use the error vector above for control, as the actuation of the robot as a result of the wheel speeds takes place in {B}. Instead, we will re-represent vector $^W\mathbf{e}_p$ with respect to {B}, namely, $^B\mathbf{e}_p$, which describes the incremental position change required in {B} to move the robot to the desired position. Based on Chapter 2, we can determine this quantity using our generalised vector mapping equation:
$$^B\mathbf{e}_p = {^B\mathbf{p}_D} = \left(T^W_B\right)^{-1}\, {^W\mathbf{p}_D}.$$
The quantity above can be thought of both as a desired position in {B}, $^B\mathbf{p}_D$, and as a position error in {B}, $^B\mathbf{e}_p$, as the current position in {B} will always be equal to zero ($^B\mathbf{p}_B = \mathbf{0}$). In SE(3), the rotation within $T^W_B$ must be parameterised, implying that an orientation representation must be chosen, with exponential coordinates and quaternions serving as popular and reliable choices.
Note that we could have also obtained the result above by mapping the world-frame position error through our rotation matrix describing {B} with respect to {W}, namely
$$^B\mathbf{e}_p = R^B_W\, {^W\mathbf{e}_p} = \left(R^W_B\right)^T {^W\mathbf{e}_p},$$
where $R^B_W = \left(R^W_B\right)^T$. We do not need to account for the translation between the two frame origins as we are interested in the differential behaviour of our system. Error signal $^B\mathbf{e}_p$ gives an indication of how the robot needs to make corrections in {B}, which is where actuation is directly acting. It is important to highlight that we cannot perform arbitrary position control if we do not know the current orientation of the robot. By extension, inaccuracies in our orientation estimate will negatively affect our position controller, by virtue of an incorrect position error.
Once we have formulated the rectilinear tracking problem in the actuator space, we can design a control scheme that minimises this error. Successfully enforcing $^B\mathbf{e}_p \to \mathbf{0}$ implies that $^W\mathbf{p}_B \to {^W\mathbf{p}_D}$, which is easy to verify based on the equations above. In other words, our robot position in the task space will have reached its desired location. The type of controller used will vary depending on the particular platform, but the archetypal PID control structure is generally used and adapted where appropriate — we can "disable" elements of a PID controller to reduce it to a proportional, proportional-plus-integral, or proportional-derivative controller, depending on the specific robotic use case.
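The body-frame error mapping above can be sketched in a few lines. The following is a Python sketch (the chapter's own examples use MATLAB; the function names here are our own) that computes $^B\mathbf{e}_p$ for a robot yawed by ψ about the world z-axis:

```python
import math

def rot_z(psi):
    """Rotation matrix R^W_B for a yaw of psi about the z-axis."""
    c, s = math.cos(psi), math.sin(psi)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def body_frame_error(R_w_b, p_b, p_d):
    """Body-frame position error: e_b = (R^W_B)^T (p_d - p_b)."""
    e_w = [d - b for d, b in zip(p_d, p_b)]
    # Multiplying by the transpose re-expresses the world-frame error in {B}.
    return [sum(R_w_b[i][k] * e_w[i] for i in range(3)) for k in range(3)]

# Robot at the origin facing +y_W (psi = 90 deg); target 1 m along +y_W.
e_b = body_frame_error(rot_z(math.pi / 2), [0, 0, 0], [0, 1, 0])
print(e_b)  # the error appears purely in the robot's forward (x_B) direction
```

As expected, a target directly ahead of the robot produces an error vector along $x_B$ only, regardless of the robot's heading in {W}.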

8.2.2 Position error in SE(2)

When operating on a plane, as is the case for wheeled robots, the rotation matrix is described by
$$R^W_B = \begin{bmatrix}\cos\psi & -\sin\psi\\ \sin\psi & \cos\psi\end{bmatrix},$$
where ψ uniquely defines the orientation of the robot. By extension, the position error in {B} will be a two-vector made up by
$$^B\mathbf{e}_p = \left(R^W_B\right)^T {^W\mathbf{e}_p} = \begin{bmatrix}\cos\psi\; {^W e_{p,x}} + \sin\psi\; {^W e_{p,y}}\\ -\sin\psi\; {^W e_{p,x}} + \cos\psi\; {^W e_{p,y}}\end{bmatrix}.$$

Example: Position error on a plane

The code block below generates a visualisation of a wheeled robot on a plane. The position error in {B} is also computed, based on the equation above.
clear
 
psi =68*pi/180;
 
figure(1),clf
%{D}
t_d2w = [0 10 0];
q_d2w = [cos(pi/4) 0 0 sin(pi/4)];
plotTransforms(t_d2w, q_d2w, "Parent", gca, "View","2D", "FrameSize", 2,"FrameAxisLabels","on","FrameLabel","D");
%{B}
t_b2w = [10 10 0];
q_b2w = [cos(psi/2) 0 0 sin(psi/2)];
plotTransforms(t_b2w, q_b2w, "MeshFilePath", "groundvehicle.stl", "Parent", gca, "View","2D", "FrameSize", 2,"FrameAxisLabels","on","FrameLabel","B");
%{W}
plotTransforms([0 0 0], [1 0 0 0], "Parent", gca, "View","2D", "FrameSize", 2,"FrameAxisLabels","on","FrameLabel","W");
axis([-5 15 -5 15]),grid on
view(2)
 
R = [cos(psi) -sin(psi);
sin(psi) cos(psi)];
e_w = t_d2w(1:2)'-t_b2w(1:2)';
e_b = R'*e_w;
 
title('Visualisation of $\{B\}$, $\{D\}$, and $\{W\}$ reference frames in a plane',Interpreter="latex")
text(10,0,{['${^B{\bf e}_p}=[$',num2str(e_b(1)),' ',num2str(e_b(2)),'$]^T$'],...
['${^W{\bf e}_p}=[$',num2str(e_w(1)),' ',num2str(e_w(2)),'$]^T$']},Interpreter="latex")

8.3 Body-frame orientation error

Correcting the orientation of a generic platform follows a similar thinking as that of the position error, albeit with some additional intricacies. We will consider two approaches to generating a description of the body-frame orientation error:
  1. Rotation matrices
  2. Quaternions
When dealing with robotic platforms that can move outside of a fixed plane (e.g. a quadcopter), simply taking the difference between the desired and actual Euler angle vectors is not an appropriate description of the orientation error in {B} — where the control action is going to take place. We can still try to map this world-frame Euler error to the body frame using a special matrix mapping, but the sequential nature of Euler angles is not in line with how rigid bodies are expected to behave in general. For that reason, we will consider other approaches. Note that the Euler error approach is suitable when the angle errors are sufficiently small.

8.3.1 Rotation matrix formulation

This subsection will cover the generalised case of operating in SO(3), but without loss of generality, this method can always be reduced to SO(2). We start by describing the orientation of our body-frame with respect to the world-frame using $R^W_B$, where our matrix is parameterised by quaternions, exponential coordinates, or some other representation. We can additionally describe a desired orientation using $R^W_D$. With reference to Figure 8.2, this desired orientation implies that we are introducing a third reference frame, {D}, that describes how we would like our body-frame to be oriented relative to {W}.
Figure 8.2: Depiction of the three fundamental reference frames when resolving orientation error.
If our desired orientation is different to our current orientation ($R^W_D \neq R^W_B$), we will require some kind of adjustment in order to orchestrate the correct rotation, whereas if the two are aligned ($R^W_D = R^W_B$), we do not need to do anything. In general, we would like to find a desired orientation that is described in {B}, which tells us how the body frame needs to alter itself to meet our target configuration. Pre-multiplying $R^W_D$ by $R^B_W = \left(R^W_B\right)^T$, we can map the desired orientation in {W} to the desired orientation in {B}, which is given by
$$R^B_D = \left(R^W_B\right)^T R^W_D.$$
Note again how the sub- and superscripts cancel out to yield the resulting rotation matrix. Matrix $R^B_D$ gives an indication of how the platform orientation, described in the body-frame, should reorient itself in order to meet the world-frame objective of $R^W_D$, and can also be thought of as the encoded orientation error, as the body-frame orientation expressed in {B} will always remain null ($R^B_B = I$).
We can decode the resulting rotation matrix to obtain a vector description of the error, which could take the form of exponential coordinates or Euler error. This 3-vector of angles can then be used to inform the robot platform of how to correct its orientation about the three axes in {B}.

8.3.1.1 Euler error

Assuming the body-frame Euler errors are of interest, one can extract this information from rotation matrix $R^B_D$ using the standard Euler-angle extraction equations (for the ZYX sequence: $\psi_e = \operatorname{atan2}(r_{21}, r_{11})$, $\theta_e = -\sin^{-1}(r_{31})$, $\phi_e = \operatorname{atan2}(r_{32}, r_{33})$, where $r_{ij}$ denotes the elements of $R^B_D$).
The nature of Euler angles is that three sequential rotations are required, so the robot would need to make its correction in three discrete rotations, instead of one single rotation. This presents issues if time or energy optimality is important to the functionality of the system. Additionally, if the system is conditionally stable, having to perform three discrete rotations may not be feasible. Having said that, one can treat the three distinct rotations as one combined rotation if the rotational angles are sufficiently small that the order of the rotations ceases to matter. This would mean that our quantification of the required body-frame correction is approximately, rather than perfectly, accurate. As previously touched on, motion control for rigid-body robots does not naturally take place in ordered sequences, but rather lends itself to instantaneous gross motions. This makes the generalised Euler error largely unusable in a control sense (unless we intend to correct orientation errors in three disparate rotations, for example on a pan-tilt-roll camera). It would also not be possible to enforce the small-angle condition if the reference Euler angles experience large step changes, as the Euler error would become large as a result of these steps. Euler angles can still be used in control systems that make use of 3D rotations if the correct considerations are made — specifically, concerning the span of orientations that the system is expected to make use of. The Euler error formulation should therefore only be used for specific robotic platforms that experience small orientation errors (not to forget the other problem with singularities!). A fixed-wing airplane is a good example where using Euler errors is appropriate, given the small-angle behaviour during nominal flight conditions. This would not be appropriate, however, for fighter jets that perform aggressive, high angle-of-attack manoeuvres!
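The small-angle claim above can be checked numerically. The following Python sketch (illustrative only; helper names are our own) composes three small sequential Euler rotations and compares the result against the single rotation obtained by treating the same three angles as one rotation vector — for 1° angles the two matrices agree to within roughly the square of the angle:

```python
import math

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rodrigues(w):
    """Single rotation from rotation vector w (Rodrigues' formula)."""
    t = math.sqrt(sum(x * x for x in w))
    nx, ny, nz = (x / t for x in w)
    K = [[0, -nz, ny], [nz, 0, -nx], [-ny, nx, 0]]
    K2 = matmul(K, K)
    I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    return [[I[i][j] + math.sin(t) * K[i][j] + (1 - math.cos(t)) * K2[i][j]
             for j in range(3)] for i in range(3)]

a = math.radians(1.0)  # 1 degree about each axis
R_euler = matmul(rot_z(a), matmul(rot_y(a), rot_x(a)))  # three sequential rotations
R_single = rodrigues([a, a, a])                         # one combined rotation
diff = max(abs(R_euler[i][j] - R_single[i][j])
           for i in range(3) for j in range(3))
print(diff)  # on the order of a^2, i.e. a few parts in 10^4
```

Repeating the experiment with larger angles shows the discrepancy growing quadratically, which is exactly why the Euler error is only trusted for small corrections.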

8.3.1.2 Exponential coordinate error

Exponential coordinates are a natural way of expressing a rotation using the combination of a rotation vector and angle. Using our understanding of exponential coordinates, we can recover an error rotation angle and axis from rotation matrix $R^B_D$ using
$$\theta = \cos^{-1}\!\left(\frac{\operatorname{tr}\left(R^B_D\right) - 1}{2}\right), \qquad \hat{\mathbf{n}} = \frac{1}{2\sin\theta}\begin{bmatrix} r_{32}-r_{23} \\ r_{13}-r_{31} \\ r_{21}-r_{12} \end{bmatrix}.$$
Note that we are assuming $\theta \neq 0$ for sake of clarity. We can then combine the axis-angle form to give the exponential coordinates exactly describing the rotation error as
$$^B\mathbf{e}_R = \theta\,\hat{\mathbf{n}}.$$
This vector indicates the rotational displacement in {B} required to align {B} with {D}. Unlike the Euler error vector with its caveats, the approach above exactly describes the orientation error in {B}. This is very useful when controlling the attitude of rigid-body systems. The benefit in specifying the corrective action in the form of a scaled rotation vector is that we can theoretically adjust our orientation in one smooth rotational movement, as opposed to three disjointed rotations. Note that in stark comparison to the Euler formulation, we are also not limited to only handling small angle errors — the angular error can be arbitrarily large without issue.
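The extraction above is short enough to sketch directly. This Python snippet (illustrative; the chapter's examples use MATLAB) recovers the angle from the trace and the axis from the skew-symmetric part of the rotation matrix:

```python
import math

def exp_coord_error(R):
    """Recover the exponential-coordinate error theta*n from a rotation matrix.

    Assumes 0 < theta < pi; the theta = 0 case (and theta = pi, where
    sin(theta) vanishes) needs separate handling, as noted in the text."""
    theta = math.acos((R[0][0] + R[1][1] + R[2][2] - 1.0) / 2.0)
    k = 1.0 / (2.0 * math.sin(theta))
    n = [k * (R[2][1] - R[1][2]),   # axis from the skew-symmetric part
         k * (R[0][2] - R[2][0]),
         k * (R[1][0] - R[0][1])]
    return [theta * ni for ni in n]

# A 0.7 rad rotation about the z-axis should be recovered exactly.
c, s = math.cos(0.7), math.sin(0.7)
R = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
e = exp_coord_error(R)
print(e)  # approximately [0, 0, 0.7]
```

The recovered vector points along the rotation axis with magnitude equal to the rotation angle, which is precisely the corrective command a body-frame attitude controller would act on.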

8.3.2 Quaternion error formulation

The quaternion structure can be used instead of a rotation matrix, which is appealing for minimising numerical operations. Recall that the unit quaternion encodes the information related to a rotation vector — that is, a single rotation about a defined axis. This is appealing from a control point of view, as we can make orientation corrections simultaneously, without having to perform sequential operations or be subject to poor angle-error approximations. For sake of convenience, the verbose quaternion structure is repeated below as
$$\mathbf{q} = \begin{bmatrix} q_w \\ \mathbf{q}_v \end{bmatrix} = \begin{bmatrix} \cos(\theta/2) \\ \hat{\mathbf{n}}\sin(\theta/2) \end{bmatrix}.$$
Following a similar procedure as before, our quaternion representation of our robot orientation in {W} is $\mathbf{q}^W_B$, and our desired quaternion in {W} is $\mathbf{q}^W_D$. Analogous to $R^B_D$, our error quaternion in {B} is given by
$$\mathbf{q}^B_D = \left(\mathbf{q}^W_B\right)^{*} \otimes \mathbf{q}^W_D,$$
where $(\cdot)^{*}$ denotes the quaternion conjugate. The equation above is beneficial based on the low number of element-wise arithmetic operations relative to multiplying two rotation matrices, but we can also now extract the exponential coordinates embedded in our quaternion formulation. Given the standard unit quaternion structure, our error quaternion can be written as
$$\mathbf{q}^B_D = \begin{bmatrix} \cos(\theta/2) \\ \hat{\mathbf{n}}\sin(\theta/2) \end{bmatrix},$$
where θ corresponds to the rotation angle required to correct the orientation of the robot, and $\hat{\mathbf{n}}$ refers to the rotation axis. Combined, the corresponding orientation error in {B} is given by
$$^B\mathbf{e}_R = \theta\,\hat{\mathbf{n}},$$
which matches the result from the previous subsection when converting rotation matrices to exponential coordinate errors.
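A minimal sketch of the quaternion route is given below (Python, illustrative; we assume the Hamilton convention with the scalar part first, matching the [w x y z] ordering used in the chapter's plotting code):

```python
import math

def q_mul(p, q):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return [pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw]

def q_conj(q):
    """Conjugate (inverse, for unit quaternions)."""
    return [q[0], -q[1], -q[2], -q[3]]

def orientation_error(q_b, q_d):
    """Exponential-coordinate error from the error quaternion conj(q_b) (x) q_d."""
    w, x, y, z = q_mul(q_conj(q_b), q_d)
    vn = math.sqrt(x*x + y*y + z*z)
    if vn < 1e-12:
        return [0.0, 0.0, 0.0]          # frames already aligned
    theta = 2.0 * math.atan2(vn, w)     # rotation angle from the scalar/vector split
    return [theta * x / vn, theta * y / vn, theta * z / vn]

# Current yaw 0.2 rad, desired yaw 0.9 rad (both about z): error is 0.7 rad about z.
q_b = [math.cos(0.1), 0, 0, math.sin(0.1)]
q_d = [math.cos(0.45), 0, 0, math.sin(0.45)]
e = orientation_error(q_b, q_d)
print(e)  # approximately [0, 0, 0.7]
```

Note the use of atan2 rather than acos to recover θ, which is better conditioned for small errors.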

8.3.3 Orientation error in SO(2)

The approaches presented above are crucial when controlling rigid-body robots, such as aerial vehicles and satellites. While the work presented above extends to all rotations in SO(3), there will be many cases when a robot is confined to a plane, in which case a single parameter, ψ, is required to describe orientation. As such, when operating in SO(2), the orientation error can be determined simply by taking the difference between a desired orientation, $\psi_{des}$, and the current orientation, ψ, which yields the orientation error of
$$e_\psi = \psi_{des} - \psi.$$
The rotation axis will always be perpendicular to the plane of operation, which implies that the world-frame description of orientation will be equivalent to the body-frame counterpart.
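One practical refinement (our addition, not from the text above) is to wrap the raw difference $\psi_{des} - \psi$ into $(-\pi, \pi]$, so that the commanded correction always takes the short way round:

```python
import math

def heading_error(psi_des, psi):
    """SO(2) orientation error, wrapped so the correction takes the short way round."""
    e = psi_des - psi
    # atan2(sin(e), cos(e)) maps any angle into (-pi, pi].
    return math.atan2(math.sin(e), math.cos(e))

# Desired 170 deg, current -170 deg: the raw difference is 340 deg,
# but the short correction is -20 deg.
e = heading_error(math.radians(170), math.radians(-170))
print(math.degrees(e))  # approximately -20
```

Without the wrap, a proportional controller fed the raw 340° error would command a long rotation in the wrong direction.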

8.4 Point-to-point control

We begin by considering point-to-point control, which refers to moving a robot, initially at rest, from an initial point in the task space to a final, static point, without consideration for the final orientation. The current position is described by $^W\mathbf{p}_B$, whereas the desired final position is $^W\mathbf{p}_D$.

8.4.1 Virtual control inputs

With reference to Chapter 6, the forward kinematics of the differentially-steered robot are repeated below for sake of convenience:
$$v = \frac{r\left(\dot\phi_R + \dot\phi_L\right)}{2}, \qquad \dot\psi = \frac{r\left(\dot\phi_R - \dot\phi_L\right)}{d},$$
where r is the wheel radius and d is the distance between the two wheels. The actuator space is made up of the wheel speeds $\left(\dot\phi_L, \dot\phi_R\right)$, which represents the low-level control of the wheeled robot. In other words, we can in theory choose at any time what we want the two wheel speeds to be. Because we know the mapping from $\left(\dot\phi_L, \dot\phi_R\right)$ to $\left(v, \dot\psi\right)$, we can abstract and simplify the problem by treating v and $\dot\psi$ as our virtual control inputs. That is, they are not the signals that directly dictate the actuator states, but they are related by a mapping that remains constant. The idea here is that we can effectively decouple the two adjoined control problems (translation and rotation) into two independent problems by posing these virtual inputs in the correct way. Using the notation from Section 8.2, the virtualised kinematic model is given by
$$^W\dot{p}_x = v\cos\psi, \qquad {^W\dot{p}_y} = v\sin\psi, \qquad \dot\psi = \dot\psi^{*},$$
where $v = v^{*}$ and $\dot\psi = \dot\psi^{*}$ are the commanded (virtual) inputs. Note that at any time we can calculate the required wheel speeds using
$$\begin{bmatrix}\dot\phi_L\\ \dot\phi_R\end{bmatrix} = \frac{1}{r}\begin{bmatrix}1 & -d/2\\ 1 & d/2\end{bmatrix}\begin{bmatrix}v\\ \dot\psi\end{bmatrix},$$
which we would require when implementing our eventual control scheme.
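The forward and inverse mappings can be sketched directly from the equations above (a Python sketch, illustrative only; r is the wheel radius and d the wheel separation, matching the simulation code later in the chapter):

```python
def wheels_to_body(phi_L, phi_R, r, d):
    """Forward kinematics: wheel speeds -> body velocities (v, psi_dot)."""
    v = r * (phi_R + phi_L) / 2.0
    psi_dot = r * (phi_R - phi_L) / d
    return v, psi_dot

def body_to_wheels(v, psi_dot, r, d):
    """Inverse mapping: virtual inputs (v, psi_dot) -> required wheel speeds."""
    phi_L = (v - d * psi_dot / 2.0) / r
    phi_R = (v + d * psi_dot / 2.0) / r
    return phi_L, phi_R

# Round trip with r = 0.5 m, d = 1 m (the values used in the examples below).
phi_L, phi_R = body_to_wheels(2.0, 1.5, 0.5, 1.0)
v, psi_dot = wheels_to_body(phi_L, phi_R, 0.5, 1.0)
print(v, psi_dot)  # recovers (2.0, 1.5)
```

Because the 2×2 mapping is constant and invertible (for r, d > 0), commanding the virtual inputs is exactly equivalent to commanding wheel speeds.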

8.4.2 Translational control

Point-to-point control considers the problem of driving a mobile robot from an initial known position, to a desired final position. In this configuration, no inherent trajectory exists and the task is to reach the final position given the non-holonomic nature of the platform, whilst taking into account the kinematic constraints imposed by the slip-free wheel model. While methods do exist to consider the orientation of the mobile robot when it reaches its destination, we will not consider it for now.
Consider the problem of designing a feedback control law that can drive the mobile robot from some initial 2D Cartesian point to a desired location. In this scenario, the final orientation is not specified (it can be anything). An example use-case could be an autonomous surveillance vehicle that needs to visit a set of Cartesian points and then accrue information (e.g. visual information) over an arbitrary period of time. The arbitrary robot orientation at these waypoints would not be an issue if the robot's sensor suite was able to pan (e.g. a panoramic camera) or encompassed every direction (a ring of ultrasonic sensors around the robot body facing outwards). Given a world-frame position error in SE(2) of $^W\mathbf{e}_p$, the corresponding position error in {B} was shown to be
$$^B\mathbf{e}_p = \left(R^W_B\right)^T\, {^W\mathbf{e}_p}.$$
Because we can only make rectilinear corrections using forward/backward velocity parallel to the $x_B$ axis, we will focus on the first element of $^B\mathbf{e}_p$, namely
$$^B e_{p,x} = \cos\psi\; {^W e_{p,x}} + \sin\psi\; {^W e_{p,y}}.$$
Our first objective is to enforce $^B e_{p,x} \to 0$ with an appropriate control law. This corresponds with reducing the error in the forward direction of the mobile robot to zero. An appropriate control scheme that can achieve this is a proportional controller of the form
$$v^{*} = k_v\, {^B e_{p,x}},$$
where $k_v > 0$. In other words, our desired linear velocity in {B} described above will only go to zero when the corresponding error in {B} goes to zero. An integrator is present in our open-loop transfer function (to move from the velocity signal to the resulting position that we eventually measure and use), so we have a type-1 system that possesses the appropriate properties to guarantee zero-error tracking at steady-state for step-like signals (assuming there are no disturbances at the plant input).
Note that when the kinematics of the wheeled robot are exactly known and there are no wheel speed saturations, the desired and actual translational velocity will be equivalent, namely, $v = v^{*}$.
We will consider wheel speed saturation in Section 8.4.5. With the above assumption, the transfer behaviour describing the mapping from virtual control input $v^{*}$ to incremental forward/backward displacement in {B} is described by
$$\frac{^B p_x(s)}{v^{*}(s)} = \frac{1}{s},$$
where s is the Laplace variable.
The selection of $k_v$ will determine the closed-loop bandwidth, and under the aforementioned model simplifications, instability will never occur. However, if we had incorporated digital effects into our control loop, we would eventually reach a point where we cannot add more gain as a result of reaching our gain margin limit. While the kinematic model is nonlinear in nature, we can model the incremental position behaviour in {B} using the loop transfer function of
$$L(s) = \frac{k_v}{s},$$
with the input to this system being translation error term $^B e_{p,x}$, and the output being the incremental position change in {B}. Recalling Control Systems, the closed-loop behaviour follows as
$$G_{cl}(s) = \frac{k_v}{s + k_v},$$
which implies that the bandwidth of our translational controller is directly set by $k_v$. It follows that the 2% settling time is given by
$$t_{\pm 2\%} = \frac{4}{k_v}.$$
The block diagram describing the closed-loop translational control behaviour is shown in Figure 8.3.
Figure 8.3: Block diagram of closed-loop translational control scheme.
Note that the low-level mapping of desired velocity to wheel speeds, and then to actual velocity is omitted from the block diagram above, as it is assumed that the resultant mapping is equivalent to unit gain. On a real platform this assumption cannot be made easily, as the actuators will have some non-zero time constant related to changing wheel speeds.
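The first-order closed-loop behaviour can be verified with a simple discrete simulation (a Python sketch, illustrative only): after $t = 4/k_v$, the remaining error of $\dot{p} = k_v(p_{des} - p)$ should be roughly 2% of the initial error.

```python
k_v = 3.5            # proportional gain (sets the closed-loop bandwidth)
p, p_des = 0.0, 9.0  # initial and desired position
dT = 0.001           # small step so Euler integration tracks the continuous model
t_st = 4.0 / k_v     # predicted 2% settling time

t = 0.0
while t < t_st:
    p += dT * k_v * (p_des - p)   # p_dot = k_v * e, integrated forward
    t += dT

remaining = abs(p_des - p) / p_des
print(remaining)  # close to exp(-4) ~ 0.018, i.e. within the 2% band
```

The simulated residual error matches the analytical prediction $e(t) = e(0)\,e^{-k_v t}$, confirming that $t_{\pm 2\%} = 4/k_v$ under the unit-gain actuator assumption.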

Example: Translational control of a differentially-steered robot

The simulation resulting from the code block below demonstrates a proportional controller used to regulate the translational motion of a wheeled robot.
clear
d = 1; %m
r = 0.5; %m
 
dT = 0.05;
timer = rateControl(1/dT);
 
p_des_x = 9;
k_v = 3.5;
t_st = 4/k_v;
 
p_des = [p_des_x 15]';
p = [];
 
Tb2w = [0 -1 5;
1 0 0;
0 0 1];
 
T = 2;
time = 0:dT:T;
 
for i=1:(T/dT) %run the simulation for a time equivalent to 2 seconds
 
e_p_wf = p_des(1:2)-Tb2w(1:2,3);
 
psi = atan2(Tb2w(2,1),Tb2w(1,1));
e_p_bf = ( cos(psi)*e_p_wf(1)+sin(psi)*e_p_wf(2) );
v_ = k_v*e_p_bf;
 
A = [1 -d/2;
1 d/2];
phi_des = 1/r*A*[v_,0]';
phiL = phi_des(1);
phiR = phi_des(2);
 
v = r*(phiL+phiR)/2;
psi_dot = r*(phiR-phiL)/d;
 
 
S = dT*[0 -psi_dot v;
psi_dot 0 0;
0 0 0];
Tb2w = Tb2w*expm(S);
t = [Tb2w(1:2,3);0]; %extract 3D position of robot
psi = atan2(Tb2w(2,1),Tb2w(1,1));
q = [cos(psi/2) 0 0 sin(psi/2)]; %quaternion that encodes 1D orientation
 
figure(1),clf,hold on,title('Topdown view of robot with translational control')
axis([-20 20 -20 20]),grid on
plotTransforms(t', q, "MeshFilePath", "groundvehicle.stl", "Parent", gca, "View","2D", "FrameSize", 4);
plot(p_des(1),p_des(2),'ko',MarkerFaceColor='k')
text(-18,-15,{['Time: ',num2str( time(i),'%.2f' ), ' seconds'], ...
['Position error: ',num2str( e_p_bf,'%.2f' ), ' m']})
p = [p t(1:2)];
 
drawnow
waitfor(timer);
end
 
figure(2),hold on,sgtitle('Desired vs actual position over time')
subplot(2,1,1),hold on,plot(time(1:i),p(1,:),'b'),ylabel('${^Wp_x}$',Interpreter='latex')
line([0 T],[p_des(1) p_des(1)],"color",'k')
line([t_st t_st],[0 p_des(1)],"color",'k',"lineStyle",'--')
text(t_st,0.5,['$t_{\pm 2\%}=\frac{4}{k_v}=$',num2str(t_st)],"Interpreter","latex")
subplot(2,1,2),hold on,plot(time(1:i),p(2,:),'b'),ylabel('${^Wp_y}$',Interpreter='latex')
line([0 T],[p_des(2) p_des(2)],"color",'k')
line([t_st t_st],[0 p_des(2)],"color",'k',"lineStyle",'--')
text(t_st,0.5,['$t_{\pm 2\%}=\frac{4}{k_v}=$',num2str(t_st)],"Interpreter","latex")

8.4.3 Orientation control

Unlike $^B e_{p,x}$, the lateral error $^B e_{p,y}$ is uncontrollable using v, based on the slip-free model. We will instead resolve this error indirectly with an orientation controller. With reference to Figure 8.4, the desired orientation of our robot can be described using
$$\psi_{des} = \operatorname{atan2}\!\left({^W e_{p,y}},\, {^W e_{p,x}}\right),$$
where the four-quadrant arc-tangent function should be used in practice. In MATLAB, this is performed using the atan2 function.
ex = 3;
ey = -1;
psi_des = atan2(ey,ex)
Figure 8.4: Resolving desired orientation from world-frame tracking error.
The equation above further demonstrates the coupled nature of the position and orientation in frame {B}. The orientation error follows as
$$e_\psi = \psi_{des} - \psi.$$
Given a reliable measurement/estimate of the orientation, ψ, we can define a proportional control law that determines the desired rotational velocity of the robot body,
$$\dot\psi^{*} = k_\psi\, e_\psi,$$
where $k_\psi > 0$. As with $k_v$, $k_\psi$ sets the bandwidth of the orientation controller, and will only be limited by digital and saturation effects when operating solely (without the translational controller). When the kinematics are perfectly known and saturation of the wheels does not occur, the desired rotational velocity is equivalent to the realised angular velocity: $\dot\psi = \dot\psi^{*}$.
With the above assumption, the transfer behaviour describing the mapping from virtual control input $\dot\psi^{*}$ to incremental rotational displacement in {B} is described by
$$\frac{\psi(s)}{\dot\psi^{*}(s)} = \frac{1}{s},$$
where s is the Laplace variable.
Analogous to the translational controller, the open-loop plant representing the orientation system is modelled by
$$L(s) = \frac{k_\psi}{s},$$
with the input to this system being error signal $e_\psi$, and the output being ψ. With unit-feedback, the closed-loop behaviour follows as
$$G_{cl}(s) = \frac{k_\psi}{s + k_\psi},$$
which implies that the bandwidth of our orientation controller is directly set by $k_\psi$. It follows that the 2% settling time for the closed-loop orientation system is given by
$$t_{\pm 2\%} = \frac{4}{k_\psi}.$$
The block diagram describing the closed-loop orientation control behaviour is shown in Figure 8.5.
Figure 8.5: Block diagram of closed-loop orientation control scheme.

Example: Orientation control of a differentially-steered robot

The simulation resulting from the code block below demonstrates an orientation controller used to regulate the rotational motion of a wheeled robot.
clear
d = 1; %m
r = 0.5; %m
 
dT = 0.05;
timer = rateControl(1/dT);
 
psi_des =125*pi/180;
k_psi = 3.3;
t_st = 4/k_psi;
 
psi_array = [0];
 
Tb2w = [1 0 5;
0 1 0;
0 0 1];
psi = atan2(Tb2w(2,1),Tb2w(1,1));
T = 2;
time = 0:dT:T;
% figure(2),clf
for i=1:(T/dT) %run the simulation for a time equivalent to 2 seconds
 
e_psi = psi_des-psi;
 
psi_dot = k_psi*e_psi;
 
A = [1 -d/2;
1 d/2];
phi_des = 1/r*A*[0,psi_dot]';
phiL = phi_des(1);
phiR = phi_des(2);
 
v = r*(phiL+phiR)/2;
psi_dot = r*(phiR-phiL)/d;
 
S = dT*[0 -psi_dot v;
psi_dot 0 0;
0 0 0];
Tb2w = Tb2w*expm(S);
t = [Tb2w(1:2,3);0]; %extract 3D position of robot
psi = atan2(Tb2w(2,1),Tb2w(1,1));
q = [cos(psi/2) 0 0 sin(psi/2)]; %quaternion that encodes 1D orientation
 
figure(1),clf,hold on,title('Topdown view of robot with orientation control')
axis([-20 20 -20 20]),grid on
line([Tb2w(1,3) Tb2w(1,3)+10*cos(psi_des)],[Tb2w(2,3) Tb2w(2,3)+10*sin(psi_des)])
plotTransforms(t', q, "MeshFilePath", "groundvehicle.stl", "Parent", gca, "View","2D", "FrameSize", 5);
text(-18,18,{['Time: ',num2str( time(i),'%.2f' ), ' seconds'], ...
['Orientation error: ',num2str( e_psi,'%.2f' ), ' rad']})
 
psi_array = [psi_array psi];
drawnow
waitfor(timer);
end
 
figure(2),clf,hold on,title('Desired orientation vs actual orientation over time')
plot(time,psi_array,'b')
line([0 T],[psi_des psi_des],"color",'k')
line([t_st t_st],[0 psi_des],"color",'k',"lineStyle",'--')
text(t_st,0.5,['$t_{\pm 2\%}=\frac{4}{k_\psi}=$',num2str(t_st)],"Interpreter","latex")

8.4.4 Combining translation and orientation control

When combined with the translation controller, the task of the orientation control scheme above is to point the robot in the direction of the desired location so that the translational controller moves the robot closer to the target. It makes physical sense to prioritise this control loop over the translational controller, as the robot can incur increasingly large errors if it is pointing in the wrong direction at the beginning of the manoeuvre. Prioritisation is achieved by setting the bandwidth of the orientation controller to be sufficiently large relative to the bandwidth of the translational controller. A rough rule of thumb is that the bandwidth of the orientation controller should be at least twice that of the translation controller, which necessitates that
$$k_\psi \geq 2 k_v.$$
The block diagram of the full control scheme is shown in Figure 8.6.
Figure 8.6: Block diagram of closed-loop translation and orientation control scheme.

Example: Pose control of a differentially-steered robot

The simulation resulting from the code block below demonstrates the combination of a translational and orientation controller used to regulate the point-to-point motion of a wheeled robot.
clear
d = 1; %m
r = 0.5; %m
 
dT = 0.05;
timer = rateControl(1/dT);
 
p_des_x =11;
k_v = 2.3;
k_psi = 4.6;
 
p_des = [p_des_x 15]';
p = [];
 
Tb2w = [0 -1 5;
1 0 0;
0 0 1];
psi = atan2(Tb2w(2,1),Tb2w(1,1));
 
T = 2;
time = 0:dT:T;
 
for i=1:(T/dT)+1 %run the simulation for a time equivalent to 2 seconds
 
%translation controller
e_p_wf = p_des(1:2)-Tb2w(1:2,3);
e_p_bf = ( cos(psi)*e_p_wf(1)+sin(psi)*e_p_wf(2) );
v = k_v*e_p_bf;
 
%orientation controller
psi_des = atan2(e_p_wf(2),e_p_wf(1));
e_psi = psi_des-psi;
psi_dot = k_psi*e_psi;
 
A = [1 -d/2;
1 d/2];
phi_des = 1/r*A*[v,psi_dot]';
phiL = phi_des(1);
phiR = phi_des(2);
 
v = r*(phiL+phiR)/2;
psi_dot = r*(phiR-phiL)/d;
 
S = dT*[0 -psi_dot v;
psi_dot 0 0;
0 0 0];
Tb2w = Tb2w*expm(S);
 
t = [Tb2w(1:2,3);0]; %extract 3D position of robot
psi = atan2(Tb2w(2,1),Tb2w(1,1));
q = [cos(psi/2) 0 0 sin(psi/2)]; %quaternion that encodes 1D orientation
 
 
figure(1),clf,hold on,title('Topdown view of robot with translational and orientation control')
axis([-20 20 -20 20]),grid on
plotTransforms(t', q, "MeshFilePath", "groundvehicle.stl", "Parent", gca, "View","2D", "FrameSize", 5);
plot(p_des(1),p_des(2),'ko',MarkerFaceColor='k')
text(-18,18,{['Time: ',num2str( time(i),'%.2f' ), ' seconds'], ...
['Position error: ',num2str( e_p_bf,'%.2f' ), ' m'], ...
['Orientation error: ',num2str( e_psi,'%.2f' ), ' rad']})
 
p = [p t(1:2)];
plot(p(1,:),p(2,:),'b')
 
drawnow
waitfor(timer);
end

8.4.5 Wheel speed limits

In practice, we also need to keep in mind that a realistic robot will have a limit to the achievable forward and rotational velocity. Specifically, in practice we have the constraints of
$$|v| \leq v_{max}, \qquad |\dot\psi| \leq \dot\psi_{max},$$
where $v_{max}$ and $\dot\psi_{max}$ represent the respective maximum translational and rotational velocity that can be obtained by the mobile robot. Note that these limits will actually vary depending on the current wheel speed combination, as shown below. While saturation is not deeply problematic when considering translational control or orientation control separately, saturation when controlling both translation and orientation simultaneously can result in undesirable behaviour.

Example: Attempting pose control while in saturation

This code block provides a simulation of the differentially-steered robot moving to a target point using both a translational and orientation controller. Wheel speed limits are set via the phi_lim variable.
clear
d = 1; %m
r = 0.5; %m
 
phi_lim =20;
 
dT = 0.05;
timer = rateControl(1/dT);
 
p_des_x = 15;
k_v = 2.3;
k_psi = 4.6;
 
p_des = [p_des_x 15]';
p = [];
 
Tb2w = [0 -1 5;
1 0 0;
0 0 1];
psi = atan2(Tb2w(2,1),Tb2w(1,1));
 
phi_vec = [];
 
T = 2;
time = 0:dT:T;
for i=1:(T/dT)+1 %run the simulation for a time equivalent to 2 seconds
 
%translation controller
e_p_wf = p_des(1:2)-Tb2w(1:2,3);
e_p_bf = ( cos(psi)*e_p_wf(1)+sin(psi)*e_p_wf(2) );
v = k_v*e_p_bf;
 
%orientation controller
psi_des = atan2(e_p_wf(2),e_p_wf(1));
e_psi = psi_des-psi;
psi_dot = k_psi*e_psi;
 
A = [1 -d/2;
1 d/2];
phi_des = 1/r*A*[v,psi_dot]';
%saturation
if( abs(phi_des(1)) > phi_lim )
phiL = sign(phi_des(1))*phi_lim;
else
phiL = phi_des(1);
end
if( abs(phi_des(2)) > phi_lim )
phiR = sign(phi_des(2))*phi_lim;
else
phiR = phi_des(2);
end
 
v = r*(phiL+phiR)/2;
psi_dot = r*(phiR-phiL)/d;
 
S = dT*[0 -psi_dot v;
psi_dot 0 0;
0 0 0];
Tb2w = Tb2w*expm(S);
 
t = [Tb2w(1:2,3);0]; %extract 3D position of robot
psi = atan2(Tb2w(2,1),Tb2w(1,1));
q = [cos(psi/2) 0 0 sin(psi/2)]; %quaternion that encodes 1D orientation
 
figure(1),clf,hold on,title('Topdown view of robot with translational/orientation control and saturation limits')
axis([-20 20 -20 20]),grid on
plotTransforms(t', q, "MeshFilePath", "groundvehicle.stl", "Parent", gca, "View","2D", "FrameSize", 3);
plot(p_des(1),p_des(2),'ko',MarkerFaceColor='k')
text(-18,18,{['Time: ',num2str( time(i),'%.2f' ), ' seconds'], ...
['Position error: ',num2str( e_p_bf,'%.2f' ), ' m'], ...
['Orientation error: ',num2str( e_psi,'%.2f' ), ' rad']})
 
p = [p t(1:2)];
plot(p(1,:),p(2,:),'b')
 
phi_vec = [phi_vec [phiL,phiR]'];
 
drawnow
waitfor(timer);
end
 
figure(2),clf,hold on,title('Desired wheel speeds over time')
plot(0:dT:T,phi_vec(1,:),'b',LineWidth=2)
plot(0:dT:T,phi_vec(2,:),'r',LineWidth=2)
line([0 T],[phi_lim phi_lim],'color','k','lineStyle','--')
legend('$\dot\phi_L^*$','$\dot\phi_R^*$','$\dot\phi_{lim}$',"interpreter","latex")
xlabel('time (s)'),ylabel('wheel speeds')
The time-varying body velocity constraints originate from the fact that each wheel's velocity has a fundamental limit, namely,
$$\left|\dot\phi_L\right| \leq \dot\phi_{lim}, \qquad \left|\dot\phi_R\right| \leq \dot\phi_{lim}.$$
Recall that
$$v = \frac{r\left(\dot\phi_R + \dot\phi_L\right)}{2}, \qquad \dot\psi = \frac{r\left(\dot\phi_R - \dot\phi_L\right)}{d}.$$
If, for example, both wheel speeds are at their positive maximum, which is given by $\dot\phi_L = \dot\phi_R = \dot\phi_{lim}$, then this implies that
$$v = r\dot\phi_{lim}, \qquad \dot\psi = 0.$$
The result above suggests that the robot can translate at a high speed but has no available control action to adjust orientation. Conversely, if $\dot\phi_R = -\dot\phi_L = \dot\phi_{lim}$, then this implies that
$$v = 0, \qquad \dot\psi = \frac{2r\dot\phi_{lim}}{d}.$$
This suggests that when the wheel speeds are at their maximum but opposite in polarity, no translational control action is available, whereas orientation control is still possible. The two results above imply that when there are wheel velocity limits imposed on the wheeled robot, the orientation and translation controller must share the finite control action available.
Consulting the mapping below of
one can plot the relationship between all admissible wheel speeds that adhere to , as well as the resulting body velocities, which is given in Figure 8.7.
Figure 8.7: Mapping from admissible wheel speeds to body velocities. The multi-coloured points indicate correspondence between wheel velocities and resulting robot body velocities.
We therefore cannot expect the robot to translate at its maximum velocity and simultaneously rotate at the maximum amount. Instead, there will need to be a balance at all times between the rotational and translational velocity of the robot. The right-hand side figure above implies that $[v \ \ \dot\psi]^T$ must satisfy four linear constraints (the four edges of the diamond) in order to adhere to its admissible solution space:
$\frac{|v|}{a} + \frac{|\dot\psi|}{b} \leq 1, \qquad a = r_w\dot\phi_{lim}, \qquad b = \frac{2r_w}{d}\dot\phi_{lim},$
where $b$ is the y-intercept of the body velocity space (where maximum angular velocity is obtainable), and $a$ is the x-intercept (where maximum forward velocity is obtainable).
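The claim that every admissible wheel-speed pair lands inside this diamond can be verified by sampling the wheel-speed workspace. A Python sketch (parameter values again mirror the chapter examples):

```python
import numpy as np

# Verify that all admissible wheel speeds map into the diamond
# |v|/a + |psi_dot|/b <= 1.  Parameters are illustrative.
rw, d, phi_lim = 0.5, 0.5, 10.0
a = rw * phi_lim          # x-intercept: maximum forward velocity
b = 2 * rw / d * phi_lim  # y-intercept: maximum angular velocity

rng = np.random.default_rng(0)
wheel = rng.uniform(-phi_lim, phi_lim, size=(1000, 2))  # (phiL, phiR) samples
v = rw * (wheel[:, 0] + wheel[:, 1]) / 2
psi_dot = rw * (wheel[:, 1] - wheel[:, 0]) / d

# Every sampled point satisfies the diamond constraint.
assert np.all(np.abs(v) / a + np.abs(psi_dot) / b <= 1 + 1e-12)
print("all samples admissible")
```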

Example: Mapping of wheel speeds to body velocities

This example shows the mapping from chosen wheel speeds to the corresponding body velocities. The fixed boundary of the two workspaces is also indicated.
phiL =-0.7;
phiR =10;
 
phi_lim = 10;
d = 0.5;
rw = 0.5;
A = rw*[0.5 0.5; -1/d 1/d];
body_vel = A*[phiL phiR]';
 
a = rw*phi_lim;
b = 2*rw/d*phi_lim;
 
figure(1),clf
subplot(1,2,1),hold on,title('Workspace of wheel speeds')
plot(phiL,phiR,'ro',MarkerFaceColor='r')
line([-phi_lim phi_lim],[phi_lim phi_lim],"color",'k')
line([-phi_lim phi_lim],-[phi_lim phi_lim],"color",'k')
line([phi_lim phi_lim],[-phi_lim phi_lim],"color",'k')
line(-[phi_lim phi_lim],[-phi_lim phi_lim],"color",'k')
axis equal,xlim([-11 11]),ylim([-11 11]),xlabel('$\dot\phi_L$','interpreter','latex'),ylabel('$\dot\phi_R$','interpreter','latex')
subplot(1,2,2)
plot(body_vel(1),body_vel(2),'ro',MarkerFaceColor='r')
line([0 a],[b 0],"color",'k')
line([a 0],[0 -b],"color",'k')
line([0 -a],[-b 0],"color",'k')
line([-a 0],[0 b],"color",'k')
xlabel('$v$','interpreter','latex'),ylabel('$\dot\psi$','interpreter','latex'),title('Workspace of body velocities')
Recalling the inverse mapping, the desired wheel speeds are determined from the body velocities using
$\begin{bmatrix}\dot\phi_L^* \\ \dot\phi_R^*\end{bmatrix} = \frac{1}{r_w}\begin{bmatrix}1 & -\frac{d}{2} \\ 1 & \frac{d}{2}\end{bmatrix}\begin{bmatrix} v \\ \dot\psi \end{bmatrix}.$
If a combination of $(v, \dot\psi)$ is specified that will exceed the wheel speed constraint of $\dot\phi_{lim}$, then our robot will hit a saturation and not behave as expected. Put differently, we require our wheel speeds to lie within the boundaries described by Figure 8.7 — body velocities that request wheel speeds outside of the wheel speed workspace cannot be achieved. This is described aptly by the equations below for each wheel:
$\left|\frac{1}{r_w}\left(v - \frac{d}{2}\dot\psi\right)\right| \leq \dot\phi_{lim}, \qquad \left|\frac{1}{r_w}\left(v + \frac{d}{2}\dot\psi\right)\right| \leq \dot\phi_{lim}.$
Given the coupled nature of trying to simultaneously orient the robot and move towards a target location, saturation can cause the robot to shoot off in the entirely wrong direction and possibly never recover. For this reason, we aim to avoid saturation at all times. The easiest approach is to choose sufficiently small gains so that large control actions are never commanded. However, this may result in response times that are too slow for the particular task at hand.
One could instead share the control action between the orientation and translation controllers based on the maximum allowable wheel speeds. Recalling that satisfying
$\frac{|v|}{a} + \frac{|\dot\psi|}{b} \leq 1$
is required for a valid body velocity vector, we can first determine the required $\dot\psi^*$ based on the demands of the orientation controller, as shown in Section 8.4.3, and then subsequently determine the largest velocity value that satisfies the constraints above. This value can then be used as a velocity limit on the translational controller so that saturation never occurs. This velocity limit is given by
$v_{lim} = a - \frac{a}{b}\left|\dot\psi^*\right|.$
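A quick numerical check of this limit is sketched below in Python (parameters mirror the chapter examples). When the computed limit is substituted back into the inverse mapping, at least one wheel lands exactly on its speed limit:

```python
# Largest forward speed that keeps both wheels within their limits,
# given a required angular velocity.  Parameters are illustrative.
rw, d, phi_lim = 0.5, 0.5, 10.0
a = rw * phi_lim
b = 2 * rw / d * phi_lim

def v_limit(psi_dot):
    """Velocity limit v_lim = a - (a/b)|psi_dot|."""
    return a - a / b * abs(psi_dot)

# At zero angular velocity the full forward speed is available...
print(v_limit(0.0))  # 5.0
# ...and at the maximum angular velocity no forward speed remains.
print(v_limit(b))    # 0.0

# At the limit, the faster wheel sits exactly on phi_lim:
psi_dot = 8.0
v = v_limit(psi_dot)
phiL = (v - d / 2 * psi_dot) / rw
phiR = (v + d / 2 * psi_dot) / rw
print(phiL, phiR)    # 2.0 10.0
```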

Example: Determining the velocity limit given a required angular velocity

This example shows how the velocity limit can be obtained by solving the equation above when an angular velocity is specified. You should notice that the resulting body velocity always lies on the boundary of the admissible body-velocity workspace, with the corresponding wheel speeds sitting at the edge of the wheel-speed workspace.
psi_dot =0;
vel_direction =-1;
 
phi_lim = 10;
d = 0.5;
rw = 0.5;
A = rw*[0.5 0.5; -1/d 1/d];
 
a = rw*phi_lim;
b = 2*rw/d*phi_lim;
 
v = vel_direction*( a-a/b*abs(psi_dot) );
 
body_vel = [v,psi_dot]';
wheel_speeds = A\body_vel;
 
figure(1),clf
subplot(1,2,1),hold on
plot(wheel_speeds(1),wheel_speeds(2),'ro',MarkerFaceColor='r')
line([-phi_lim phi_lim],[phi_lim phi_lim],"color",'k')
line([-phi_lim phi_lim],-[phi_lim phi_lim],"color",'k')
line([phi_lim phi_lim],[-phi_lim phi_lim],"color",'k')
line(-[phi_lim phi_lim],[-phi_lim phi_lim],"color",'k')
axis equal,xlim([-11 11]),ylim([-11 11])
title('Workspace of wheel speeds')
xlabel('$\dot\phi_L$','interpreter','latex'),ylabel('$\dot\phi_R$','interpreter','latex')
subplot(1,2,2),plot(body_vel(1),body_vel(2),'ro',MarkerFaceColor='r')
line([0 a],[b 0],"color",'k')
line([a 0],[0 -b],"color",'k')
line([0 -a],[-b 0],"color",'k')
line([-a 0],[0 b],"color",'k')
xlabel('$v$','interpreter','latex'),ylabel('$\dot\psi$','interpreter','latex'),title('Workspace of body velocities')
We can include the saturation detection scheme in our previous code block that attempted to perform pose control under saturation. Given that the velocity limit is described by
$v_{lim} = a - \frac{a}{b}\left|\dot\psi^*\right|,$
the velocity command will be determined by
$v = \mathrm{sign}\left(v^*\right)\min\left(|v^*|,\, v_{lim}\right),$
where $v^*$ is the commanded velocity dictated by the translational controller.
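The clipping rule above can be isolated as a small helper; the Python sketch below is illustrative only (the parameter values match the simulation that follows):

```python
import math

# Clip the commanded forward velocity to the limit implied by the
# orientation controller's angular-velocity demand.  Parameters
# mirror the simulation below and are illustrative.
rw, d, phi_lim = 0.5, 1.0, 20.0
a = rw * phi_lim
b = 2 * rw / d * phi_lim

def saturate_v(v_cmd, psi_dot):
    """Return sign(v_cmd) * min(|v_cmd|, v_lim)."""
    v_lim = a - a / b * abs(psi_dot)
    return math.copysign(min(abs(v_cmd), v_lim), v_cmd)

print(saturate_v(30.0, 10.0))  # clipped to 5.0
print(saturate_v(-2.0, 10.0))  # unchanged: -2.0
```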

Example: Attempting pose control when using saturation detection

This code block provides a simulation of the differentially-steered robot moving to a target point using both a translational and orientation controller. Wheel speed limits are set using the slider below and a saturation detector is added to the forward velocity signal. You should notice that the forward velocity command is limited according to the orientation controller's demand, so the commanded wheel speeds remain within their limits.
clear
d = 1; %m
r = 0.5; %m
 
phi_lim =20;
 
dT = 0.05;
timer = rateControl(1/dT);
 
p_des_x = 15;
k_v = 2.3;
k_psi = 4.6;
 
p_des = [p_des_x 15]';
p = [];
 
Tb2w = [0 -1 5;
1 0 0;
0 0 1];
psi = atan2(Tb2w(2,1),Tb2w(1,1));
 
phi_vec = [];
 
a = r*phi_lim;
b = 2*r/d*phi_lim;
 
T = 2;
time = 0:dT:T;
for i=1:(T/dT)+1 %run the simulation for T seconds
 
%translation controller
e_p_wf = p_des(1:2)-Tb2w(1:2,3);
e_p_bf = ( cos(psi)*e_p_wf(1)+sin(psi)*e_p_wf(2) );
v = k_v*e_p_bf;
 
%orientation controller
psi_des = atan2(e_p_wf(2),e_p_wf(1));
e_psi = psi_des-psi;
if( abs(e_psi)>pi ) %wrap the error to (-pi,pi]
e_psi = -sign(e_psi)*( 2*pi-abs(e_psi) );
end
psi_dot = k_psi*e_psi;
 
%saturation detector
v_lim = a-a/b*abs(psi_dot);
if( abs(v) > v_lim )
v = sign(v)*v_lim;
end
 
A = [1 -d/2;
1 d/2];
phi_des = 1/r*A*[v,psi_dot]';
 
%saturation
if( abs(phi_des(1)) > phi_lim )
phiL = sign(phi_des(1))*phi_lim;
else
phiL = phi_des(1);
end
if( abs(phi_des(2)) > phi_lim )
phiR = sign(phi_des(2))*phi_lim;
else
phiR = phi_des(2);
end
 
%update pose
v = r*(phiL+phiR)/2;
psi_dot = r*(phiR-phiL)/d;
 
S = dT*[0 -psi_dot v;
psi_dot 0 0;
0 0 0];
Tb2w = Tb2w*expm(S);
 
t = [Tb2w(1:2,3);0]; %extract 3D position of robot
psi = atan2(Tb2w(2,1),Tb2w(1,1));
q = [cos(psi/2) 0 0 sin(psi/2)]; %quaternion that encodes 1D orientation
 
figure(1),clf,hold on,title('Topdown view of robot with translational/orientation control and saturation compensation')
axis([-20 20 -20 20]),grid on
plotTransforms(t', q, "MeshFilePath", "groundvehicle.stl", "Parent", gca, "View","2D", "FrameSize", 3);
plot(p_des(1),p_des(2),'ko',MarkerFaceColor='k')
text(-18,18,{['Time: ',num2str( time(i),'%.2f' ), ' seconds'], ...
['Position error: ',num2str( e_p_bf,'%.2f' ), ' m'], ...
['Orientation error: ',num2str( e_psi,'%.2f' ), ' rad']})
 
p = [p t(1:2)];
plot(p(1,:),p(2,:),'b')
 
phi_vec = [phi_vec [phiL,phiR]'];
 
drawnow
waitfor(timer);
end
 
figure(2),clf,hold on,title('Wheel speeds over time')
plot(0:dT:T,phi_vec(1,:),'b',LineWidth=2)
plot(0:dT:T,phi_vec(2,:),'r',LineWidth=2)
line([0 T],[phi_lim phi_lim],'color','k','lineStyle','--')
legend('$\dot\phi_L$','$\dot\phi_R$','$\dot\phi_{lim}$',"interpreter","latex")
xlabel('time (s)'),ylabel('wheel speeds')

8.4.6 Norm-based translational control

The position controller in Section 8.4.2 notably only considers errors in the body-frame x-direction, which inherently relies on the orientation controller pointing the robot at the target point in order for motion to proceed. In this configuration, the mobile robot will stop translating whenever the body-frame x error component reaches zero, and will only recommence motion once the orientation is adjusted. This can result in conservative and disjointed motion depending on the controller gains selected and initial configuration of the robot. A more aggressive method to maintain continuous velocity is to rather use the Euclidean distance from the robot to the destination as an error signal:
$e_v = \left\lVert \mathbf{p}^* - \mathbf{p} \right\rVert.$
The corresponding translational control law follows as
$v^* = k_v e_v = k_v\left\lVert \mathbf{p}^* - \mathbf{p} \right\rVert.$
In this form, the mobile robot will continue to translate until such time that the distance to the target location reaches zero. While this can result in a more natural and fluid motion from the start to end points, there is the risk of never reaching the target location if the proportional gain of the position controller is set too aggressively relative to the orientation controller. The control law above is also applicable to mobile robots that have a car-like or bicycle model, where changing orientation is only possible when the robot is also experiencing a linear velocity.
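The norm-based law above amounts to a one-line computation; a minimal Python sketch (the gain value is illustrative):

```python
import math

# Norm-based translational control: the velocity command is
# proportional to the Euclidean distance to the target.
# The gain k_v is illustrative.
def norm_based_v(p, p_des, k_v=2.0):
    e_v = math.hypot(p_des[0] - p[0], p_des[1] - p[1])
    return k_v * e_v

# Unlike the body-frame-x controller, the command stays nonzero
# even when the robot is not yet facing the target.
print(norm_based_v((0.0, 0.0), (3.0, 4.0)))  # 10.0
```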

8.5 Pure pursuit trajectory tracking

8.5.1 Definition

We can extend our concept of point-to-point motion to the generalised case of following a trajectory. This trajectory can originate from a trajectory generation routine (see Chapter 7), or from real-time information based on the robot's sensors. The basic concept of a pure pursuit controller is to track a reference point on the trajectory that constantly moves in relation to the robot at an approximately constant velocity. This can be likened to a donkey chasing after a carrot. With reference to Figure 8.8, our formulation of the tracking error follows on from Section 8.4.6, namely,
$e_v = \left\lVert \mathbf{p}^* - \mathbf{p} \right\rVert - d^*,$
with the inclusion of a specified following distance, $d^*$, which corresponds with the desired displacement between the robot (donkey) and pursuit point (carrot).
Figure 8.8: Robotic platform tracking a moving point using velocity control (pure pursuit).
The aim of the controller is to force $e_v \to 0$, which implies that $\left\lVert \mathbf{p}^* - \mathbf{p} \right\rVert \to d^*$. That is, the Euclidean distance from the robot to the pursuit point equals $d^*$ when the tracking error goes to zero. Note that the constant offset $d^*$ while the robot is in motion implies that we are no longer controlling the position of our robot directly, but rather controlling the robot's velocity. To accommodate this change, we restructure our velocity controller as follows:
$v^* = k_{v1}e_v + k_{v2}\int_0^t e_v\,d\tau,$
which denotes a proportional-plus-integral (PI) controller form with controller gains $k_{v1}$ and $k_{v2}$. The integral term is required to provide a nonzero velocity command when the corresponding error, $e_v$, is zero. The orientation controller remains unchanged and is repeated below for sake of convenience:
$\dot\psi^* = k_\psi\left(\psi^* - \psi\right),$
where $\psi^* = \operatorname{atan2}\left(e_{p,y}^w,\, e_{p,x}^w\right)$. The orientation controller fulfills a similar role as in the previous scenario, but depending on the variability in the specific trajectory, more control action will be required to continuously readjust the robot in order to face the pursuit point.
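One update of the two control laws above can be sketched as follows in Python; the gains, sample time, and following distance are illustrative values matching the later simulation:

```python
import math

# One control update of the pure pursuit scheme: PI velocity control on
# e_v = ||p* - p|| - d*, plus proportional orientation control toward the
# pursuit point.  Gains, dT, and d_star are illustrative.
k_v1, k_v2, k_psi = 25.0, 25.0, 18.0
d_star, dT = 0.9, 0.05

def pursuit_step(p, psi, p_star, ev_int):
    ex, ey = p_star[0] - p[0], p_star[1] - p[1]
    ev = math.hypot(ex, ey) - d_star          # tracking error
    ev_int += dT * ev                          # integral state
    v_cmd = k_v1 * ev + k_v2 * ev_int          # PI velocity command
    e_psi = math.atan2(ey, ex) - psi
    e_psi = math.atan2(math.sin(e_psi), math.cos(e_psi))  # wrap to (-pi, pi]
    psi_dot_cmd = k_psi * e_psi                # proportional orientation command
    return v_cmd, psi_dot_cmd, ev_int

v_cmd, psi_dot_cmd, ev_int = pursuit_step((0.0, 0.0), 0.0, (1.9, 0.0), 0.0)
print(round(v_cmd, 6), psi_dot_cmd)  # 26.25 0.0
```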
While the kinematic model of the wheeled robot is nonlinear in nature, we can model the incremental position behaviour in the body frame using the loop transfer function of
$L(s) = \left(k_{v1} + \frac{k_{v2}}{s}\right)\frac{1}{s} = \frac{k_{v1}s + k_{v2}}{s^2},$
with the input to this system being the translation error term $e_v$, and the output being the incremental position change in the body frame. Recalling Control Systems, the closed-loop behaviour follows as
$T(s) = \frac{L(s)}{1 + L(s)} = \frac{k_{v1}s + k_{v2}}{s^2 + k_{v1}s + k_{v2}},$
which implies that the closed-loop system is second order with two poles and one zero. The zero is located at $s = -k_{v2}/k_{v1}$, and the poles are located at
$s = \frac{-k_{v1} \pm \sqrt{k_{v1}^2 - 4k_{v2}}}{2}.$
syms kv1 kv2 positive
syms s
solve(s^2+kv1*s+kv2==0,s)
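The symbolic result can also be cross-checked numerically; a Python sketch using numpy, with the gains chosen arbitrarily for illustration:

```python
import numpy as np

# Numeric cross-check of the closed-loop pole locations of
# T(s) = (k_v1 s + k_v2)/(s^2 + k_v1 s + k_v2).  Gains are illustrative.
kv1, kv2 = 25.0, 25.0

poles = np.roots([1.0, kv1, kv2])  # roots of s^2 + kv1*s + kv2
expected = np.array([(-kv1 + np.sqrt(kv1**2 - 4 * kv2)) / 2,
                     (-kv1 - np.sqrt(kv1**2 - 4 * kv2)) / 2])
assert np.allclose(np.sort(poles), np.sort(expected))
print(np.sort(poles))
```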
The design of the PI controller can be accomplished using trial and error (not ideal) or using a model-based loop shaping approach, for example as detailed in Control Systems.

8.5.2 Model-based design recap

Recall that a feedback controller can be designed using frequency-domain information. With reference to Figure 8.9, stay-above and stay-below bounds can be used to shape the closed-loop transfer function.
Figure 8.9: Design boundaries for closed-loop system.
As shown in Figure 8.10, if a settling time, $T_s$, and percentage overshoot, $\%OS$, is specified for the closed-loop system $T(s)$, the overbounding value, $M_A$, and peak frequency, $\omega_p$, can be determined using the equations below.
Figure 8.10: Second-order time-domain response showing percentage overshoot, $\%OS$, and settling time, $T_s$.

Example: PI control design using the log-polar plane

Using a log-polar plot, design a PI controller for the forward/backward motion of a differentially-steered robot with transfer function $P(s) = \frac{1}{s}$, which will satisfy the closed-loop second-order specifications of a settling time of 1 second, a percentage overshoot of 10%, and full rejection of input disturbances at steady-state.
Recall that $M_A$, representing the stay-below boundary when plotted against a Bode magnitude response, is determined using
$\zeta = \frac{-\ln\left(\%OS/100\right)}{\sqrt{\pi^2 + \ln^2\left(\%OS/100\right)}},$
$M_A = \frac{1}{2\zeta\sqrt{1-\zeta^2}}.$
%stay-below bound
zeta = -log(OS/100)/sqrt( pi^2+log(OS/100)^2 )
zeta = 0.5912
Ma = 1/( 2*zeta*sqrt(1-zeta^2) )
Ma = 1.0487
Using the outlined PI controller design approach, we set $\omega_n = \frac{4}{\zeta T_s}$, and then determine the peak frequency based on the natural frequency and damping ratio of the stay-above bound, which gives
$\omega_p = \omega_n\sqrt{1 - 2\zeta^2}.$
wn = 4/(zeta*Ts)
wn = 6.7664
wp = wn*sqrt(1-2*zeta^2)
wp = 3.7127
The code below generates the design boundaries in the log-polar plane and also plots the frequency response locus of $L(j\omega)$. Frequency point $\omega_p$ is also highlighted on the locus, as we require all frequencies up to $\omega_p$ to be above the green line, which corresponds with $|T(j\omega)| \geq M_B$.
clear
s = tf('s');
 
P = 1/s;
K = 25;
Ti = 1;
G = K*(1+s*Ti)/(s*Ti);
L = P*G;
 
Ts = 1;
OS = 10;
 
%stay-below bound
zeta = -log(OS/100)/sqrt( pi^2+log(OS/100)^2 );
Ma = 1/( 2*zeta*sqrt(1-zeta^2) );
 
%stay-above bound
wn = 4/(zeta*Ts);
w0 = wn*sqrt( (1-2*zeta^2)+sqrt( 4*zeta^4-4*zeta^2+2 ) );
Mb = 0.98;
 
 
wp = wn*sqrt(1-2*zeta^2);
 
figure, hold on, grid on
 
M = [Ma Mb];
for k = 1:length(M)
phi = linspace(0,2*pi,1e4);
c = -1;
r = 1/M(k);
K = c + r*exp(1i*phi);
 
mag_K = 20*log10( abs(K) );
phi_K = angle( K )*180/pi;
for i=1:length(phi_K)
if( phi_K(i) > 0), phi_K(i) = phi_K(i)-360; end
end
 
if( k==1 )
plot( phi_K(2:end-1),-mag_K(2:end-1),'--b',LineWidth=2 )
else
plot( phi_K(2:end-1),-mag_K(2:end-1),'-g',LineWidth=2 )
Lw = squeeze( freqresp(L,wp) );
ang_Lw = angle(Lw)*180/pi;
if( ang_Lw > 0 )
ang_Lw = ang_Lw-360;
end
plot(ang_Lw,20*log10(abs(Lw)),'o',MarkerFaceColor='g',MarkerEdgeColor='k')
end
end
[magL,phiL] = nichols(L,{1e-3,1e3});
plot(squeeze(phiL),squeeze(20*log10(magL)))
 
plot(-180,0,'ro',MarkerFaceColor='r')
 
legend('$|T(j\omega)|\leq M_A$','$|T(j\omega)|\geq M_B$', '$L_1(j\omega)$','Interpreter','latex')
title('Log-polar plot')
xlim([-360,0]),ylim([-60,60])
hold off
For $K = 25$ and $T_i = 1$, both design boundaries are met. This gives a final feedback controller of
$G(s) = K\frac{1 + sT_i}{sT_i} = \frac{25\left(1 + s\right)}{s},$
which is verified in the time domain using the code below.
T = feedback(L,1);
figure, hold on
step(T,1.2*Ts)
yline(1+OS/100,'--b','$\%OS=10\%$','Interpreter','latex')
yline(1.02,'--r','$+2\%$','Interpreter','latex')
xline(Ts,'--r','$t_s=1$s','Interpreter','latex','LabelVerticalAlignment','bottom')
xlim([0,1.2*Ts]),ylim([0 1+1.3*OS/100])
The controller above is in pole-zero form but can be converted to its additive form based on
$G(s) = K\frac{1 + sT_i}{sT_i} = k_{v1} + \frac{k_{v2}}{s},$
where $k_{v1} = K$ and $k_{v2} = K/T_i$.
kv1 = 25
kv1 = 25
kv2 = 25/1
kv2 = 25

Example: Pure pursuit controller

Play around with the simulation parameters below to see how they affect the overall performance. Some important parameters are the orientation gain k_psi, the velocity PI gains k_v1 and k_v2, and the look-ahead distance d_star.
d = 1; %m
r = 0.5; %m
 
dT = 0.05; %update/sample time
timer = rateControl(1/dT);
 
k_psi = 18; %orientation controller proportional gain
k_v1 = 25; %linear velocity proportional gain
k_v2 = 25; %linear velocity integral gain
d_star = 0.9; %look ahead distance
 
L = 14;
s1 = [linspace(0,L,1e3); zeros(1,1e3)];
s2 = [L*ones(1,1e3); linspace(0,L,1e3)];
s3 = [linspace(L,0,1e3); L*ones(1,1e3)];
s4 = [zeros(1,1e3); linspace(L,0,1e3)];
S = [s1 s2 s3 s4];
 
ev_int = 0;
t = 0;
psi_des_past = 0;
pTrail = [];
noOfLoops = 2;
 
S_ = [];
for i=1:noOfLoops
S_ = [S_ S];
end
 
T = eye(3);
psi = 0;
tvec = zeros(3,1);
for i=1:(5*noOfLoops/dT) %run the simulation for 5 seconds per loop of the path
figure(1),clf,hold on
plot(S(1,:),S(2,:),'k--')
 
p_des = S_(:,length(S_)/(5*noOfLoops/dT)*i);
e_p_wf = p_des(1:2)-tvec(1:2);
 
%orientation control
psi_des = atan2(e_p_wf(2),e_p_wf(1));
e_psi = psi_des-psi;
if( abs(e_psi)>pi )
e_psi = -sign(e_psi)*( 2*pi-abs(e_psi) );
end
u_psi = k_psi*e_psi;
%position control
ev = norm(e_p_wf)-d_star;
ev_int = ev_int+dT*ev;
u_v = k_v1*ev+k_v2*ev_int;
 
phi_des = [1/r -d/(2*r);1/r d/(2*r)] *[u_v,u_psi]';
phiL = phi_des(1);
phiR = phi_des(2);
 
v = r*( phiL+phiR )/2;
psi_dot = r*(phiR-phiL)/d;
R = [cos(psi_dot*dT) -sin(psi_dot*dT);
sin(psi_dot*dT) cos(psi_dot*dT)];
if( psi_dot==0 )
J = eye(2,2);
else
J = [sin(psi_dot*dT)/(psi_dot*dT) (cos(psi_dot*dT)-1)/(psi_dot*dT);
(1-cos(psi_dot*dT))/(psi_dot*dT) sin(psi_dot*dT)/(psi_dot*dT) ];
end
T_inc = [R J*[v 0]'*dT; 0 0 1];
 
T = T*T_inc;
tvec = [T(1:2,3);1];
psi = atan2(T(2,1),T(1,1));
q = [cos(psi/2) 0 0 sin(psi/2)];
 
pTrail = [pTrail tvec(1:2)];
plot(pTrail(1,:),pTrail(2,:),'b-',lineWidth=2)
 
axis([-5 20 -5 20]),grid on
plotTransforms(tvec', q, "MeshFilePath", "groundvehicle.stl", "Parent", gca, "View","2D", "FrameSize", d/.5);
 
plot(p_des(1),p_des(2),'ro')
 
t = t+dT;
title(['Time: ',num2str( t ), ' seconds'])
text(0,18,{['Orientation error: ',num2str( e_psi*180/pi,'%.2f' ), ' deg'], ...
['Look ahead distance: ',num2str( norm(e_p_wf),'%.2f' ), ' m']})
 
drawnow
waitfor(timer);
end