
Visual Based Predictive Control for a Six Degrees of Freedom Robot

P.A. Fernandes Ferreira 1, J.R. Caldas Pinto 2

1 Instituto Politécnico de Setúbal, Escola Superior de Tecnologia, Campus do Instituto Politécnico de Setúbal, Estefanilha, 2914-508 Setúbal, Portugal

2 Instituto Superior Técnico, Av. Rovisco Pais, 1096 Lisboa, Portugal

e-mail: pferreir@est.ips.pt

Abstract- A visual servoing architecture for a six degrees of freedom PUMA robot, using predictive control, is presented. Two different approaches, GPC and MPC, are used, and both are compared with a classical PI controller. The implemented PUMA robot model simulator, used as a platform for the development of the control algorithms, is presented. A control law based on features extracted from camera images is used. Simulation results show that the proposed strategies provide an efficient control system and that visual servoing architectures using predictive control are faster than those using PI control. Experimental results are obtained from an architecture using xPC Target and Matlab Simulink. Through this technology it is possible to create an operating system which allows real-time robot control.

1. INTRODUCTION

The controller plays a crucial role in the performance of a visual servoing system. Most of the work developed on visual servoing systems does not take the manipulator dynamics into account. Nevertheless, considering these dynamics can increase the precision and the velocity of the system.

The term Model Predictive Control (MPC) covers a very wide range of control techniques which make explicit use of a process model to obtain the control signal by minimizing an objective function [1]. MPC is often formulated in state space form, conceived for multivariable constrained control, while Generalized Predictive Control (GPC), first introduced in 1987 [2], is primarily suited for the single-variable case, with the model presented in polynomial transfer function form. Predictive control systems have been applied to different fields such as industry, medicine and chemical process control. In particular, Model Predictive Control has been adopted in industry as an efficient way of dealing with multivariable constrained control problems [3].

A previous work in the field of visual servoing systems for small displacements used a predictive controller [4]. The present work gives continuity to earlier work in visual servoing research [5], [6].


In order to carry out this work, a toolbox that allows incorporating vision in the PUMA robot control architecture was created. Its great versatility, allowing the easy interconnection of different types of controllers, makes this type of tool very advantageous. In this paper the results of a set of experiments in the modelling, identification and control of a visual servoing system for a PUMA 560 robot are presented.

This paper is organised as follows: the implemented six degrees of freedom PUMA robot model simulation and its validation are presented in Section 2. A brief overview of the 2D visual servoing technique is introduced in Section 3. The principles of predictive control (GPC and MPC) and the identification of the PUMA ARMAX model are presented in Section 4. The simulation settings and the results for the PI, GPC and MPC controllers are given in Section 5, and the experimental procedure and results in Section 6. Section 7 concludes the paper and Section 8 suggests the continuation of this work.

2. PUMA 560 MODEL

This work started with the implementation of the PUMA 560 model [7], corresponding to a six degrees of freedom robot model (Fig. 1) with all joints revolute. In order to validate the model, real tests in joint space with several controllers and gravity compensation [8] were performed.

Fig. 1. PUMA 560 model simulation (Simulink diagram: torque input, robot dynamics MATLAB Function blocks, gravity compensation loop, saturation and integrators producing the joint accelerations, velocities and positions).

The model output (q) depends on the robot dynamics, the applied torque, the current position and the velocity. The torques applied to the joints are given by [9]:



$$\tau = M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + F\,\dot{q} + G(q) \qquad (1)$$

where $M$ is the inertia matrix, $\dot{q}$ is the vector of joint velocities and $\ddot{q}$ is the vector of joint accelerations. The matrices $C$, $F$ and $G$ represent the Coriolis and centripetal effects, the viscous and Coulomb forces, and the gravity effects, respectively.
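As a quick illustration, the torque computation of equation (1) can be coded directly once the dynamic terms are available. The following is a minimal sketch, assuming M, C and G are user-supplied callables and F a constant friction matrix; all names are illustrative placeholders, not the simulator's actual API.

```python
import numpy as np

def joint_torques(M, C, F, G, q, qd, qdd):
    """Inverse dynamics of Eq. (1): tau = M(q) qdd + C(q, qd) qd + F qd + G(q).
    M, C, G are callables returning configuration-dependent terms; F is a
    constant friction matrix. Illustrative placeholder names only."""
    return M(q) @ qdd + C(q, qd) @ qd + F @ qd + G(q)
```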

2.1 PUMA model validation

The robot model was tested with a PI controller using the parameters of the real system. Fig. 2 shows the torque values for a 20º amplitude trajectory over 9 s. An algorithm that generates a periodic planned trajectory was used (Fig. 2). The error obtained from the comparison between the real and simulated robot trajectories is shown in Fig. 3.

Fig. 2. Real and simulated trajectory of joint 3.

Fig. 3. Error between the real and simulated trajectories of joint 3.

3. VISUAL SERVOING ARCHITECTURES

3.1 Basic definitions

A few basic definitions for the 2D visual servoing architecture are introduced here (see Fig. 4):

pe - Actual pose of the camera (with respect to a fixed reference frame R0).

p - Required displacement of the camera frame from the actual position to the desired position, related to the observed object frame.

p* - Pose of the camera (with respect to a fixed reference frame R0) in the desired position.

Mcr - Homogeneous transformation between the camera frame in the current position, Rc, and the camera frame in the desired position, Rr.

Fig. 4. Reference frames definition.

Let $T_6$ be the transformation which converts a homogeneous matrix into 6 operational coordinates:

$$p(q) = T_6(M_{cr}(q)) \qquad (2)$$

where

$$M_{cr} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & T_x \\ r_{21} & r_{22} & r_{23} & T_y \\ r_{31} & r_{32} & r_{33} & T_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3)$$

$$p(q) = \begin{bmatrix} T_x & T_y & T_z & \theta_x & \theta_y & \theta_z \end{bmatrix}^T \qquad (4)$$

Then the homogeneous matrix can be transformed into a vector that describes the displacement as a function of rotation and translation:

$$p(q) = T_6(M_{cr}) = \begin{bmatrix} T_x \\ T_y \\ T_z \\ \arcsin(r_{13}) \\ \arctan2\left(\dfrac{-r_{12}}{\cos\theta_r}, \dfrac{r_{11}}{\cos\theta_r}\right) \\ \arctan2\left(\dfrac{-r_{23}}{\cos\theta_r}, \dfrac{r_{33}}{\cos\theta_r}\right) \end{bmatrix} \qquad (5)$$

where

$$\theta_r = \arcsin(r_{13}) \qquad (6)$$
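For concreteness, a minimal sketch of the $T_6$ map of equations (2)-(6) follows; it assumes a numpy homogeneous matrix and that $\cos\theta_r \neq 0$, i.e. the representation singularity is avoided.

```python
import numpy as np

def t6(M_cr):
    """Sketch of the T6 map of Eqs. (2)-(6): homogeneous transformation ->
    6 operational coordinates (translation plus three angles, in the
    paper's ordering). Assumes cos(theta_r) != 0."""
    r, T = M_cr[:3, :3], M_cr[:3, 3]
    theta_r = np.arcsin(r[0, 2])                  # Eq. (6)
    c = np.cos(theta_r)
    a1 = np.arctan2(-r[0, 1] / c, r[0, 0] / c)    # Eq. (5), 5th entry
    a2 = np.arctan2(-r[1, 2] / c, r[2, 2] / c)    # Eq. (5), 6th entry
    return np.array([T[0], T[1], T[2], theta_r, a1, a2])
```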

From the definition of the robot Jacobian [9], the relationship between the joint velocities and the end-effector linear and angular velocities is given by the expression:

$$\dot{p}(q) = J_c(q)\,\dot{q} \qquad (7)$$

where $J_c$ is the robot Jacobian.

3.2 2D Visual Servoing


The modelled system describes the PUMA 560 dynamics in an eye in hand configuration, controlling its 6 degrees of freedom. The camera is placed in such a way that objects in the workspace are in its field of vision. An image error is measured and used to evaluate the displacement of the end effector. Thus the robot pose is estimated through the visual information. Within this framework, image feature measures are converted into 6 operational coordinates.

Fig. 5 2D Visual Servoing (block diagram: command law, robot command, image processing and image primitives measure).

The control architecture used, represented in Fig. 5, is of the 2D type [10]. The reference is obtained from image features. This approach uses the robot pose error p = (p* − pe), obtained from the difference between the reference image features s* and the current image features s. The image Jacobian, which relates the kinematics torsor of the camera frame to the variation of the image primitives, plays an important role.

The extracted visual measure is expressed in the form of a pose p, which contains six operational coordinates. This pose is obtained from the relation between the frame Rr in the desired position and the camera frame in the current position, Rc. The control goal consists of bringing the measure p to the desired measure pr. When p is very close to pr, $M_{cr} = M_{cr}^{*}$ and

$$M_{cr} = I \qquad (8)$$

Let $s \in \mathbb{R}^{2n}$ be the vector of current coordinates of $n$ points of the image:

$$s = \begin{bmatrix} x_{p1} & y_{p1} & x_{p2} & y_{p2} & \ldots & x_{pn} & y_{pn} \end{bmatrix}^T \qquad (9)$$

The image Jacobian $J_v(s) \in \mathbb{R}^{2n \times 6}$ is a matrix that relates the velocity $\dot{s}$ of the image points to the kinematics torsor $r$ of the camera frame:

$$\dot{s} = J_v(s)\,r \qquad (10)$$

where

$$r = \begin{bmatrix} v_x & v_y & v_z & \omega_x & \omega_y & \omega_z \end{bmatrix}^T \qquad (11)$$

and the image Jacobian is given by:

$$J_v(s) = \begin{bmatrix}
-\dfrac{f}{z_1} & 0 & \dfrac{x_{p1}}{z_1} & \dfrac{x_{p1}y_{p1}}{f} & -\dfrac{f^2 + x_{p1}^2}{f} & y_{p1} \\
0 & -\dfrac{f}{z_1} & \dfrac{y_{p1}}{z_1} & \dfrac{f^2 + y_{p1}^2}{f} & -\dfrac{x_{p1}y_{p1}}{f} & -x_{p1} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
-\dfrac{f}{z_n} & 0 & \dfrac{x_{pn}}{z_n} & \dfrac{x_{pn}y_{pn}}{f} & -\dfrac{f^2 + x_{pn}^2}{f} & y_{pn} \\
0 & -\dfrac{f}{z_n} & \dfrac{y_{pn}}{z_n} & \dfrac{f^2 + y_{pn}^2}{f} & -\dfrac{x_{pn}y_{pn}}{f} & -x_{pn}
\end{bmatrix} \qquad (12)$$
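As an illustration, the stacking of equation (12) is straightforward to code; the sketch below assumes each image point is given as a tuple (x_p, y_p, z) in the camera frame, with f the focal distance.

```python
import numpy as np

def image_jacobian(points, f):
    """Sketch of Eq. (12): stack the 2x6 block of each image point
    (x_p, y_p) with depth z in the camera frame. `points` is an
    iterable of (x_p, y_p, z) tuples; rows follow the paper's signs."""
    rows = []
    for x, y, z in points:
        rows.append([-f / z, 0, x / z, x * y / f, -(f**2 + x**2) / f, y])
        rows.append([0, -f / z, y / z, (f**2 + y**2) / f, -x * y / f, -x])
    return np.asarray(rows)   # shape (2n, 6)
```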

The image Jacobian is a function of the image features, of the focal distance $f$ and of the camera-frame z-coordinates of the target points. The relation between the kinematics torsor $r$ and $\dot{p}$ is given by

$$r = J_p\,\dot{p} \qquad (13)$$

where $J_p$ is the Jacobian between the referential in the initial and final positions and relates the velocity in these two frames:

$$\lim_{p \to 0} J_p = -I \qquad (14)$$

Assuming $\dot{p}_r \approx 0$ in the case of a displacement, and that the servoing is fast enough, the following approximation is valid:

$$\dot{p} \approx -r \qquad (15)$$

Let $J_v^{+}$ be the pseudo-inverse of the image Jacobian $J_v(s)$:

$$J_v^{+} = (J_v^T J_v)^{-1} J_v^T \qquad (16)$$

Then

$$r = J_v^{+}\,\dot{s} \qquad (17)$$

and, from equation (15),

$$\dot{p} \approx -J_v^{+}\,\dot{s} \qquad (18)$$

Let $s_0$ be the vector of primitives corresponding to the reference image. When $s = s_0$, $R_c = R_r$ and $p = p_0 = 0$. From equation (18) one can deduce:

$$p - p_0 = -J_v^{+}\,(s - s_0) + O(s^2) \qquad (19)$$

where $O(s^2)$ is a second order error component, $J_v^{+}$ is the pseudo-inverse image Jacobian, $s$ is the primitive of the target for the actual configuration of the robot and $s_0$ is the primitive of the target for the desired robot configuration. Then

$$p \approx -J_v^{+}\,(s - s_0) \qquad (20)$$

From equation (20) it is possible to convert image primitive measures into six operational coordinates $p$.
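A one-line sketch shows how equations (16) and (20) turn the image-feature error into the 6 operational coordinates; numpy's pinv plays the role of the pseudo-inverse here (a hypothetical usage, not the paper's actual code).

```python
import numpy as np

def pose_from_features(s, s0, Jv):
    """Eq. (20): p ~= -Jv+ (s - s0), with Jv+ the pseudo-inverse of the
    image Jacobian as in Eq. (16). s and s0 are the current and
    reference primitive vectors (length 2n)."""
    return -np.linalg.pinv(Jv) @ (s - s0)
```

Combined with the image_jacobian sketch above, a call such as pose_from_features(s, s0, image_jacobian(points, f)) would reproduce the measure used by the command law.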


4. PREDICTIVE CONTROL

4.1 Generalized Predictive Control

The basic idea of GPC is to calculate a sequence of future control signals in such a way that it minimizes a cost function defined over a prediction horizon [1], [2]. The index to be optimized is the expectation of a quadratic function measuring the distance between the predicted system output and some predicted reference sequence over the horizon, plus a quadratic function measuring the control effort:

$$J(k) = \sum_{j=N_1}^{H_p} \left(\hat{y}_{k+j} - r_{k+j}\right)^2 + \lambda \sum_{j=1}^{H_c} \left(\Delta u_{k+j-1}\right)^2 \qquad (21)$$

where:

$N_1$ - minimum costing horizon

$H_p$ - prediction horizon

$H_c$ - control horizon, $1 \le H_c \le H_p$

$\Delta u_k$ - control action increment, $\Delta u_k = u_k - u_{k-1}$

$\lambda$ - control energy weight

$\hat{y}_{k+j}$ - prediction of the system output

$r_{k+j}$ - reference predictive trajectory
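Evaluating the index of equation (21) for a candidate control sequence is simple; a minimal sketch follows, written for the scalar single-output case that the GPC formulation targets (the sequences are assumed already aligned over j = N1..Hp and j = 1..Hc).

```python
import numpy as np

def gpc_cost(y_hat, r, du, lam):
    """Eq. (21): tracking error over the prediction horizon plus
    lam-weighted control effort over the control horizon. y_hat and r
    are the predicted output and reference sequences; du holds the
    control increments Delta u."""
    return np.sum((y_hat - r) ** 2) + lam * np.sum(du ** 2)
```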

The system model can be presented in the ARMAX form [1]:

$$A(z^{-1})\,y(t) = B(z^{-1})\,u(t - T_e) + \frac{C(z^{-1})}{1 - z^{-1}}\,\xi(t) \qquad (22)$$

1<br />

Where Az ( )<br />

−<br />

1<br />

, Bz ( )<br />

−<br />

1<br />

and Cz ( )<br />

−<br />

are the matrix<br />

parameters <strong>of</strong> transfer function H ( z ) . To compute the<br />

output predictions is necessary to know the system model<br />

that must be controlled (Fig. 7). The parameters used by the<br />

GPC are obtained from the configuration shown in Fig. 6.<br />

Fig. 6. Manipulator system block diagram controlled by vision (GPC acting on the pose error; plant H(z) composed of the ZOH, the robot velocity control F(z), the vision delay z⁻¹ and the Jacobian transformations).

The transfer function $H(z)$ is given by:

$$H(z) = \frac{p(z)}{p^*(z)} = J_c\,F(z)\,J_c^{-1}\,\frac{T_a}{2}\,\frac{z+1}{z(z-1)} \qquad (23)$$

The parameters of this function are used in the predictive controller implementation.

4.2 System Identification

The robot model is obtained by identifying the dynamics of each joint so as to obtain a sixth order linear model:

$$F_i(z) = \frac{b_5 z^{-5} + b_4 z^{-4} + b_3 z^{-3} + b_2 z^{-2} + b_1 z^{-1} + b_0}{z^{-6} + a_5 z^{-5} + a_4 z^{-4} + a_3 z^{-3} + a_2 z^{-2} + a_1 z^{-1} + a_0} \qquad (24)$$

In the identification procedure a PRBS is used as the input signal. A prediction error method (PEM) is used to identify the robot dynamics. A noise model $C(z^{-1})$ of order 1 was selected. In this approach the identification of $H(z)$ (Fig. 6) is performed around a reference condition. Since the robot is controlled in velocity and the dynamics depend mainly on the first joints, assuming small displacements it is possible to linearize the system around the position q. It was also necessary to consider a diagonal inertia matrix. Under these conditions the Jacobian matrix is constant, as is H(z). This procedure is valid at low velocities, which means that the cross-coupled terms are neglected.
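For intuition, an identification step in this spirit can be sketched with an ordinary least-squares ARX fit of PRBS input/output data; note this is a simplified stand-in for the full prediction error method with first-order noise model used in the paper.

```python
import numpy as np

def fit_arx(u, y, na=6, nb=6):
    """Least-squares ARX fit of y(t) on past outputs and inputs (a rough
    stand-in for PEM: the noise model C(z^-1) is omitted). Returns the
    a and b coefficient vectors of the estimated joint model."""
    u, y = np.asarray(u, float), np.asarray(y, float)
    n = max(na, nb)
    rows, targets = [], []
    for t in range(n, len(y)):
        # regressor: [-y(t-1)..-y(t-na), u(t-1)..u(t-nb)]
        rows.append(np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1]]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets),
                                rcond=None)
    return theta[:na], theta[na:]
```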

4.3 Model Predictive Controller

In this approach the model is formulated in state space form:

$$\begin{cases} x(t+1) = A\,x(t) + B\,u(t), & x(0) = x_0 \\ y(t) = C\,x(t) \end{cases} \qquad (25)$$

where x(t) is the state, u(t) is the control input and y(t) is the output.

Fig. 7 State space manipulator scheme (input Jacobian J, identified matrices At, Bt, Ct with integrator, and output Jacobian J⁻¹).

The state space manipulator dynamics controlled in velocity is represented in Fig. 7. The values of the parameters $A_t$, $B_t$ and $C_t$ are obtained from the PEM identification algorithm. The matrices in equation (25) are computed through the image Jacobian matrix by the following definitions:

$$B = B_t\,J \qquad C = C_t\,J^{-1} \qquad A = A_t \qquad (26)$$

The predictive control algorithm is [2] (a receding-horizon sketch follows the list below):



1. At time t predict the output of the system, $\hat{y}(t+k\,|\,t)$, where $k = N_1, N_1+1, \ldots, N_2$. These outputs will depend on the future control signals $\hat{u}(t+j\,|\,t)$, $j = 0, 1, \ldots, N_3$, and on the measured state vectors at time t.

2. Choose a criterion based on these variables and optimise it with respect to $\hat{u}(t+j\,|\,t)$, $j = 0, 1, \ldots, N_3$.

3. Apply $u(t) = \hat{u}(t\,|\,t)$.

4. At time t+1 go to 1 and repeat.
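A compact sketch of one such receding-horizon step for the model of equation (25) is given below; it simplifies the paper's formulation by taking the control horizon equal to the prediction horizon, penalising u rather than Δu, and ignoring constraints.

```python
import numpy as np

def mpc_step(A, B, C, x, r, Hp, lam):
    """One receding-horizon step for Eq. (25): build batch prediction
    matrices over Hp steps, minimise tracking error plus lam * control
    effort (unconstrained, penalty on u instead of Delta u), and return
    only the first control move, as in steps 3-4. r is the stacked
    reference of length Hp * ny."""
    ny, nu = C.shape[0], B.shape[1]
    # Free response: y(t+k) = C A^k x; forced response collected in G.
    F = np.vstack([C @ np.linalg.matrix_power(A, k)
                   for k in range(1, Hp + 1)])
    G = np.zeros((Hp * ny, Hp * nu))
    for k in range(1, Hp + 1):
        for j in range(k):
            G[(k - 1) * ny:k * ny, j * nu:(j + 1) * nu] = (
                C @ np.linalg.matrix_power(A, k - 1 - j) @ B)
    # Unconstrained quadratic minimum of ||G u + F x - r||^2 + lam ||u||^2.
    H = G.T @ G + lam * np.eye(Hp * nu)
    u_seq = np.linalg.solve(H, G.T @ (r - F @ x))
    return u_seq[:nu]
```

Called once per sample with the current state estimate, only the first move is applied, exactly as steps 3 and 4 prescribe.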

5. SIMULATION PROCEDURE

5.1 System configuration

The implemented Visual Servoing package allows the simulation of different kinds of cameras. In this particular case a Costar camera was chosen, placed on the end-effector and positioned along the oz axis. A target of eight coplanar points, which serves as the control reference, was created. The accuracy of the camera position control in the world coordinate system was increased by the use of redundant features [11]. The centre of the target corresponds to the point with coordinates (0,0) and the remaining points are placed symmetrically in relation to this point. The target pose is referenced to the robot base frame. In the case of servoing along a trajectory, the target remains fixed and the desired point is variable. Once the primitives of the target points are obtained, it is possible to estimate the operational coordinates of the camera position.

Visual Servoing with a PI controller. In 2D visual servoing the image characteristics are used to control the robot. The images acquired by the camera are a function of the end effector's position, since the camera is fixed on the end effector of the robot. They are compared with the corresponding desired images. In the present case the image characteristics are the centroids of the target points. Fig. 8 represents the simulation model of the implemented 2D visual servoing architecture. In this case CT is a PI controller.

Fig. 8 2D visual servoing architecture simulation (controller CT acting on the feature error, image Jacobian Jv, robot Jacobian Jr, ZOH and the PUMA 560 with its internal control).

Predictive Visual Servoing implementation. In this approach the goal is also to control the relative pose of the robot with respect to the target. The model corresponds to Fig. 8, but with the controller substituted: in a first case a GPC is used and in a second an MPC. In both experiments all the conditions and characteristics of the robot are the same. The goal is to control the end effector from the image error between the current image and the desired image.


5.2 Visual servoing control results

Visual Servoing using a PI Controller. A PI controller was chosen to eliminate the position error. The point coordinates in operational coordinates are:

pi = [0.35 −0.15 0.40 π 0 π]T

pd = [0.45 −0.10 0.40 π 0 π]T

The points pi and pd correspond to the robot positions from which the images used to control the robot are obtained. In Fig. 9 the translation of the end-effector and its rotation around the ox, oy and oz axes can be observed.

Fig. 9 Visual servoing using PI control.

Predictive GPC and MPC Visual servoing control. In both experiments a 2D visual servoing architecture was used. From figures 9, 10 and 11 it can be seen that the GPC has a more linear trajectory and is faster. The rise time is around 0.6 s for the PI, while for the GPC and MPC it is 0.1 s and 0.2 s, respectively. The settling time is 1 s for the PI, 0.3 s for the GPC and 0.9 s for the MPC. The results are less accurate for the translation along z and for the rotation of the end-effector.

Fig. 10 Results of a 2D visual servoing architecture using a GPC controller.

Fig. 11 Results of a 2D visual servoing architecture using an MPC controller.



Table 1 presents the computed errors for each algorithm, which reveal the best performance of the GPC.

TABLE 1
r.m.s. values for the control algorithms

SSR    Tx     Ty     Tz     θx     θy     θz     error
PI     2.50   2.40   1.20   2.20   0.30   0.47   1.51
GPC    2.14   0.81   0.22   1.84   0.22   0.02   0.87
MPC    1.36   0.67   3.01   0.76   6.30   6.21   3.04

6. EXPERIMENTAL PROCEDURE

The experimental implementation of the proposed vision control algorithms used the whole simulation system previously developed as a platform. Although the simulation work presented was developed in an "eye in hand" configuration, the experimental work was performed in an "eye to hand" one. This is related to the necessity of protecting the camera, which was placed outside the robot, allowing in this way a safer system in the initial phase.

The homogeneous transformation matrix which relates the camera frame to the robot frame, $^{w}T_c$, for the configuration used is given by:

$$^{w}T_c = \begin{bmatrix} -1 & 0 & 0 & 0.4521 \\ 0 & 0 & -1 & 0.45 \\ 0 & -1 & 0 & 0.91 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (27)$$

The application consisted of controlling the robot, first with a PI controller and then with a generalized predictive controller (GPC).

6.1 Implemented system configuration

The experimental work exploited the potential of xPC Target and Matlab Simulink. Through this technology it is possible to create an operating system which allows real-time robot control. Two computers were used (Fig. 12): a Host PC for the acquisition and processing of the visual information, and a Target PC which receives the processing results from the Host PC and performs the robot control.

The image acquisition system processes the images at a rate between 12 and 20 images per second and sends the processed data to the robot control system (Target PC) through RS232. This external control-loop frequency is related to the computational load of the algorithm, the numerical capacity of the computers and the load of the Simulink program. Theoretically, the Vector camera used could reach a rate of 300 images per second.

In order to create the robot control environment it was necessary to replace the original PUMA controller by an open control architecture. This procedure allows the adaptation of the system to different kinds of controllers. In the present case the internal controller was substituted by a velocity controller with gravity compensation.

Fig. 12 Experimental scheme (host PC: image information and visual control algorithm; target PC: robot control).

The target viewed by the camera is shown in figure 13. A planar target with 8 LEDs, placed at the vertices of two squares with 40 mm sides and spaced 150 mm from each other, is used.

The choice of the number of points was conditioned by the sensitivity estimated in the theoretical study carried out through simulation and by the limitations of the image processing system. In spite of providing redundant information, this number of points leads to better results, as verified in the simulation study.

Fig. 13 Target viewed by the camera.

The experimental work follows the configuration shown in figure 14.

Fig. 14 Installation of the experimental visual servoing system.

The image that would be used as reference was previously captured. From it the image primitives, corresponding to the centroids of the 8 lighted circles shown in figure 13, were obtained. The end effector, or more precisely the target, was then displaced 30º away from that position around joint 3. The robot tries to reach the desired position through the control scheme shown in figure 8. Although the performed displacement corresponds to the simple situation of moving around one joint, the resulting trajectory involves the use of the other joints. The goal of the task performed by the robot is to place the target in the position which corresponds to the desired image. Therefore the robot manipulator follows a trajectory that minimizes the error between the current and reference images, $(s - s_d)$.

In the experiments carried out, two different controllers were used: proportional and predictive. As in the simulation case, the robot is controlled in velocity through the internal loop, which includes gravity compensation. This loop operates at a frequency of 1 kHz and the external vision controller operates at a frequency of 12 Hz.

6.2 Experimental results of the vision control system using a proportional controller

In the experimental work it was concluded that very reasonable results were obtained with a proportional controller and that the integral and derivative terms had no influence on the system performance. From figure 15 it is possible to evaluate the image error in a 2D visual servoing architecture using a proportional controller. The good convergence of the system is notable and one can observe that the error is near zero after approximately 20 seconds for all the image primitives.

Fig. 15 Image error (pixels) versus time (s) for the 2D architecture using a PI controller.

6.3 Experimental results of the vision control system using a predictive controller

The predictive control algorithm presented above was implemented. A prediction horizon of Hp = 6 was used.

The implementation of the predictive control algorithm was preceded by the identification of the ARIMAX model for each of the controlled joints. In the identification procedure a PRBS (pseudo random binary signal) with a frequency of 100 Hz was injected in each joint. The identification algorithm based on the prediction error [12] was used.

From figure 16 it is possible to evaluate the image results obtained with the generalized predictive control algorithm. The good convergence is notable and one can conclude that this system is faster than the one using a proportional controller. The error is near zero after approximately 15 seconds for all the image primitives.

Fig. 16 Image error for the 2D architecture using a GPC controller.

From figure 17 the joint velocity evolution for the GPC and the PI controller can be observed. With the GPC, the joint velocities are lower than in the proportional controller case, as are their oscillations.

Fig. 17 Joint velocity (rad/s) versus time (s) using a 2D architecture with a GPC (a) and a PI (b) controller.

7. CONCLUSIONS

A vision control system applied to a six degrees of freedom robot was studied. A PUMA 560 model was implemented and tested with a PI controller using the parameters of the real system. A prediction error method was used to identify the robot dynamics and to implement predictive control algorithms (MPC and GPC). The three different algorithms always converge to the desired position. In general, we can conclude that the good performance of both predictive controllers in visual servoing is evident. The results obtained also show that the 2D algorithm, associated with the studied controllers, allows controlling larger displacements than those reported in [4].

From the analysis of the r.m.s. errors presented in Table 1 we can conclude that the GPC performs best. In spite of the better MPC results for the translation in the xy plane when compared to the GPC, its global error is worse. In the visual servoing trajectory the good performance of this approach is evident. The identification procedure has a great influence on the results.



The evaluation of the graphical trajectories and the computed errors allows concluding that the GPC vision control algorithm leads to the best performance.

Finally, the experimental results obtained for the "eye to hand" case are in agreement with those expected from the theoretical algorithm development and from the simulation results. The developed experimental system allows using the simulation algorithms with great versatility.

8. FUTURE WORKS

In future work, other kinds of controllers, such as intelligent, neural and fuzzy controllers, will be used. Other algorithms to estimate the joint coordinates should be tested. These algorithms will be applied to the real robot in visual servoing path planning. Furthermore, other targets and other visual features should be tested.

ACKNOWLEDGEMENTS

This work is partially supported by the "Programa de Financiamento Plurianual de Unidades de I&D (POCTI), do Quadro Comunitário de Apoio III" through the FEDER program and by the FCT project POCTI/EME/39946/2001.

REFERENCES

[1] Camacho, E. F. and C. Bordons (1999). Model Predictive Control. Springer, Berlin.

[2] Clarke, D. W., C. Mohtadi and P. S. Tuffs (1987). Generalized Predictive Control - Part I: The Basic Algorithm. Automatica, Vol. 23, No. 2, pp. 137-148.

[3] Lee, J. H. and B. Cooley (1997). Recent advances in model predictive control. In: Chemical Process Control, Vol. 93, No. 316, pp. 201-216. AIChE Symposium Series, American Institute of Chemical Engineers.

[4] Gangloff, J. (1999). Asservissements visuels rapides d'un robot manipulateur à six degrés de liberté. Thèse de Doctorat, Université Louis Pasteur.

[5] Corke, P. (1994). A Search for Consensus Among Model Parameters Reported for the Puma 560. In Proc. IEEE Int. Conf. on Robotics and Automation, San Diego, pp. 1608-1613.

[6] Malis, E., F. Chaumette and S. Boudet (1999). 2 1/2 D visual servoing. IEEE Transactions on Robotics and Automation, 15(2), 238-250.

[7] Chaumette, F. (1990). La relation vision-commande: théorie et application à des tâches robotiques. Thèse de doctorat, Université de Rennes.

[8] Craig, J. (1988). Adaptive Control of Mechanical Manipulators. Addison-Wesley.

[9] Siciliano, B. and L. Sciavicco (2000). Modelling and Control of Robot Manipulators, 2nd edition. Springer-Verlag.

[10] Ferreira, P. and J. R. Caldas Pinto (2003). 3D and 2D Visual Servoing Architectures for a PUMA 560 Robot. In Proceedings of the 7th IFAC Symposium on Robot Control, September 1-3, pp. 193-198.

[11] Hashimoto, K. and T. Noritsugu (1998). Performance and Sensitivity in Visual Servoing. IEEE Int. Conf. on Robotics and Automation, Leuven, Belgium, pp. 2321-2326.

[12] Ljung, L. (1987). System Identification: Theory for the User. Prentice Hall, Englewood Cliffs, NJ.
