ARUP; ISBN: 978-0-9562121-5-3 - CMBBE 2012 - Cardiff University


predicting hand postures. In some cases, artificial intelligence techniques are used to learn the hand inverse kinematics from the fingertips' positions, as in [4].

From the robotics viewpoint, the questions posed by grasping are slightly different. For a given object, grasp synthesis must provide the most appropriate set of contact points and hand posture for grasping. This problem has been studied with analytical approaches, which have been largely abandoned because of their poor results in real implementations [6]. Currently, artificial intelligence algorithms are used, such as those based on artificial neural networks (ANNs) [10]. These sensor-based approaches learn the underlying rules of robot grasping through exploration, without the need for explicit kinematic models. For instance, an ANN-based strategy was developed in [11] for grasping tasks with a 5 degrees-of-freedom (dof) gripper. Similarly, a grasping model was designed in [13] for grasp synthesis with a 7-dof planar hand for circular and rectangular objects. ANN methods have also been used to determine feasible contact points [1, 13]. In [13] an ANN was developed that learns generic grasping functions for simple 2-finger grippers; still, it was not extended to multi-fingered hands because of the lack of grasping information (contact points) for an object. In [2] this problem was tackled by defining hand contact configurations associated with feasible object contact zones.

Many researchers state that advances in the field of robot grasping and manipulation require a better knowledge of human grasping [14]. This is especially true in service robotics, where robots move in human environments and interact with daily-life objects [15]. The variety of robot grasps studied so far is very limited and depends on the features of the particular robot hand. To deal with new hands or objects, some works have begun to use human grasping information to guide robot grasp synthesis, as in [1], where a data glove is used to train a neural network that produces robot grasping postures from previously computed contact zones for different objects.

Along these lines, this paper aims at predicting the grasping posture of the thumb and index fingers of the human hand from characteristic hand data, the features of the object to be grasped, and the task to perform. Different feed-forward networks have been tested for automatically providing hand postures given such inputs. This is a first step toward eliminating the initial data-collection phase required in a grasping study. The outputs of the ANN are the finger joint angles of the grasping posture. The ANN has been trained with data collected from grasping experiments with daily-life objects (bottles) and tasks (moving and pouring). Once trained, the network is able to predict, with an acceptable error, grasping postures for objects different from those used in training.
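To make the setup concrete, the following is a minimal sketch of a feed-forward network trained by gradient descent to map grasp features to finger joint angles. It is not the paper's implementation: the input and output dimensions, hidden-layer size, and training data are illustrative assumptions (the synthetic targets stand in for the Cyberglove measurements described above).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each input row holds hand data, object features,
# and a task code (5 values); each target row holds 6 joint angles for the
# thumb and index fingers. All values are synthetic, for illustration only.
X = rng.uniform(0.0, 1.0, size=(200, 5))
W_true = rng.uniform(-1.0, 1.0, size=(5, 6))
Y = np.tanh(X @ W_true)  # synthetic "joint angle" targets

# One hidden layer, trained by batch gradient descent on mean squared error.
n_in, n_hid, n_out = 5, 16, 6
W1 = rng.normal(0.0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)

def predict(X):
    """Forward pass: tanh hidden layer, linear output (joint angles)."""
    return np.tanh(X @ W1 + b1) @ W2 + b2

mse_init = float(np.mean((predict(X) - Y) ** 2))
lr = 0.1
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)          # hidden-layer activations
    E = H @ W2 + b2 - Y               # prediction error
    gW2 = H.T @ E / len(X); gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((predict(X) - Y) ** 2))
print(f"MSE before training: {mse_init:.4f}, after: {mse:.4f}")
```

Once trained on such input–output pairs, calling `predict` on the features of an unseen object yields a predicted joint-angle vector, which is the role the network plays in this study.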

Fig. 1. a) Bottles B1, B2, B3, and B4 (left to right); b) Cyberglove joint sensors (labels shown in the figure include ROLL, MCP1, MCP2, ABD1, DIP1, and PIP2).
