Videos

 

Teleoperating the Barrett Hand
(September 2006)
 

An experimenter teleoperates the LIRA-Lab's Barrett Hand using an 18-sensor Cyberglove.

- glovebarrettdvx (DivX, ~21 MB), click here
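In glove-based teleoperation like this, each glove sensor reading is typically mapped onto a joint of the robot hand. A minimal sketch of such a per-joint linear map (the sensor range, joint limits, and function name below are illustrative assumptions, not the LIRA-Lab setup):

```python
def glove_to_joints(raw, joint_limits):
    """Map raw glove sensor readings (assumed 0..255) to joint angles.

    raw: one reading per joint; joint_limits: (lo, hi) in radians per joint.
    Each reading is normalized, clamped, and linearly rescaled to its
    joint's range.
    """
    joints = []
    for r, (lo, hi) in zip(raw, joint_limits):
        t = min(max(r / 255.0, 0.0), 1.0)   # normalize to [0, 1] and clamp
        joints.append(lo + t * (hi - lo))   # linear map into joint range
    return joints
```

A real setup would add per-user calibration, recording each sensor's actual min/max instead of assuming the full 0..255 range.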

 

James pre-programmed grasping
(March 2006)

The 23-degree-of-freedom humanoid robot James grasps an object in a pre-programmed position; a short video showing the flexibility of the embedded controller.

- James1.avi (DivX, ~52 MB), click here

Teleoperating Babybot
(March 2006)
A human teleoperates Babybot using a magnetic tracker (Flock of Birds) mounted on the wrist and a data glove (Immersion Cyberglove).

- Teleoperation (DivX, ~10 MB), click here

Grasping objects
(September 2004)

 

The robot starts by looking at an object placed on its palm; after a brief exploration, the object is dropped on the table and the robot starts searching for it. Once the object is fixated again, the robot actively grasps it. If the grasp is successful, the object is dropped off the table; otherwise another trial is attempted (this work was carried out in collaboration with Paul Fitzpatrick, MIT CSAIL).

- long sequence (~20MB), click here
- short movie (~8MB), click here
- what is really going on (robot's point of view ~ 4MB), click here
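The search–fixate–grasp–retry behaviour described above can be sketched as a small control loop. Sensing and actuation are stubbed out as callables; this shows only the structure, not the lab's actual controller:

```python
def grasp_cycle(fixated, grasp, max_trials=5):
    """Search until the object is fixated, then attempt a grasp; on
    failure, go back to searching. Returns the successful trial number,
    or None if all trials fail.

    fixated, grasp: zero-argument callables returning True/False,
    standing in for the robot's perception and grasping routines.
    """
    for trial in range(1, max_trials + 1):
        while not fixated():     # keep searching for the object
            pass
        if grasp():              # grasp succeeded: done
            return trial
    return None                  # all trials failed
```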

Arm control
(June 2003)

Babybot learns to control the arm:
- low stiffness arm control (~7MB), click here
- gravity compensation (~7MB), click here
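Gravity compensation means commanding, at each joint, the torque that cancels the arm's own weight, so the arm feels weightless and stays compliant. For a single link in a vertical plane the torque is tau = m·g·lc·cos(q); a sketch with illustrative parameters (not Babybot's actual link values):

```python
import math

def gravity_torque(q, m=1.0, lc=0.2, g=9.81):
    """Torque (N*m) cancelling gravity for one link of mass m (kg) whose
    centre of mass sits lc (m) from the joint, at joint angle q (rad,
    measured from horizontal). Parameters are illustrative.
    """
    return m * g * lc * math.cos(q)
```

With the link horizontal (q = 0) the full moment m·g·lc must be applied; pointing straight up or down (q = ±π/2) no torque is needed.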

 

 
  Grasping objects
(May 2004)

The robot grasps objects that touch its hand (hand details), click here
 

  Hand localization
(October 2003)

Learning:
- motion is used to segment the hand from the background; proprioception and visual motion are correlated to remove objects moving independently in the background,
click here

After learning:
- hand localization: the red ellipse represents the position of the hand in the visual field,
click here
- hand prediction: the red ellipse represents the position of the hand at the end of the movement,
click here
- hand localization: a color model of the hand is used to detect when the hand is actually visible, 
click here
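The correlation step above can be sketched as follows: image regions whose motion trace correlates with the arm's own proprioceptive speed signal are kept as "hand", while independently moving background is rejected. The region structure and threshold below are illustrative assumptions:

```python
import math

def hand_regions(joint_speed, region_motion, threshold=0.8):
    """Return the ids of image regions whose motion correlates with the
    arm's proprioceptive speed signal over the same frames.

    joint_speed: arm speed per frame; region_motion: region id -> motion
    magnitude per frame.
    """
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = math.sqrt(sum((x - ma) ** 2 for x in a))
        sb = math.sqrt(sum((y - mb) ** 2 for y in b))
        return cov / (sa * sb) if sa and sb else 0.0

    return [rid for rid, m in region_motion.items()
            if pearson(joint_speed, m) >= threshold]
```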

 
 
  Learning to act on objects
(July 2002)

Learning:
Babybot learns the consequences of its actions by playing with objects (click here, mpg file).

After learning:
The robot is able to choose the appropriate action to achieve a given goal (e.g. pushing an object toward someone else's hand):
- external view, click here
- robot's eye, click here
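Choosing the appropriate action can be sketched as: having learned the average displacement each action produces, pick the action whose predicted effect brings the object closest to the goal. The action names and effect vectors below are made up for illustration:

```python
def choose_action(effects, obj, goal):
    """effects: action name -> learned (dx, dy) displacement;
    obj, goal: (x, y) positions on the table.
    Returns the action minimizing the predicted distance to the goal.
    """
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    return min(effects, key=lambda a: dist2(
        (obj[0] + effects[a][0], obj[1] + effects[a][1]), goal))
```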

 

Human-robot interaction
(May 1999)

 

(click here to download Full Version, 5.7 MB)
(click here to download Short Version, 1.0 MB)
  More on oculo-motor behaviors
(May 1999)

 

(click here to download Full Version, 6.0 MB)
(click here to download Short Version, 1.0 MB)

  Sound localization
(July 2001)

Robot's point of view. Note that the tracking exploits sound only; no visual information is used.

click here to download (4.9 MB)
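A common cue for purely auditory localization is the interaural time difference (ITD) between two microphones; under a far-field assumption the azimuth follows from sin(θ) = c·ITD/d. A sketch of that geometry (not necessarily the cue used in this video; the microphone spacing is an illustrative value):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def itd_to_azimuth(itd, mic_distance=0.15):
    """Azimuth (rad, 0 = straight ahead) from the interaural time
    difference itd (s) between two microphones mic_distance (m) apart,
    assuming a far-field source: sin(theta) = c * itd / d.
    """
    s = SPEED_OF_SOUND * itd / mic_distance
    s = max(-1.0, min(1.0, s))   # clamp numerical noise outside [-1, 1]
    return math.asin(s)
```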

  Paying attention to sound cues

 

click here to download (5.79 MB)
  Presenting the Babybot
(Feb 2001)

(click here to download High Resolution Video, 7.5 MB)
(click here to download Low Resolution Video, 2.1 MB)

  Sensorimotor coordination

(click here to download High Resolution Video, 6.8 MB)
(click here to download Low Resolution Video, 1.9 MB)

  Sound and vision together

 

click here to download (3.8 MB)

   

(click here to download Full Version, 5.8 MB)
(click here to download Short Version, 1.0 MB)

 
