MIRROR

IST–2000-28159

Mirror Neurons based Object Recognition

All Multimedia Content


Note: some videos require the MPEG codec, which is available by default in most players.

DIST - University of Genoa

Preliminary experiments in pre-grasp orientation
Here we study the pre-grasp orientation of the robot end-effector. The task in this case is the insertion of the end-effector into a slit; the robot learns how to pre-orient the wrist so that the action is successful. The "insertion task", considered here as a simplified type of grasping, is used to study how to learn the preparation of a motor action.

posting-short.avi

Learning to act on objects
In this experiment we show how a humanoid robot uses its arm to try some simple pushing actions on an object, while using vision and proprioception to learn the effects of its actions (first video). Afterwards this knowledge is used to position the arm so as to push/pull the target in a desired direction (second and third videos).

learning.avi
after_learning.avi
robot_pv_dvx.avi
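The loop described above — explore by pushing, record the effect of each action, then invert the learned mapping to reach a goal — can be sketched as a nearest-neighbor inverse model. This is an illustrative simplification only, not the project's actual implementation; the function and action names are invented:

```python
import numpy as np

def choose_action(desired_dir, samples):
    """Pick the pushing action whose observed effect best matches a goal.

    samples     : list of (action, effect) pairs gathered during exploration,
                  where effect is the observed 2-D object displacement.
    desired_dir : unit 2-D vector, the direction we want the object to move.
    Returns the action whose normalized effect has the largest dot product
    with the desired direction (a nearest-neighbor inverse model).
    """
    best, best_score = None, -np.inf
    for action, effect in samples:
        e = np.asarray(effect, dtype=float)
        n = np.linalg.norm(e)
        if n == 0:
            continue  # action had no observable effect; skip it
        score = float(np.dot(e / n, desired_dir))
        if score > best_score:
            best, best_score = action, score
    return best
```

During the exploration phase (first video) the robot fills `samples`; during the exploitation phase (second and third videos) it queries the model with the desired displacement direction.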
Mirror neurons
We use a precursor of manipulation, i.e. simple poking and prodding, and show how it facilitates object segmentation, a long-standing problem in machine vision. The robot can familiarize itself with the objects in its environment by acting upon them. It can then recognize other actors (such as humans) in the environment through their effect on the objects it has learned about.

objects_080502.avi
canonical_080502.avi
m1.avi
m6.avi
Setup for the acquisition of visual and motor data from human subjects during grasping actions
The main goal here is to build a setup to acquire data from human subjects performing different types of grasps. We are able to record motor data (position and orientation of the hand, position of the fingers) as well as visual data (sequences of stereo images).

glove1.avi
glove2.avi
Learning gravity compensation

Arm zero-weight. The robot keeps the arm in a stationary position; the closed-loop control is not activated and only the arm's weight is compensated by the controller. The arm can be moved by a human as if it were very light.

gravity1.avi

Low stiffness. In this case the arm is controlled and moved randomly to show the low-stiffness control. A human can safely interact with the manipulator.

gravity2.avi
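The zero-weight behavior amounts to feeding forward only the gravity term of the arm dynamics, with no position feedback. As a minimal sketch — assuming a planar two-link arm with known link masses and mid-link centers of mass, none of which reflects the actual robot's parameters — the compensation torques can be computed as:

```python
import numpy as np

def gravity_torques(q, m=(1.2, 0.8), l=(0.3, 0.25), g=9.81):
    """Gravity-compensation torques for a planar two-link arm.

    q : joint angles (rad), measured from the horizontal.
    m : link masses (kg); l : link lengths (m).
    Centers of mass are assumed at mid-link (illustrative values only).
    """
    q1, q2 = q
    m1, m2 = m
    l1, l2 = l
    # Torque on joint 2 balances the weight of link 2 about joint 2.
    tau2 = m2 * g * (l2 / 2) * np.cos(q1 + q2)
    # Torque on joint 1 balances both links about joint 1.
    tau1 = (m1 * g * (l1 / 2) * np.cos(q1)
            + m2 * g * (l1 * np.cos(q1) + (l2 / 2) * np.cos(q1 + q2)))
    return np.array([tau1, tau2])
```

With only these feed-forward torques applied, the arm behaves as if weightless and can be pushed around freely by a human, as in the video.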

Learning the hand model

Hand segmentation. The robot moves the arm around and performs a periodic motion of the hand. Motion in the image plane is correlated with the periodic motor command in order to obtain a segmented image of the hand. The video shows that the segmentation is not influenced by external movements.

handModel1.avi

handModel2.avi

handModel3.MOV
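The segmentation idea above — keep only the pixels whose motion is in phase with the periodic motor command — can be sketched as a per-pixel normalized correlation at zero lag. This is a simplified illustration under invented signal shapes, not the project's actual algorithm:

```python
import numpy as np

def segment_by_periodicity(frames, command, thresh=0.6):
    """Segment pixels whose motion correlates with a periodic motor command.

    frames  : (T, H, W) array of grayscale images.
    command : (T-1,) motor command signal (e.g. hand open/close speed).
    Returns a boolean (H, W) mask of pixels moving in phase with the command.
    """
    motion = np.abs(np.diff(frames, axis=0))   # per-pixel motion energy
    motion -= motion.mean(axis=0)              # zero-mean over time
    cmd = command - command.mean()
    # Normalized cross-correlation at zero lag, pixel by pixel.
    num = (motion * cmd[:, None, None]).sum(axis=0)
    den = np.sqrt((motion ** 2).sum(axis=0) * (cmd ** 2).sum()) + 1e-12
    corr = num / den
    return corr > thresh
```

External movers are rejected because their motion energy, however strong, is uncorrelated with the robot's own periodic command.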

Hand

Hand compliance. The video shows the intrinsic elasticity of the hand mechanics. The hand is not actively controlled here.

hand1.avi

Grasping different objects. To test the hand mechanics, some objects are grasped. In this case the hand is remotely controlled by an operator. Note how the fingers adapt to the shape of the object being grasped.

hand2.avi

hand3.avi

hand4.avi

Grasping strategies

A simple, reflex-like grasping mechanism is implemented. Whenever pressure is applied to the palm of the hand, a grasping movement is initiated. After a certain period of time the hand is opened again and the object released.

grasp1.avi
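The reflex described above is a small state machine: pressure on the palm triggers closing, and a timer reopens the hand. A minimal sketch (class name, threshold, and tick counts are invented for illustration):

```python
class GraspReflex:
    """Reflex-like grasp: palm pressure triggers closing;
    the hand reopens after hold_ticks control cycles."""

    def __init__(self, pressure_thresh=0.2, hold_ticks=50):
        self.pressure_thresh = pressure_thresh
        self.hold_ticks = hold_ticks
        self.state = "open"
        self.ticks = 0

    def step(self, palm_pressure):
        """Advance one control cycle; returns the current hand state."""
        if self.state == "open":
            if palm_pressure > self.pressure_thresh:
                self.state = "closing"   # pressure detected: start grasping
                self.ticks = 0
        elif self.state == "closing":
            self.ticks += 1
            if self.ticks >= self.hold_ticks:
                self.state = "open"      # timer expired: release the object
        return self.state
```

Calling `step()` once per control cycle with the palm-sensor reading reproduces the trigger-hold-release behavior seen in the video.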

Grasping objects on the table

The robot uses nearly all the modules developed within the project to grasp a toy lying on the table. See D1.10 for details. DivX Video
The robot uses the same modules to grasp a small bottle. DivX Video
The robot uses the same modules to grasp a toy car. DivX Video
The robot uses the same modules to grasp a plastic duck. DivX Video

Looking at the hand

A demonstration of the acquired body map. The robot uses the body map to track its hand as it moves. DivX Video
The body map is used to predict the position of the hand given a commanded motion. DivX Video
The body map is used together with color information to detect the position of the hand in the image. DivX Video
Reaching

Reaching for a visually identified object using the motor-motor coordination schema. DivX Video

Grasping

Haptic exploration by grasping. DivX Video

IST - Instituto Superior Técnico, Lisbon

3D reconstruction and depth segmentation from log-polar images
The process takes a pair of log-polar images and computes a dense disparity map that allows for depth segmentation of the scene. It is based on a set of disparity channels whose responses are combined in a probabilistic framework to obtain the final depth map. One of the important aspects is that depth discontinuities are preserved, thus being useful for problems of figure-ground segmentation based on depth cues. See DI-2.3 for more details.
The first four videos illustrate depth maps obtained when looking at a person or at a hand. The segmentation results are also shown, both for the hand and for the upper body. The fifth (last) video illustrates the cortical (log-polar) images as they are represented and processed internally by the system.

depth_h.avi

segm_h.avi
depth.avi
segment.avi
cortical.avi
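The channel-combination idea above — score each candidate disparity per pixel, turn the scores into likelihoods, and pick the most probable disparity — can be sketched on ordinary (Cartesian) images as follows. This is an illustrative simplification with a Gaussian pixel-noise model and a uniform prior, not the project's log-polar implementation:

```python
import numpy as np

def disparity_map(left, right, disparities, sigma=0.1):
    """MAP disparity from a bank of disparity channels.

    Each channel d scores the match between left(x) and right(x - d);
    scores become per-pixel log-likelihoods under Gaussian pixel noise,
    and the disparity with the highest posterior (uniform prior) wins.
    left, right : (H, W) rectified grayscale images.
    disparities : iterable of integer candidate disparities.
    """
    H, W = left.shape
    log_lik = np.full((len(disparities), H, W), -np.inf)
    for i, d in enumerate(disparities):
        if d == 0:
            diff = left - right
            log_lik[i] = -(diff ** 2) / (2 * sigma ** 2)
        else:
            # Channel d is only defined where the shifted overlap exists.
            diff = left[:, d:] - right[:, :-d]
            log_lik[i, :, d:] = -(diff ** 2) / (2 * sigma ** 2)
    # Per-pixel MAP estimate over the channel bank.
    return np.array(disparities)[np.argmax(log_lik, axis=0)]
```

Because each pixel is decided independently by its own channel responses, sharp depth discontinuities survive, which is what makes the map usable for figure-ground segmentation.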
Gesture Imitation
These videos illustrate the approach developed for an artificial system to imitate the arm gestures performed by a demonstrator. When the demonstrator performs a gesture (first video), the system starts by segmenting the hand in the images based on skin-color information. This information is used with the View Point transformation (see DI-2.3) to align the demonstrator's gestures to the point of view of the system. Finally, the Sensory Motor map is applied to generate the adequate arm configurations, as shown in the second video.

demonstrator.avi
imitation.avi
Model of mirror neurons
Visual processing related to the modeling of mirror neurons is shown here. The first video shows an example of the precision grip. The images were recorded using the data-glove setup developed especially for MIRROR. This is exactly the type of visual information that was processed by the Bayesian classifier described in deliverable D3.4. The second video shows an example of power grasp under the same experimental conditions. The third video shows how the hand is located (based on color) and its appearance mapped to a standard reference frame. The normalized and rescaled pictorial information is then processed by the PCA algorithm.

IST-video1.avi
IST-video2.avi
IST-video3.avi
Baltazar
In this experiment we show one of the main properties of the anthropomorphic kinematics: the position of the third joint, together with the inverse kinematics algorithm used, enables the robot to orient the hand solely by setting the initial condition of the algorithm. DivX Video
Coordination
In this experiment we show the coordination of the head and hand. DivX Video
Imitation
This experiment shows Baltazar's capability of mimicking human gestures. This video shows the robot's point of view when looking at the human experimenter. DivX Video
The robot processes the image in order to subtract the background and estimate the body, shoulder, and hand positions. DivX Video
Using the visuo-motor map, the system is able to mimic a gesture. DivX Video
Grasping
The hand is commanded to close; because of the mechanical compliance, the fingers adapt to the object. DivX Video

DP - University of Uppsala

Rotating rod experiment
Infants' ability to adjust hand orientation when grasping a rotating rod has been studied. The rod to be reached for was either stationary or rotating. The results show that reaching movements are adjusted to the rotating rod in a prospective way, and that the rotation affects the grasping but not the approach of the rod.

QuickTime Video

DBS - University of Ferrara

In-vivo recordings of mirror neurons in behaving monkeys
Different classes of neurons are recorded during grasping actions in different conditions of visual feedback. The videos show grasping actions performed in four different conditions: 1) with ambient illumination; 2) in the dark; 3) with a flash of light at the instant of maximum finger aperture; 4) with a flash of light at the instant of touch.

grasplight.avi

graspdark.avi
flashonmax.avi
flashontouch.avi