Visually Guided Manipulation
Moving through an unknown environment is an everyday experience for all animate systems. This is possible because the visual data stream carries a large number of useful parameters, which these systems interpret and translate into actions and behaviors. The same source of information can clearly be used to drive a robot in an unknown environment: to avoid obstacles, to pass through doors, or to approach objects.
The purpose of this project is to investigate the role of vision in the control loop of a robot manipulator. The goal is to develop control strategies for performing manipulative actions. The paradigm used is called visual servoing: all measurements are taken in the camera image plane and used directly inside the control loop, so neither Cartesian-space estimation nor camera calibration is required. Even though the measurements trade precision for computational speed, the experiments prove the feasibility of the approach.

Video clips of some of the experiments:
Reaching a plastic cup
Capping a pen
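The core of a visual-servoing loop of this kind can be sketched as follows. This is a minimal illustration of the classical image-based scheme, not the project's actual implementation: it assumes point features in normalized image coordinates with rough depth estimates, and the function names are illustrative. The control law drives the feature error measured in the image plane to zero by commanding a camera velocity, with no Cartesian reconstruction of the scene.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Interaction (image Jacobian) matrix of a point feature (x, y),
    # in normalized image coordinates, at estimated depth Z.
    # Maps the 6-DOF camera velocity to the feature's image velocity.
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    # Classical image-based visual-servoing law:
    #   v = -gain * pinv(L) @ (s - s*)
    # where s are the current image measurements and s* the desired ones.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: two tracked points slightly off their desired image positions.
v = ibvs_velocity(features=[(0.12, 0.0), (0.0, 0.08)],
                  desired=[(0.10, 0.0), (0.0, 0.10)],
                  depths=[1.0, 1.0])
# v is a 6-vector (vx, vy, vz, wx, wy, wz) of camera velocity commands.
```

Only a coarse depth estimate is needed in the interaction matrix, which is why the approach tolerates imprecise measurements: errors there affect convergence speed, not the image-plane goal itself.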