The system developed for docking maneuvers consists of a camera mounted on the end effector of a 6-degrees-of-freedom robot arm. The goal is to approach a target (a planar surface) while avoiding collisions, bringing the camera to a stop with its viewing direction perpendicular to the surface. This requires the control of five velocities: the forward velocity, to prevent collisions with the target, and four velocities, two translational and two rotational, to orient the camera with respect to the surface.
The task can be achieved by linking the kinematic quantities of the camera to the visual information. It has been shown that each kind of camera movement produces a specific change in the image shape. In more detail: a movement of the camera along its viewing direction produces an isotropic expansion of the image shape; a rotation around the viewing direction produces a 2-D rigid rotation; panning or tilting the camera results in an image translation; finally, translating in a plane parallel to the image plane produces an image deformation, i.e. an expansion along one axis and a contraction along the perpendicular one. These changes in the image shape are referred to, respectively, as divergence, rotation, translation, and deformation. An estimate of these parameters can be obtained from the computation of the optic flow; local measures on the image plane can then be linked to the camera kinematics.
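As an illustrative sketch of how these parameters can be estimated (a simple least-squares fit of an affine flow model, not necessarily the estimation method used in this project), the optic flow around a point can be approximated as u = a0 + a1·x + a2·y, v = b0 + b1·x + b2·y; the divergence, rotation (curl), and deformation are then combinations of the fitted coefficients:

```python
import numpy as np

def flow_invariants(x, y, u, v):
    """Fit an affine model u = a0 + a1*x + a2*y, v = b0 + b1*x + b2*y
    to sampled flow vectors by least squares, then form the
    differential invariants of the flow field."""
    A = np.column_stack([np.ones_like(x), x, y])
    a, _, _, _ = np.linalg.lstsq(A, u, rcond=None)
    b, _, _, _ = np.linalg.lstsq(A, v, rcond=None)
    divergence = a[1] + b[2]        # isotropic expansion rate
    curl = b[1] - a[2]              # 2-D rigid rotation rate
    shear1 = a[1] - b[2]            # deformation components
    shear2 = a[2] + b[1]
    deformation = np.hypot(shear1, shear2)
    return divergence, curl, deformation

# Synthetic check: pure expansion at rate 0.1 plus rotation at rate 0.05.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = rng.uniform(-1.0, 1.0, 200)
u = 0.1 * x - 0.05 * y
v = 0.05 * x + 0.1 * y
div, curl, deform = flow_invariants(x, y, u, v)
print(div, curl, deform)  # ≈ 0.2, 0.1, 0.0
```

On this synthetic field the fit recovers a divergence of 0.2, a curl of 0.1, and zero deformation, matching the expansion and rotation rates used to generate it.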
Docking is achieved by controlling the forward velocity from measurements of the divergence of the optic flow, imposing a constant image expansion rate, and by controlling the camera's spatial position from measurements of the deformation of the image.