The RESCUER project focuses on:
a) the development of an intelligent Information and Communication Technology and Emergency Risk Management tool, and on testing it in five Improvised Explosive Device Disposal and Civil Protection Rescue Mission scenarios. RESCUER will also implement ad-hoc software for Emergency Risk Management Monitoring and Advising that will integrate the information flows from the different sources and generate decisions.
b) the development of an intelligent mobile robot capable of achieving given goals under conditions of uncertainty, in contrast to existing automated bomb-disarming systems, which are, by definition, pre-programmed. The RESCUER mobile robot will include multifunctional tools, two simultaneously working robot arms with dexterous grippers, and smart sensors for ordnance detection, human detection, and assessment of the environment.
The project output will be assistance software for risk management, plus a mobile robot with multifunctional tools and two simultaneously working robot arms with dexterous grippers. The combined use of the two systems will improve risk management through the integration of 1) a mobile robot, 2) intelligent methods for bomb disposal and rescue operations, and 3) IT techniques for the management of rescue missions.
As infants, each one of us developed the ability to move our muscles to manipulate objects and also to communicate with gestures and speech. Did we learn to perceive and produce gestures for manipulation and speech independently, or are these two learning processes linked? The CONTACT project is an ambitious attempt to investigate the parallel development of manipulatory and speech-related motor acts from a multi-disciplinary perspective. The project is designed to test the hypothesis that fundamentally similar mechanisms are involved in the development of perception and production for both speech and manipulation. This hypothesis is stimulated by recent evidence suggesting that the human brain interprets motor acts (movements) of other people in essentially the same way, regardless of whether the act generates speech or a manipulative gesture.
The project should yield progress on the following:
RobotCub is an Integrated Project funded by the European Commission through the E5 Unit (Cognition) of the Information Society Technologies priority of the Sixth Framework Programme. The pre-proposal was submitted in June 2003 and the project started on the 1st of September 2004, with a duration of 60 months. The consortium is initially composed of 11 European research centers, plus two research centers in the USA and three in Japan, specialized in robotics, neuroscience, and developmental psychology.
The Project Manager is Prof. Giulio Sandini of the Dipartimento di Informatica, Sistemistica e Telematica of the University of Genova. The total funding is 8.5 million euro. RobotCub has two main goals:
to create an open robotic platform for embodied research that can be taken up and used by the research community at large to further their particular approach to the development of humanoid-based cognitive systems, and
to advance our understanding of several key issues in cognition by exploiting this platform in the investigation of cognitive capabilities.
The scientific objective of RobotCub is, therefore, to jointly design the mindware and the hardware of a humanoid platform to be used to investigate human cognition and human-machine interaction. We call this platform CUB, or Cognitive Universal Body. It is worth remarking that the results of RobotCub will be fully open and consequently licensed to the scientific community under a General Public License (GPL). Among the activities planned in the project, there is an important component devoted to supporting the open initiative, which aims at establishing an international Research and Training Site with the following institutional activities:
Maintenance and update of the CUB. At least three complete systems will be available at the site.
Training of scientists (both national and international) and students on the preparation, utilization, and development of new components for the CUB.
A multidisciplinary research center open to scientists who are not yet in a position to embark on the construction and setup of a complete laboratory and/or a full humanoid, enabling them nonetheless to start their research agenda in embodied cognition.
Besides establishing the Research and Training Site, and in order to facilitate the adoption of the RobotCub platform by other scientists, part of the RobotCub budget has been allocated to purchase the components to build a small series of 10 humanoids and to support 10-15 start-up projects. A total of 2.1 million € has been reserved for these activities in the project’s budget and will be managed by the University of Genova.
The fusion of NEUROscience and roBOTICS
The ultimate objective of the NEUROBOTICS project is to introduce a discontinuity in robot design, going literally ‘Beyond Robotics’. This discontinuity will be pursued through a strategic alliance between Neuroscience and Robotics, one that goes well beyond present, mostly fragmented, collaborations and leads past the worldwide state of the art in robotics. The scientific, technological, and cultural environment in Europe is mature enough to face this challenge, whose impact on engineering and medicine could be comparable to that of ‘big science’ projects. NEUROBOTICS will systematically explore the area of Hybrid Bionic Systems (HBSs), investigating three platforms that involve different degrees of hybridity:
The ‘heritage’ of NEUROBOTICS will be the kernel of a new community of European researchers, with strong links to top non-European scientists (e.g. in the US and Japan), able to master the scientific, technical, industrial, societal, and ethical aspects of this novel discipline and to exploit it for the benefit of EU citizens.
The main objective of ADAPT is to study the process of building a coherent representation of visual, auditory, and haptic sensations and how this representation can be used to describe/elicit the sense of presence. The goal is the "understanding" of representation in humans and machines. We intend to pursue this in the framework of development: i.e. by studying the problem from the point of view of a developing system.
Development, in addition to learning, includes the growth and maturation of the organism, that is, structural changes in addition to parametric changes. Within this framework we will use two methodologies: on one side
we will investigate the mechanisms used by the brain to learn and build this
unified representation by studying and performing experiments with human
infants; on the other side we intend to use artificial systems (i.e. robots) as
models and demonstrators of perception-action-representation theories. We
will employ a synthetic methodology (i.e. a methodology of "understanding
by building") which consists of three parts or steps:
(i) modeling aspects of a biological system,
(ii) abstracting general principles of intelligent behavior from this model, and
(iii) applying these principles to the design of intelligent artifacts.
These steps are not performed in sequence but rather in parallel and iteratively. Within the Presence Initiative, our claim is that in order to elicit meaning for humans, one has to understand the process that builds that specific meaning. In recreating meaning, machines should use this same human-like representation.
Studying development in the presence framework has the goal of understanding how humans learn to attribute or extract a stable meaning from the continuously changing sensory stimulation. The main objective described above will be achieved through four sub-goals:
The milestones will be:
nEUrone will provide the infrastructure and forum that ensures close, continuous cooperation between the research consortia of the proactive initiatives launched by FET in the field of neuroscience and neuroinformatics, currently FET-LPS and FET-NI. nEUrone will actively pursue the inclusion of further interested consortia and of partners from industry and from public research institutions. nEUrone is particularly committed to making the potential of the basic research conducted within the proactive initiatives known to related scientific communities, to SMEs, and to global companies from industry and other economic sectors. The enormous potential of Information Technology research based on and inspired by neuroscience and neuroinformatics (dubbed Neuro-IT), with ‘Living Artifacts’ as a very attractive potential flagship application, is now becoming visible. It is therefore of the highest importance to organize adequate coordination and representation, to help define this area as a genuine European contribution, and to stay at the forefront of its development. nEUrone will help to avoid fragmentation of European research, to catch up where necessary, and to gain leadership relative to US research.
Milestones and expected results
To give Neuro-IT research a clear, coherent, highly visible profile and increasing momentum, nEUrone will provide very early on comprehensive online information, roadmaps on research and technology, a NewsTicker and NewsLetter, ‘yellow research pages’, reports on synergies and exchange opportunities (all drafts in the first year, continuously updated). Later, conferences, curricula development and education will follow along with technology transfer events and long-term perspective development.
The goals of MIRROR are:
The biological starting point is the existence, in the primate premotor cortex, of mirror neurons, which become active both during the execution of goal-directed actions and during the observation of similar actions performed by others. We intend to proceed with two different methodologies: the implementation and use of an artificial system, and electrophysiological and behavioral experiments.
The reference scenario is that of a person performing goal driven arm/hand gestures such as pointing, picking, grasping, etc. The expected results are:
Given the rather generic definition above, an obvious question is: what is cognitive vision? Considering the general definition of cognition as "the process of knowing, understanding and learning things", it is possible to derive some key characteristics of cognitive vision:
Vision is a process that operates in a spatio-temporal context. That is, vision is not instantaneous; it evolves over time and incorporates information to generate "answers".
Vision uses and generates knowledge (that includes information that is not organized spatially). This implies that a fundamental part of studies of visual processes is consideration of representations and memory.
The visual process generates/maintains models of the environment in terms of its geometry, together with semantic labels for events and entities in the environment. That is, "understanding" implies an ability to generate an explicit description of the perceived world in terms of objects, structures, events, their relations, and dynamics, which can be used for action generation or communication.
Learning implies an ability to generate open-ended models and representations of the world. That is, the model of the system and its use can not be based on a closed world assumption, but rather on a model that allows automatic generation of new representations and models.
Vision operates in the context of an "agent" that provides a task context and has finite resources in terms of computation, memory, and bandwidth. These characteristics provide an abstract model that allows the definition of a number of issues that must be addressed to enable the design, implementation, and deployment of a fully fledged cognitive vision system.
The objective of this project is to provide the methods and techniques that enable construction of vision systems that can perform task oriented categorization and recognition of objects and events in the context of an embodied agent. The functionality will enable construction of mobile agents that can interpret the action of humans and interact with the environment for tasks such as fetch and delivery of objects in a realistic domestic setting.
Principal Investigator: Giulio Sandini
Funding Agency: ENEA
Within the national project for the exploration of Antarctica (PNRA), activities are under way to transform a human-operated vehicle into a teleoperated one. The research group at LIRA-Lab is involved in the design and implementation of a vision system controlling the gaze direction of a stereo head, to provide the remote operator with a stable stereo view of the environment.
This is accomplished by integrating visual and inertial information. Inertial data is acquired by a special-purpose artificial vestibular system measuring two rotational and two linear accelerations. Visual data is based on optical-flow computation.
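One common way to combine such signals is a complementary filter: the inertial channel supplies fast, high-bandwidth rotation estimates, while the slower visual (optical-flow) channel corrects inertial drift at low frequencies. The sketch below is illustrative only; the gains, signal names, and control structure are assumptions, not the actual LIRA-Lab controller.

```python
# Illustrative complementary-filter fusion of inertial and visual data
# for gaze stabilization. All names and gains are hypothetical.

def stabilize_gaze(inertial_rate, visual_slip, alpha=0.98):
    """Blend a fast inertial rotation-rate estimate (rad/s) with a
    slower visual slip estimate from optical flow (rad/s).

    alpha weights the high-bandwidth inertial channel; the visual
    channel dominates at low frequencies, correcting inertial drift.
    """
    return alpha * inertial_rate + (1.0 - alpha) * visual_slip

def camera_commands(samples, alpha=0.98):
    """For each (inertial_rate, visual_slip) sample, drive the camera
    pan velocity opposite to the estimated head rotation so the image
    stays stable on the sensor."""
    return [-stabilize_gaze(w, s, alpha) for w, s in samples]
```

The negative sign implements the compensatory (vestibulo-ocular-like) counter-rotation: when the head turns one way, the cameras turn the other.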
The robotic head has 5 degrees of freedom and the cameras have a space-variant resolution. The head was designed in collaboration with Telerobot. The inertial system was designed and realized at LIRA-Lab. The space-variant CMOS sensor was designed in the lab within a collaborative European Project (IBIDEM).
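A typical scheme for space-variant resolution is log-polar sampling, with fine resolution at a central fovea falling off logarithmically toward the periphery; the following sketch assumes that geometry for illustration (the actual IBIDEM chip layout is not specified in this document).

```python
import math

# Illustrative log-polar (space-variant) coordinate mapping of the kind
# used in foveated sensors. The geometry here is an assumption, not the
# documented IBIDEM chip design.

def to_log_polar(x, y, rho_min=1.0):
    """Map a Cartesian image point (relative to the fovea center) to
    log-polar coordinates (log-radius, angle). Equal steps in the
    log-radius correspond to exponentially growing rings, so pixel
    density is highest near the center."""
    r = math.hypot(x, y)
    rho = math.log(max(r, rho_min) / rho_min)  # 0 at the fovea boundary
    theta = math.atan2(y, x)
    return rho, theta
```

A practical consequence is data reduction: a wide field of view is covered with far fewer samples than a uniform sensor of equal central resolution.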
The technical goals in the development of the new GSM - camera include:
for the sensor: additional functionalities and standard circuits on board;
for the camera: case redesign, lens/auxiliary electronics/connections revision;
for the overall system: design of interfaces camera/GSM and camera/LCD display.
From the technical point of view, the major task will be the design of a camera/GSM interface. From the timing perspective, the longest task will be upgrading the chip with additional functionalities, owing to long fabrication times (not less than 5 months).
The business goals (after the successful project completion) are to reach a strong position in the niche of GSM-based video communication, with a sales level around 100,000 units/year (subject to an alliance with a large ICT player).
Principal Investigator: Giulio Sandini
Funding Agency: EU-ESPRIT
This project addresses the definition of mechanisms of autonomous behavior for mobile robots. Autonomous systems have been a topic of intensive research, and considerable progress has been made in the realization of systems with learning capabilities, with applications in many areas, such as control, manufacturing, and the service industries. However, the great majority of these systems base their intelligent-like behavior on repeated observation. In this project, we aim at the development of a more complex kind of intelligence, based on the ability to actively search for solutions even under uncertainty. This type of learning activity is one of the major distinctions between humans and most "intelligent machines".
The goal of this project is to develop reliable navigation systems of limited cost for mobile robots operating in unknown and unstructured scenarios. More specifically, the project aims at using the perception capabilities of the robot's sensing system to extend the autonomy range of autonomous terrestrial and underwater vehicles, allowing operation in unknown regions without resorting to the introduction of handcrafted landmarks prior to the mission. Autonomy will be measured by the ability of the vehicle to travel to unknown regions distant from its initial position without getting lost, i.e., being able to find its way back to its starting point. The exploratory behavior that this project proposes is necessary for the operation of autonomous robots in such environments, meaning that the robot is able to overcome its reduced a priori information by an intelligent use of its perception system. The two constraints mentioned, reliability and limited cost, are mandatory if large-scale use of these systems is desired: reliability means that the robot will accomplish its mission without destructive interference with its scenario, and limited cost is a major concern in consideration of industrial applications.
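The homing criterion above (finding the way back to the starting point) can be illustrated by simple path integration: accumulating the robot's own motion to maintain a running home vector. Pure integration drifts over long excursions, which is precisely why the project relies on perception rather than odometry alone; this sketch, with hypothetical names, only shows the bookkeeping.

```python
import math

# Illustrative dead-reckoning (path integration) for homing. The robot
# sums its motion steps to keep an estimate of the vector back to its
# start. Real systems must correct the accumulated drift with
# perceptual information; names and structure here are hypothetical.

def integrate_path(steps):
    """steps: iterable of (heading_rad, distance) motion commands.
    Returns the home vector pointing from the current position back
    to the starting point."""
    x = y = 0.0
    for heading, dist in steps:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return -x, -y  # negate the net displacement

def home_distance(steps):
    """Straight-line distance back to the start."""
    hx, hy = integrate_path(steps)
    return math.hypot(hx, hy)
```

For example, after moving 3 units east and 4 units north, the home vector points back along the 5-unit diagonal, regardless of the path taken.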
The project addresses all the main components required for navigation: control architecture, perception, and navigation. As a reference, the project will consider the framework of cooperative underwater robotics. In these systems, full control of the vehicle position for a particular task requires, in general, the ability of the vehicle to locate itself with respect to a fixed world reference, to gather knowledge about the static components of its surroundings, and to locate the other robots present, as well as to characterize their dynamics.
Principal Investigator: Giulio Sandini
Funding Agency: EU, MURST (research of national interest)
The study of sensory-motor coordination in artificial systems has been carried out mainly by analyzing, and trying to implement, skill levels comparable to those of adult humans. For example, the control of robot heads, as well as visually guided manipulation tasks, has been studied with reference to psychophysical data measuring the performance of adult humans and animals.
In spite of the recent advances in this area, the systems implemented are still far from achieving human-like performance and, more importantly, even for successful implementations the integration of, for example, manipulation skills and gaze control seems to be more difficult than expected. This difficulty arises, in our view, from the approach followed so far, which, with the aim of making a complex system more tractable, has broken sensory-motor coordination into a set of sub-problems often defined by a specific sensory modality (e.g. vision, audition, touch) or a specific motor skill (e.g. manipulation, gaze control, navigation). This artificial fragmentation, even if useful so far for understanding some basic functionalities, has recently been struggling with the problem of how to integrate these independently implemented sub-systems.
A different solution is used in humans and other animals where adult-level performance is achieved through the simultaneous development of sensory, motor and cognitive abilities. This process is not simply caused by the maturation of the single components or the learning of progressively more sophisticated skills, but, on the contrary, it is marked, particularly in the very early stages, by changes of the neural circuitry, and strategic exploitation of the environment and of the limited skills present at each developmental stage. It is interesting to note that some of the skills found at certain stages of development disappear as soon as they are replaceable by more sophisticated ones.
The goal of the present proposal is to investigate whether, by adopting a similar methodology for artificial systems, better insight can be gained into how to build highly complex systems and how to better understand brain function.
To investigate these issues, a humanoid baby-robot is used, composed of a 6-d.o.f. arm, a 5-d.o.f. binocular head with anthropomorphic cameras, and an artificial vestibular system. The task investigated is visually guided reaching.
This project is carried out in collaboration with groups at the Universities of Parma, Modena, and Firenze, and with the Institute of Neurophysiology of the CNR in Pisa.