Sensing and Perception: this concerns the ability of the system to perceive its environment, ranging from detecting objects, animals and humans to classifying them, determining their locations, and recognizing human faces or places. In general, it refers to interpreting information and estimating environment features based on sensory data. Though vision is the most widely known sensing modality, many others contribute to perception, either individually or by fusing their information, e.g., laser scanners, microphones, infrared sensors, motion detectors, proximity sensors and touch sensors.
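As a minimal sketch of the fusion idea mentioned above (not taken from the source; sensor names and noise values are hypothetical), two noisy range readings of the same obstacle can be combined by inverse-variance weighting, so that the more precise sensor dominates the fused estimate:

    # Illustrative sketch: inverse-variance fusion of two range readings.
    def fuse(measurements):
        """Fuse (value, variance) pairs into one estimate and its variance."""
        weights = [1.0 / var for _, var in measurements]
        total = sum(weights)
        value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
        return value, 1.0 / total

    # Hypothetical readings: a laser scanner is far more precise than an
    # infrared sensor, so the fused distance stays close to the laser value.
    laser = (2.04, 0.01)      # distance in metres, variance
    infrared = (2.30, 0.25)
    distance, variance = fuse([laser, infrared])
    print(f"fused distance: {distance:.2f} m (variance {variance:.3f})")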
Modeling and Understanding: this concerns the ability of the system to integrate, over space and time, the information from several sources (e.g., sensors, prior knowledge) to build a model (often mathematical and quantitative, but in some cases also qualitative) of the environment. Such a model can be static (e.g., the locations and descriptions of objects in a semantic map) or dynamic (e.g., the motion of a robot, the behaviour of a human, the state of the environment expressed by the values of its state variables). Models can have different levels of abstraction and be symbolic or numeric: symbolic models are typically used by task planners, while numeric models are typically used by motion planners and dynamic state estimators.
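As an illustration of a dynamic state estimator over a numeric model (a sketch under assumed noise parameters, not an implementation from the source), a one-dimensional Kalman filter maintains a belief about a robot's position along a line, predicting with the commanded velocity and correcting with noisy position readings:

    # Illustrative sketch: a one-dimensional Kalman filter.
    def kalman_step(x, p, u, z, q=0.05, r=0.4, dt=1.0):
        """One predict/update cycle for state x with variance p.
        u: commanded velocity, z: observed position,
        q: process noise, r: measurement noise (hypothetical values)."""
        # Predict: propagate the motion model and grow the uncertainty.
        x_pred = x + u * dt
        p_pred = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p_pred / (p_pred + r)
        x_new = x_pred + k * (z - x_pred)
        p_new = (1.0 - k) * p_pred
        return x_new, p_new

    x, p = 0.0, 1.0                 # initial belief: position 0, high uncertainty
    for z in [1.1, 1.9, 3.2, 4.0]:  # noisy position readings
        x, p = kalman_step(x, p, u=1.0, z=z)
    print(f"estimated position: {x:.2f} (variance {p:.3f})")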
Planning and Acting: this concerns the ability of a system to compute its own decisions, which are mapped onto actions over the environment. Task and motion planning are crucial ingredients for autonomy. An autonomous system, whether it is a single- or multi-robot system, an intelligent sensor network, or a combination of the two, needs to map the current state of the environment, as the system perceives it, onto actions that will change the environment towards reaching the goals of the autonomous system. Examples include task planning and motion planning for mobile robots, task scheduling, and task allocation for multi-robot systems or actuation networks.
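To make the state-to-action mapping concrete (a minimal sketch, not the planner described in the source; the occupancy grid and cell coordinates are hypothetical), breadth-first search on a grid takes the perceived state of the environment and returns a sequence of motion actions leading to the goal:

    # Illustrative sketch: breadth-first search as a minimal motion planner.
    from collections import deque

    def plan(grid, start, goal):
        """Return a list of (dr, dc) moves from start to goal, or None.
        grid cells with value 1 are obstacles."""
        rows, cols = len(grid), len(grid[0])
        frontier = deque([start])
        came_from = {start: None}
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                break
            r, c = cell
            for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
                nxt = (r + dr, c + dc)
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                    came_from[nxt] = cell
                    frontier.append(nxt)
        if goal not in came_from:
            return None  # goal unreachable from start
        # Walk back from the goal to recover the action sequence.
        actions, cell = [], goal
        while came_from[cell] is not None:
            prev = came_from[cell]
            actions.append((cell[0] - prev[0], cell[1] - prev[1]))
            cell = prev
        return actions[::-1]

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(plan(grid, start=(0, 0), goal=(2, 0)))

The same pattern scales up: task planners search over symbolic states and operators rather than grid cells, and multi-robot task allocation searches over assignments of tasks to robots, but in each case the planner maps a perceived state onto actions that move the system towards its goals.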