We will explain the environment and sensor architecture that has been built for the European project URUS (Ubiquitous
Networking Robotics in Urban Sites), whose objective is to
develop an adaptable network robot architecture for cooperation between
network robots and human beings and/or the environment in urban areas.
The project goal is to deploy a team of robots in an urban area to give
a set of services to a user community. This presentation addresses the
sensor architecture devised for URUS and the type of robots and sensors
used, including environment sensors and sensors onboard the robots.
Furthermore, we will explain how sensor fusion takes place to
achieve urban outdoor execution of robotic services (people and robot
tracking, and gesture detection). Finally, some results of the project
related to the sensor network will be highlighted.
Alberto Sanfeliu received the BSEE and PhD degrees from the Universitat Politècnica de Catalunya (UPC), Spain, in 1978 and 1982 respectively.
He joined the faculty of UPC in 1981 and is full professor of Computational Sciences and Artificial Intelligence.
He is director of the Instituto de Robotica i Informatica Industrial - IRI (UPC-CSIC), director of the Artificial Vision and Intelligent System Group (VIS), past director of the UPC department “Enginyeria de Sistemes, Automatica i Informatica Industrial”, and past president of AERFAI; he carries out his research at IRI (UPC-CSIC).
This talk will focus on how graph structures can be analysed using
diffusion processes and random walks. It will commence by explaining the
relationship between the heat equation on a graph, the spectrum of the
Laplacian matrix (the degree matrix minus the weighted adjacency matrix)
and the steady-state random walk. The talk will then focus in some depth
on how the heat kernel, i.e. the solution of the heat equation, can be
used to characterise graph structure in a compact way. One of the
important steps here is to show that the zeta function is the moment
generating function of the heat kernel trace, and that the zeta function
is determined by the distribution of paths and the number of spanning
trees in a graph. We will then explore a number of applications of these
ideas in image analysis and computer vision. This will commence by
showing how the heat kernel can be used for the anisotropic smoothing of
complex non-Euclidean image data, including tensor MRI. We will then
show how a similar diffusion process based on the Fokker-Planck equation
can be used for consistent image labelling. Thirdly, we will show how
permutation invariant characteristics extracted from the heat-kernel can
be used for learning shape classes. If time permits, the talk will
conclude by showing how quantum walks on graphs can overcome some of the
problems which limit the utility of classical random walks.
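To make the spectral construction sketched above concrete, here is a minimal illustration (assuming NumPy; the four-node path graph is a made-up toy example, not data from the talk) of computing the heat kernel from the eigendecomposition of the Laplacian and verifying that its trace, the quantity whose moments yield the zeta function, is determined by the Laplacian spectrum alone:

```python
import numpy as np

# Toy weighted graph: adjacency matrix of a 4-node path (hypothetical example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # graph Laplacian (degree minus adjacency)

# Spectral decomposition L = Phi diag(lam) Phi^T (L is symmetric PSD).
lam, Phi = np.linalg.eigh(L)

def heat_kernel(t):
    """Solution of the heat equation dH/dt = -L H, i.e. H(t) = exp(-t L)."""
    return Phi @ np.diag(np.exp(-t * lam)) @ Phi.T

t = 0.5
H = heat_kernel(t)

# The heat-kernel trace depends only on the Laplacian eigenvalues:
# tr H(t) = sum_i exp(-t * lambda_i).
print(np.allclose(np.trace(H), np.exp(-t * lam).sum()))  # True
```

At t = 0 the kernel reduces to the identity, and as t grows it approaches the steady-state distribution of the random walk, which is the link between the heat equation and the random-walk picture discussed in the talk.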
Edwin R. Hancock holds a BSc degree in physics (1977), a PhD
degree in high-energy physics (1981) and a D.Sc. degree (2008) from
the University of Durham. From 1981-1991 he worked as a researcher in
the fields of high-energy nuclear physics and pattern recognition at the
Rutherford-Appleton Laboratory (now the Central Research Laboratory of
the Research Councils). During this period, he also held adjunct
teaching posts at the University of Surrey and the Open University. In
1991, he moved to the University of York as a lecturer in the Department
of Computer Science, where he has held a chair in Computer Vision since
1998. He leads a group of some 25 faculty, research staff, and PhD
students working in the areas of computer vision and pattern
recognition. His main research interests are in the use of optimization
and probabilistic methods for high and intermediate level vision. He is
also interested in the methodology of structural and statistical
pattern recognition. He is currently working on graph matching,
shape-from-X, image databases, and statistical learning theory. His work
has found applications in areas such as radar terrain analysis, seismic
section analysis, remote sensing, and medical imaging. He has published
about 135 journal papers and 500 refereed conference publications. He
was awarded the Pattern Recognition Society medal in 1991 and an
outstanding paper award in 1997 by the journal Pattern Recognition. He
has also received best paper prizes at CAIP 2001, ACCV 2002, ICPR 2006
and BMVC 2007. In 2009, he was awarded a Royal Society Wolfson Research
Merit Award. In 1998, he became a fellow of the International Association for Pattern
Recognition. He is also a fellow of the Institute of Physics, the
Institute of Engineering and Technology, and the British Computer
Society. He has been a member of the editorial boards of the journals
IEEE Transactions on Pattern Analysis and Machine Intelligence,
Pattern Recognition, Computer Vision and Image Understanding, and Image
and Vision Computing. In 2006, he was appointed
as the founding editor-in-chief of the IET Computer Vision Journal. He
has been Conference Chair for BMVC 1994, Track Chair for ICPR 2004, and
Area Chair at ECCV 2006 and CVPR 2008, and
in 1997 he established the EMMCVPR workshop series.
In this talk, I will describe recent results on using humanoid robotics to reverse engineer the brain, namely by modelling human behaviour in interdisciplinary research involving computational neuroscience, developmental psychology and engineering.
One aspect relates to neurophysiology and the discovery of mirror neurons, which suggests that both action understanding and action execution are performed by the same (motor) areas of the brain, possibly the root of non-verbal communication, facilitating social learning among conspecifics. I will present a computational model that is inspired by these findings and outperforms classical approaches to gesture recognition from video.
A second aspect corresponds to the developmental pathway that allows human infants (or robots) to successively acquire new skills based on previously learned capabilities while managing the complexity of the body, the senses and the environment. The talk will focus on aspects of sensorimotor coordination, learning about affordances, and social interaction.
During the talk, I will provide examples of the use of humanoid robots (our first platform, Baltazar, and the iCub) as testbeds to study human cognition, learning and sensorimotor coordination, while offering engineers new approaches to building artificial systems.
José Santos-Victor received the PhD degree in Electrical and Computer Engineering in 1995 from Instituto Superior Técnico (IST - Lisbon, Portugal), in the area of Computer Vision and Robotics. He is an Associate Professor with "Aggregation" at the Department of Electrical and Computer Engineering of IST and a researcher at the Institute of Systems and Robotics (ISR), where he heads the Computer and Robot Vision Lab (VisLab).
He is the scientific coordinator of the participation of IST/ISR in various European and national research projects in the areas of Computer Vision and Robotics. His research interests are in Computer and Robot Vision, particularly the relationship between visual perception and the control of action, biologically inspired vision and robotics, cognitive vision, and visually controlled (land, air and underwater) mobile robots.
Prof. Santos-Victor was an Associate Editor of the IEEE Transactions on Robotics and the Journal of Robotics and Autonomous Systems.