


RoboCup 2004: Robot Soccer World Cup VIII

Daniele Nardi ; Martin Riedmiller ; Claude Sammut ; José Santos-Victor (eds.)

Conference: 8th Robot Soccer World Cup (RoboCup), Lisbon, Portugal, June 27, 2004 - July 5, 2004

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: none detected
Publication year: 2005
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-25046-3

Electronic ISBN

978-3-540-32256-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2005

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

An Egocentric Qualitative Spatial Knowledge Representation Based on Ordering Information for Physical Robot Navigation

Thomas Wagner; Kai Hübner

Navigation is one of the most fundamental tasks to be accomplished by many types of mobile and cognitive systems. Most approaches in this area are based on building, or using existing, allocentric static maps to guide the navigation process. In this paper we propose a simple egocentric, qualitative approach to navigation based on ordering information. An advantage of our approach is that it produces the qualitative spatial information required to describe and recognize complex, abstract (i.e., translation-invariant) behavior. In contrast to other landmark-based techniques for mobile robot tasks, we also propose to reason about the validity of landmarks despite insufficient and unreliable sensory data. Here we present a formal approach that avoids this problem by using a simple internal spatial representation based on ordered landmarks.

- Full Papers | Pp. 134-149
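As a rough illustration of the ordering idea (an assumption-laden sketch, not the paper's formalism): instead of metric landmark positions, one can keep only the cyclic order of egocentric landmark bearings, which is unchanged when the robot turns in place, and compare two views by testing whether one ordering is a rotation of the other.

```python
from math import atan2

def cyclic_order(landmarks):
    """Return landmark names sorted by egocentric bearing.

    `landmarks` maps a name to an (x, y) offset in the robot frame;
    only the resulting cyclic order is kept, not the metric data.
    """
    return tuple(sorted(landmarks, key=lambda n: atan2(landmarks[n][1], landmarks[n][0])))

def same_view(order_a, order_b):
    """Two cyclic orders match if one is a rotation of the other."""
    if len(order_a) != len(order_b):
        return False
    doubled = order_a + order_a
    return any(doubled[i:i + len(order_b)] == order_b for i in range(len(order_a)))
```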

Sensor-Actuator-Comparison as a Basis for Collision Detection for a Quadruped Robot

Jan Hoffmann; Daniel Göhring

Collision detection in a quadruped robot, based on comparing sensor readings (actual motion) to actuator commands (intended motion), is described. We show ways of detecting such incidents using just the sensor readings from the servo motors of the robot's legs; dedicated range sensors or collision detectors are not used. It was found that comparing motor commands with actual movement (as sensed by the servos' position sensors) allowed the robot to reliably detect collisions and obstructions. Minor modifications to make the system more robust enabled us to use it in the RoboCup domain, letting it cope with the arbitrary movements and accelerations apparent in this highly dynamic environment. A sample behavior that utilizes the collision information is outlined. Further emphasis was put on keeping the process of calibration for different robot gaits simple and manageable.

- Full Papers | Pp. 150-159
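The sensor-actuator comparison can be sketched in a few lines; the per-joint angle interface and the 0.15 rad threshold below are illustrative assumptions, not values from the paper.

```python
def detect_collision(commanded, measured, threshold=0.15):
    """Flag a collision when commanded and sensed joint angles diverge.

    `commanded` and `measured` are per-joint angle sequences in radians;
    `threshold` is a hypothetical calibration value that would be tuned
    per gait, as the paper emphasizes.
    """
    error = max(abs(c - m) for c, m in zip(commanded, measured))
    return error > threshold
```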

Learning to Drive and Simulate Autonomous Mobile Robots

Alexander Gloye; Cüneyt Göktekin; Anna Egorova; Oliver Tenchio; Raúl Rojas

We show how to apply learning methods to two robotics problems, namely the optimization of the on-board controller of an omnidirectional robot, and the derivation of a model of the physical driving behavior for use in a simulator.

We show that optimal control parameters for several PID controllers can be learned adaptively by driving an omnidirectional robot on a field while evaluating its behavior, using a reinforcement learning algorithm. After training, the robots follow the desired path faster and more elegantly than with manually adjusted parameters.

Secondly, we show how to learn the physical behavior of a robot. Our system learns to predict the future positions of the robots from their reactions to the commands sent. We use the learned behavior in the simulation of the robots instead of readjusting the physical simulation model whenever the mechanics of the robot change. The updated simulation then reflects the modified physics of the robot.

- Full Papers | Pp. 160-171
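For reference, a minimal discrete PID controller of the kind whose gains such a learning loop would tune; the fixed control period `dt` and the gain values are assumptions for illustration, not the paper's implementation.

```python
class PID:
    """Discrete PID controller; the gains kp, ki, kd are the quantities a
    reinforcement-learning loop would adjust by evaluating driving runs."""

    def __init__(self, kp, ki, kd, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        """Return the control output for the current tracking error."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```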

RoboCupJunior — Four Years Later

Elizabeth Sklar; Amy Eguchi

In this paper, we report on the status of the RoboCupJunior league, four years after it was founded. Since its inception in 2000, we have been surveying and/or interviewing students and mentors who participate in the international event. Here we present a high-level overview of this data. We discuss demographics of participants, characteristics of preparation and educational value. We highlight trends and identify needs for the future, in terms of event organization, educational assessment and community-building.

- Full Papers | Pp. 172-183

Evolution of Computer Vision Subsystems in Robot Navigation and Image Classification Tasks

Sascha Lange; Martin Riedmiller

Real-time decision making based on visual sensory information is a demanding task for mobile robots. Learning on high-dimensional, highly redundant image data poses a real problem for most learning algorithms, especially those based on neural networks. In this paper we investigate the use of evolutionary techniques, in combination with supervised learning of feedforward nets, to automatically construct and improve suitable, task-dependent preprocessing layers that help reduce the complexity of the original learning problem. Given a number of basic, parameterized low-level computer vision algorithms, the proposed evolutionary algorithm automatically selects, and appropriately sets the parameters of, exactly those operators best suited to the supervised learning problem at hand.

- Full Papers | Pp. 184-195
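A toy (1+λ)-style evolution strategy over a real-valued parameter vector gives the flavor of such an automatic parameter search; the fitness function, mutation scale, and all constants here are invented for illustration and say nothing about the operator set the paper evolves.

```python
import random

def evolve(fitness, initial, sigma=0.5, pop_size=20, generations=40, seed=1):
    """Toy (1+lambda) evolution strategy: mutate the best parameter
    vector and keep the fittest offspring each generation."""
    rng = random.Random(seed)
    best = list(initial)
    for _ in range(generations):
        offspring = [[g + rng.gauss(0.0, sigma) for g in best]
                     for _ in range(pop_size)]
        candidate = max(offspring, key=fitness)
        if fitness(candidate) >= fitness(best):  # fitness never decreases
            best = candidate
    return best
```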

Towards Illumination Invariance in the Legged League

Mohan Sridharan; Peter Stone

To date, RoboCup games have all been played under constant, bright lighting conditions. However, in order to meet the overall goal of RoboCup, robots will need to handle changing, natural light seamlessly. One method for doing so is to identify colors regardless of illumination: color constancy. Color constancy is a relatively recent, but increasingly important, topic in vision research. Most approaches so far have focused on stationary cameras. In this paper we propose a methodology for color constancy on mobile robots. We describe a technique that we have used to solve a subset of the problem, in real time, based on color space distributions and the KL-divergence measure. We fully implement our technique and present detailed empirical results in a robot soccer scenario.

- Full Papers | Pp. 196-208
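A hedged sketch of KL-divergence matching between color distributions: the observed histogram is compared against stored reference distributions, one per known illumination condition, and the nearest one wins. The histogram layout and reference classes below are invented for illustration.

```python
from math import log

def kl_divergence(p, q, eps=1e-9):
    """D_KL(p || q) for two normalized color histograms; eps guards
    against empty bins."""
    return sum(pi * log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def closest_illumination(observed, references):
    """Pick the reference illumination whose stored color distribution
    is nearest, in KL divergence, to the observed histogram."""
    return min(references, key=lambda name: kl_divergence(observed, references[name]))
```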

Using Layered Color Precision for a Self-Calibrating Vision System

Matthias Jüngel

This paper presents a vision system for robotic soccer which was tested on Sony's four-legged robot Aibo. The input to the vision system is the camera image together with the sensor readings of the robot's head joints; the output is the position of each recognized object relative to the robot. Object recognition is based on the colors of the objects and uses a color look-up table. The vision system creates the color look-up table on its own during a soccer game; thus no pre-run calibration is needed, and the robot can cope with inhomogeneous or changing light on the soccer field. It is shown how different layers of color representation can be used to refine the results of color classification. The self-calibrated color look-up table is not as accurate as a hand-made one; however, together with the introduced object recognition, which is very robust with respect to the quality of the color table, the self-calibrating vision system works very well. This robustness is achieved by detecting edges on scan lines.

- Full Papers | Pp. 209-220
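One loose interpretation of the layered-precision idea (a sketch under stated assumptions, not the paper's data structure) is a fine-grained color look-up table backed by a coarser fallback layer: a pixel unseen at fine precision can still be classified by its coarse cell.

```python
def quantize(pixel, step):
    """Quantize a color tuple into a cell index at the given step size."""
    return tuple(c // step for c in pixel)

class LayeredColorTable:
    """Two-layer color look-up: a fine table built from training samples,
    with a coarse fallback layer for pixels the fine table has not seen.
    The step sizes are illustrative assumptions."""

    def __init__(self, fine_step=4, coarse_step=32):
        self.fine_step, self.coarse_step = fine_step, coarse_step
        self.fine, self.coarse = {}, {}

    def learn(self, pixel, color_class):
        self.fine[quantize(pixel, self.fine_step)] = color_class
        self.coarse[quantize(pixel, self.coarse_step)] = color_class

    def classify(self, pixel):
        key = quantize(pixel, self.fine_step)
        if key in self.fine:
            return self.fine[key]
        return self.coarse.get(quantize(pixel, self.coarse_step))
```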

Getting the Most from Your Color Camera in a Color-Coded World

Erio Grillo; Matteo Matteucci; Domenico G. Sorrenti

In this paper we present a proposal for setting camera parameters which, we claim, gives results better matched to applications in color-coded environments than the camera's internal algorithms. Moreover, it does not require online human intervention, i.e., it is automated, and it is faster than a human operator. This work applies to situations where the camera is used to extract information from a color-coded world. The experimental activity presented was performed in the framework of the RoboCup mid-size rules, under the hypothesis of temporally constant light conditions; this work is the necessary first step toward dealing with slow changes of light conditions over time.

- Full Papers | Pp. 221-235

Combining Exploration and Ad-Hoc Networking in RoboCup Rescue

Martijn N. Rooker; Andreas Birk

In challenging environments where the risk of losing a robot is high, robot teams are a natural choice. In many applications, for example rescue missions, the robots face two crucial tasks. First, they have to explore the environment efficiently and exhaustively. Second, they must maintain a network connection to the base station to transmit data, ensuring timely arrival and secure storage of vital information. When using wireless media, it is necessary to use robots from the team as relay stations for this purpose. This paper deals with the problem of combining efficient exploration of the environment with suitable robot motions that keep data transmission stable.

- Full Papers | Pp. 236-246
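The relay constraint can be illustrated by a simple connectivity check over a chain of robot positions back to the base station; the disc-shaped radio range model and the geometry are simplifying assumptions, not the paper's model.

```python
from math import hypot

def chain_connected(base, robots, radio_range):
    """Check that the robots form a relay chain back to the base station:
    each position must lie within radio range of the previous one.

    `base` and each entry of `robots` are (x, y) coordinates; `robots`
    is ordered from nearest-to-base outward.
    """
    positions = [base] + robots
    return all(hypot(bx - ax, by - ay) <= radio_range
               for (ax, ay), (bx, by) in zip(positions, positions[1:]))
```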

Robust Multi-robot Object Localization Using Fuzzy Logic

Juan Pedro Cánovas; Kevin LeBlanc; Alessandro Saffiotti

Cooperative localization of objects is an important challenge in multi-robot systems. We propose a new approach to this problem in which we see each robot as an expert that shares unreliable information about object locations. The information provided by different robots is then combined using fuzzy logic techniques, in order to reach a consensus between the robots. This contrasts with most current probabilistic techniques, which average information from different robots and can thus incur well-known problems when the information is unreliable. In addition, our approach does not assume that the robots have accurate self-localization. Instead, uncertainty in the pose of the sensing robot is propagated to the object position estimates. We present experimental results obtained on a team of Sony AIBO robots sharing information about the location of the ball in the RoboCup domain.

- Full Papers | Pp. 247-261
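A minimal sketch of fuzzy-intersection fusion over a discretized set of candidate object positions (the 1-D grid representation and the pointwise-minimum combination rule are assumptions; the paper's fuzzy machinery is richer).

```python
def fuse_estimates(grids):
    """Combine per-robot fuzzy position estimates by pointwise minimum
    (fuzzy intersection) over a shared grid of candidate cells, then
    return the index of the most plausible cell and the fused grid.
    """
    fused = [min(cell) for cell in zip(*grids)]
    best = max(range(len(fused)), key=fused.__getitem__)
    return best, fused
```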