Publications catalogue - books



Symbol Grounding and Beyond: Third International Workshop on the Emergence and Evolution of Linguistic Communications, EELC 2006, Rome, Italy, September 30-October 1, 2006, Proceedings

Paul Vogt; Yuuya Sugita; Elio Tuci; Chrystopher Nehaniv (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Sociolinguistics; Artificial Intelligence (incl. Robotics); Simulation and Modeling; Computation by Abstract Devices; Language Translation and Linguistics; Computer Appl. in Social and Behavioral Sciences

Availability
Detected institution    Year of publication    Browse          Download    Request
Not detected            2006                   SpringerLink    -           -

Information

Resource type:

books

Print ISBN

978-3-540-45769-5

Electronic ISBN

978-3-540-45771-8

Publisher

Springer Nature

Country of publication

United Kingdom

Date of publication

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

A Hybrid Model for Learning Word-Meaning Mappings

Federico Divina; Paul Vogt

In this paper we introduce a model for the simulation of language evolution, which is incorporated in the New Ties project. The New Ties project aims at evolving a cultural society by integrating evolutionary, individual and social learning in large scale multi-agent simulations. The model presented here introduces a novel implementation of language games, which allows agents to communicate in a more natural way than with most other existing implementations of language games. In particular, we propose a hybrid mechanism that combines cross-situational learning techniques with more informed feedback mechanisms. In our study we focus our attention on dealing with referential indeterminacy after joint attention has been established and on whether the current model can deal with larger populations than previous studies involving cross-situational learning. Simulations show that the proposed model can indeed lead to coherent languages in a quasi realistic world environment with larger populations.

Pp. 1-15
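
The abstract above combines cross-situational learning with more informed feedback. As a rough illustration of that combination (a minimal sketch of the general technique, not the New Ties implementation; all scores and constants below are assumptions), an agent can accumulate word-meaning co-occurrence evidence across situations and give an extra boost whenever feedback confirms the intended referent:

```python
import random
from collections import defaultdict

class Learner:
    """Toy cross-situational learner with optional corrective feedback.

    Illustrative only: the scoring scheme and update constants are
    assumptions, not the mechanism used in the New Ties project.
    """

    def __init__(self):
        # score[word][meaning] accumulates evidence across situations
        self.score = defaultdict(lambda: defaultdict(float))

    def observe(self, word, context_meanings, confirmed_meaning=None):
        # Cross-situational update: every meaning present in the context
        # gains a little credit for co-occurring with the word.
        for m in context_meanings:
            self.score[word][m] += 1.0
        # Informed feedback (e.g. after joint attention): the confirmed
        # meaning gets a much larger boost.
        if confirmed_meaning is not None:
            self.score[word][confirmed_meaning] += 5.0

    def interpret(self, word):
        meanings = self.score[word]
        return max(meanings, key=meanings.get) if meanings else None


if __name__ == "__main__":
    learner = Learner()
    world = ["apple", "stone", "tree", "water"]
    for _ in range(50):
        target = "apple"
        context = random.sample([m for m in world if m != target], 2) + [target]
        # Feedback is only available some of the time.
        feedback = target if random.random() < 0.3 else None
        learner.observe("bolo", context, confirmed_meaning=feedback)
    print(learner.interpret("bolo"))   # converges to "apple"
```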

Cooperation, Conceptual Spaces and the Evolution of Semantics

Peter Gärdenfors; Massimo Warglien

We start by providing an evolutionary scenario for the emergence of semantics. It is argued that the evolution of anticipatory cognition and theory of mind in the hominids opened the way for cooperation about future goals. This cooperation requires symbolic communication. The meanings of the symbols are established via a “meeting of minds.” The concepts in the minds of communicating individuals are modelled as convex regions in conceptual spaces. We then outline a mathematical framework based on fixpoints in continuous mappings between conceptual spaces that can be used to model such a semantics.

Pp. 16-30
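
A hedged illustration of the fixpoint idea mentioned in the abstract, in our own notation rather than the authors': if communicating individuals interpret each other through continuous maps between compact convex conceptual spaces, a "meeting of minds" can be modelled as a fixpoint of the composed map, whose existence follows from Brouwer's fixed point theorem.

```latex
% C_s and C_h: speaker's and hearer's conceptual spaces, assumed compact and convex.
% f : C_s -> C_h and g : C_h -> C_s: continuous interpretation maps.
% A shared meaning is a point that is stable under the round trip through
% the other mind:
\[
  x^{\ast} = (g \circ f)(x^{\ast}).
\]
% Since g \circ f is a continuous self-map of the compact convex set C_s,
% Brouwer's fixed point theorem guarantees that such an x^{\ast} exists.
```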

Cross-Situational Learning: A Mathematical Approach

Kenny Smith; Andrew D. M. Smith; Richard A. Blythe; Paul Vogt

We present a mathematical model of cross-situational learning, in which we quantify the learnability of words and vocabularies. We find that high levels of uncertainty are not an impediment to learning single words or whole vocabulary systems, as long as the level of uncertainty is somewhat lower than the total number of meanings in the system. We further note that even large vocabularies are learnable through cross-situational learning.

Pp. 31-44
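
A back-of-the-envelope calculation in the spirit of the abstract (our simplified setup, not the paper's exact model): assume the target meaning is always present in the learner's context together with C confounding meanings drawn uniformly from the remaining M - 1 meanings.

```latex
% Probability that one particular confounder co-occurs with the word in a
% single exposure: C/(M-1).  Probability that it survives k independent
% exposures (i.e. remains consistent with the word): (C/(M-1))^k.
% Expected number of confounders still consistent after k exposures:
\[
  \mathbb{E}\bigl[\text{surviving confounders}\bigr]
  = (M-1)\left(\frac{C}{M-1}\right)^{k},
\]
% which decays exponentially in k whenever C < M-1, i.e. whenever the
% contextual uncertainty stays below the total number of meanings.
```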

Dialog Strategy Acquisition and Its Evaluation for Efficient Learning of Word Meanings by Agents

Ryo Taguchi; Kouichi Katsurada; Tsuneo Nitta

In word meaning acquisition through interactions among humans and agents, the efficiency of the learning depends largely on the dialog strategies the agents have. This paper describes the automatic acquisition of dialog strategies through interaction between two agents. In the experiments, two agents infer each other’s comprehension level from each other’s facial expressions and utterances in order to acquire efficient strategies. Q-learning is applied as the strategy acquisition mechanism. Firstly, experiments are carried out on the interaction between a mother agent, who knows all the word meanings, and a child agent with no initial word meanings. The experimental results showed that the mother agent acquires a teaching strategy, while the child agent acquires an asking strategy. Next, experiments on interaction between a human and an agent are conducted to evaluate the acquired strategies. The results showed the effectiveness of both the teaching and the asking strategies.

Pp. 45-56
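
The abstract applies Q-learning to the choice of dialogue acts. A minimal, generic sketch of that kind of update is shown below; the dialogue acts, states and reward are hypothetical placeholders, not the paper's experimental setup.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1        # learning rate, discount, exploration

# Hypothetical dialogue acts an agent could choose between.
ACTIONS = ["teach_word", "ask_word", "confirm", "stay_silent"]

Q = defaultdict(float)                        # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over dialogue acts."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```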

Evolving Distributed Representations for Language with Self-Organizing Maps

Simon D. Levy; Simon Kirby

We present a neural-competitive learning model of language evolution in which several symbol sequences compete to signify a given propositional meaning. Both symbol sequences and propositional meanings are represented by high-dimensional vectors of real numbers. A neural network learns to map between the distributed representations of the symbol sequences and the distributed representations of the propositions. Unlike previous neural network models of language evolution, our model uses a Kohonen Self-Organizing Map with unsupervised learning, thereby avoiding the computational slowdown and biological implausibility of back-propagation networks and the lack of scalability associated with Hebbian-learning networks. After several evolutionary generations, the network develops systematically regular mappings between meanings and sequences, of the sort traditionally associated with symbolic grammars. Because of the potential of neural-like representations for addressing the symbol-grounding problem, this sort of model holds a good deal of promise as a new explanatory mechanism for both language evolution and acquisition.

Pp. 57-71
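
For readers unfamiliar with the Kohonen map mentioned in the abstract, the generic self-organizing map update is sketched below with NumPy; the grid size, vector dimensionality and decay schedules are assumptions, and this is not the paper's specific architecture for meaning and sequence vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 10                                    # 10 x 10 map of units (assumed)
DIM = 50                                     # dimensionality of the input vectors (assumed)
weights = rng.random((GRID, GRID, DIM))

def train_step(x, t, n_steps, lr0=0.5, sigma0=3.0):
    """One SOM step: find the best-matching unit, then pull it and its grid
    neighbours towards the input vector x (unsupervised, no back-propagation)."""
    lr = lr0 * (1.0 - t / n_steps)                   # decaying learning rate
    sigma = sigma0 * (1.0 - t / n_steps) + 1e-3      # decaying neighbourhood width
    dists = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)
    ii, jj = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights[:] = weights + lr * h[..., None] * (x - weights)

for t in range(1000):
    train_step(rng.random(DIM), t, 1000)
```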

How Do Children Develop Syntactic Representations from What They Hear?

Elena Lieven

Children learn language from what they hear. In dispute is what mechanisms they bring to this task. Clearly some of these mechanisms have evolved to support the human speech capacity but this leaves a wide field of possibilities open. The question I will address in my paper is whether we need to postulate an innate module that has evolved to make the learning of language structure possible. I will suggest that more general human social and cognitive capacities may be all that is needed to support the learning of syntactic structure.

Pp. 72-75

How Grammar Emerges to Dampen Combinatorial Search in Parsing

Luc Steels; Pieter Wellens

According to the functional approach to language evolution (inspired by cognitive linguistics and construction grammar), grammar arises to deal with issues in communication among autonomous agents, particularly maximisation of communicative success and expressive power and minimisation of cognitive effort. Experiments in the emergence of grammar should hence start from a simulation of communicative exchanges between embodied agents, and then show how a particular issue that arises can be solved or partially solved by introducing more grammar. This paper shows a case study of this approach, focusing on the issue of search during parsing. Multiple hypotheses arise in parsing when the same syntactic pattern can be used for multiple purposes or when one syntactic pattern partly overlaps with another one. It is well known that syntactic ambiguity rapidly leads to combinatorial explosions and hence an increase in memory use and processing power, possibly to a point where the sentence can no longer be handled. Additional grammar, such as syntactic or semantic subcategorisation or word order and agreement constraints, can help to dampen search because it provides the hearer with information about which hypotheses are the most likely. The paper shows an operational experiment where avoiding search is used as the driver for the introduction and negotiation of syntax. The experiment is also a demonstration of how Fluid Construction Grammar is well suited for experiments in language evolution.

Pp. 76-88
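
To make the combinatorial argument concrete, here is a toy illustration, unrelated to Fluid Construction Grammar's actual machinery: without grammatical constraints every word can fill every role, so parse hypotheses multiply factorially, while even a single category constraint prunes most of them.

```python
from itertools import permutations

# Invented toy utterance and roles, for illustration only.
words = ["ball", "girl", "sees"]
roles = ["agent", "action", "patient"]

# Without grammar: any word can fill any role -> n! hypotheses.
hypotheses = list(permutations(words))
print(len(hypotheses))                       # 6 hypotheses for 3 words

# One constraint (only verbs may fill the action role) prunes the search.
is_verb = {"ball": False, "girl": False, "sees": True}
pruned = [h for h in hypotheses if is_verb[h[roles.index("action")]]]
print(len(pruned))                           # 2 hypotheses survive
```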

Implementation of Biases Observed in Children’s Language Development into Agents

Ryo Taguchi; Masashi Kimura; Shuji Shinohara; Kouichi Katsurada; Tsuneo Nitta

This paper describes efficient word meaning acquisition for infant agents (IAs) based on learning biases that are observed in children’s language development. An IA acquires word meanings through learning the relations among visual features of objects and acoustic features of human speech. In this task, the IA has to find out which visual features are indicated by the speech. Previous works introduced stochastic approaches to do this; however, such approaches need many examples to achieve high accuracy. In this paper, firstly, we propose a word meaning acquisition method for the IA based on an Online-EM algorithm without learning biases. Then, we implement two types of biases into it to accelerate the word meaning acquisition. Experimental results show that the proposed method with biases can efficiently acquire word meanings.

Pp. 89-99
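
A rough sketch in the spirit of (but not identical to) the Online-EM approach described in the abstract; the bias handling and all constants below are assumptions made for illustration.

```python
from collections import defaultdict

class OnlineWordLearner:
    """Toy online learner of associations between words and visual features.

    Loosely EM-flavoured: the E-step spreads a word's credit over the
    features visible in the scene in proportion to current beliefs (and a
    learning bias), and the online M-step interpolates the old beliefs
    towards those responsibilities.
    """

    def __init__(self, rate=0.1):
        self.rate = rate
        self.p = defaultdict(dict)           # p[word][feature] -> belief

    def update(self, word, scene_features, bias=None):
        bias = bias or {}                    # e.g. {"shape:round": 2.0} mimics a shape bias (assumed)
        # E-step: responsibilities of the visible features for this word.
        w = {f: self.p[word].get(f, 1e-3) * bias.get(f, 1.0) for f in scene_features}
        total = sum(w.values())
        resp = {f: v / total for f, v in w.items()}
        # Online M-step: move beliefs towards the responsibilities.
        for f in set(self.p[word]) | set(scene_features):
            old = self.p[word].get(f, 0.0)
            self.p[word][f] = (1 - self.rate) * old + self.rate * resp.get(f, 0.0)

    def best_feature(self, word):
        return max(self.p[word], key=self.p[word].get) if self.p[word] else None
```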

Lexicon Convergence in a Population With and Without Metacommunication

Zoran Macura; Jonathan Ginzburg

How does a shared lexicon arise in a population of agents with differing lexicons, and how can this shared lexicon be maintained over multiple generations? In order to get some insight into these questions we present an ALife model in which the lexicon dynamics of populations that possess and lack metacommunicative interaction (MCI) capabilities are compared. We suggest that MCI serves as a key component in the maintenance of a linguistic interaction system. We ran a series of experiments on mono-generational and multi-generational populations whose initial state involved agents possessing distinct lexicons. These experiments reveal some clear differences in the lexicon dynamics of populations that acquire words solely by introspection contrasted with populations that learn using MCI or using a mixed strategy of introspection and MCI. Over a single generation the performance of the populations with and without MCI is comparable, in that the lexicon converges and is shared by the whole population. In multi-generational populations the lexicon of an introspective population diverges at a faster rate, eventually ending up with a single word associated with every meaning, whereas MCI-capable populations maintain a lexicon in which every meaning is associated with a unique word.

Pp. 100-112
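
A highly simplified sketch of the kind of population simulation described above (our own toy setup, not the authors' ALife model): agents start with random word-meaning lexicons and align them either introspectively from overheard usage or, as a stand-in for metacommunicative interaction, by explicitly asking what a word means.

```python
import random

MEANINGS = list(range(5))
WORDS = [f"w{i}" for i in range(10)]

def make_agent():
    # Each agent starts with its own random word for every meaning.
    return {m: random.choice(WORDS) for m in MEANINGS}

def interact(speaker, hearer, use_mci, adopt_prob=0.5):
    m = random.choice(MEANINGS)
    if use_mci:
        # MCI: the hearer can ask what the word means and adopt it directly.
        hearer[m] = speaker[m]
    elif random.random() < adopt_prob:
        # Introspection: the hearer only sometimes guesses the intended
        # meaning from context, and only then aligns its lexicon.
        hearer[m] = speaker[m]

def convergence(pop):
    # Fraction of meanings on which the whole population agrees.
    return sum(len({a[m] for a in pop}) == 1 for m in MEANINGS) / len(MEANINGS)

for use_mci in (False, True):
    pop = [make_agent() for _ in range(10)]
    for _ in range(2000):
        s, h = random.sample(pop, 2)
        interact(s, h, use_mci)
    print("MCI" if use_mci else "introspection", convergence(pop))
```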

Operational Aspects of the Evolved Signalling Behaviour in a Group of Cooperating and Communicating Robots

Elio Tuci; Christos Ampatzis; Federico Vicentini; Marco Dorigo

This paper complements the results and analysis presented in current studies on the evolution of signalling and cooperation. It describes operational aspects of the evolved behaviour of a group of robots, equipped with different sets of sensors, that navigates towards a target in a walled arena. In particular, analysis of the sound signalling behaviour shows that the robots use sound to remain close to each other while keeping a distance that is safe with respect to the risk of collisions. Spatial discrimination of the sound sources is achieved by exploiting a rotational movement which amplifies intensity differences between the two sound sensors.

Pp. 113-127
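
The localisation strategy described above (rotate so as to balance the intensity at two sound sensors) can be illustrated with a tiny control sketch; the sensor geometry, intensity model and gain below are invented for illustration and are not the evolved controller.

```python
import math

def intensity(rx, ry, heading, sensor_angle, sx, sy):
    """Sound intensity at a sensor mounted at sensor_angle (radians) relative
    to the robot's heading, using a simple inverse-square model (assumed)."""
    px = rx + 0.05 * math.cos(heading + sensor_angle)
    py = ry + 0.05 * math.sin(heading + sensor_angle)
    return 1.0 / ((sx - px) ** 2 + (sy - py) ** 2 + 1e-6)

def turn_rate(rx, ry, heading, sx, sy, gain=2.0):
    """Rotate towards the louder side: rotation amplifies the left/right
    intensity difference until the source is straight ahead."""
    left = intensity(rx, ry, heading, +math.pi / 2, sx, sy)
    right = intensity(rx, ry, heading, -math.pi / 2, sx, sy)
    return gain * (left - right)

# Robot at the origin facing +x, sound source at (1, 1): turn left (positive rate).
print(turn_rate(0.0, 0.0, 0.0, 1.0, 1.0) > 0)   # True
```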