Publications catalogue – books
Symbol Grounding and Beyond: Third International Workshop on the Emergence and Evolution of Linguistic Communications, EELC 2006, Rome, Italy, September 30-October 1, 2006, Proceedings
Paul Vogt; Yuuya Sugita; Elio Tuci; Chrystopher Nehaniv (eds.)
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Sociolinguistics; Artificial Intelligence (incl. Robotics); Simulation and Modeling; Computation by Abstract Devices; Language Translation and Linguistics; Computer Appl. in Social and Behavioral Sciences
Availability
| Detected institution | Publication year | Browse | Download | Request |
|---|---|---|---|---|
| Not detected | 2006 | SpringerLink | | |
Information
Resource type:
books
Printed ISBN
978-3-540-45769-5
Electronic ISBN
978-3-540-45771-8
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2006
Publication rights information
© Springer-Verlag Berlin Heidelberg 2006
Subject coverage
Table of contents
doi: 10.1007/11880172_11
Propositional Logic Syntax Acquisition
Josefina Sierra-Santibáñez
This paper addresses the problem of the acquisition of the syntax of propositional logic. An approach based on general-purpose cognitive capacities such as invention, adoption, parsing, generation and induction is proposed. Self-organisation principles are used to show how a shared set of preferred lexical entries and grammatical constructions, i.e., a language, can emerge in a population of autonomous agents that have no initial linguistic knowledge.
Experiments are presented in which a population of autonomous agents constructs a language that allows them to communicate the formulas of a propositional language. This language, although simple, has interesting properties found in natural languages, such as compositionality and recursion.
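The invention and adoption dynamics described in this abstract can be illustrated with a toy language game; the agent model, scoring scheme, and update values below are illustrative assumptions, not the paper's actual algorithm:

```python
import random

random.seed(0)

MEANINGS = ["p", "q", "not", "and"]  # atomic symbols of a propositional language

class Agent:
    def __init__(self):
        # each agent scores candidate word forms per meaning
        self.lexicon = {m: {} for m in MEANINGS}

    def word_for(self, meaning):
        entries = self.lexicon[meaning]
        if not entries:  # invention: coin a random new form
            entries["".join(random.choice("abcdefg") for _ in range(3))] = 0.5
        return max(entries, key=entries.get)

    def align(self, meaning, form, success):
        entries = self.lexicon[meaning]
        if success:
            entries[form] = entries.get(form, 0.0) + 0.1
            for other in entries:  # lateral inhibition of competing forms
                if other != form:
                    entries[other] -= 0.1
        else:
            entries[form] = entries.get(form, 0.0) - 0.1

def play_game(speaker, hearer):
    meaning = random.choice(MEANINGS)
    form = speaker.word_for(meaning)
    success = hearer.lexicon[meaning].get(form, 0.0) > 0.0
    hearer.align(meaning, form, True)   # adoption: hearer stores the heard form
    speaker.align(meaning, form, success)
    return success

population = [Agent() for _ in range(5)]
for _ in range(4000):
    speaker, hearer = random.sample(population, 2)
    play_game(speaker, hearer)
```

With lateral inhibition damping competing forms, repeated games of this kind typically drive the population toward one preferred form per meaning, which is the self-organisation effect the abstract refers to.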
Pp. 128-142
doi: 10.1007/11880172_12
Robots That Learn Language: Developmental Approach to Human-Machine Conversations
Naoto Iwahashi
This paper describes a machine learning method that enables robots to learn the capability of linguistic communication from scratch through verbal and nonverbal interaction with users. The method focuses on two major problems that must be addressed to realize natural human-machine conversation: a scalable grounded symbol system and belief sharing. The learning is performed in the process of joint perception and joint action with a user. The method enables the robot to learn beliefs for communication by combining speech, visual, and behavioral reinforcement information in a probabilistic framework. The beliefs learned include speech units such as phonemes or syllables, a lexicon, grammar, and pragmatic knowledge, and they are integrated in a system represented by a dynamical graphical model. The method also enables the user and the robot to infer the state of each other's beliefs related to communication. To facilitate such inference, the belief system held by the robot possesses a structure that represents the assumption of shared beliefs and allows it to be adapted quickly and robustly through communication with the user. This adaptive behavior of the belief systems is modeled by the structural coupling of the belief systems held by the robot and the user, and it is performed through incremental online optimization in the process of interaction. Experimental results reveal that, through a practically small number of learning episodes with a user, the robot was eventually able to understand even fragmentary and ambiguous utterances, act upon them, and generate utterances appropriate for the given situation. This work discusses the importance of properly handling the risk of being misunderstood in order to facilitate mutual understanding and to keep the coupling effective.
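One small ingredient of such a system, associating words with referents that are co-present during joint perception, can be sketched with plain co-occurrence counting; the class and scenario below are hypothetical simplifications, far from the paper's full probabilistic graphical model:

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Toy word-object association via co-occurrence counts."""

    def __init__(self):
        # counts[word][object] = number of joint observations
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, words, objects):
        # joint perception: every heard word co-occurs with every visible object
        for w in words:
            for o in objects:
                self.counts[w][o] += 1

    def meaning(self, word):
        # pick the object most often co-present with the word
        obs = self.counts[word]
        return max(obs, key=obs.get) if obs else None

learner = CrossSituationalLearner()
learner.observe(["red", "ball"], ["ball", "cup"])
learner.observe(["blue", "ball"], ["ball", "box"])
learner.observe(["red", "cup"], ["cup", "box"])
print(learner.meaning("ball"))  # prints "ball": it co-occurred twice with that object
```

Across varied scenes the spurious pairings cancel out while the true pairing accumulates, which is the intuition behind combining evidence across interaction episodes.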
Pp. 143-167
doi: 10.1007/11880172_13
Simulating Meaning Negotiation Using Observational Language Games
Tiina Lindh-Knuutila; Timo Honkela; Krista Lagus
In this article, we study the emergence of associations between words and concepts using the self-organizing map, and in particular we explore meaning negotiation among communicating agents. The self-organizing map serves as a model of an agent's conceptual memory: concepts are not given explicitly but are learned by the agent in an unsupervised manner, emerging as areas formed on the map. The language acquisition process is modeled in a population of simulated agents through a series of language games, specifically observational games. The results of the simulation experiments verify that the agents learn to communicate successfully and that a shared lexicon emerges.
Pp. 168-179
doi: 10.1007/11880172_14
Symbol Grounding Through Cumulative Learning
Samarth Swarup; Kiran Lakkaraju; Sylvian R. Ray; Les Gasser
We suggest that the primary motivation for an agent to construct a symbol-meaning mapping is to solve a task. The meaning space of an agent should be derived from the tasks that it faces during the course of its lifetime. We outline a process in which agents learn to solve multiple tasks and extract a store of "cumulative knowledge" that helps them to solve each new task more quickly and accurately. This cumulative knowledge then forms the ontology, or meaning space, of the agent. We suggest that by grounding symbols in this extracted cumulative knowledge, agents can gain a further performance benefit because they can guide each other's learning process. In this version of the symbol grounding problem, meanings cannot be communicated directly because they are internal to the agents and differ from agent to agent. Also, the meanings may not correspond directly to objects in the environment. The communication process can also allow a dynamic symbol-meaning mapping. We posit that these properties make this version of the symbol grounding problem realistic and natural. Finally, we discuss how symbols could be grounded in cumulative knowledge via a situation in which a teacher selects tasks for a student to perform.
Pp. 180-191
doi: 10.1007/11880172_15
The Human Speechome Project
Deb Roy; Rupal Patel; Philip DeCamp; Rony Kubat; Michael Fleischman; Brandon Roy; Nikolaos Mavridis; Stefanie Tellex; Alexia Salata; Jethran Guinness; Michael Levit; Peter Gorniak
The Human Speechome Project is an effort to observe and computationally model the longitudinal course of language development for a single child at an unprecedented scale. We are collecting audio and video recordings for the first three years of one child's life, in its near entirety, as it unfolds in the child's home. A network of ceiling-mounted video cameras and microphones is generating approximately 300 gigabytes of observational data each day from the home. One of the world's largest single-volume disk arrays is under construction to house the approximately 400,000 hours of audio and video recordings that will accumulate over the three-year study. To analyze the massive data set, we are developing new data mining technologies to help human analysts rapidly annotate and transcribe recordings using semi-automatic methods, and to detect and visualize salient patterns of behavior and interaction. To make sense of large-scale patterns that span months or even years of observations, we are developing computational models of language acquisition that are able to learn from the child's experiential record. By creating and evaluating machine learning systems that step into the shoes of the child and sequentially process long stretches of perceptual experience, we will investigate possible language learning strategies used by children, with an emphasis on early word learning.
Pp. 192-196
doi: 10.1007/11880172_16
Unify and Merge in Fluid Construction Grammar
Luc Steels; Joachim De Beule
Research into the evolution of grammar requires formalisms and processing mechanisms powerful enough to handle the features found in human natural languages. The formalism also needs properties beyond those of formalisms used in other linguistics research, specifically for handling the emergence and progressive coordination of grammars in a population of agents. This paper introduces Fluid Construction Grammar, a formalism with associated parsing, production, and learning processes designed for language evolution research, and focuses on a formal definition of the unification and merging algorithms it uses. The complexity and soundness of the algorithms and their relation to unification in logic programming and other unification-based grammar formalisms are discussed.
Pp. 197-223
doi: 10.1007/11880172_17
Utility for Communicability by Profit and Cost of Agreement
Ryuichi Matoba; Makoto Nakamura; Satoshi Tojo
The inflection of words based on agreement features such as number, gender and case is considered to help clarify the dependencies between words in a sentence. Our purpose in this study is to investigate the efficiency of word inflections with HPSG (Head-driven Phrase Structure Grammar), which can deal with these features directly. Using a notion of utility, we measure the efficiency of a grammar as the balance between the number of semantic structures of a sentence and the cost of agreement, measured by the number of unification processes. Our experiments show how these are balanced in two different corpora: WSJ (Wall Street Journal), which includes long and complicated sentences, and ATIS (Air Travel Information System), which contains shorter colloquial sentences. In both corpora, agreement clearly helps reduce ambiguity. However, the importance of agreement was more salient in the ATIS corpus, since personal pronouns are employed there far more often than in the WSJ corpus.
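The utility notion, profit from disambiguation weighed against the cost of agreement, can be sketched as a simple weighted difference; the function, weights, and numbers below are purely illustrative and are not taken from the paper:

```python
def utility(parses_without, parses_with, unifications, alpha=1.0, beta=0.1):
    """Toy utility of adding agreement to a grammar (hypothetical weights).

    parses_without : number of semantic structures without agreement
    parses_with    : number of semantic structures with agreement
    unifications   : number of unification operations agreement requires
    """
    profit = alpha * (parses_without - parses_with)  # ambiguity removed
    cost = beta * unifications                       # agreement checking cost
    return profit - cost

# a sentence with 5 readings without agreement, 1 with, at 12 unification steps
print(utility(5, 1, 12))  # prints 2.8
```

Under such a measure, agreement pays off when the ambiguity it removes outweighs the unification work it adds, which is the trade-off the paper quantifies across the WSJ and ATIS corpora.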
Pp. 224-236