Publications catalog – books



Cooperative Information Agents XI: 11th International Workshop, CIA 2007, Delft, The Netherlands, September 19-21, 2007. Proceedings

Matthias Klusch; Koen V. Hindriks; Mike P. Papazoglou; Leon Sterling (eds.)

Conference: 11th International Workshop on Cooperative Information Agents (CIA). Delft, The Netherlands. September 19–21, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Information Systems Applications (incl. Internet); Data Mining and Knowledge Discovery; Database Management; Computer Communication Networks; User Interfaces and Human Computer Interaction

Availability
Detected institution: none. Publication year: 2007. Browse: SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-75118-2

Electronic ISBN

978-3-540-75119-9

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Trust Modeling with Context Representation and Generalized Identities

Martin Rehák; Michal Pěchouček

We present a trust model extension that attempts to relax three assumptions currently made by the majority of existing trust models: (i) proven identity of agents, (ii) repetitive interactions, and (iii) similar trusting situations. The proposed approach formalizes the situation (context) and/or trusted agent identity in a multi-dimensional Identity-Context feature space, and attaches the trustworthiness evaluations to individual elements of this metric space, rather than to fixed identity tags (e.g. AIDs, addresses). The trustworthiness of individual elements of the I-C space can be evaluated using any trust model that supports weighted aggregations and updates, allowing the mechanism to be integrated with most existing work. Trust models with the proposed extension are appropriate for deployment in dynamic, ad-hoc, and mobile environments, where the agent platform cannot guarantee the identity of the agents and where cryptography-based identity management techniques may be too expensive due to unreliable and costly communication.

- Trust | Pp. 298-312
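The abstract's central idea — attaching trust values to points of an Identity-Context feature space and aggregating them by distance-based weights — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the Gaussian distance weighting, the `bandwidth` parameter, and the class name are assumptions.

```python
import math

class ICTrustModel:
    """Toy sketch: trustworthiness is attached to points of an
    Identity-Context (I-C) feature space instead of fixed identity tags."""

    def __init__(self, bandwidth=1.0):
        self.bandwidth = bandwidth
        self.observations = []  # pairs of (feature vector, outcome in [0, 1])

    def update(self, features, outcome):
        # Record one trusting experience at a point of the I-C space.
        self.observations.append((features, outcome))

    def trust(self, features):
        # Distance-weighted aggregation over past observations:
        # nearby situations/identities influence the estimate more.
        weights, total = 0.0, 0.0
        for f, outcome in self.observations:
            d = math.dist(features, f)
            w = math.exp(-((d / self.bandwidth) ** 2))
            weights += w
            total += w * outcome
        return total / weights if weights else 0.5  # uninformed prior

model = ICTrustModel()
model.update([0.0, 0.0], 1.0)   # good outcome in one situation
model.update([3.0, 3.0], 0.0)   # bad outcome in a distant situation
print(round(model.trust([0.1, 0.1]), 2))  # → 1.0, dominated by the nearby good experience
```

Because the weighting supports incremental updates, any underlying trust metric with weighted aggregation could replace the simple average here, which mirrors the integration claim in the abstract.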

Learning Initial Trust Among Interacting Agents

Achim Rettinger; Matthias Nickles; Volker Tresp

Trust learning is a crucial aspect of information exchange, negotiation, and any other kind of social interaction among autonomous agents in open systems. But most current probabilistic models for computational trust learning lack the ability to take context into account when trying to predict the future behavior of interacting agents. Moreover, they are not able to transfer knowledge gained in a specific context to a related context. Humans, by contrast, have proven to be especially skilled in perceiving traits like trustworthiness in precisely such situations. The same restriction applies to most multiagent learning problems: in complex scenarios, most algorithms do not scale well to large state spaces and need numerous interactions to learn. We argue that trust-related scenarios are best represented in a system of relations in order to capture semantic knowledge. Following recent work on nonparametric Bayesian models, we propose a flexible and context-sensitive way to model and learn multidimensional trust values which is particularly well suited to establishing trust among strangers without a prior relationship. To evaluate our approach we extend a multiagent framework by allowing agents to retrospectively break an agreed interaction outcome. The results suggest that the inherent ability to discover clusters, and the relationships between clusters that are best supported by the data, makes it possible to predict the future behavior of agents, especially when initial trust is involved.

- Trust | Pp. 313-327

A Probabilistic Framework for Decentralized Management of Trust and Quality

Le-Hung Vu; Karl Aberer

In this paper, we propose a probabilistic framework targeting three important issues in the computation of quality and trust in decentralized systems. Specifically, our approach addresses the multi-dimensionality of quality and trust, takes into account the credibility of the collected data sources for more reliable estimates, and enables the personalization of the computation. We use graphical models to represent peers' qualitative behaviors and exploit appropriate probabilistic learning and inference algorithms to evaluate their quality and trustworthiness based on related reports. Our implementation of the framework introduces the most typical quality models, uses the Expectation-Maximization algorithm to learn their parameters, and applies the Junction Tree algorithm to perform inference on them for the estimation of quality and trust. The experimental results validate the advantages of our approach: first, using an appropriate personalized quality model, our computational framework can produce good estimates even with a sparse and incomplete recommendation data set; second, the output of our solution has well-defined semantics and meaningful interpretations for many purposes; third, the framework is scalable in terms of performance, computation, and communication cost. Furthermore, our solution can be shown to generalize, or to serve as the theoretical basis of, many existing trust computation approaches.

- Trust | Pp. 328-342
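As a hedged illustration of one ingredient the abstract mentions — learning the parameters of a simple quality model with Expectation-Maximization — the following sketch fits a two-component binomial mixture to per-reporter rating counts, separating reporters who mostly rate a service positively from those who mostly rate it negatively. The model, initial values, and variable names are assumptions for demonstration, not the paper's actual framework.

```python
def em_binomial_mixture(counts, m, iters=100):
    """EM for a two-component binomial mixture.
    counts[i] = number of positive ratings (out of m) from reporter i.
    Returns the mixing weight pi of component 1 and the two
    positive-rating probabilities p = [p0, p1]."""
    p = [0.2, 0.8]   # initial guesses for the two components
    pi = 0.5         # initial mixing weight of component 1
    for _ in range(iters):
        # E-step: responsibility of component 1 for each reporter
        # (binomial coefficients cancel in the ratio, so they are omitted)
        resp = []
        for k in counts:
            l0 = (1 - pi) * (p[0] ** k) * ((1 - p[0]) ** (m - k))
            l1 = pi * (p[1] ** k) * ((1 - p[1]) ** (m - k))
            resp.append(l1 / (l0 + l1))
        # M-step: re-estimate parameters from the responsibilities
        n1 = sum(resp)
        n0 = len(counts) - n1
        pi = n1 / len(counts)
        p[1] = sum(r * k for r, k in zip(resp, counts)) / (m * n1)
        p[0] = sum((1 - r) * k for r, k in zip(resp, counts)) / (m * n0)
    return pi, p

# Four reporters rate ~80% positive, four rate ~10% positive (m = 10 each):
counts = [8, 9, 7, 8, 1, 2, 1, 0]
pi, p = em_binomial_mixture(counts, m=10)
```

Per-reporter *counts* are used rather than single binary reports because a mixture of Bernoullis over one binary outcome is not identifiable; with several ratings per reporter the two components can be recovered.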

Formal Analysis of Trust Dynamics in Human and Software Agent Experiments

Tibor Bosse; Catholijn M. Jonker; Jan Treur; Dmytro Tykhonov

Recognizing that trust states are mental states, this paper presents a formal analysis of the dynamics of trust in terms of the functional roles and representation relations for trust states. This formal analysis is done both in a logical framework and in a mathematical framework based on integral and differential equations. Furthermore, the paper presents formal specifications of a number of relevant dynamic properties of trust. The specifications provided were used to perform automated formal analysis of empirical and simulated data from two case studies, one involving two experiments with humans, and one involving simulation experiments in the context of an economic game.

- Trust | Pp. 343-359
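The differential-equation side of such an analysis can be illustrated with a commonly used form of trust dynamics, dT/dt = γ(E(t) − T(t)), in which trust drifts toward the current experience value E(t) at rate γ. The following Euler-integration sketch assumes that law and the parameter names shown; it is a generic example, not the paper's exact model.

```python
def simulate_trust(experiences, gamma=0.3, t0=0.5):
    """Euler integration of dT/dt = gamma * (E(t) - T(t)):
    each experience e in [0, 1] pulls the trust value toward itself."""
    trust = [t0]
    for e in experiences:
        trust.append(trust[-1] + gamma * (e - trust[-1]))
    return trust

# Ten positive experiences followed by five negative ones:
trace = simulate_trust([1.0] * 10 + [0.0] * 5)
```

With these settings trust builds up toward 1.0 during the positive run and decays after the negative experiences begin, the kind of qualitative dynamic property that the formal specifications in the paper are designed to check against empirical traces.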