Publication catalogue - books



Algorithmic Learning Theory: 17th International Conference, ALT 2006, Barcelona, Spain, October 7-10, 2006, Proceedings

José L. Balcázar ; Philip M. Long ; Frank Stephan (eds.)

Conference: 17th International Conference on Algorithmic Learning Theory (ALT), Barcelona, Spain, October 7-10, 2006

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Algorithm Analysis and Problem Complexity; Mathematical Logic and Formal Languages; Document Preparation and Text Processing

Availability
Detected institution: Not detected
Year of publication: 2006
Browse online: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-46649-9

Electronic ISBN

978-3-540-46650-5

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2006

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

Teaching Memoryless Randomized Learners Without Feedback

Frank J. Balbach; Thomas Zeugmann

The present paper mainly studies the expected teaching time of memoryless randomized learners without feedback. First, a characterization of optimal randomized learners is provided and, based on it, optimal teaching times for certain classes are established. Second, the problem of determining the optimal teaching time is shown to be $\mathcal{NP}$-hard. Third, an algorithm for approximating the optimal teaching time is given. Finally, two heuristics for teaching are studied, namely cyclic teachers and greedy teachers.

Keywords: Learn Theory; Success Probability; Concept Class; Teaching Model; Target Concept.

- Regular Contributions | Pp. 93-108
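
As a rough illustration of this setting, the following Python sketch estimates an expected teaching time by Monte Carlo simulation for a tiny hand-made concept class and a cyclic teacher. The concept class, the uniform re-drawing dynamics of the learner, and the stopping criterion are illustrative assumptions, not the paper's exact model or its optimal constructions.

    import random
    import statistics

    # Toy instance space and concept class (illustrative, not taken from the paper).
    X = [0, 1, 2, 3]
    CONCEPTS = {
        "c0": frozenset(),
        "c1": frozenset({0}),
        "c2": frozenset({0, 1}),
        "c3": frozenset({0, 1, 2}),
        "target": frozenset({0, 1, 2, 3}),
    }

    def consistent(concept, example):
        """True if the labeled example (x, label) agrees with the concept."""
        x, label = example
        return (x in concept) == label

    def simulate_teaching(sequence, target="target", max_rounds=10_000):
        """Rounds until a memoryless randomized learner first hypothesizes the target.

        Assumed dynamics: on an inconsistent example the learner re-draws a
        hypothesis uniformly among concepts consistent with that single example;
        on a consistent example it keeps its current hypothesis.
        """
        hypothesis = random.choice(list(CONCEPTS))
        for t in range(1, max_rounds + 1):
            if hypothesis == target:
                return t
            example = sequence[(t - 1) % len(sequence)]
            if not consistent(CONCEPTS[hypothesis], example):
                candidates = [c for c in CONCEPTS if consistent(CONCEPTS[c], example)]
                hypothesis = random.choice(candidates)
        return max_rounds

    # A cyclic teacher simply repeats the positive examples of the target.
    cyclic_sequence = [(x, True) for x in X]
    times = [simulate_teaching(cyclic_sequence) for _ in range(2000)]
    print("estimated expected teaching time:", statistics.mean(times))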

The Complexity of Learning SUBSEQ(A)

Stephen Fenner; William Gasarch

Higman showed that if A is any language then SUBSEQ(A) is regular, where SUBSEQ(A) is the language of all subsequences of strings in A. We consider the following inductive inference problem: given A(ε), A(0), A(1), A(00), ..., learn, in the limit, a DFA for SUBSEQ(A). We consider this model of learning and the variants of it that are usually studied in inductive inference: anomalies, mind changes, and teams.

Keywords: Turing Machine; Computable Function; Inductive Inference; Team Learning; Standard Enumeration.

- Regular Contributions | Pp. 109-123
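
A small Python sketch of the SUBSEQ operator discussed in this paper: a subsequence test and an explicit enumeration of SUBSEQ(A) for a finite language A. The function names is_subsequence and subseq_closure are ad hoc; the sketch only illustrates the operator itself, not the inference algorithm studied in the paper, and it is Higman's theorem that guarantees regularity of SUBSEQ(A) even when A is infinite and non-regular.

    from itertools import combinations

    def is_subsequence(u, w):
        """True if u can be obtained from w by deleting zero or more symbols."""
        it = iter(w)
        return all(ch in it for ch in u)

    def subseq_closure(A):
        """SUBSEQ(A) for a finite language A: all subsequences of all strings in A."""
        closure = set()
        for w in A:
            for k in range(len(w) + 1):
                for positions in combinations(range(len(w)), k):
                    closure.add("".join(w[i] for i in positions))
        return closure

    A = {"0110", "10"}
    print(sorted(subseq_closure(A), key=lambda s: (len(s), s)))
    print(is_subsequence("010", "0110"))   # True
    print(is_subsequence("1000", "0110"))  # False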

Mind Change Complexity of Inferring Unbounded Unions of Pattern Languages from Positive Data

Matthew de Brecht; Akihiro Yamamoto

This paper gives a proof that the class of unbounded unions of languages of regular patterns with constant segment length bound is inferable from positive data with mind change bound between $\omega^{\omega}$ and $\omega^{\omega^{\omega}}$. We give a very tight bound on the mind change complexity based on the length of the constant segments and the size of the alphabet of the pattern languages. This is, to the authors’ knowledge, the first time a natural class of languages has been shown to be inferable with mind change complexity above $\omega^{\omega}$. The proof uses the notion of closure operators on a class of languages, and also uses the order type of well-partial-orderings to obtain a mind change bound. The inference algorithm presented can be easily applied to a wide range of classes of languages. Finally, we show an interesting connection between proof theory and mind change complexity.

- Regular Contributions | Pp. 124-138
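
For readers unfamiliar with pattern languages, the sketch below checks membership in the language of a single regular pattern (one in which each variable occurs at most once) by translating it into a regular expression. The token convention, the non-erasing assumption for variables, and the example pattern are illustrative choices; the paper itself concerns inference of unbounded unions of such languages, which the sketch does not attempt.

    import re

    def regular_pattern_to_regex(pattern):
        """Compile a regular pattern (each variable occurs at most once) to a regex.

        Tokens starting with "x" are variables; all other tokens are constant
        segments.  Variables are assumed non-erasing (replaced by non-empty strings).
        """
        parts = []
        for token in pattern:
            if token.startswith("x"):
                parts.append(".+")               # a variable matches any non-empty string
            else:
                parts.append(re.escape(token))   # a constant segment matches literally
        return re.compile("^" + "".join(parts) + "$")

    # Pattern x1 ab x2 ba over the alphabet {a, b}; constant segments "ab" and "ba".
    lang = regular_pattern_to_regex(["x1", "ab", "x2", "ba"])
    print(bool(lang.match("aabbba")))   # True:  x1 = "a", x2 = "b"
    print(bool(lang.match("abba")))     # False: too short for non-empty x1 and x2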

Learning and Extending Sublanguages

Sanjay Jain; Efim Kinber

A number of natural models for learning in the limit are introduced to deal with the situation where a learner is required to provide a grammar covering the input even if only a part of the target language is available. Examples of language families are exhibited that are learnable in one model and not learnable in another. Some characterizations of learnability of algorithmically enumerable families of languages for the models in question are obtained. Since learnability of any part of the target language does not imply monotonicity of the learning process, we also consider our models under an additional monotonicity constraint.

Keywords: Initial Segment; Target Language; Recursive Function; Inductive Inference; Restrictive Model.

- Regular Contributions | Pp. 139-153

Iterative Learning from Positive Data and Negative Counterexamples

Sanjay Jain; Efim Kinber

A model for learning in the limit is defined where a (so-called iterative) learner gets all positive examples from the target language, tests every new conjecture with a teacher (oracle) as to whether it is a subset of the target language (and if it is not, then it receives a negative counterexample), and uses only limited long-term memory (incorporated in conjectures). Three variants of this model are compared: when a learner receives least negative counterexamples, ones whose size is bounded by the maximum size of input seen so far, and arbitrary ones. We also compare our learnability model with other relevant models of learnability in the limit, study how our model works for indexed classes of recursive languages, and show that learners in our model can work in a non-U-shaped way, never abandoning the first right conjecture.

Keywords: Target Language; Recursive Function; Regular Language; Positive Data; Target Concept.

- Regular Contributions | Pp. 154-168
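
The query-and-counterexample protocol can be made concrete with a toy simulation. In the sketch below the target is a finite set of integers, the learner's only memory is its current conjecture, and the teacher returns the least negative counterexample to any conjecture that is not a subset of the target. The deliberately over-generalizing update is an assumption made purely so that the counterexamples come into play; it is not the construction from the paper.

    def teacher_subset_query(conjecture, target):
        """Oracle: if the conjecture is not a subset of the target,
        return the least negative counterexample, otherwise None."""
        bad = sorted(conjecture - target)
        return bad[0] if bad else None

    def iterative_learner(positive_examples, target):
        """Toy protocol run: the learner's only memory is its current conjecture.

        The over-generalizing guess (adding x + 1) is an illustrative assumption,
        made only so that subset queries and counterexamples come into play.
        """
        conjecture = set()
        for x in positive_examples:
            conjecture = conjecture | {x, x + 1}    # iterative update with over-generalization
            counterexample = teacher_subset_query(conjecture, target)
            if counterexample is not None:
                conjecture.discard(counterexample)  # incorporate the negative information
            print(f"saw {x}, conjecture is now {sorted(conjecture)}")
        return conjecture

    target = {2, 3, 5, 7}
    print("converged:", iterative_learner([2, 3, 5, 7], target) == target)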

Towards a Better Understanding of Incremental Learning

Sanjay Jain; Steffen Lange; Sandra Zilles

The present study aims at insights into the nature of incremental learning in the context of Gold’s model of identification in the limit. With a focus on natural requirements such as consistency and conservativeness, incremental learning is analysed both for learning from positive examples and for learning from positive and negative examples. The results obtained illustrate in which way different consistency and conservativeness demands can affect the capabilities of incremental learners. These results may serve as a first step towards characterising the structure of typical classes learnable incrementally and thus towards elaborating uniform incremental learning methods.

- Regular Contributions | Pp. 169-183
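
The consistency and conservativeness requirements can be illustrated with a toy incremental learner for the class of integer intervals (an assumed class, not one from the paper): the learner stores only its current hypothesis, always outputs an interval containing every example seen so far (consistency), and changes that interval only when a new example contradicts it (conservativeness).

    def conservative_consistent_learner(stream):
        """Toy incremental learner for the class of integer intervals [lo, hi].

        The learner keeps only its current hypothesis (incremental), always
        outputs an interval containing every example seen (consistent), and
        changes the interval only when an example falls outside it (conservative).
        """
        hypothesis = None
        for x in stream:
            if hypothesis is None:
                hypothesis = (x, x)                  # smallest interval consistent with the first datum
            else:
                lo, hi = hypothesis
                if not (lo <= x <= hi):              # conservative: change only on contradiction
                    hypothesis = (min(lo, x), max(hi, x))
            yield hypothesis

    print(list(conservative_consistent_learner([4, 2, 3, 7, 5])))
    # [(4, 4), (2, 4), (2, 4), (2, 7), (2, 7)]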

On Exact Learning from Random Walk

Nader H. Bshouty; Iddo Bentov

We consider a few particular exact learning models based on a random walk stochastic process, and thus more restricted than the well-known general exact learning models. We give positive and negative results as to whether learning in these particular models is easier than in the general learning models.

- Regular Contributions | Pp. 184-198
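
One common random-walk example model (assumed here for illustration; the paper's models may differ in detail) has successive instances differing in a single, uniformly chosen bit, in contrast with i.i.d. uniform sampling. The sketch below generates labeled examples under both processes for a toy target concept named parity2.

    import random

    def parity2(x):
        """Toy target concept: parity of the first two bits."""
        return x[0] ^ x[1]

    def uniform_examples(n, target, steps, rng):
        """i.i.d. uniform labeled examples over {0, 1}^n."""
        for _ in range(steps):
            x = tuple(rng.randint(0, 1) for _ in range(n))
            yield x, target(x)

    def random_walk_examples(n, target, steps, rng):
        """Random-walk examples: each instance differs from the previous one
        in a single, uniformly chosen bit (one common random-walk model)."""
        x = [rng.randint(0, 1) for _ in range(n)]
        for _ in range(steps):
            i = rng.randrange(n)
            x[i] ^= 1                                # flip one bit
            yield tuple(x), target(tuple(x))

    rng = random.Random(0)
    print(list(uniform_examples(4, parity2, 3, rng)))
    print(list(random_walk_examples(4, parity2, 3, rng)))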

Risk-Sensitive Online Learning

Eyal Even-Dar; Michael Kearns; Jennifer Wortman

We consider the problem of online learning in settings in which we want to compete not simply with the rewards of the best expert or stock, but with the best trade-off between rewards and risk. Motivated by finance applications, we consider two common measures balancing returns and risk: the Sharpe ratio [9] and the mean-variance criterion of Markowitz [8]. We first provide negative results establishing the impossibility of no-regret algorithms under these measures, thus providing a stark contrast with the returns-only setting. We then show that the recent algorithm of Cesa-Bianchi et al. [5] achieves nontrivial performance under a modified bicriteria risk-return measure, and give a modified best expert algorithm that achieves no regret for a “localized” version of the mean-variance criterion. We perform experimental comparisons of traditional online algorithms and the new risk-sensitive algorithms on a recent six-year S&P 500 data set and find that the modified best expert algorithm outperforms the traditional algorithms with respect to Sharpe ratio, MV, and accumulated wealth. To our knowledge this paper initiates the investigation of explicit risk considerations in the standard models of worst-case online learning.

Keywords: Competitive Ratio; Online Algorithm; Sharpe Ratio; Average Reward; Good Expert.

- Regular Contributions | Pp. 199-213
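
The two risk measures named in the abstract can be computed directly for a sequence of returns. The sketch below uses a zero risk-free rate and the mean-minus-variance form of the Markowitz criterion; both are common conventions rather than the paper's exact normalizations, and the sample returns are made up for illustration.

    import statistics

    def sharpe_ratio(returns, risk_free_rate=0.0):
        """Sharpe ratio: mean excess return divided by the standard deviation of returns."""
        excess = [r - risk_free_rate for r in returns]
        return statistics.mean(excess) / statistics.stdev(excess)

    def mean_variance(returns):
        """Mean-variance criterion: expected return penalized by variability
        (mean minus variance here; the paper's normalization may differ)."""
        return statistics.mean(returns) - statistics.variance(returns)

    daily_returns = [0.01, -0.02, 0.015, 0.005, -0.01, 0.02]
    print("Sharpe ratio:", round(sharpe_ratio(daily_returns), 3))
    print("mean-variance:", round(mean_variance(daily_returns), 5))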

Leading Strategies in Competitive On-Line Prediction

Vladimir Vovk

We start from a simple asymptotic result for the problem of on-line regression with the quadratic loss function: the class of continuous limited-memory prediction strategies admits a “leading prediction strategy”, which not only asymptotically performs at least as well as any continuous limited-memory strategy but also satisfies the property that the excess loss of any continuous limited-memory strategy is determined by how closely it imitates the leading strategy. More specifically, for any class of prediction strategies constituting a reproducing kernel Hilbert space we construct a leading strategy, in the sense that the loss of any prediction strategy whose norm is not too large is determined by how closely it imitates the leading strategy. This result is extended to the loss functions given by Bregman divergences and by strictly proper scoring rules.

Keywords: Loss Function; Reproduce Kernel Hilbert Space; Prediction Strategy; Predictable Process; Quadratic Loss Function.

- Regular Contributions | Pp. 214-228

Hannan Consistency in On-Line Learning in Case of Unbounded Losses Under Partial Monitoring

Chamy Allenberg; Peter Auer; László Györfi; György Ottucsák

In this paper the sequential prediction problem with expert advice is considered when the loss is unbounded under partial monitoring scenarios. We deal with a wide class of partial monitoring problems: the combination of the label efficient and multi-armed bandit problems, that is, where the algorithm is only informed about the performance of the chosen expert with probability ε ≤ 1. For bounded losses an algorithm is given whose expected regret scales with the square root of the loss of the best expert. For unbounded losses we prove that Hannan consistency can be achieved, depending on the growth rate of the average squared losses of the experts.

- Regular Contributions | Pp. 229-243
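
The generic construction behind label-efficient prediction is an exponentially weighted forecaster driven by unbiased loss estimates: feedback is requested only with probability ε, and observed losses are rescaled by 1/ε. The sketch below illustrates that generic construction with full-information feedback on queried rounds; it is not the specific algorithm or the bandit combination analyzed in the paper, and the parameters eps and eta are arbitrary illustrative choices.

    import math
    import random

    def label_efficient_exp_weights(loss_matrix, eps=0.3, eta=0.1, seed=0):
        """Exponentially weighted forecaster with label-efficient feedback.

        loss_matrix[t][i] is the loss of expert i at round t.  Losses are
        observed only in rounds where a Bernoulli(eps) coin comes up heads and
        are then divided by eps, which keeps the loss estimates unbiased.
        """
        rng = random.Random(seed)
        n_experts = len(loss_matrix[0])
        weights = [1.0] * n_experts
        total_loss = 0.0
        for round_losses in loss_matrix:
            total = sum(weights)
            probs = [w / total for w in weights]
            chosen = rng.choices(range(n_experts), weights=probs)[0]
            total_loss += round_losses[chosen]
            if rng.random() < eps:                   # query feedback with probability eps
                estimates = [loss / eps for loss in round_losses]
                weights = [w * math.exp(-eta * est) for w, est in zip(weights, estimates)]
        return total_loss

    # Two experts; the first is consistently better.
    T = 200
    losses = [[0.1, 0.9] for _ in range(T)]
    print("algorithm loss:", round(label_efficient_exp_weights(losses), 1))
    print("best expert loss:", round(0.1 * T, 1))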