Publications catalog - books



Induction, Algorithmic Learning Theory, and Philosophy

Michèle Friend; Norma B. Goethe; Valentina S. Harizanov (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Epistemology; Mathematical Logic and Foundations; Mathematical Logic and Formal Languages; Philosophy of Science; Algorithms; Cognitive Psychology

Availability

Detected institution: none. Year of publication: 2007. Available via SpringerLink.

Information

Resource type:

books

Print ISBN

978-1-4020-6126-4

Electronic ISBN

978-1-4020-6127-1

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer 2007

Table of contents

Introduction to the Philosophy and Mathematics of Algorithmic Learning Theory

Valentina S. Harizanov; Norma B. Goethe; Michèle Friend


- Introduction to the Philosophy and Mathematics of Algorithmic Learning Theory | Pp. 1-24

Inductive Inference Systems for Learning Classes of Algorithmically Generated Sets and Structures

Valentina S. Harizanov

Computability theorists have extensively studied sets the elements of which can be enumerated by Turing machines. These sets, also called computably enumerable sets, can be identified with their Gödel codes. Although each Turing machine has a unique Gödel code, different Turing machines can enumerate the same set. Thus, knowing a computably enumerable set means knowing one of its infinitely many Gödel codes. In the approach to learning theory stemming from E.M. Gold’s seminal paper [9], an inductive inference learner for a computably enumerable set A is a system or a device, usually algorithmic, which when successively (one by one) fed data for A outputs a sequence of Gödel codes (one by one) that at a certain point stabilize at codes correct for A. The convergence is called semantic or behaviorally correct, unless the same code for A is eventually output, in which case it is also called syntactic or explanatory. There are classes of sets that are semantically inferable, but not syntactically inferable.
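To make the convergence criteria concrete, here is a minimal sketch, not from the book: the hypothesis space and the learning-by-enumeration rule are illustrative assumptions, with code i standing in for a Gödel code of the finite set {0, 1, ..., i}.

```python
# Minimal sketch (not from the book): Gold-style identification in the limit.
# Illustrative hypothesis space: code i denotes the finite set {0, 1, ..., i},
# a toy stand-in for Goedel codes of computably enumerable sets.

def learner(data):
    """Conjecture the least code consistent with the positive data seen so far."""
    return max(data) if data else 0

def conjectures_on_text(target_max, steps):
    """Feed the elements of {0, ..., target_max} one by one (a 'text' for the
    set, in which every element appears infinitely often) and record the
    learner's successive conjectures."""
    data, out = [], []
    for t in range(steps):
        data.append(t % (target_max + 1))
        out.append(learner(data))
    return out

# The conjecture sequence stabilizes on a single correct code: syntactic
# (explanatory) convergence. Semantic (behaviorally correct) convergence
# would only require the output codes to eventually all denote the right set.
print(conjectures_on_text(target_max=4, steps=12))
# -> [0, 1, 2, 3, 4, 4, 4, 4, 4, 4, 4, 4]
```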

Here, we are also concerned with generalizing inductive inference from sets, which are collections of distinct elements that are mutually independent, to mathematical structures in which various elements may be interrelated. This study was recently initiated by F. Stephan and Yu. Ventsov. For example, they systematically investigated inductive inference of the ideals of computable rings. With F. Stephan we continued this line of research by studying inductive inference of computably enumerable vector subspaces and other closure systems.

In particular, we showed how different convergence criteria interact with different ways of supplying data to the learner. Positive data for a set A are its elements, while negative data for A are the elements of its complement. Inference from text means that only positive data are supplied to the learner. Moreover, in the limit, all positive data are given. Inference from switching means that changes from positive to negative data or vice versa are allowed, but if there are only finitely many such changes, then in the limit all data of the eventually requested type (either positive or negative) are supplied. Inference from an informant means that positive and negative data are supplied alternately, but in the limit all data are supplied. For sets, inference from switching is more restrictive than inference from an informant, but more powerful than inference from text. On the other hand, for example, the class of computably enumerable vector spaces over an infinite field, which is syntactically inferable from text, does not change if we allow semantic convergence, or inference from switching, but not both at the same time. While many classes of inferable algebraic structures have nice algebraic characterizations when learning from text or from switching is considered, we do not know of such characterizations for learning from an informant.
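The three data-presentation modes can be pictured as streams. The sketch below is my own illustration, not the paper's formalism; the target set and switch points are arbitrary assumptions.

```python
# Illustrative sketch (my assumptions, not the paper's formalism) of the
# three ways of presenting data for a target set A of natural numbers.
from itertools import count, islice

def in_A(n):
    return n % 3 == 0          # toy target: A = multiples of 3

def text():
    """Text: positive data only; every element of A appears in the limit."""
    return (('+', n) for n in count() if in_A(n))

def informant():
    """Informant: every natural number is eventually classified as in or out."""
    return (('+', n) if in_A(n) else ('-', n) for n in count())

def switching(switch_at=(4,)):
    """Switching: finitely many changes between positive and negative data;
    after the last change, all data of the remaining type appear in the limit."""
    pos = (n for n in count() if in_A(n))
    neg = (n for n in count() if not in_A(n))
    mode = '+'
    for t in count():
        if t in switch_at:
            mode = '-' if mode == '+' else '+'
        yield ('+', next(pos)) if mode == '+' else ('-', next(neg))

print(list(islice(text(), 4)))        # [('+', 0), ('+', 3), ('+', 6), ('+', 9)]
print(list(islice(informant(), 4)))   # [('+', 0), ('-', 1), ('-', 2), ('+', 3)]
print(list(islice(switching(), 6)))   # positive data, one switch, then negative
```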

I - Technical Papers | Pp. 27-54

Deduction, Induction, and beyond in Parametric Logic

Eric Martin; Arun Sharma; Frank Stephan

With parametric logic, we propose a unified approach to deduction and induction, both viewed as particular instances of a generalized notion of logical consequence. This generalized notion of logical consequence is Tarskian, in the sense that if a sentence φ is a generalized logical consequence of a set T of premises, then the truth of T implies the truth of φ. So in particular, if φ can be induced from T, then the truth of T implies the truth of φ. The difference between deductive and inductive consequences lies in the process of deriving the truth of φ from the truth of T.

If φ is a deductive consequence of T, then φ can be conclusively inferred from T with absolute certainty that φ is true. If φ is an inductive consequence of T, then φ can be (correctly, though only hypothetically) inferred from T, but φ will also be (incorrectly, still hypothetically and provisionally only) inferred from other theories T′ that might not have φ as an inductive consequence (but that have enough in common with T to ‘provisionally force’ the inference of φ). The hallmark of induction is that given such a theory T′, φ can actually be refuted from T′: ¬φ can be inferred from T′ with absolute certainty that ¬φ is true.

As a consequence, deduction and induction are both derivation processes that produce ‘truths,’ but the deductive process is characterized by the possibility of committing no mind change, whereas the inductive process is characterized by the possibility of committing at most one mind change. More generally, when a sentence φ is a generalized logical consequence of a set T of premises, it might be possible to infer the truth of φ from the truth of T with fewer than β mind changes, for some nonnull ordinal β; or it might be possible to infer the truth of φ from the truth of T in the limit, though without any mind change bound; or it might be possible to discover the truth of φ from the truth of T thanks to still more complex notions of inference. ‘Discovering with an ordinal mind change bound’ and ‘discovering in the limit’ are concepts from formal learning theory, an illustration of the fact that parametric logic puts together fundamental notions from mathematical logic and formal learning theory. This paper presents the model-theoretic foundations of parametric logic and some of their connections to formal learning theory and topology.
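The deduction/induction contrast in terms of mind changes can be illustrated with a toy learner. The framing below is my own, not the paper's machinery; the hypothesis and the data streams are arbitrary assumptions.

```python
# Toy illustration (my framing, not the paper's machinery): deduction tolerates
# zero mind changes, induction at most one. Hypothesis phi: "every datum in
# the stream is even."

def inductive_learner(stream):
    """Output phi hypothetically; retract once, conclusively, if refuted."""
    conjectures, refuted = [], False
    for x in stream:
        if x % 2 != 0:
            refuted = True          # not-phi is now certain: phi was refutable
        conjectures.append('not phi' if refuted else 'phi')
    return conjectures

print(inductive_learner([2, 4, 6, 8]))   # ['phi', 'phi', 'phi', 'phi']
print(inductive_learner([2, 4, 5, 8]))   # ['phi', 'phi', 'not phi', 'not phi']
# On the second stream the learner changes its mind exactly once, and its
# retraction is conclusive; a deductive consequence could have been output
# from the start with no mind change at all.
```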

I - Technical Papers | Pp. 55-110

How Simplicity Helps You Find the Truth without Pointing at it

Kevin T. Kelly

It seems that a fixed bias toward simplicity should help one find the truth, since scientific theorizing is guided by such a bias. But it also seems that a fixed bias toward simplicity cannot indicate or point at the truth, since an indicator has to be sensitive to what it indicates. I argue that both views are correct. It is demonstrated, for a broad range of cases, that the Ockham strategy of favoring the simplest hypothesis, together with the strategy of never dropping the simplest hypothesis until it is no longer simplest, uniquely minimizes reversals of opinion and the times at which the reversals occur prior to convergence to the truth. Thus, simplicity guides one down the straightest path to the truth, even though that path may involve twists and turns along the way. The proof does not appeal to prior probabilities biased toward simplicity. Instead, it is based upon minimization of worst-case cost bounds over complexity classes of possibilities.
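A hedged toy version of the Ockham strategy may help fix ideas; the "counting effects" setup below is my example, not Kelly's proof.

```python
# Toy Ockham strategy (my example, not Kelly's proof): hypothesis H_n says
# "exactly n effects will ever appear"; smaller n counts as simpler. The
# learner outputs the simplest hypothesis consistent with the data and keeps
# it until it is no longer simplest.

def ockham(observations):
    """observations[i] is 1 if a new empirical effect appears at stage i."""
    conjectures, effects = [], 0
    for obs in observations:
        effects += obs
        conjectures.append(f'exactly {effects} effect(s)')
    return conjectures

print(ockham([0, 1, 0, 1, 0]))
# ['exactly 0 effect(s)', 'exactly 1 effect(s)', 'exactly 1 effect(s)',
#  'exactly 2 effect(s)', 'exactly 2 effect(s)']
# Each newly observed effect forces one reversal; a learner that leapfrogs
# past the simplest consistent hypothesis can be forced into strictly more
# reversals in the worst case.
```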

I - Technical Papers | Pp. 111-143

Induction over the Continuum

Iraj Kalantari

We exploit the analogy between the well ordering principle for nonempty subsets of N (the set of natural numbers) and the existence of a greatest lower bound for nonempty subsets of [a, b) to formulate a principle of induction over the continuum for [a, b) analogous to induction over N. While the gist of the idea for this principle has been alluded to, our formulation seems novel. To demonstrate the efficiency of the approach, we use the new induction form to give a proof of the compactness of [a, b]. (Compactness, which plays a key role in topology, will be briefly discussed.) Although the proof is not fundamentally different from many familiar ones, it is direct and transparent. We also give other applications of the new principle.
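For concreteness, here is one common way to state such a principle. This is a reconstruction under the assumption that the half-open interval is written [a, b); it may differ in detail from Kalantari's own formulation.

```latex
% A reconstruction of an induction principle over the continuum; the interval
% [a,b) and the exact phrasing are my assumptions, not a quotation.
\textbf{Principle (induction over the continuum).}
Let $S \subseteq [a,b)$ and suppose that for every $x \in [a,b)$:
\[
  [a,x) \subseteq S \;\Longrightarrow\; [x,y) \subseteq S
  \text{ for some } y \text{ with } x < y \le b .
\]
Then $S = [a,b)$.
% Proof idea: otherwise $[a,b) \setminus S$ is nonempty and has a greatest
% lower bound $m$; then $[a,m) \subseteq S$, so the hypothesis puts $[m,y)$
% inside $S$, contradicting the choice of $m$.
```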

I - Technical Papers | Pp. 145-154

Logically Reliable Inductive Inference

Oliver Schulte

This paper is an overview of formal learning theory that emphasizes the philosophical motivation for the mathematical definitions. I introduce key concepts at a slow pace, comparing and contrasting with other approaches to inductive inference such as confirmation theory. A number of examples are discussed, some in detail, such as Goodman’s Riddle of Induction. I outline some important results of formal learning theory that are of philosophical interest. Finally, I discuss recent developments in this approach to inductive inference.

II - Philosophy Papers | Pp. 157-178

Some Philosophical Concerns about the Confidence in ‘Confident Learning’

Michèle Friend

A learner is engaged in “confident learning” when we are guaranteed that the hypotheses put out by the learner will converge in the limit. Whence the word “confidence”? We are confident that the learner will eventually learn. The question raised in the paper is: what does the learner really learn? Friend applies this question to the scenario of confident learning to undermine our confidence.

It is true that the learner learns an algorithm equivalent to that put out by the informant. However, it only has to be equivalent in some respects in order to converge. That is, the algorithm does not have to be identical to the algorithm put out by the informant. As a result, the learning is somewhat superficial. Our confidence can only be placed at a superficial level of learning, which might be all right for some purposes, but not for others.
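The gap between behavioral equivalence and identity of algorithms is easy to exhibit; the pair of procedures below is my example, not Friend's.

```python
# Illustrative sketch (my example): two extensionally equivalent procedures
# that enumerate the same set while being different algorithms. Confident
# learning guarantees convergence to something equivalent in this behavioral
# sense, not to the informant's own program.
from itertools import count, islice

def evens_by_doubling():
    for n in count():
        yield 2 * n

def evens_by_filtering():
    for n in count():
        if n % 2 == 0:
            yield n

print(list(islice(evens_by_doubling(), 5)))   # [0, 2, 4, 6, 8]
print(list(islice(evens_by_filtering(), 5)))  # [0, 2, 4, 6, 8]
# Same set, different programs: learning "up to equivalence" is weaker than
# learning the informant's exact algorithm.
```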

II - Philosophy Papers | Pp. 179-197

How to Do Things with an Infinite Regress

Kevin T. Kelly

Scientific methods may be viewed as procedures for converging to the true answer to a given empirical question. Typically, such methods converge to the truth only if certain empirical presuppositions are satisfied, which raises the question whether the presuppositions are satisfied. Another scientific method can be applied to this empirical question, and so forth, occasioning an empirical regress. So there is an obvious question about the point of such a regress. This paper explains how to assess the methodological worth of a methodological regress by solving for the strongest sense of single-method performance that can be achieved given that such a regress exists. Several types of regresses are “collapsed” into corresponding concepts of single method performance in this sense. The idea bears on some other issues in the philosophy of science, including Popper’s falsificationism and its relationship to Duhem’s problem.

II - Philosophy Papers | Pp. 199-217

Trade-Offs

Clark Glymour

Statistical inference turns on trade-offs among conflicting assumptions that provide stronger or weaker guarantees of convergence to the truth, and stronger or weaker measures of uncertainty of inference. In applied statistics—social statistics, epidemiology, economics—these assumptions are often hidden within computerized data analysis procedures, although they reflect broader epistemological issues. I claim that the content of the trade-offs, and of the epistemological issues they manifest, is clarified by placing them within the framework of formal learning theory.

II - Philosophy Papers | Pp. 219-232

Two Ways of Thinking about Induction

Norma B. Goethe

Inductive inference has always been a major concern in philosophy. This paper considers two different ways of thinking about induction: the classical way and the new, learning-theoretic way. After a brief introduction, I discuss the classical style of thinking about induction, which conceives of inductive inference as an extension of deductive argumentation. Then, I focus on the new, alternative learning-theoretic approach which sees induction as a type of computational process that converges to the truth. I conclude the paper by considering some of the important philosophical consequences of this fundamental shift in the metaphor through which philosophers conceive of inductive reasoning.

II - Philosophy Papers | Pp. 233-258