Publications catalog - journals

ACM Computing Surveys (CSUR)

Abstract/Description - provided by the publisher in English
A journal of the Association for Computing Machinery (ACM), which publishes surveys, tutorials, and special reports on all areas of computing research. Volumes are published yearly in four issues appearing in March, June, September, and December.
Keywords - provided by the publisher

Not available.

Availability

Detected institution: not detected
Period: from Mar. 1969 to Dec. 2023
Source: ACM Digital Library

Information

Resource type:

journals

Print ISSN

0360-0300

Electronic ISSN

1557-7341

Publisher

Association for Computing Machinery (ACM)

Country of publication

Estados Unidos

Publication date

Table of contents

Survey of software tools for evaluating reliability, availability, and serviceability

Allen M. Johnson; Miroslaw Malek

In computer design, it is essential to know the effectiveness of different design options in improving performance and dependability. Various software tools have been created to evaluate these parameters, applying both analytic and simulation techniques, and this paper reviews those related primarily to reliability, availability, and serviceability. The purpose, type of models used, type of systems modeled, inputs, and outputs are given for each package. Examples of some of the key modeling elements such as Markov chains, fault trees, and Petri nets are discussed. The information is compiled to facilitate recognition of similarities and differences between various models and tools and can be used to aid in selecting models and tools for a particular application or designing tools for future needs. Tools included in the evaluation are CARE-III, ARIES-82, SAVE, MARK1, HARP, SHARPE, GRAMP, SURF, SURE, ASSIST, METASAN, METFAC, ARM, and SUPER. Modeling tools, such as REL70, RELCOMP, CARE, CARSRA, and CAST, that were forerunners to some of the current tools are noted for their contributions. Modeling elements that have gained widespread use for general systems, as well as fault-tolerant systems, are included. Tools capable of modeling both repairable and nonrepairable systems, accepting constant or time-varying failure rates, and predicting reliability, availability, and serviceability parameters are surveyed.
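A minimal sketch of the simplest kind of model these packages solve: a two-state continuous-time Markov chain for one repairable component. The failure and repair rates below are hypothetical, and real tools such as SHARPE solve far larger chains.

```python
# Steady-state availability of a single repairable component,
# modeled as a two-state continuous-time Markov chain:
# UP --(failure rate lam)--> DOWN --(repair rate mu)--> UP.

def steady_state_availability(lam: float, mu: float) -> float:
    """Long-run fraction of time in the UP state: A = mu / (lam + mu)."""
    return mu / (lam + mu)

lam = 1e-4   # failures per hour (hypothetical)
mu = 1e-1    # repairs per hour (hypothetical)
print(steady_state_availability(lam, mu))  # close to 1 when repair is fast
```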

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 227-269

Hierarchical representations of collections of small rectangles

Hanan Samet

A tutorial survey is presented of hierarchical data structures for representing collections of small rectangles. Rectangles are often used as an approximation of shapes for which they serve as the minimum rectilinear enclosing object. They arise in applications in cartography as well as very large-scale integration (VLSI) design rule checking. The different data structures are discussed in terms of how they support the execution of queries involving proximity relations. The focus is on intersection and subset queries. Several types of representations are described. Some are designed for use with the plane-sweep paradigm, which works well for static collections of rectangles. Others are oriented toward dynamic collections. In this case, one representation reduces each rectangle to a point in a higher multidimensional space and treats the problem as one involving point data. The other representation is area based—that is, it depends on the physical extent of each rectangle.
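At the lowest level, the intersection and subset queries the survey focuses on reduce to tests like the following. A sketch assuming rectangles are given as (xmin, ymin, xmax, ymax) tuples; that 4-tuple is also the "point in a higher multidimensional space" representation the abstract mentions.

```python
# Axis-aligned rectangles as (xmin, ymin, xmax, ymax) tuples.

def intersects(a, b):
    """True if rectangles a and b share at least one point."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def subset(a, b):
    """True if rectangle a lies entirely inside rectangle b."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return bx1 <= ax1 and by1 <= ay1 and ax2 <= bx2 and ay2 <= by2

print(intersects((0, 0, 2, 2), (1, 1, 3, 3)))  # overlapping pair → True
print(subset((1, 1, 2, 2), (0, 0, 3, 3)))      # containment → True
```

The hierarchical structures in the survey exist to avoid running these pairwise tests against every stored rectangle.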

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 271-309

Human-computer interface development: concepts and systems for its management

H. Rex Hartson; Deborah Hix

Human-computer interface management, from a computer science viewpoint, focuses on the process of developing quality human-computer interfaces, including their representation, design, implementation, execution, evaluation, and maintenance. This survey presents important concepts of interface management: dialogue independence, structural modeling, representation, interactive tools, rapid prototyping, development methodologies, and control structures. Dialogue independence is the keystone concept upon which all the other concepts depend. It is a characteristic that separates design of the interface from design of the computational component of an application system so that modifications in either tend not to cause changes in the other. The role of a dialogue developer, whose main purpose is to create quality interfaces, is a direct result of the dialogue independence concept. Structural models of the human-computer interface serve as frameworks for understanding the elements of interfaces and for guiding the dialogue developer in their construction. Representation of the human-computer interface is accomplished by a variety of notational schemes for describing the interface. Numerous kinds of interactive tools for human-computer interface development free the dialogue developer from much of the tedium of "coding" dialogue. The early ability to observe behavior of the interface—and indeed that of the whole application system—provided by rapid prototyping increases communication among system designers, implementers, evaluators, and end-users. Methodologies for interactive system development consider interface management to be an integral part of the overall development process and give emphasis to evaluation in the development life cycle. Finally, several types of control structures govern how sequencing among dialogue and computational components is designed and executed. Numerous systems for human-computer interface management are presented to illustrate these concepts.

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 5-92

Unification: a multidisciplinary survey

Kevin Knight

The unification problem and several variants are presented. Various algorithms and data structures are discussed. Research on unification arising in several areas of computer science is surveyed; these areas include theorem proving, logic programming, and natural language processing. Sections of the paper include examples that highlight particular uses of unification and the special problems encountered. Other topics covered are resolution, higher order logic, the occur check, infinite terms, feature structures, equational theories, inheritance, parallel algorithms, generalization, lattices, and other applications of unification. The paper is intended for readers with a general computer science background—no specific knowledge of any of the above topics is assumed.
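As a concrete anchor for readers new to the problem, here is a rough sketch of syntactic first-order unification with an occur check, in the style of Robinson's algorithm. The term representation is an assumption of this sketch, not anything from the paper: variables are strings starting with an uppercase letter, constants are lowercase strings, and compound terms are tuples ('functor', arg1, ...).

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occur check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1:])
    return False

def unify(a, b, subst=None):
    """Return a most general unifier as a dict, or None on failure."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# f(X, g(Y)) unified with f(a, g(b)) yields {X: a, Y: b}.
print(unify(('f', 'X', ('g', 'Y')), ('f', 'a', ('g', 'b'))))
```

The occur check rejects cyclic bindings such as unifying X with f(X); dropping it (as some Prolog systems do for speed) is one of the variants the survey discusses.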

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 93-124

Control strategies for two-player games

Bruce Abramson

Computer games have been around for almost as long as computers. Most of these games, however, have been designed in a rather ad hoc manner because many of their basic components have never been adequately defined. In this paper some deficiencies in the standard model of computer games, the minimax model, are pointed out and the issues that a general theory must address are outlined. Most of the discussion is done in the context of control strategies, or sets of criteria for move selection. A survey of control strategies brings together results from two fields: implementations of real games and theoretical predictions derived on simplified game-trees. The interplay between these results suggests a series of open problems that have arisen during the course of both analytic experimentation and practical experience as the basis for a formal theory.
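The minimax model the paper critiques can be stated in a few lines. A sketch over an explicit game tree, with leaves as numeric payoffs for the maximizing player and internal nodes as lists of child subtrees:

```python
def minimax(node, maximizing=True):
    """Value of a game position under alternating max/min play."""
    if not isinstance(node, list):        # leaf: static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A depth-2 tree: the opponent minimizes inside each branch,
# so the root player gets max(min(3, 5), min(2, 9)) = 3.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # → 3
```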

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 137-161

Explanation-based learning: a survey of programs and perspectives

Thomas Ellman

Explanation-based learning (EBL) is a technique by which an intelligent system can learn by observing examples. EBL systems are characterized by the ability to create justified generalizations from single training instances. They are also distinguished by their reliance on background knowledge of the domain under study. Although EBL is usually viewed as a method for performing generalization, it can be viewed in other ways as well. In particular, EBL can be seen as a method that performs four different learning tasks: generalization, chunking, operationalization, and analogy.

This paper provides a general introduction to the field of explanation-based learning. Considerable emphasis is placed on showing how EBL combines the four learning tasks mentioned above. The paper begins with a presentation of an intuitive example of the EBL technique. Subsequently EBL is placed in its historical context and the relation between EBL and other areas of machine learning is described. The major part of this paper is a survey of selected EBL programs, which have been chosen to show how EBL manifests each of the four learning tasks. Attempts to formalize the EBL technique are also briefly discussed. The paper concludes with a discussion of the limitations of EBL and the major open questions in the field.

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 163-221

Applications of combinatorial designs in computer science

Charles J. Colbourn; Paul C. van Oorschot

The theory of combinatorial designs has been used in widely different areas of computation concerned with the design and analysis of both algorithms and hardware. Combinatorial designs capture a subtle balancing property that is inherent in many difficult problems and hence can provide a sophisticated tool for addressing these problems. The role of combinatorial designs in solving many problems that are basic to the field of computing is explored in this paper. Case studies of many applications of designs to computation are given; these constitute a first survey, which provides a representative sample of uses of designs. More importantly, they suggest paradigms in which designs can be used profitably in algorithm design and analysis.
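For readers new to designs, the smallest nontrivial example of the balancing property the abstract alludes to is the (7, 3, 1)-design (the Fano plane): 7 points, blocks of size 3, and every pair of points appearing together in exactly one block. A sketch that verifies this balance directly:

```python
from itertools import combinations

# The seven lines of the Fano plane over points 0..6.
fano = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
        {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

def pair_counts(blocks, n_points):
    """How many blocks contain each unordered pair of points."""
    counts = {pair: 0 for pair in combinations(range(n_points), 2)}
    for block in blocks:
        for pair in combinations(sorted(block), 2):
            counts[pair] += 1
    return counts

# Every one of the 21 pairs occurs in exactly lambda = 1 block.
print(all(c == 1 for c in pair_counts(fano, 7).values()))  # → True
```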

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 223-250

Programming languages for distributed computing systems

Henri E. Bal; Jennifer G. Steiner; Andrew S. Tanenbaum

When distributed systems first appeared, they were programmed in traditional sequential languages, usually with the addition of a few library procedures for sending and receiving messages. As distributed applications became more commonplace and more sophisticated, this ad hoc approach became less satisfactory. Researchers all over the world began designing new programming languages specifically for implementing distributed applications. These languages and their history, their underlying principles, their design, and their use are the subject of this paper.

We begin by giving our view of what a distributed system is, illustrating with examples to avoid confusion on this important and controversial point. We then describe the three main characteristics that distinguish distributed programming languages from traditional sequential languages, namely, how they deal with parallelism, communication, and partial failures. Finally, we discuss 15 representative distributed languages to give the flavor of each. These examples include languages based on message passing, rendezvous, remote procedure call, objects, and atomic transactions, as well as functional languages, logic languages, and distributed data structure languages. The paper concludes with a comprehensive bibliography listing over 200 papers on nearly 100 distributed programming languages.

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 261-322

How to write parallel programs: a guide to the perplexed

Nicholas Carriero; David Gelernter

We present a framework for parallel programming, based on three conceptual classes for understanding parallelism and three programming paradigms for implementing parallel programs. The conceptual classes are result parallelism, which centers on parallel computation of all elements in a data structure; agenda parallelism, which specifies an agenda of tasks for parallel execution; and specialist parallelism, in which specialist agents solve problems cooperatively. The programming paradigms center on live data structures that transform themselves into result data structures; distributed data structures that are accessible to many processes simultaneously; and message passing, in which all data objects are encapsulated within explicitly communicating processes. There is a rough correspondence between the conceptual classes and the programming methods, as we discuss. We begin by outlining the basic conceptual classes and programming paradigms, and by sketching an example solution under each of the three paradigms. The final section develops a simple example in greater detail, presenting and explaining code and discussing its performance on two commercial parallel computers, an 18-node shared-memory multiprocessor, and a 64-node distributed-memory hypercube. The middle section bridges the gap between the abstract and the practical by giving an overview of how the basic paradigms are implemented.

We focus on the paradigms, not on machine architecture or programming languages: The programming methods we discuss are useful on many kinds of parallel machine, and each can be expressed in several different parallel programming languages. Our programming discussion and the examples use the parallel language C-Linda for several reasons: The main paradigms are all simple to express in Linda; efficient Linda implementations exist on a wide variety of parallel machines; and a wide variety of parallel programs have been written in Linda.
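The agenda-parallelism class can be illustrated without C-Linda. A rough analogue in Python using a shared task queue consumed by worker threads; the queue stands in, loosely, for Linda's tuple space, and the squaring task body is only a placeholder:

```python
import queue
import threading

def run_agenda(tasks, n_workers=4):
    """Agenda parallelism: a bag of independent tasks, many workers."""
    work = queue.Queue()
    for t in tasks:
        work.put(t)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                t = work.get_nowait()
            except queue.Empty:
                return                 # agenda exhausted
            r = t * t                  # placeholder for a real task body
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return sorted(results)             # order of completion is nondeterministic

print(run_agenda(range(5)))  # → [0, 1, 4, 9, 16]
```

Sorting the results before returning reflects a key property of this class: the tasks are independent, so completion order carries no meaning.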

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 323-357

Conception, evolution, and application of functional programming languages

Paul Hudak

The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.
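Two of the characteristic features listed above, higher-order functions and lazy evaluation, sketched in Python as a stand-in for the Haskell/Miranda notation the paper uses:

```python
def compose(f, g):
    """Higher-order function: build a new function from two others."""
    return lambda x: f(g(x))

def naturals():
    """Lazy, conceptually infinite stream; elements appear on demand."""
    n = 0
    while True:
        yield n
        n += 1

def take(k, stream):
    """Force only the first k elements of a lazy stream."""
    return [next(stream) for _ in range(k)]

inc_then_double = compose(lambda x: 2 * x, lambda x: x + 1)
print(inc_then_double(3))      # → 8
print(take(5, naturals()))     # → [0, 1, 2, 3, 4]
```

The generator makes the infinite stream safe to define because, as in a lazy language, nothing is computed until a consumer demands it.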

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 359-411