Publications catalog - journals



ACM Computing Surveys (CSUR)

Abstract/Description – provided by the publisher in English
A journal of the Association for Computing Machinery (ACM), which publishes surveys, tutorials, and special reports on all areas of computing research. Volumes are published yearly in four issues appearing in March, June, September, and December.
Keywords – provided by the publisher

Not available.

Availability
Detected institution Period Browse Download Request
Not detected from Mar. 1969 to Dec. 2023 ACM Digital Library

Information

Resource type:

journals

Print ISSN

0360-0300

Electronic ISSN

1557-7341

Publisher

Association for Computing Machinery (ACM)

Country of publication

United States

Publication date

Table of contents

Software safety: why, what, and how

Nancy G. Leveson

Software safety issues become important when computers are used to control real-time, safety-critical processes. This survey attempts to explain why there is a problem, what the problem is, and what is known about how to solve it. Since this is a relatively new software research area, emphasis is placed on delineating the outstanding issues and research topics.

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 125-163

The effect of a priori views on the social implications of computing: the case of office automation

R. A. Hirschheim

Office automation impact has received considerable attention in the literature recently. Unfortunately, that which has appeared is diverse and contradictory. The relevant literature is reviewed, summarized, and placed in a framework containing three positions on office automation impact: optimism, pessimism, and relativism. The basic assumptions and empirical support for each position are noted and analyzed. It is proposed that these positions are based on a number of a priori views which guide the reporting and beliefs about technological impact.

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 165-195

A logical design methodology for relational databases using the extended entity-relationship model

Toby J. Teorey; Dongqing Yang; James P. Fry

A database design methodology is defined for the design of large relational databases. First, the data requirements are conceptualized using an extended entity-relationship model, with the extensions being additional semantics such as ternary relationships, optional relationships, and the generalization abstraction. The extended entity-relationship model is then decomposed according to a set of basic entity-relationship constructs, and these are transformed into candidate relations. A set of basic transformations has been developed for the three types of relations: entity relations, extended entity relations, and relationship relations. Candidate relations are further analyzed and modified to attain the highest degree of normalization desired.

The methodology produces database designs that are not only accurate representations of reality, but flexible enough to accommodate future processing requirements. It also reduces the number of data dependencies that must be analyzed, using the extended ER model conceptualization, and maintains data integrity through normalization. This approach can be implemented manually or in a simple software package as long as a "good" solution is acceptable and absolute optimality is not required.

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 197-222
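A minimal sketch of the construct-by-construct transformation the abstract above describes, mapping basic entity-relationship constructs to candidate relations. The function names, the dict-based schema encoding, and the example entities are our own illustrative assumptions, not the authors' methodology.

```python
# Sketch: turning basic ER constructs into candidate relation schemas.

def entity_to_relation(entity, attributes, key):
    """An entity type becomes an entity relation keyed by its identifier."""
    return {"name": entity, "attributes": list(attributes), "key": [key]}

def relationship_to_relation(name, left, right, cardinality):
    """An M:N relationship becomes its own relation whose key combines the
    participating entities' keys; a 1:N relationship is instead absorbed as
    a foreign key on the 'many' side."""
    left_rel, left_key = left
    right_rel, right_key = right
    if cardinality == "M:N":
        return {"name": name,
                "attributes": [left_key, right_key],
                "key": [left_key, right_key]}
    # 1:N — post the key of the 'one' side into the 'many' side's relation.
    right_rel["attributes"].append(left_key)
    return right_rel

dept = entity_to_relation("Department", ["dept_no", "dept_name"], "dept_no")
emp = entity_to_relation("Employee", ["emp_no", "emp_name"], "emp_no")
works_in = relationship_to_relation("WorksIn", (dept, "dept_no"),
                                    (emp, "emp_no"), "1:N")
print(works_in["attributes"])  # Employee now carries dept_no as a foreign key
```

The resulting candidate relations would then be checked against the desired normal form, as the abstract notes.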

Formal models of stepwise refinements of programs

Ali Mili; Jules Desharnais; Jean Raymond Gagné

Of the many ways to express program specifications, three of the most common are: as a pair of assertions, an input assertion and an output assertion; as a function mapping legal inputs to correct outputs; or as a relation containing the input/output pairs that are considered correct. The construction of programs consists of mapping a potentially complex specification into a program by recursively decomposing complex specifications into simpler ones. We show how this decomposition takes place in all three modes of specification and draw some conclusions on the nature of programming.

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 231-276
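The three specification modes listed in the abstract above can be illustrated on one tiny problem. This is our own example (integer square root), not the paper's notation:

```python
import math

# 1. As a pair of assertions: an input assertion and an output assertion.
def pre(x):            # which inputs are legal
    return x >= 0

def post(x, y):        # what counts as a correct output for input x
    return y * y <= x < (y + 1) * (y + 1)

# 2. As a function mapping legal inputs to correct outputs.
def spec_function(x):
    return math.isqrt(x)

# 3. As a relation: the set of input/output pairs considered correct.
spec_relation = {(x, y) for x in range(10) for y in range(4) if post(x, y)}

# The three modes agree on the inputs they cover.
for x in range(10):
    y = spec_function(x)
    assert pre(x) and post(x, y) and (x, y) in spec_relation
print(spec_function(8))  # 2
```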

Elimination algorithms for data flow analysis

Barbara G. Ryder; Marvin C. Paull

A unified model of a family of data flow algorithms, called elimination methods, is presented. The algorithms, which gather information about the definition and use of data in a program or a set of programs, are characterized by the manner in which they solve the systems of equations that describe data flow problems of interest. The unified model provides implementation-independent descriptions of the algorithms to facilitate comparisons among them and illustrate the sources of improvement in worst case complexity bounds. This tutorial provides a study in algorithm design, as well as a new view of these algorithms and their interrelationships.

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 277-316
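The survey's elimination methods solve the systems of data flow equations directly; as a point of contrast, this sketch solves the classic reaching-definitions equations by plain iteration to a fixed point, the baseline that elimination methods improve on. The three-node flow graph is a hypothetical example of ours:

```python
def reaching_definitions(preds, gen, kill):
    """Solve IN[n] = union of OUT[p] over predecessors p,
    OUT[n] = GEN[n] | (IN[n] - KILL[n]) by iterating until stable."""
    nodes = list(preds)
    out = {n: set() for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            in_n = set().union(*(out[p] for p in preds[n])) if preds[n] else set()
            new_out = gen[n] | (in_n - kill[n])
            if new_out != out[n]:
                out[n] = new_out
                changed = True
    return out

# Hypothetical graph: 1 -> 2 -> 3, with a loop back edge 3 -> 2.
preds = {1: [], 2: [1, 3], 3: [2]}
gen = {1: {"d1"}, 2: {"d2"}, 3: set()}
kill = {1: set(), 2: {"d1"}, 3: set()}
print(reaching_definitions(preds, gen, kill))
```

Elimination methods avoid this repeated sweeping by symbolically substituting equations into one another, region by region, which is where the complexity improvements the abstract mentions come from.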

A comparative analysis of methodologies for database schema integration

C. Batini; M. Lenzerini; S. B. Navathe

One of the fundamental principles of the database approach is that a database allows a nonredundant, unified representation of all data managed in an organization. This is achieved only when methodologies are available to support integration across organizational and application boundaries.

Methodologies for database design usually perform the design activity by separately producing several schemas, representing parts of the application, which are subsequently merged. Database schema integration is the activity of integrating the schemas of existing or proposed databases into a global, unified schema.

The aim of the paper is to provide first a unifying framework for the problem of schema integration, then a comparative review of the work done thus far in this area. Such a framework, with the associated analysis of the existing approaches, provides a basis for identifying strengths and weaknesses of individual methodologies, as well as general guidelines for future improvements and extensions.

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 323-364
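A toy sketch of the core integration step the abstract describes: merging two component schemas into one global schema while reporting conflicts that need human resolution. This is our own simplification (naive attribute-union resolution), not any of the surveyed methodologies:

```python
def merge_schemas(s1, s2):
    """Merge two {relation: [attributes]} schemas; report structural conflicts."""
    merged, conflicts = {}, []
    for name in sorted(set(s1) | set(s2)):
        a, b = s1.get(name), s2.get(name)
        if a is not None and b is not None and set(a) != set(b):
            # Same relation name, different attributes: a structural conflict.
            conflicts.append(name)
            merged[name] = sorted(set(a) | set(b))  # naive union resolution
        else:
            merged[name] = sorted(a or b)
    return merged, conflicts

payroll = {"Employee": ["emp_no", "name", "salary"]}
personnel = {"Employee": ["emp_no", "name", "dept"], "Department": ["dept_no"]}
merged, conflicts = merge_schemas(payroll, personnel)
print(conflicts)  # ['Employee'] — the attribute sets differ between schemas
```

Real methodologies differ precisely in how such conflicts (naming, structural, semantic) are detected and resolved, which is what the paper's comparative framework classifies.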

Dataflow machine architecture

Arthur H. Veen

Dataflow machines are programmable computers of which the hardware is optimized for fine-grain data-driven parallel computation. The principles and complications of data-driven execution are explained, as well as the advantages and costs of fine-grain parallelism. A general model for a dataflow machine is presented and the major design options are discussed.

Most dataflow machines described in the literature are surveyed on the basis of this model and its associated technology. For general-purpose computing the most promising dataflow machines are those that employ packet-switching communication and support general recursion. Such a recursion mechanism requires an extremely fast mechanism to map a sparsely occupied virtual space to a physical space of realistic size. No solution has yet proved fully satisfactory.

A working prototype of one processing element is described in detail. On the basis of experience with this prototype, some of the objections raised against the dataflow approach are discussed. It appears that the overhead due to fine-grain parallelism can be made acceptable by sophisticated compiling and employing special hardware for the storage of data structures. Many computing-intensive programs show sufficient parallelism. In fact, a major problem is to restrain parallelism when machine resources tend to get overloaded. Another issue that requires further investigation is the distribution of computation and data structures over the processing elements.

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 365-396
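The data-driven execution principle the abstract explains can be shown in a few lines: an instruction fires as soon as tokens are present on all of its input arcs, so independent instructions may run in any order. This is a toy software simulator of ours, not any of the surveyed machines:

```python
from collections import deque
import operator

def run_dataflow(graph, tokens):
    """graph maps node -> (op, input arcs, output arc); tokens maps arc -> value.
    A node fires as soon as every one of its input arcs carries a token."""
    ready = deque(n for n, (_, ins, _) in graph.items()
                  if all(a in tokens for a in ins))
    while ready:
        node = ready.popleft()
        op, ins, out = graph.pop(node)
        # Fire: compute and place a token on the output arc.
        tokens[out] = op(*(tokens[a] for a in ins))
        # Any remaining node whose inputs are now all present becomes ready.
        ready.extend(n for n, (_, ins2, _) in graph.items()
                     if all(a in tokens for a in ins2) and n not in ready)
    return tokens

# (a + b) * (a - b), expressed as a dataflow graph over named arcs.
graph = {"add": (operator.add, ["a", "b"], "s"),
         "sub": (operator.sub, ["a", "b"], "d"),
         "mul": (operator.mul, ["s", "d"], "p")}
print(run_dataflow(graph, {"a": 7, "b": 3})["p"])  # 40
```

Here "add" and "sub" are both ready at once, which is exactly the fine-grain parallelism a dataflow machine exploits in hardware.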

Different perspectives on information systems: problems and solutions

Kalle Lyytinen

The paper puts information systems (IS) research dealing with IS problems into perspective. IS problems are surveyed and classified. Using the IS research framework suggested by Ives, Hamilton, and Davis, research into IS problems is classified into several perspectives whose relevance in coping with the problems is discussed. Research perspectives focusing on IS operations environment, IS development process, IS development organization, IS development methods, and IS theories are distinguished. The paper concludes with suggestions for future research and how to deal with IS problems in practice.

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 5-46

An analysis of geometric modeling in database systems

Alfons Kemper; Mechtild Wallrath

The data-modeling and computational requirements for integrated computer aided manufacturing (CAM) databases are analyzed, and the most common representation schemes for modeling solid geometric objects in a computer are described. The primitive instancing model, the boundary representation, and the constructive solid geometry model are presented from the viewpoint of database representation. Depending on the representation scheme, one can apply geometric transformations to the stored geometric objects. The standard transformations (scaling, translation, and rotation) are outlined with respect to the data structure aspects. Some of the more recent developments in the area of engineering databases with regard to supporting these representation schemes are then explored, and a classification scheme for technical database management systems is presented that distinguishes the systems according to their level of object orientation: structural or behavioral object orientation. First, several systems that are extensions to the relational model are surveyed; then the functional data model DAPLEX, the nonnormalized relational model NF², and the database system R²D², which provides abstract data types in the NF² model, are described.

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 47-91
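The standard transformations named in the abstract above (scaling, translation, rotation) are conventionally stored and applied as homogeneous matrices, which lets a database treat all three uniformly. A minimal 2-D sketch of that convention, our illustration rather than anything from the paper:

```python
import math

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, point):
    """Apply a 3x3 homogeneous transform to a 2-D point."""
    x, y = point
    v = (x, y, 1)
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(2))

# Translate the point (1, 0), then rotate it a quarter turn about the origin.
p = apply(translate(2, 0), (1, 0))    # -> (3, 0)
print(apply(rotate(math.pi / 2), p))  # approximately (0, 3)
```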

Types and persistence in database programming languages

Malcolm P. Atkinson; O. Peter Buneman

Traditionally, the interface between a programming language and a database has either been through a set of relatively low-level subroutine calls, or it has required some form of embedding of one language in another. Recently, the necessity of integrating database and programming language techniques has received some long-overdue recognition. In response, a number of attempts have been made to construct programming languages with completely integrated database management systems. These languages, which we term database programming languages, are the subject of this review.

The design of these languages is still in its infancy, and the purpose of writing this review is to identify the areas in which further research is required. In particular, we focus on the problems of providing a uniform type system and mechanisms for data to persist. Of particular importance in solving these problems are issues of polymorphism, type inheritance, object identity, and the choice of structures to represent sets of similar values. Our conclusion is that there are areas of programming language research (modules, polymorphism, persistence, and inheritance) that must be developed and applied to achieve the goal of a useful and consistent database programming language. Other research areas of equal importance, such as implementation, transaction handling, and concurrency, are not examined here in any detail.

Keywords: General Computer Science; Theoretical Computer Science.

Pp. 105-170
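The persistence problem the review focuses on (letting ordinary program values outlive a run without hand-written low-level database calls) can be tasted with Python's standard shelve module. This is an analogy of ours, not one of the reviewed languages:

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "store")

# First "run": an ordinary in-memory data structure is simply persisted.
with shelve.open(path) as db:
    db["employees"] = [{"name": "Ada", "dept": "R&D"}]

# Later "run": the value comes back with its structure intact, with no
# manual flattening into records or subroutine calls.
with shelve.open(path) as db:
    print(db["employees"][0]["name"])  # Ada
```

What the reviewed database programming languages add beyond this is a uniform type system over persistent and transient data, which a bolt-on store like this cannot check.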