Publications catalog - journals
ACM Computing Surveys (CSUR)
Abstract/Description – provided by the publisher in English
A journal of the Association for Computing Machinery (ACM), which publishes surveys, tutorials, and special reports on all areas of computing research. Volumes are published yearly in four issues appearing in March, June, September, and December.
Keywords – provided by the publisher
Not available.
Availability
Detected institution | Period | Browse | Download | Request |
---|---|---|---|---|
Not detected | from Mar. 1969 to Dec. 2023 | ACM Digital Library | | |
Information
Resource type:
journals
Print ISSN
0360-0300
Electronic ISSN
1557-7341
Publisher
Association for Computing Machinery (ACM)
Country of publication
United States
Publication date
1969-
Subject coverage
Table of contents
An updated survey of GA-based multiobjective optimization techniques
Carlos A. Coello
After using evolutionary techniques for single-objective optimization for more than two decades, the incorporation of more than one objective in the fitness function has finally become a popular area of research. As a consequence, many new evolutionary-based approaches and variations of existing techniques have recently been published in the technical literature. The purpose of this paper is to summarize and organize the information on these current approaches, emphasizing the importance of analyzing the operations research techniques on which most of them are based, in an attempt to motivate researchers to look into these mathematical programming approaches for new ways of exploiting the search capabilities of evolutionary algorithms. Furthermore, a summary of the main algorithms behind these approaches is provided, together with a brief criticism that includes their advantages and disadvantages, degree of applicability, and some known applications. Finally, future trends in this area and some possible paths for further research are also addressed.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 109-143
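As background for the survey above (standard definitions, with notation assumed here rather than taken from the paper): GA-based multiobjective methods typically rank candidate solutions by Pareto dominance. For k objectives f_1, ..., f_k to be minimized,

    x \prec y \;\iff\; \forall i \in \{1,\dots,k\}:\; f_i(x) \le f_i(y)
               \;\;\wedge\;\; \exists j \in \{1,\dots,k\}:\; f_j(x) < f_j(y),

and a feasible x* is Pareto optimal iff no feasible x satisfies x ≺ x*; the image of these solutions in objective space is the Pareto front that such algorithms try to approximate.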
Information retrieval on the web
Mei Kobayashi; Koichi Takeda
In this paper we review studies of the growth of the Internet and technologies that are useful for information search and retrieval on the Web. We present data on the Internet from several different sources, e.g., current as well as projected number of users, hosts, and Web sites. Although numerical figures vary, overall trends cited by the sources are consistent and point to exponential growth in the past and in the coming decade. Hence it is not surprising that about 85% of Internet users surveyed claim to use search engines and search services to find specific information. The same surveys show, however, that users are not satisfied with the performance of the current generation of search engines; the slow retrieval speed, communication delays, and poor quality of retrieved results (e.g., noise and broken links) are commonly cited problems. We discuss the development of new techniques targeted to resolve some of the problems associated with Web-based information retrieval and speculate on future trends.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 144-173
Data prefetch mechanisms
Steven P. Vanderwiel; David J. Lilja
The expanding gap between microprocessor and DRAM performance has necessitated the use of increasingly aggressive techniques designed to reduce or hide the latency of main memory access. Although large cache hierarchies have proven to be effective in reducing this latency for the most frequently used data, it is still not uncommon for many programs to spend more than half their run times stalled on memory requests. Data prefetching has been proposed as a technique for hiding the access latency of data referencing patterns that defeat caching strategies. Rather than waiting for a cache miss to initiate a memory fetch, data prefetching anticipates such misses and issues a fetch to the memory system in advance of the actual memory reference. To be effective, prefetching must be implemented in such a way that prefetches are timely, useful, and introduce little overhead. Secondary effects such as cache pollution and increased memory bandwidth requirements must also be taken into consideration. Despite these obstacles, prefetching has the potential to significantly improve overall program execution time by overlapping computation with memory accesses. Prefetching strategies are diverse, and no single strategy has yet been proposed that provides optimal performance. The following survey examines several alternative approaches, and discusses the design tradeoffs involved when implementing a data prefetch strategy.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 174-199
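As a concrete illustration of the idea described in the abstract above (issuing a fetch ahead of the actual reference), GCC and Clang expose a software prefetch hint via __builtin_prefetch. The sketch below is a generic example, not code from the survey, and PREFETCH_DISTANCE is an illustrative tuning knob:

    #include <stddef.h>

    /* How many elements ahead to hint; a hypothetical value, tuned per machine. */
    #define PREFETCH_DISTANCE 16

    /* Sum an array while hinting the hardware a fixed distance ahead of the
     * current reference, so the line is (ideally) in cache by the time it is used. */
    double sum_with_prefetch(const double *a, size_t n) {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (i + PREFETCH_DISTANCE < n)
                __builtin_prefetch(&a[i + PREFETCH_DISTANCE], 0 /* read */, 1 /* low temporal locality */);
            sum += a[i];
        }
        return sum;
    }

The distance must be chosen so prefetches are timely but not so early that they pollute the cache, echoing the "timely, useful, little overhead" criteria in the abstract.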
Electronic document addressing
Helen Ashman
The management of electronic document collections is fundamentally different from the management of paper documents. The ephemeral nature of some electronic documents means that the document address (i.e., reference details of the document) can become incorrect some time after coming into use, resulting in references, such as index entries and hypertext links, failing to correctly address the document they describe. A classic case of invalidated references is on the World Wide Web: links that point to a named resource fail when the domain name, file name, or any other aspect of the addressed resource is changed, resulting in the well-known Error 404. Additionally, there are other errors which arise from changes to document collections.
This paper surveys the strategies used both in World Wide Web software and in other hypertext systems for managing the integrity of references and hence the integrity of links. Some strategies are *preventative*, not permitting errors to occur; others are *corrective*, discovering reference errors and sometimes attempting to correct them; while the last strategy is *adaptive*, because references are calculated on a just-in-time basis, according to the current state of the document collection.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 201-212
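To make the corrective strategy concrete: a link-integrity checker only needs to detect the Error 404 condition the abstract mentions. The minimal sketch below uses libcurl as an illustrative choice (not something the paper prescribes) and issues a HEAD request, then inspects the status code:

    #include <curl/curl.h>
    #include <stdio.h>

    /* Returns the HTTP status code for `url`, or -1 on a transport error.
     * A corrective link manager would flag 404s (and other 4xx/5xx codes) for repair. */
    long check_link(const char *url) {
        CURL *h = curl_easy_init();
        if (!h) return -1;
        long status = -1;
        curl_easy_setopt(h, CURLOPT_URL, url);
        curl_easy_setopt(h, CURLOPT_NOBODY, 1L);   /* HEAD request: headers only */
        if (curl_easy_perform(h) == CURLE_OK)
            curl_easy_getinfo(h, CURLINFO_RESPONSE_CODE, &status);
        curl_easy_cleanup(h);
        return status;
    }

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        printf("status: %ld\n", check_link("https://example.org/"));
        curl_global_cleanup();
        return 0;
    }

A preventative or adaptive system would instead avoid or recompute addresses rather than repair them after the fact.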
Techniques for obtaining high performance in Java programs
Iffat H. Kazi; Howard H. Chen; Berdenia Stanley; David J. Lilja
This survey describes research directions in techniques to improve the performance of programs written in the Java programming language. The standard technique for Java execution is interpretation, which provides for extensive portability of programs. A Java interpreter dynamically executes Java bytecodes, which comprise the instruction set of the Java Virtual Machine (JVM). Execution time performance of Java programs can be improved through compilation, possibly at the expense of portability. Various types of Java compilers have been proposed, including Just-In-Time (JIT) compilers that compile bytecode into native processor instructions on the fly; direct compilers that directly translate the Java source code into the target processor's native language; and bytecode-to-source translators that generate either native code or an intermediate language, such as C, from the bytecodes. Additional techniques, including bytecode optimization, dynamic compilation, and executing Java programs in parallel, attempt to improve Java run-time performance while maintaining Java's portability. Another alternative for executing Java programs is a Java processor that implements the JVM directly in hardware. In this survey, we discuss the basic features, and the advantages and disadvantages, of the various Java execution techniques. We also discuss the various Java benchmarks that are being used by the Java community for performance evaluation of the different techniques. Finally, we conclude with a comparison of the performance of the alternative Java execution techniques based on reported results.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 213-240
Process migration
Dejan S. Milojičić; Fred Douglis; Yves Paindaveine; Richard Wheeler; Songnian Zhou
Process migration is the act of transferring a process between two machines. It enables dynamic load distribution, fault resilience, eased system administration, and data access locality. Despite these goals and ongoing research efforts, migration has not achieved widespread use. With the increasing deployment of distributed systems in general, and distributed operating systems in particular, process migration is again receiving more attention in both research and product development. As high-performance facilities shift from supercomputers to networks of workstations, and with the ever-increasing role of the World Wide Web, we expect migration to play a more important role and eventually to be widely adopted.
This survey reviews the field of process migration by summarizing the key concepts and giving an overview of the most important implementations. Design and implementation issues of process migration are analyzed in general, and then revisited for each of the case studies described: MOSIX, Sprite, Mach, and Load Sharing Facility. The benefits and drawbacks of process migration depend on the details of implementation and, therefore, this paper focuses on practical matters. This survey will help in understanding the potentials of process migration and why it has not caught on.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 241-299
An introduction to quantum computing for non-physicists
Eleanor Rieffel; Wolfgang Polak
Richard Feynman's observation that certain quantum mechanical effects cannot be simulated efficiently on a computer led to speculation that computation in general could be done more efficiently if it used these quantum effects. This speculation proved justified when Peter Shor described a polynomial time quantum algorithm for factoring integers.
In quantum systems, the computational space increases exponentially with the size of the system, which enables exponential parallelism. This parallelism could lead to exponentially faster quantum algorithms than are possible classically. The catch is that accessing the results, which requires measurement, proves tricky and requires new nontraditional programming techniques.
The aim of this paper is to guide computer scientists through the barriers that separate quantum computing from conventional computing. We introduce basic principles of quantum mechanics to explain where the power of quantum computers comes from and why it is difficult to harness. We describe quantum cryptography, teleportation, and dense coding. Various approaches to exploiting the power of quantum parallelism are explained. We conclude with a discussion of quantum error correction.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 300-335
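To make the exponential growth claimed in the abstract above explicit (standard notation, not drawn from the paper): the joint state of n qubits is a unit vector in a 2^n-dimensional complex vector space,

    |\psi\rangle \;=\; \sum_{x \in \{0,1\}^n} \alpha_x\, |x\rangle,
    \qquad \alpha_x \in \mathbb{C}, \qquad \sum_{x} |\alpha_x|^2 = 1.

A single operation acts on all 2^n amplitudes at once, which is the exponential parallelism; measurement, however, yields only one n-bit string x with probability |\alpha_x|^2, which is why extracting answers requires the nontraditional techniques the paper describes.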
Logical models of argument
Carlos Iván Chesñevar; Ana Gabriela Maguitman; Ronald Prescott Loui
Logical models of argument formalize commonsense reasoning while taking process and computation seriously. This survey discusses the main ideas that characterize different logical models of argument. It presents the formal features of a few main approaches to the modeling of argumentation. We trace the evolution of argumentation from the mid-1980s, when argument systems emerged as an alternative to nonmonotonic formalisms based on classical logic, to the present, as argumentation is embedded in different complex systems for real-world applications, allowing more formal work to be done in different areas, such as AI and law, case-based reasoning, and negotiation among intelligent agents.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 337-383
Extracting usability information from user interface events
David M. Hilbert; David F. Redmiles
Modern window-based user interface systems generate user interface events as natural products of their normal operation. Because such events can be automatically captured and because they indicate user behavior with respect to an application's user interface, they have long been regarded as a potentially fruitful source of information regarding application usage and usability. However, because user interface events are typically voluminous and rich in detail, automated support is generally required to extract information at a level of abstraction that is useful to investigators interested in analyzing application usage or evaluating usability. This survey examines computer-aided techniques used by HCI practitioners and researchers to extract usability-related information from user interface events. A framework is presented to help HCI practitioners and researchers categorize and compare the approaches that have been, or might fruitfully be, applied to this problem. Because many of the techniques in the research literature have not been evaluated in practice, this survey provides a conceptual evaluation to help identify some of the relative merits and drawbacks of the various classes of approaches. Ideas for future research in this area are also presented. This survey addresses the following questions: How might user interface events be used in evaluating usability? How are user interface events related to other forms of usability data? What are the key challenges faced by investigators wishing to exploit this data? What approaches have been brought to bear on this problem and how do they compare to one another? What are some of the important open research questions in this area?
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 384-421
The state of the art in distributed query processing
Donald Kossmann
Distributed data processing is becoming a reality. Businesses want to do it for many reasons, and they often must do it in order to stay competitive. While much of the infrastructure for distributed data processing is already there (e.g., modern network technology), a number of issues make distributed data processing still a complex undertaking: (1) distributed systems can become very large, involving thousands of heterogeneous sites including PCs and mainframe server machines; (2) the state of a distributed system changes rapidly because the load of sites varies over time and new sites are added to the system; (3) legacy systems need to be integrated; such legacy systems usually have not been designed for distributed data processing and now need to interact with other (modern) systems in a distributed environment. This paper presents the state of the art of query processing for distributed database and information systems. The paper presents the "textbook" architecture for distributed query processing and a series of techniques that are particularly useful for distributed database systems. These techniques include special join techniques, techniques to exploit intraquery parallelism, techniques to reduce communication costs, and techniques to exploit caching and replication of data. Furthermore, the paper discusses different kinds of distributed systems such as client-server, middleware (multitier), and heterogeneous database systems, and shows how query processing works in these systems.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 422-469
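One classical communication-reducing join technique of the kind the abstract above mentions is the semijoin reduction, sketched here in generic notation rather than the paper's own presentation. To join relation R stored at site 1 with S stored at site 2 on attribute A, site 2 first ships only the join column \pi_A(S), and site 1 returns the reduced relation, using the identity

    R \bowtie_A S \;=\; \bigl(R \ltimes \pi_A(S)\bigr) \bowtie_A S.

The reduction pays off whenever the bytes shipped for \pi_A(S) plus the reduced R \ltimes \pi_A(S) are smaller than shipping R in full.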