Publications catalog - journals
ACM Computing Surveys (CSUR)
Abstract/Description – provided by the publisher in English
A journal of the Association for Computing Machinery (ACM), which publishes surveys, tutorials, and special reports on all areas of computing research. Volumes are published yearly in four issues appearing in March, June, September, and December.
Keywords – provided by the publisher
Not available.
Availability
Detected institution | Period | Browse | Download | Request
---|---|---|---|---
Not detected | Mar. 1969 to Dec. 2023 | ACM Digital Library | |
Information
Resource type:
journals
Print ISSN
0360-0300
Electronic ISSN
1557-7341
Publisher
Association for Computing Machinery (ACM)
Country of publication
United States
Publication date
1969-
Subject coverage
Table of contents
Local ratio
Reuven Bar-Yehuda; Keren Bendel; Ari Freund; Dror Rawitz
The local ratio technique is a methodology for the design and analysis of algorithms for a broad range of optimization problems. The technique is remarkably simple and elegant, and yet can be applied to several classical and fundamental problems (including covering problems, packing problems, and scheduling problems). The local ratio technique uses elementary math and requires combinatorial insight into the structure and properties of the problem at hand. Typically, when using the technique, one has to invent a weight function for a problem instance under which every "reasonable" solution is "good." The local ratio technique is closely related to the primal-dual schema, though it is not based on weak LP duality (which is the basis of the primal-dual approach) since it is not based on linear programming. In this survey, we introduce the local ratio technique and demonstrate its use in the design and analysis of algorithms for various problems. We trace the evolution path of the technique since its inception in the 1980s, culminating with the most recent development, namely, fractional local ratio, which can be viewed as a new LP rounding technique.
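As a minimal illustration of the weight-decomposition idea described in this abstract, the sketch below applies the local ratio approach to weighted vertex cover, a classic 2-approximation use of the technique; the function name and graph representation are assumptions for this example only, not taken from the article.

```python
# Illustrative sketch: local ratio for weighted vertex cover (2-approximation).
# The graph format and function name are assumptions for this example only.

def local_ratio_vertex_cover(edges, weight):
    """edges: list of (u, v) pairs; weight: dict vertex -> non-negative weight.
    Returns a vertex cover whose weight is at most twice the optimum."""
    w = dict(weight)  # residual weights, decomposed edge by edge
    for u, v in edges:
        # Subtract a "local" weight function that charges both endpoints equally.
        eps = min(w[u], w[v])
        w[u] -= eps
        w[v] -= eps
    # Vertices whose weight was fully consumed form the cover: after each edge
    # is processed, at least one of its endpoints has residual weight zero.
    return {u for u in w if w[u] == 0}

# Example: a triangle with unit weights -> two vertices suffice.
print(local_ratio_vertex_cover([(1, 2), (2, 3), (1, 3)], {1: 1, 2: 1, 3: 1}))
```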
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 422-463
Lineage retrieval for scientific data processing: a survey
Rajendra Bose; James Frew
Scientific research relies as much on the dissemination and exchange of data sets as on the publication of conclusions. Accurately tracking the lineage (origin and subsequent processing history) of scientific data sets is thus imperative for the complete documentation of scientific work. Researchers are effectively prevented from determining, preserving, or providing the lineage of the computational data products they use and create, however, because of the lack of a definitive model for lineage retrieval and a poor fit between current data management tools and scientific software. Based on a comprehensive survey of lineage research and previous prototypes, we present a metamodel to help identify and assess the basic components of systems that provide lineage retrieval for scientific data products.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 1-28
Access control in collaborative systems
William Tolone; Gail-Joon Ahn; Tanusree Pai; Seng-Phil Hong
Balancing the competing goals of collaboration and security is a difficult, multidimensional problem. Collaborative systems often focus on building useful connections among people, tools, and information while security seeks to ensure the availability, confidentiality, and integrity of these same elements. In this article, we focus on one important dimension of this problem: access control. The article examines existing access control models as applied to collaboration, highlighting not only the benefits, but also the weaknesses of these models.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 29-41
Optimistic replication
Yasushi Saito; Marc Shapiro
Data replication is a key technology in distributed systems that enables higher availability and performance. This article surveys optimistic replication algorithms. They allow replica contents to diverge in the short term to support concurrent work practices and tolerate failures in low-quality communication links. The importance of such techniques is increasing as collaboration through wide-area and mobile networks becomes popular. Optimistic replication deploys algorithms not seen in traditional “pessimistic” systems. Instead of synchronous replica coordination, an optimistic algorithm propagates changes in the background, discovers conflicts after they happen, and reaches agreement on the final contents incrementally. We explore the solution space for optimistic replication algorithms. This article identifies key challenges facing optimistic replication systems (ordering operations, detecting and resolving conflicts, propagating changes efficiently, and bounding replica divergence) and provides a comprehensive survey of techniques developed for addressing these challenges.
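One classic mechanism in this space is the version vector, which detects concurrent (and therefore potentially conflicting) updates by comparing per-replica update counters. The sketch below is a minimal illustration under assumed data structures, not an algorithm reproduced from the article.

```python
# Minimal sketch (not from the survey itself): version vectors, one classic way
# optimistic replication systems detect concurrent, potentially conflicting updates.

def compare(vv_a, vv_b):
    """Compare two version vectors (dict: replica_id -> update counter).
    Returns 'equal', 'a_dominates', 'b_dominates', or 'concurrent' (conflict)."""
    keys = set(vv_a) | set(vv_b)
    a_ge = all(vv_a.get(k, 0) >= vv_b.get(k, 0) for k in keys)
    b_ge = all(vv_b.get(k, 0) >= vv_a.get(k, 0) for k in keys)
    if a_ge and b_ge:
        return "equal"
    if a_ge:
        return "a_dominates"   # replica A has seen every update B has
    if b_ge:
        return "b_dominates"
    return "concurrent"        # neither dominates: updates happened in parallel

# Replicas A and B both updated independently after their last synchronization:
print(compare({"A": 2, "B": 1}, {"A": 1, "B": 2}))  # -> 'concurrent' (conflict)
```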
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 42-81
Lowering the barriers to programming
Caitlin Kelleher; Randy Pausch
Since the early 1960s, researchers have built a number of programming languages and environments with the intention of making programming accessible to a larger number of people. This article presents a taxonomy of languages and environments designed to make programming more accessible to novice programmers of all ages. The systems are organized first by their primary goal (either to teach programming or to use programming to empower their users) and then by each system's authors' approach to making learning to program easier for novice programmers. The article explains all categories in the taxonomy, provides a brief description of the systems in each category, and suggests some avenues for future work in novice programming environments and languages.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 83-137
Algorithms and data structures for flash memories
Eran Gal; Sivan Toledo
Flash memory is a type of electrically-erasable programmable read-only memory (EEPROM). Because flash memories are nonvolatile and relatively dense, they are now used to store files and other persistent objects in handheld computers, mobile phones, digital cameras, portable music players, and many other computer systems in which magnetic disks are inappropriate. Flash, like earlier EEPROM devices, suffers from two limitations. First, bits can only be cleared by erasing a large block of memory. Second, each block can only sustain a limited number of erasures, after which it can no longer reliably store data. Due to these limitations, sophisticated data structures and algorithms are required to effectively use flash memories. These algorithms and data structures support efficient not-in-place updates of data, reduce the number of erasures, and level the wear of the blocks in the device. This survey presents these algorithms and data structures, many of which have only been described in patents until now.
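The sketch below illustrates the not-in-place update idea the abstract refers to, with a toy flash translation layer that remaps logical pages to fresh physical pages and counts block erasures. The class name, sizes, and structure are invented for illustration; the systems described in the survey are far more elaborate.

```python
# Simplified sketch of out-of-place updates and erase-count tracking in a toy
# flash translation layer. Names and sizes are assumptions for illustration only.

PAGES_PER_BLOCK = 4

class TinyFTL:
    """Toy flash translation layer illustrating not-in-place updates."""
    def __init__(self, num_blocks):
        self.flash = {}                        # (block, page) -> data
        self.mapping = {}                      # logical page -> (block, page)
        self.erase_count = [0] * num_blocks    # wear per block
        self.free = [(b, p) for b in range(num_blocks)
                     for p in range(PAGES_PER_BLOCK)]

    def write(self, logical_page, data):
        # Flash cannot overwrite a programmed page: program a fresh page,
        # remap the logical page, and leave the old page as garbage.
        target = self.free.pop(0)
        self.flash[target] = data
        self.mapping[logical_page] = target

    def read(self, logical_page):
        return self.flash[self.mapping[logical_page]]

    def erase(self, block):
        # Erasure reclaims a whole block at once and increases its wear.
        for p in range(PAGES_PER_BLOCK):
            self.flash.pop((block, p), None)
        self.erase_count[block] += 1

ftl = TinyFTL(num_blocks=2)
ftl.write(0, b"v1")
ftl.write(0, b"v2")   # the update lands on a new physical page
print(ftl.read(0))    # -> b'v2'
```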
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 138-163
Topology control in wireless ad hoc and sensor networks
Paolo Santi
Topology Control (TC) is one of the most important techniques used in wireless ad hoc and sensor networks to reduce energy consumption (which is essential to extend the network operational time) and radio interference (with a positive effect on the network traffic carrying capacity). The goal of this technique is to control the topology of the graph representing the communication links between network nodes with the purpose of maintaining some global graph property (e.g., connectivity), while reducing energy consumption and/or interference that are strictly related to the nodes' transmitting range. In this article, we state several problems related to topology control in wireless ad hoc and sensor networks, and we survey state-of-the-art solutions which have been proposed to tackle them. We also outline several directions for further research which we hope will motivate researchers to undertake additional studies in this field.
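As a minimal illustration of assigning reduced transmitting ranges while preserving connectivity, the sketch below derives per-node ranges from a minimum spanning tree over node positions (each node reaches at least as far as its longest incident tree edge). This is a simplified example of the general idea, not a specific algorithm from the survey.

```python
# Illustrative sketch: minimal per-node transmit ranges that keep the network
# connected, via a minimum spanning tree over Euclidean distances.
import math

def ranges_for_connectivity(nodes):
    """nodes: list of (x, y) positions. Returns per-node transmit ranges."""
    n = len(nodes)
    dist = lambda i, j: math.dist(nodes[i], nodes[j])
    in_tree = {0}
    ranges = [0.0] * n
    while len(in_tree) < n:                     # Prim's algorithm
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        ranges[i] = max(ranges[i], dist(i, j))  # both endpoints must reach
        ranges[j] = max(ranges[j], dist(i, j))  # across this tree edge
        in_tree.add(j)
    return ranges

print(ranges_for_connectivity([(0, 0), (1, 0), (5, 0)]))  # -> [1.0, 4.0, 4.0]
```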
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 164-194
Power reduction techniques for microprocessor systems
Vasanth Venkatachalam; Michael Franz
Power consumption is a major factor that limits the performance of computers. We survey the “state of the art” in techniques that reduce the total power consumed by a microprocessor system over time. These techniques are applied at various levels ranging from circuits to architectures, architectures to system software, and system software to applications. They also include holistic approaches that will become more important over the next decade. We conclude that power management is a multifaceted discipline that is continually expanding with new techniques being developed at every level. These techniques may eventually allow computers to break through the “power wall” and achieve unprecedented levels of performance, versatility, and reliability. Yet it remains too early to tell which techniques will ultimately solve the power problem.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 195-237
Survey and taxonomy of packet classification techniques
David E. Taylor
Packet classification is an enabling function for a variety of Internet applications including quality of service, security, monitoring, and multimedia communications. In order to classify a packet as belonging to a particular flow or set of flows, network nodes must perform a search over a set of filters using multiple fields of the packet as the search key. In general, there have been two major threads of research addressing packet classification: algorithmic and architectural. A few pioneering groups of researchers posed the problem, provided complexity bounds, and offered a collection of algorithmic solutions. Subsequently, the design space has been vigorously explored by many offering new algorithms and improvements on existing algorithms. Given the inability of early algorithms to meet performance constraints imposed by high speed links, researchers in industry and academia devised architectural solutions to the problem. This thread of research produced the most widely-used packet classification device technology, Ternary Content Addressable Memory (TCAM). New architectural research combines intelligent algorithms and novel architectures to eliminate many of the unfavorable characteristics of current TCAMs. We observe that the community appears to be converging on a combined algorithmic and architectural approach to the problem. Using a taxonomy based on the high-level approach to the problem and a minimal set of running examples, we provide a survey of the seminal and recent solutions to the problem. It is our hope to foster a deeper understanding of the various packet classification techniques while providing a useful framework for discerning relationships and distinctions.
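To make the classification task concrete, the sketch below performs a naive linear search over a small multi-field filter set with first-match semantics. The filter format and rule values are assumptions for illustration, and the example deliberately ignores the performance constraints that motivate the algorithms and architectures the survey covers.

```python
# Minimal sketch of the packet classification task itself (not one of the
# surveyed algorithms): linear search over a multi-field filter set, first
# (highest-priority) match wins, as in typical ACL semantics.
import ipaddress

FILTERS = [  # (src prefix, dst prefix, protocol or None, dst port range, action)
    ("10.0.0.0/8", "192.168.1.0/24", "tcp", (80, 80),   "permit"),
    ("0.0.0.0/0",  "192.168.1.0/24", "tcp", (0, 1023),  "deny"),
    ("0.0.0.0/0",  "0.0.0.0/0",      None,  (0, 65535), "permit"),
]

def classify(src, dst, proto, dport):
    for src_pfx, dst_pfx, f_proto, (lo, hi), action in FILTERS:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(src_pfx)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(dst_pfx)
                and (f_proto is None or f_proto == proto)
                and lo <= dport <= hi):
            return action   # first matching filter decides
    return "deny"           # default action if no filter matches

print(classify("10.1.2.3", "192.168.1.10", "tcp", 80))    # -> 'permit'
print(classify("172.16.0.1", "192.168.1.10", "tcp", 22))  # -> 'deny'
```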
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 238-275
A survey and analysis of Electronic Healthcare Record standards
Marco Eichelberg; Thomas Aden; Jörg Riesmeier; Asuman Dogac; Gokce B. Laleci
Medical information systems today store clinical information about patients in all kinds of proprietary formats. To address the resulting interoperability problems, several Electronic Healthcare Record standards that structure the clinical content for the purpose of exchange are currently under development. In this article, we present a survey of the most relevant Electronic Healthcare Record standards, examine the level of interoperability they provide, and assess their functionality in terms of content structure, access services, multimedia support, and security. We further investigate the complementarity of the standards and assess their market relevance.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 277-315