Publication catalog - journals
ACM Computing Surveys (CSUR)
Abstract/Description - provided by the publisher in English
A journal of the Association for Computing Machinery (ACM), which publishes surveys, tutorials, and special reports on all areas of computing research. Volumes are published yearly in four issues appearing in March, June, September, and December.
Keywords - provided by the publisher
Not available.
Availability

| Detected institution | Period | Browse | Download | Request |
|---|---|---|---|---|
| Not detected | from Mar. 1969 to Dec. 2023 | ACM Digital Library | | |
Information
Resource type:
journals
Print ISSN
0360-0300
Electronic ISSN
1557-7341
Publisher
Association for Computing Machinery (ACM)
Country of publication
United States
Publication date
1969-
Subject coverage
Table of contents
doi: 10.1145/2723701
Accounting and Accountability in Content Distribution Architectures
Diarmuid Ó Coileáin; Donal O'Mahony
Many content distribution architectures offer desirable elements that lead to less bandwidth usage, reduced network congestion, higher content availability, and reduced costs. However, their level of adoption and utilization in commercial environments depends on the level of content *accounting* and *accountability* they offer. Content accounting refers to any information that a content distributor needs to track relating to the delivery of content to its intended consumers. In contrast, content accountability refers to the willingness of the communicating infrastructure to produce accurate and verifiable information about the content distribution process. This article surveys existing and proposed future content delivery architectures, detailing their methodologies for providing efficient low-cost content distribution, content accounting, and accountability across trustworthy and untrustworthy infrastructures. We use two methods to help identify the drawbacks and merits of these architectures. The first is a taxonomy for accounting information based on our analysis of logging information gathered from the surveyed systems. The second is a generic model for content distribution based on a synthesis of desirable elements from the surveyed architectures. These methods help highlight key architectural elements required for efficient low-cost content distribution. Finally, we identify outstanding challenges related to establishing accountability in accounting information produced for content distributed across trusted and untrusted infrastructures.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 1-35
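The distinction the abstract draws between accounting (recording delivery information) and accountability (making that information verifiable) can be illustrated with a minimal sketch. The record fields, key, and identifiers below are hypothetical and not taken from any surveyed architecture; the point is only that an authenticator such as an HMAC turns a plain accounting record into one the distributor can verify:

```python
import hashlib
import hmac
import json

# Assumption for illustration: distributor and delivery node share a key.
SECRET = b"shared-key-between-distributor-and-cache"

def make_receipt(content_id, consumer_id, nbytes):
    """Build a delivery record and attach a MAC over its canonical form."""
    record = {"content": content_id, "consumer": consumer_id, "bytes": nbytes}
    payload = json.dumps(record, sort_keys=True).encode()
    record["mac"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify_receipt(record):
    """Recompute the MAC over everything except the MAC itself."""
    payload = json.dumps(
        {k: v for k, v in record.items() if k != "mac"}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["mac"])

receipt = make_receipt("video-42", "client-7", 1048576)
print(verify_receipt(receipt))  # True; tampering with any field breaks it
```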
doi: 10.1145/2755621
Assessing the Structural Complexity of Computer and Communication Networks
Andrzej Kamisiński; Piotr Chołda; Andrzej Jajszczyk
In this tutorial, 17 structural complexity indices are presented and compared, each representing one of the following categories: adjacency- and distance-based metrics, Shannon entropy-based metrics, product measures, subgraph-based metrics, and path- and walk-based metrics. The applicability of these indices to computer and communication networks is evaluated with the aid of different elementary, specifically designed, random, and real network topologies. Based on this evaluation, the advantages and disadvantages of particular metrics are identified. In addition, their general properties and runtimes are assessed, and a general view of structural network complexity is presented.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 1-36
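Of the metric categories listed, the Shannon entropy-based indices are the simplest to sketch. One common variant (an assumption here; the tutorial compares several entropy formulations) takes the entropy of a graph's degree distribution, so regular topologies score 0 and heterogeneous ones score higher:

```python
from collections import Counter
from math import log2

def degree_entropy(adj):
    """Shannon entropy (bits) of a graph's degree distribution.

    `adj` is an adjacency list: {node: set of neighbors}.
    """
    degrees = [len(nbrs) for nbrs in adj.values()]
    counts = Counter(degrees)  # degree value -> number of nodes with it
    n = len(degrees)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A 4-node star: the hub has degree 3, the three leaves have degree 1,
# so H = -(1/4)log2(1/4) - (3/4)log2(3/4) ≈ 0.811 bits.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(degree_entropy(star))
```

A ring or any regular topology yields 0, matching the intuition that all nodes look alike structurally.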
doi: 10.1145/2771588
Processing Social Media Messages in Mass Emergency
Muhammad Imran; Carlos Castillo; Fernando Diaz; Sarah Vieweg
Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges, including parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages, examine their particularities, and methodically work through a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media; we cover these approaches and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness and begin to look at supporting decision making and coordinating emergency-response actions.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 1-38
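The filtering and classifying operations the abstract maps these challenges onto can be sketched with a deliberately naive keyword matcher. The category names and keyword lists are invented for illustration; the systems surveyed use trained statistical classifiers rather than hand-written rules:

```python
import re

# Hypothetical categories and keyword lists, for illustration only.
CATEGORIES = {
    "infrastructure_damage": {"bridge", "collapsed", "road", "outage"},
    "requests_for_help": {"help", "trapped", "rescue", "need"},
}

def classify(message):
    """Return the categories whose keywords appear in the message.

    Acts as both a filter (no match -> empty list, message discarded)
    and a classifier (matches -> category labels).
    """
    tokens = set(re.findall(r"[a-z]+", message.lower()))
    return sorted(cat for cat, kws in CATEGORIES.items() if tokens & kws)

print(classify("Bridge collapsed near the river, people trapped, need rescue"))
# -> ['infrastructure_damage', 'requests_for_help']
```

Ranking, aggregating, and summarizing would then operate on the stream of labeled messages this step produces.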
doi: 10.1145/2719920
Text and Data Mining Techniques in Adverse Drug Reaction Detection
Sarvnaz Karimi; Chen Wang; Alejandro Metke-Jimenez; Raj Gaire; Cecile Paris
We review data mining and related computer science techniques that have been studied in the area of drug safety to identify signals of adverse drug reactions from different data sources, such as spontaneous reporting databases, electronic health records, and medical literature. Development of such techniques has become more crucial for public health, especially with the growth of data repositories that include either reports of adverse drug reactions, which require fast processing for discovering signals of adverse reactions, or data sources that may contain such signals but require data or text mining techniques to discover them. In order to highlight the importance of contributions made by computer scientists in this area so far, we categorize and review the existing approaches, and most importantly, we identify areas where more research should be undertaken.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 1-39
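For the spontaneous-reporting-database setting mentioned above, a classic signal-detection statistic in pharmacovigilance is the proportional reporting ratio (PRR), computed from a 2x2 contingency table of report counts. Whether this survey covers PRR specifically is not stated in the abstract; the sketch only illustrates what "discovering signals" can mean concretely:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 contingency table.

    a: reports with the drug and the reaction of interest
    b: reports with the drug and other reactions
    c: reports without the drug, with the reaction
    d: reports without the drug, other reactions

    PRR > 1 means the reaction is reported disproportionately often
    for the drug; thresholds such as PRR >= 2 are commonly used as
    candidate signals for review.
    """
    return (a / (a + b)) / (c / (c + d))

# Invented counts: (20/100) / (100/9900) = 19.8 -> a strong candidate signal.
print(prr(20, 80, 100, 9800))
```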
doi: 10.1145/2732197
A Survey on Implicit Surface Polygonization
B. R. de Araújo; Daniel S. Lopes; Pauline Jepp; Joaquim A. Jorge; Brian Wyvill
Implicit surfaces (IS) are commonly used in image creation, modeling environments, modeling objects, and scientific data visualization. In this article, we present a survey of different techniques for fast visualization of IS. The main classes of visualization algorithms are identified along with the advantages of each in the context of the different types of IS commonly used in computer graphics. We focus closely on polygonization methods, as they are the most suited to fast visualization. Classification and comparison of existing approaches are presented using criteria extracted from current research. This enables the identification of the best strategies according to specific requirements, such as speed, accuracy, quality, or stylization.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 1-39
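The core step shared by most polygonization methods is locating where the implicit function changes sign along a grid edge and placing a mesh vertex there by linear interpolation. A minimal sketch on a unit sphere (note that the interpolated vertex only approximates the true crossing, which is one reason finer grids give better meshes):

```python
def sphere(x, y, z, r=1.0):
    """Implicit function: negative inside, positive outside, zero on the surface."""
    return x * x + y * y + z * z - r * r

def edge_vertex(p0, p1, f):
    """If f changes sign between p0 and p1, linearly interpolate the
    crossing point - the basic step of marching-cubes-style polygonizers.
    Returns None when the edge does not cross the surface."""
    v0, v1 = f(*p0), f(*p1)
    if v0 * v1 > 0:
        return None
    t = v0 / (v0 - v1)  # fraction along the edge where the linear model hits 0
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# f(-1) at the origin, f(+3) at (2,0,0): crossing estimated at x = 0.5,
# while the true surface point is x = 1.0 - the linear approximation error.
print(edge_vertex((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), sphere))
```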
doi: 10.1145/2791577
Systems Approaches to Tackling Configuration Errors
Tianyin Xu; Yuanyuan Zhou
In recent years, configuration errors (i.e., misconfigurations) have become one of the dominant causes of system failures, resulting in many severe service outages and downtime. Unfortunately, it is notoriously difficult for system users (e.g., administrators and operators) to prevent, detect, and troubleshoot configuration errors due to the complexity of the configurations as well as the systems under configuration. As a result, the cost of resolving configuration errors is often tremendous, both in compensating for service disruptions and in diagnosing and recovering from failures. Their prevalence, severity, and cost have made configuration errors among the thorniest system problems needing to be addressed.

This survey article provides a holistic and structured overview of the systems approaches that tackle configuration errors. To understand the problem fundamentally, we first discuss the characteristics of configuration errors and the challenges of tackling such errors. Then, we discuss the state-of-the-art systems approaches that address different types of configuration errors in different scenarios. Our primary goal is to equip the stakeholder with a better understanding of configuration errors and of the potential solutions for resolving them across the spectrum of system development and management. To inspire follow-up research, we further discuss open problems with regard to system configuration. To the best of our knowledge, this is the first survey on the topic of tackling configuration errors.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 1-41
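One family of approaches to detecting misconfigurations validates configuration values against constraints before the system runs. The option names and constraints below are invented for illustration; systems in this literature often infer such constraints automatically rather than relying on a hand-written spec:

```python
# Hypothetical spec: option name -> (expected type, validity predicate).
SPEC = {
    "max_connections": (int, lambda v: v > 0),
    "listen_port": (int, lambda v: 1 <= v <= 65535),
    "log_level": (str, lambda v: v in {"debug", "info", "warn", "error"}),
}

def check_config(config):
    """Return a list of (key, reason) pairs for entries violating the spec."""
    errors = []
    for key, value in config.items():
        if key not in SPEC:
            errors.append((key, "unknown option"))  # catches typo'd names
            continue
        expected_type, is_valid = SPEC[key]
        if not isinstance(value, expected_type):
            errors.append((key, f"expected {expected_type.__name__}"))
        elif not is_valid(value):
            errors.append((key, "value out of range"))
    return errors

# A typo'd key and an out-of-range port are both flagged before startup.
print(check_config({"listen_port": 70000, "log_lvl": "info"}))
```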
doi: 10.1145/2733306
Securing Android
Sufatrio; Darell J. J. Tan; Tong-Wei Chua; Vrizlynn L. L. Thing
Recent years have seen global adoption of smart mobile devices, particularly those based on Android. However, Android's widespread adoption is marred by increasingly rampant malware threats. This article gives a survey and taxonomy of existing works that secure Android devices. Based on Android app deployment stages, the taxonomy enables us to analyze schemes that share a similar objective and approach and to inspect their key differences. Additionally, this article highlights the limitations of existing works and current challenges. It thus distills the state of the art in Android security research and identifies potential research directions for safeguarding the billions (and counting) of Android-run devices.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 1-45
doi: 10.1145/2742642
A Survey of Multiobjective Evolutionary Clustering
Anirban Mukhopadhyay; Ujjwal Maulik; Sanghamitra Bandyopadhyay
Data clustering is a popular unsupervised data mining tool that is used for partitioning a given dataset into homogeneous groups based on some similarity/dissimilarity metric. Traditional clustering algorithms often make prior assumptions about the cluster structure and adopt a corresponding suitable objective function that is optimized either through classical techniques or metaheuristic approaches. These algorithms are known to perform poorly when the cluster assumptions do not hold in the data. Multiobjective clustering, in which multiple objective functions are simultaneously optimized, has emerged as an attractive and robust alternative in such situations. In particular, the application of multiobjective evolutionary algorithms for clustering has become popular in the past decade because of their population-based nature. Here, we provide a comprehensive and critical survey of the multitude of multiobjective evolutionary clustering techniques existing in the literature. The techniques are classified according to the encoding strategies adopted, objective functions, evolutionary operators, strategy for maintaining nondominated solutions, and the method of selection of the final solution. The pros and cons of the different approaches are discussed. Finally, we discuss some real-life applications of multiobjective clustering in the domains of image segmentation, bioinformatics, web mining, and so forth.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 1-46
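The "strategy for maintaining nondominated solutions" used above as a classification axis rests on Pareto dominance: with several clustering objectives optimized at once, no single best solution exists, only a front of mutually nondominated ones. A minimal sketch of the dominance test and archive filtering (objective vectors invented; both objectives minimized):

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(solutions):
    """Filter an archive down to its nondominated front."""
    return [s for s in solutions if not any(dominates(t, s) for t in solutions)]

# Invented objective pairs, e.g. (cluster compactness, connectivity penalty).
archive = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(nondominated(archive))  # (3.0, 4.0) is dominated by (2.0, 3.0)
```

The surveyed algorithms differ mainly in how they keep this front bounded and diverse during evolution, and in how one final clustering is selected from it.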
doi: 10.1145/2785988
Lifetime Reliability Enhancement of Microprocessors
Hyejeong Hong; Jaeil Lim; Hyunyul Lim; Sungho Kang
Ensuring the lifetime reliability of microprocessors has become increasingly critical. Continuous scaling and rising temperatures due to growing power density are threatening lifetime reliability. Negative bias temperature instability (NBTI) has been known for decades, but its impact was long insignificant compared to other factors. Aggressive scaling, however, makes NBTI the most serious threat to chip lifetime reliability in today's and future process technologies. The delay of microprocessors gradually increases over time through alternating stress and recovery phases, until it exceeds the value required to meet design constraints, resulting in failed systems. In this article, the mechanism of NBTI and its effects on lifetime reliability are presented, and various techniques to mitigate NBTI degradation in microprocessors are introduced. The mitigation can be addressed at either the circuit level or the architectural level. Circuit-level techniques include design-time techniques such as transistor sizing and NBTI-aware synthesis. Forward body biasing and adaptive voltage scaling are adaptive techniques that mitigate NBTI degradation at the circuit level by controlling the threshold voltage or supply voltage to hide the lengthened delay caused by NBTI degradation. Reliability has traditionally been regarded as something to be addressed by chip manufacturers; however, recent work brings lifetime reliability problems to the architectural level. Architectural techniques can reduce the cost added by circuit-level techniques, which are based on worst-case degradation estimation. Traditional low-power and thermal management techniques can be successfully extended to deal with reliability problems, since aging depends on power consumption and temperature. Self-repair is another option to enhance the lifetime of microprocessors using either core-level or lower-level redundancy. With a growing thermal crisis and constant scaling, lifetime reliability requires more intensive research in conjunction with other design issues.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 1-25
doi: 10.1145/2807593
The Multiple Facets of Software Diversity
Benoit Baudry; Martin Monperrus
Early experiments with software diversity in the mid-1970s investigated N-version programming and recovery blocks to increase the reliability of embedded systems. Four decades later, the literature about software diversity has expanded in multiple directions: goals (fault tolerance, security, software engineering), means (managed or automated diversity), and analytical studies (quantification of diversity and its impact). Our article contributes to the field of software diversity as the first work that adopts an inclusive vision of the area, with an emphasis on the most recent advances in the field. This survey includes classical work about design and data diversity for fault tolerance, as well as the cybersecurity literature that investigates randomization at different system levels. It broadens this standard scope of diversity to include the study and exploitation of natural diversity and the management of diverse software products. Our survey emphasizes work from 2000 to the present. The targeted audience is researchers and practitioners in one of the surveyed fields who miss the big picture of software diversity. Assembling the multiple facets of this fascinating topic sheds new light on the field.
Keywords: General Computer Science; Theoretical Computer Science.
Pp. 1-26