Publications catalog - books

High Performance Computing: HiPC 2006: 13th International Conference, Bangalore, India, December 18-21, 2006, Proceedings

Yves Robert; Manish Parashar; Ramamurthy Badrinath; Viktor K. Prasanna (eds.)

Conference: 13th International Conference on High-Performance Computing (HiPC), Bangalore, India, December 18-21, 2006

Abstract/description – provided by the publisher

Not available.

Keywords – provided by the publisher

Processor Architectures; Software Engineering/Programming and Operating Systems; Computer Systems Organization and Communication Networks; Algorithm Analysis and Problem Complexity; Computation by Abstract Devices; Mathematics of Computing

Availability
Detected institution: Not detected
Year of publication: 2006
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-68039-0

Electronic ISBN

978-3-540-68040-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

Navigability of Small World Networks

Pierre Fraigniaud

The “Small World Phenomenon,” a.k.a. “Six Degrees of Separation Between Individuals,” was identified by Stanley Milgram at the end of the 1960s. Milgram’s experiment demonstrated that letters from arbitrary sources and bound to an arbitrary target can be transmitted along short chains of closely related individuals based solely on some characteristics of the target (occupation, state of living, etc.). In his seminal work, Jon Kleinberg modeled and analyzed this phenomenon in the framework of “augmented networks”. A network is navigable if it can be augmented by random links so that greedy routing performs in a polylogarithmic expected number of steps between any pair of nodes. This talk will survey the recent results in this field. In particular, the connections between navigability and low doubling dimension will be described. The possible use of the concept of navigable networks in the framework of Grid Computing and P2P networks will also be discussed.
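Kleinberg’s construction can be made concrete with a small simulation. The sketch below (my illustration, not code from the talk) augments an n×n torus so that each node gets one long-range contact drawn with probability proportional to d^-2, the harmonic exponent for which greedy routing is known to take O(log² n) expected steps; to keep the sketch short, it resamples the long-range link at each visit instead of fixing one link per node in advance.

```python
import random

def lattice_dist(a, b, n):
    """Manhattan distance on an n x n torus."""
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return min(dx, n - dx) + min(dy, n - dy)

def long_range_link(u, n):
    """Sample one long-range contact for u with Pr ~ d(u, v)^-2
    (Kleinberg's harmonic distribution)."""
    others = [(x, y) for x in range(n) for y in range(n) if (x, y) != u]
    weights = [lattice_dist(u, v, n) ** -2 for v in others]
    return random.choices(others, weights)[0]

def greedy_route(s, t, n):
    """Forward to whichever neighbor (grid or long-range) is closest
    to the target t, counting steps until t is reached."""
    u, steps = s, 0
    while u != t:
        x, y = u
        candidates = [((x + 1) % n, y), ((x - 1) % n, y),
                      (x, (y + 1) % n), (x, (y - 1) % n),
                      long_range_link(u, n)]
        u = min(candidates, key=lambda v: lattice_dist(v, t, n))
        steps += 1
    return steps

random.seed(0)
n = 50
routes = [greedy_route((random.randrange(n), random.randrange(n)),
                       (random.randrange(n), random.randrange(n)), n)
          for _ in range(20)]
print(sum(routes) / len(routes))  # small relative to n: polylogarithmic growth
```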

- Keynote Addresses | Pp. 1-1

Opportunities and Challenges for Future Generation Grid Research

Dennis Gannon

Grid systems are now a standard approach to solving problems in large scale, multidisciplinary scientific endeavors. These research groups are often geographically distributed, and to conduct their research, they need to share access to physical resources such as supercomputers, large databases, on-line instruments and distributed applications. Grid infrastructure helps solve their problems because it can provide a layer of middleware that virtualizes the access to these resources. The users see a single coherent computer system instead of a complex network of distributed resources. In most cases, users enter the grid through a “gateway” portal, which may be a web portal or a “thick” desktop client. The gateway gives the user a way to browse metadata about computational experiments, to access data products, to monitor active workflows, and to run applications and share results. The user can focus on the problems of science and not computer systems. All of this is made possible because of the Service Oriented Architecture (SOA) that underlies the core Grid middleware. In this presentation, we will look at several examples of successful Scientific Grids and Gateways. We will also describe the fundamentals of the web service SOAs that work best in Grid systems. We will illustrate these ideas with an example called LEAD, which is a Grid designed to improve our ability to predict meso-scale weather events such as hurricanes, typhoons and tornadoes. We will also describe how this entire approach to service virtualization is now being used in industry to better use the resources of a single, but distributed, business enterprise. While a great deal of progress has been made, there are many exciting and unsolved problems. As we go through the talk, we will highlight these challenges and research opportunities.

- Keynote Addresses | Pp. 2-3

Software Challenges for Multicore Computing

Kenneth Kennedy

Current technological trends have led chip manufacturers to move to designs that include multiple processors, or cores, on each chip. In some cases, these processors are homogeneous (e.g., Intel, AMD) and in others they are heterogeneous (e.g., Cell). It is clear that these designs represent the future of computational chips and that they will effect enormous changes in the way software is designed and implemented to take advantage of their power. In this talk, I will survey issues that will be critical to making systems, particularly HPC systems based on multicore chips, usable by application developers. The talk presents a series of “big questions” (not to be confused with “grand challenges”) about software, particularly compilers and programming tools, for multicore chips. Topics include utilization of bandwidth (both on and off chip), on-chip memory hierarchy (shared versus separate caches), methods for exploitation of parallelism (data parallelism versus pipelining), and inter-core synchronization mechanisms. The talk will also address the special challenges presented by on-chip heterogeneous parallelism such as that found on the IBM Cell chip and planned for future Intel designs. I will conclude with a discussion of my own group’s preliminary research on compilers and tools for systems based on multicore chips and future research directions for the computer science community as a whole.

- Keynote Addresses | Pp. 4-4

Imaging-Based Systems Biology

Gene Myers

Arguably the most significant contribution of the human genome project is that we can now build a recombinant construct of every gene and every promoter in C. elegans (worm), D. melanogaster (fly), M. musculus (mouse), and H. sapiens (human). These include fluorescent proteins and other markers that can be induced at controlled time points via a change in temperature, light, or chemistry. Combined with tremendous advances in light and electron microscopy in recent years, I believe we are now poised to visualize the meso-scale of the cell, and development and small organs (e.g., a fly’s brain) at the resolution of individual cells. Toward this end, my group is working on a number of preliminary imaging projects along these lines. These include (a) studies of development and gene expression in worms and flies, (b) neural patterning in flies and mice, and (c) the interpretation of signals from a new sub-wavelength resolution light microscope. We describe preliminary results on limited data sets and extrapolate on what we might be able to infer from such data. We further speculate on the potential implications of such work for the future of molecular biology.

- Keynote Addresses | Pp. 5-5

Advanced Scientific Computing: An Extraordinary Tool for Extraordinary Science

Jeffrey Wadsworth

Work is under way throughout the world to realize the promise of petascale computing and to complete the emergence of simulation as the third leg of science, joining theory and experiment. By the end of this decade, our ability to attack previously unsolvable problems will provide the basis for transformational advances in science and engineering that will enable us to address global challenges in energy, environment, and national security. In the United States, three government agencies are pursuing petascale initiatives: the Defense Advanced Research Projects Agency, the National Science Foundation, and the U.S. Department of Energy (DOE). I will describe the new capability for high-end science that is being fielded by DOE’s Leadership Computing Facility at Oak Ridge National Laboratory and discuss how this capability will be applied to such computationally challenging problems as climate modeling and prediction, astrophysics, nuclear fusion, systems biology, and materials design at the nanoscale.

- Keynote Addresses | Pp. 6-6

High-Performance Computing for the Masses

Zhiwei Xu

In this talk, the speaker will review the history and trends of high-performance computing from the users’ viewpoint. Evolutionary milestones in workload, usage modes, programming models and systems architectures will be identified. Essential challenges and bottlenecks will be analyzed. He will highlight China’s newly formed e-Nation strategy for 2006-2020, and summarize R&D efforts in China to provide a high-performance computing infrastructure that could benefit half of the population of China.

- Keynote Addresses | Pp. 7-7

Conquering Complexity in Information Systems

Harrick Vin

The complexity of large-scale “information plants” – consisting of a number of hardware and software components – has been increasing rapidly and is fast approaching a barrier. I argue that continuous “evolution” is a key contributor to this complexity. Information plants evolve to accommodate new software functionalities, hardware technology, application and user requirements, as well as changes in operating conditions (workload, faults, etc.). Today, evolving information plants in a timely manner while maintaining desired levels of performance, stability, and security is an art; system evolution tasks are manual and intuition-based. In this talk, I will illustrate, through examples, the complexity resulting from evolution in modern information systems, and advocate a broad research agenda in computing to conquer this complexity through managed evolution.

Keywords: Software Component; User Requirement; Doctoral Fellowship; Hardware Technology; Program Committee Member.

- Banquet Speech | Pp. 8-8

Algorithmic Ramifications of Prefetching in Memory Hierarchy

Akshat Verma; Sandeep Sen

External memory models, the most notable being the I-O model [3], capture the effects of memory hierarchy and aid in algorithm design. More than a decade of architectural advancements have led to new features not captured in the I-O model – most notably the prefetching capability. We propose a relatively simple Prefetch model that incorporates data prefetching into the traditional I-O models and show how to design algorithms that can attain close to peak memory bandwidth. Unlike (the inverse of) memory latency, memory bandwidth is much closer to the processing speed; intelligent use of prefetching can therefore considerably mitigate the I-O bottleneck. For some fundamental problems, our algorithms attain running times approaching those of the idealized Random Access Machine under reasonable assumptions. Our work also explains the significantly superior performance of I-O efficient algorithms on systems that support prefetching compared to ones that do not.
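The bandwidth-versus-latency argument can be illustrated with a back-of-the-envelope cost model. In the toy sketch below (parameter values and function names are illustrative, not taken from the paper), a scan of N blocks is latency-bound when reads are serialized, but approaches the bandwidth bound once enough prefetches are kept in flight:

```python
def scan_time(n_blocks, block_bytes, latency_s, bandwidth_bps, depth):
    """Toy cost model: with `depth` prefetch requests in flight, the
    amortized per-block cost is the larger of the transfer time and
    the latency divided by the prefetch depth."""
    transfer = block_bytes / bandwidth_bps
    per_block = max(transfer, latency_s / depth)
    return latency_s + n_blocks * per_block  # one latency to fill the pipeline

# Illustrative numbers: 10^6 blocks of 4 KiB, 100 us latency, 1 GB/s bandwidth.
for depth in (1, 4, 16, 64):
    print(f"prefetch depth {depth:3d}: "
          f"{scan_time(1_000_000, 4096, 100e-6, 1e9, depth):6.1f} s")
# Depth 1 is latency-bound (~100 s); larger depths approach the
# bandwidth bound of ~4.1 s, the regime the Prefetch model targets.
```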

Keywords: Memory Bandwidth; Memory Hierarchy; Parallel Disk; Prediction Sequence; Fast Memory.

- Plenary Session – Best Papers | Pp. 9-21

A Cache-Partitioning Aware Replacement Policy for Chip Multiprocessors

Haakon Dybdahl; Per Stenström; Lasse Natvig

Chip multiprocessors (CMPs) usually employ shared, last-level caches to use on-chip memory resources effectively. Unfortunately, conventional replacement policies applied to shared caches fail to partition memory resources among cores to achieve an optimal execution throughput. This paper presents a novel replacement policy that dynamically estimates how many misses would be eliminated if one more block per set were allocated to a certain processor, taking into account the extra misses for some other processor. Our implementation makes novel use of shadow tags for the estimation. We show that it can yield 50% higher execution throughput on a 4-way CMP, and, in contrast to previously proposed schemes, we did not observe any noticeable degradation of performance for any of the SPEC2000 applications we used.
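The abstract does not spell out the bookkeeping, but the shadow-tag idea can be sketched as follows (class and counter names are my own, for illustration): for each core, keep a small per-set stack of tags recently evicted from that core’s blocks; a miss that hits in this shadow stack is one that an extra way would have turned into a hit, which gives the marginal-gain estimate the policy needs.

```python
from collections import defaultdict, deque

class ShadowTags:
    """Per-core, per-set LRU stack of recently evicted tags. A miss that
    hits in the stack would have been a cache hit if this core had one
    more way in the set (a sketch of the estimator; details assumed)."""
    def __init__(self, extra_ways=1):
        self.stacks = defaultdict(lambda: deque(maxlen=extra_ways))
        self.would_have_hit = defaultdict(int)  # marginal gain, per core

    def on_eviction(self, core, set_idx, tag):
        self.stacks[(core, set_idx)].appendleft(tag)

    def on_miss(self, core, set_idx, tag):
        stack = self.stacks[(core, set_idx)]
        if tag in stack:
            self.would_have_hit[core] += 1
            stack.remove(tag)

# A partitioning-aware policy can then grow core A's share of a set when
# would_have_hit[A] outweighs the misses core B is estimated to gain.
```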

Keywords: Cache Size; Replacement Policy; Cache Line; Cache Block; Cache Space.

- Plenary Session – Best Papers | Pp. 22-34

A Security-Oriented Task Scheduler for Heterogeneous Distributed Systems

Tao Xie; Xiao Qin

High quality of security is increasingly critical for applications running on heterogeneous distributed systems, where processors operate at different speeds and communication channels have different bandwidths. Although there are a few scheduling algorithms in the literature for heterogeneous distributed systems, they generally do not take into account the security requirements of applications. In this paper, we propose a novel heuristic scheduling algorithm, SATS, which is conducive to improving the security of heterogeneous distributed systems. First, we formalize a concept of security heterogeneity for our scheduling model in the context of distributed systems. Next, we devise the SATS algorithm, aiming to schedule tasks so as to maximize the probability that all tasks are executed without any risk of being attacked. Empirical results demonstrate that, with respect to security and performance, the proposed scheduling algorithm outperforms existing approaches for heterogeneous distributed systems.
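The abstract names “degree of security deficiency” and “risk-free probability” without giving formulas, so the following is only a minimal greedy sketch under assumed definitions, not the paper’s SATS: each node advertises a security level, each task carries a security demand, and the risk-free probability decays exponentially with the deficiency between them.

```python
import math

def risk_free_prob(demand, security, k=1.0):
    """Assumed model: probability of attack-free execution decays
    exponentially with the security deficiency (demand - supply)."""
    return math.exp(-k * max(0.0, demand - security))

def schedule(tasks, nodes):
    """Greedy sketch: place each task, largest first, on the node
    maximizing risk-free probability per unit of finish time,
    trading security against performance.
    tasks: (work, security_demand); nodes: (speed, security_level)."""
    load, plan = [0.0] * len(nodes), []
    for work, demand in sorted(tasks, reverse=True):
        def score(i):
            speed, sec = nodes[i]
            return risk_free_prob(demand, sec) / (load[i] + work / speed)
        best = max(range(len(nodes)), key=score)
        load[best] += work / nodes[best][0]
        plan.append((work, demand, best))
    return plan

tasks = [(10, 0.8), (4, 0.3), (7, 0.9), (2, 0.1)]   # illustrative values
nodes = [(1.0, 0.5), (2.0, 0.9)]                    # (speed, security level)
for work, demand, node in schedule(tasks, nodes):
    print(f"task(work={work}, demand={demand}) -> node {node}")
```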

Keywords: Security heterogeneity; heterogeneous distributed system; scheduling; degree of security deficiency; risk-free probability.

- Session I – Scheduling and Load Balancing | Pp. 35-46