Publications catalogue - books
Algorithms and Computation: 18th International Symposium, ISAAC 2007, Sendai, Japan, December 17-19, 2007. Proceedings
Takeshi Tokuyama (ed.)
In conference: 18th International Symposium on Algorithms and Computation (ISAAC). Sendai, Japan. December 17-19, 2007
Abstract/Description - provided by the publisher
Not available.
Keywords - provided by the publisher
Theory of Computation; Algorithm Analysis and Problem Complexity; Discrete Mathematics in Computer Science; Numeric Computing; Computer Communication Networks; Computer Graphics
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2007 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-77118-0
Electronic ISBN
978-3-540-77120-3
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Subject coverage
Table of contents
Kinetic Maintenance of Mobile k-Centres on Trees
Stephane Durocher; Christophe Paul
Let C denote a set of n mobile clients, each of which follows a continuous trajectory on a weighted tree T. We establish tight bounds on the maximum relative velocity of the 1-centre and 2-centre of C. When each client in C moves with linear motion along a path on T, we derive a tight bound of Θ(n) on the complexity of the motion of the 1-centre and corresponding bounds of O(n^2 α(n)) and Ω(n^2) for a 2-centre, where α(n) denotes the inverse Ackermann function. We describe efficient algorithms for calculating the trajectories of the 1-centre and 2-centre of C: the 1-centre can be found in optimal time O(n log n) when the distance function between mobile clients is known, or O(n^2) when the function must be calculated, and a 2-centre can be found in time O(n^2 log n). These algorithms lend themselves to implementation within the framework of kinetic data structures, resulting in structures that are compact, efficient, responsive, and local.
- 4A Data Structure I | Pp. 341-352
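As a point of reference for the abstract's terminology, the sketch below computes the static 1-centre of a weighted tree by brute force: the vertex minimising the maximum weighted distance to a client set. This is only the stationary special case (and restricted to tree vertices); the paper's kinetic setting with moving clients is far more general, and the instance here is hypothetical.

```python
def tree_distances(n, edges):
    """All-pairs distances in a weighted tree via DFS from each vertex."""
    adj = {v: [] for v in range(n)}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [[0.0] * n for _ in range(n)]
    for s in range(n):
        stack, seen = [(s, 0.0)], {s}
        while stack:
            u, d = stack.pop()
            dist[s][u] = d
            for v, w in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, d + w))
    return dist

def one_centre(n, edges, clients):
    """Vertex minimising the maximum weighted distance to the clients."""
    dist = tree_distances(n, edges)
    return min(range(n), key=lambda v: max(dist[v][c] for c in clients))

# A small path-shaped tree: 0 -(1)- 1 -(2)- 2 -(1)- 3
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0)]
print(one_centre(4, edges, clients=[0, 3]))  # vertex 1 (eccentricity 3)
```

In the kinetic setting the client positions are functions of time, so this eccentricity minimum itself traces a trajectory; the paper bounds the combinatorial complexity of that trajectory.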
Checking Value-Sensitive Data Structures in Sublinear Space
Michael T. Goodrich; Jonathan Z. Sun
Checking value-sensitive data structures in sublinear space has been an open problem for over a decade. In this paper, we suggest a novel approach to solving it. We show that, in combination with other techniques, a previous method for checking value-insensitive data structures in O(log n) space can be extended to check the more complicated value-sensitive data structures, using O(log n) space as well. We present the theoretical model of checking data structures and discuss the types of invasions a checker might bring to the data structure server. We also provide our idea of designing sublinear-space checkers for value-sensitive data structures and give a concrete example: an O(log n)-space checker for search data structures (SDS).
- 4A Data Structure I | Pp. 353-364
Manipulation in Games
Raphael Eidenbenz; Yvonne Anne Oswald; Stefan Schmid; Roger Wattenhofer
This paper studies the extent to which the social welfare of a game can be influenced by an interested third party within economic reason, i.e., taking the implementation cost into account. Besides considering classic, benevolent mechanism designers, we also analyze malicious mechanism designers. For instance, this paper shows that a malicious mechanism designer can often corrupt games and worsen the players' situation to a larger extent than the amount of money invested. Surprisingly, no money is needed at all in some cases. We provide algorithms for finding the so-called leverage in games and show that for optimistic mechanism designers, computing the leverage or approximations thereof is NP-hard.
- 4B Game Theory | Pp. 365-376
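The implementation-cost idea can be made concrete with a rough sketch, assuming Monderer-Tennenholtz-style payments (the paper's leverage notion is more refined): a third party promises each player a non-negative payment for playing a target strategy, just large enough to make that strategy dominant, and the amount disbursed at the resulting profile is the implementation cost. The prisoner's dilemma payoffs below are a hypothetical instance, not from the paper.

```python
# Payoff matrices u[i][s0][s1] for a prisoner's dilemma (0 = cooperate, 1 = defect).
u = [
    [[3, 0], [5, 1]],   # row player
    [[3, 5], [0, 1]],   # column player
]

def cost_to_implement(target):
    """Payments disbursed at `target` when each player's target strategy
    is subsidised into a dominant strategy."""
    total = 0
    for i in (0, 1):
        ti, tj = target[i], target[1 - i]
        def payoff(si, sj, i=i):
            s = [0, 0]
            s[i], s[1 - i] = si, sj
            return u[i][s[0]][s[1]]
        # payment promised for playing target_i against the opponent's target
        best_dev = max(payoff(si, tj) for si in (0, 1))
        total += max(0, best_dev - payoff(ti, tj))
    return total

welfare = lambda s: u[0][s[0]][s[1]] + u[1][s[0]][s[1]]
target = (0, 0)                           # implement mutual cooperation
print(cost_to_implement(target))          # 2 + 2 = 4
print(welfare(target) - welfare((1, 1)))  # welfare gain over mutual defection: 4
```

Here the welfare gain exactly matches the payments needed, so this naive subsidy yields no net leverage; the paper asks when (and for which designers, benevolent or malicious) a better trade-off exists.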
Using Nash Implementation to Achieve Better Frugality Ratios
Chien-Chung Huang; Ming-Yang Kao; Xiang-Yang Li; Weizhao Wang
Most of the recent work on algorithmic mechanism design exploits the solution concept of dominant strategy equilibria. Such work designs a proper payment scheme so that selfish agents maximize their utility by truthfully revealing their types. It has been pointed out that these truthful mechanisms, the most famous among them being the VCG mechanisms, often incur high payments and frugality ratios. In this work, we exploit the solution concept of Nash implementation to overcome this problem. Our mechanisms induce a set of Nash equilibria so that selfish agents have an incentive to act based on a Nash equilibrium. We prove that our mechanisms enjoy substantial advantages over the truthful mechanisms in terms of payment and frugality.
- 4B Game Theory | Pp. 377-389
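The VCG overpayment problem the abstract alludes to shows up already in tiny path auctions. The sketch below is a hypothetical instance, not from the paper: each edge is a selfish agent with a private cost, the buyer wants a cheapest s-t path, and VCG pays each winning edge its cost plus the harm its absence would cause.

```python
import heapq

def shortest_path(n, edges, s, t, banned=frozenset()):
    """Dijkstra returning (cost, list of edge indices), skipping banned edges."""
    adj = {v: [] for v in range(n)}
    for idx, (u, v, c) in enumerate(edges):
        if idx in banned:
            continue
        adj[u].append((v, c, idx))
        adj[v].append((u, c, idx))
    dist = {s: (0, [])}
    pq = [(0, s, [])]
    while pq:
        d, u, path = heapq.heappop(pq)
        if d > dist.get(u, (float("inf"),))[0]:
            continue
        if u == t:
            return d, path
        for v, c, idx in adj[u]:
            if d + c < dist.get(v, (float("inf"),))[0]:
                dist[v] = (d + c, path + [idx])
                heapq.heappush(pq, (d + c, v, path + [idx]))
    return float("inf"), None

# Two parallel s-t routes: a cheap 2-edge path and an expensive direct edge.
edges = [(0, 1, 1), (1, 2, 1), (0, 2, 5)]
opt, winners = shortest_path(3, edges, 0, 2)
payments = []
for idx in winners:
    without, _ = shortest_path(3, edges, 0, 2, banned=frozenset([idx]))
    payments.append(without - (opt - edges[idx][2]))  # VCG payment to edge idx
print(opt, sum(payments))  # path costs 2, but VCG pays 8
```

The frugality ratio compares the total payment (8) to the true cost of the winners (2); Nash implementation, as in the paper, aims to shrink this gap.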
The Price of Nash Equilibria in Multicast Transmissions Games
Vittorio Bilò
We consider the problem of sharing the cost of multicast transmissions in non-cooperative undirected networks with non-negative edge costs. In such a setting, there is a set R of receivers who want to be connected to a common source s. The set of choices available to each receiver r ∈ R is the set of all (r, s)-paths in the network. Given the choices performed by the receivers, a publicly known cost sharing method determines the cost share to be charged to each of them. Receivers are selfish agents aiming to receive the transmission at the minimum cost share, and their interactions create a non-cooperative game. We study the problem of designing cost sharing methods yielding games whose price of anarchy (price of stability), defined as the worst-case (best-case) ratio between the cost of a Nash equilibrium and that of an optimal solution, is not too high. None of the methods currently known in the literature achieves a good behavior on the price of anarchy, and very little is known about their price of stability. We first give a lower bound on the price of stability of such methods, then we define and investigate some classes of cost sharing methods in order to characterize their weak points. Finally, we propose a new cost sharing method which, although it cannot improve in general on the price of anarchy of multicast transmission games, admits a polynomial-time algorithm for computing a pure Nash equilibrium whose cost is at most twice the optimal one.
- 4B Game Theory | Pp. 390-401
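The price-of-anarchy and price-of-stability definitions can be checked by brute force on a toy instance. The sketch below uses equal-split (Shapley) cost sharing on a hypothetical two-receiver network, not one from the paper: each receiver chooses between a shared edge of cost 3 and a private edge of cost 2.

```python
from itertools import product

cost = {"shared": 3.0, "own1": 2.0, "own2": 2.0}
strategies = {
    1: [frozenset({"shared"}), frozenset({"own1"})],
    2: [frozenset({"shared"}), frozenset({"own2"})],
}

def shares(profile):
    """Equal-split cost shares: each edge's cost divided among its users."""
    users = {}
    for r, path in profile.items():
        for e in path:
            users.setdefault(e, []).append(r)
    share = {r: 0.0 for r in profile}
    for e, rs in users.items():
        for r in rs:
            share[r] += cost[e] / len(rs)
    return share

def is_nash(profile):
    sh = shares(profile)
    for r in strategies:
        for alt in strategies[r]:
            dev = dict(profile)
            dev[r] = alt
            if shares(dev)[r] < sh[r] - 1e-9:
                return False
    return True

total = lambda p: sum(cost[e] for e in set().union(*p.values()))
profiles = [dict(zip(strategies, p)) for p in product(*strategies.values())]
equilibria = [p for p in profiles if is_nash(p)]
opt = min(total(p) for p in profiles)
poa = max(total(p) for p in equilibria) / opt
pos = min(total(p) for p in equilibria) / opt
print(poa, pos)  # 1.333..., 1.0
```

Both receivers sharing the common edge is optimal (cost 3) and an equilibrium, but both using private edges (cost 4) is also an equilibrium, giving a price of anarchy of 4/3 while the price of stability is 1.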
An Efficient Algorithm for Enumerating Pseudo Cliques
Takeaki Uno
The problem of finding dense structures in a given graph is quite basic in informatics, including data mining and data engineering. The clique is a popular model for representing dense structures, widely used because of its simplicity and ease of handling. Pseudo cliques are a natural extension of cliques: subgraphs obtained by removing a small number of edges from cliques. We here define a pseudo clique as a subgraph such that the ratio of the number of its edges to that of a clique with the same number of vertices is no less than a given threshold value. In this paper, we address the problem of enumerating all pseudo cliques of a given graph for a given threshold value. We first show that it seems difficult to obtain polynomial time algorithms using straightforward divide and conquer approaches. We then propose a polynomial time (polynomial delay, to be precise) algorithm based on reverse search. We show the efficiency of our algorithm in practice by computational experiments.
- 5A Database Applications | Pp. 402-414
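The density criterion from the abstract is easy to state in code. The sketch below checks it and enumerates pseudo cliques by exhaustive search over vertex subsets; this is exponential in the number of vertices, unlike the paper's reverse-search algorithm, which achieves polynomial delay. The 4-vertex graph is a made-up example.

```python
from itertools import combinations

def is_pseudo_clique(adj, S, theta):
    """S is a pseudo clique if its edge count is at least theta times
    the edge count of a clique on |S| vertices."""
    if len(S) < 2:
        return True
    edges = sum(1 for u, v in combinations(S, 2) if v in adj[u])
    return edges >= theta * len(S) * (len(S) - 1) / 2

def enumerate_pseudo_cliques(adj, theta):
    verts = sorted(adj)
    for k in range(1, len(verts) + 1):
        for S in combinations(verts, k):
            if is_pseudo_clique(adj, S, theta):
                yield S

# 4-vertex graph: a triangle {0, 1, 2} plus a pendant edge 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
pcs = sorted(enumerate_pseudo_cliques(adj, theta=0.8))
print(pcs)  # the triangle qualifies; no sparser triple does
```

At threshold 0.8, a 3-vertex set needs at least 2.4, i.e. all 3, of its possible edges, so only the triangle survives among the triples.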
Fast Adaptive Diagnosis with a Minimum Number of Tests
Samuel Guilbault; Andrzej Pelc
We study adaptive system-level fault diagnosis for multiprocessor systems. Processors can test each other, and later tests can be scheduled on the basis of previous test results. Fault-free testers correctly identify the fault status of tested processors, while faulty testers can give arbitrary test results. The goal is to identify correctly the status of all processors, assuming that the number of faults does not exceed a given upper bound t, where n is the number of processors. Tests involving disjoint pairs of processors can be performed simultaneously in one round.
The two most important measures of quality of a diagnosis algorithm are its worst-case cost (the number of tests used) and time (the number of rounds used). It is known that the optimal worst-case cost of a diagnosis algorithm is n + t − 1. However, the known algorithms of this cost use time Ω(t). We present an algorithm with optimal cost n + t − 1 using time O(log t), provided that the upper bound t on the number of faults satisfies t(t + 1) ≤ n. Hence, for the moderate numbers of faults we assume, our algorithm achieves exponential speed-up compared to the previously known diagnosis algorithms of optimal cost.
- 5A Database Applications | Pp. 415-426
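The test model described in the abstract (the classical PMC model) can be simulated directly; the sketch below shows the model only, not the paper's algorithm, with a made-up 4-processor instance: a fault-free tester reports the tested processor's true status, while a faulty tester may answer arbitrarily, and tests on disjoint pairs share a round.

```python
import random

def run_test(tester, tested, faulty, rng):
    """Return the tester's verdict about `tested` (True means 'faulty')."""
    if tester in faulty:
        return rng.choice([True, False])   # faulty testers are unreliable
    return tested in faulty

rng = random.Random(0)
faulty = {2}            # ground truth, unknown to the diagnosis algorithm
# one round: tests on the disjoint pairs (0 tests 1) and (2 tests 3)
verdicts = {(0, 1): run_test(0, 1, faulty, rng),
            (2, 3): run_test(2, 3, faulty, rng)}
print(verdicts[(0, 1)])   # False: fault-free processor 0 correctly clears 1
```

The difficulty the paper addresses is scheduling such rounds adaptively so that all n statuses are pinned down with only n + t − 1 tests and few rounds, despite verdicts from faulty testers being worthless.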
Dynamic Structures for Top-k Queries on Uncertain Data
Jiang Chen; Ke Yi
In an uncertain data set, where S is the ground set consisting of n elements, p : S → [0,1] is a probability function, and f : S → ℝ is a score function, each element e ∈ S with score f(e) appears independently with probability p(e). The top-k query on the data set asks for the set of k elements that has the maximum probability of appearing as the k elements with the highest scores in a random instance of the data set. Computing the top-k answer on a fixed data set is known to be easy. In this paper, we consider the dynamic problem, that is, how to maintain the top-k query answer when the data set changes, including element insertions and deletions in the ground set S, and changes in the probability function p and the score function f. We present a fully dynamic data structure that handles an update in O(log k log n) time and answers a top-j query in O(log n + j) time for any j ≤ k. The structure has O(n) size and can be constructed in O(n log k) time. As a building block of our dynamic structure, we present an algorithm for the all-top-k problem, that is, computing the top-j answers for all j = 1, ..., k, which may be of independent interest.
- 5A Database Applications | Pp. 427-438
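The top-k semantics is worth seeing on numbers. The brute-force sketch below (exponential in n, unlike the paper's data structure; the element values are made up) computes, for each k-set T, the probability that exactly the elements of T are present and outscore every other present element, assuming distinct scores.

```python
from itertools import combinations

def prob_is_top_k(T, elems):
    """P(all of T present, and every higher-scoring outsider absent)."""
    p = 1.0
    lo = min(s for s, _ in T)
    for item in elems:
        s, pr = item
        if item in T:
            p *= pr
        elif s > lo:
            p *= (1 - pr)   # a higher-scoring outsider must be absent
    return p

elems = [(10, 0.4), (8, 0.9), (5, 0.9), (1, 0.5)]   # (score, probability)
k = 2
best = max(combinations(elems, k), key=lambda T: prob_is_top_k(T, elems))
print(best)   # ((8, 0.9), (5, 0.9)) with probability 0.486
```

Note that the answer is not simply the two highest-scoring elements: {10, 8} wins only with probability 0.36, since the score-10 element is unlikely to appear, while {8, 5} needs that element absent and wins with probability 0.9 × 0.9 × 0.6 = 0.486.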
Separating Populations with Wide Data: A Spectral Analysis
Avrim Blum; Amin Coja-Oghlan; Alan Frieze; Shuheng Zhou
In this paper, we consider the problem of partitioning a small data sample drawn from a mixture of k product distributions. We are interested in the case that individual features are of low average quality γ, and we want to use as few of them as possible to correctly partition the sample. We analyze a spectral technique that is able to approximately optimize the total data size (the product of the number of data points n and the number of features K) needed to correctly perform this partitioning, as a function of 1/γ for K > n. Our goal is motivated by an application in clustering individuals according to their population of origin using markers, when the divergence between any two of the populations is small.
- 5A Database Applications | Pp. 439-451
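A minimal sketch of the spectral idea (not the paper's exact procedure, and with made-up parameters): draw a small sample from a mixture of two product distributions over {0,1}^K whose feature means differ slightly, then split the points by the sign of the top left singular vector of the centred data matrix, computed here by plain power iteration.

```python
import random

def top_left_singular_vector(X, iters=100, seed=1):
    """Power iteration for the top left singular vector of matrix X (n x K)."""
    rng = random.Random(seed)
    n, K = len(X), len(X[0])
    u = [rng.uniform(-1, 1) for _ in range(n)]
    for _ in range(iters):
        v = [sum(u[i] * X[i][j] for i in range(n)) for j in range(K)]
        u = [sum(X[i][j] * v[j] for j in range(K)) for i in range(n)]
        norm = sum(x * x for x in u) ** 0.5
        u = [x / norm for x in u]
    return u

rng = random.Random(7)
n, K, gap = 20, 400, 0.5                  # wide data: many more features than points
means = [(0.5 - gap / 2, 0.5 + gap / 2) for _ in range(K)]
truth = [i % 2 for i in range(n)]         # hidden population of each point
X = [[1.0 if rng.random() < means[j][truth[i]] else 0.0 for j in range(K)]
     for i in range(n)]
col_mean = [sum(X[i][j] for i in range(n)) / n for j in range(K)]
Xc = [[X[i][j] - col_mean[j] for j in range(K)] for i in range(n)]
labels = [1 if x > 0 else 0 for x in top_left_singular_vector(Xc)]
agree = sum(l == t for l, t in zip(labels, truth))
print(max(agree, n - agree))   # points on the majority side of the split
```

No single feature separates the populations reliably, but the top singular direction aggregates the weak per-feature signal across all K columns; the paper quantifies how small the total data size nK can be, in terms of the average feature quality, for such a split to succeed.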
A Constant-Competitive Algorithm for Online OVSF Code Assignment
F. Y. L. Chin; H. F. Ting; Y. Zhang
Orthogonal Variable Spreading Factor (OVSF) code assignment is a fundamental problem in Wideband Code-Division Multiple-Access (W-CDMA) systems, which play an important role in third-generation mobile communications. In the OVSF problem, codes must be assigned to incoming call requests with different data rate requirements, in such a way that they are mutually orthogonal with respect to an OVSF code tree. An OVSF code tree is a complete binary tree in which each node represents a code associated with the combined bandwidths of its two children. To be mutually orthogonal, each leaf-to-root path must contain at most one assigned code. In this paper, we focus on the online version of the OVSF code assignment problem and give a 10-competitive algorithm, improving the previous O(h)-competitive result, where h is the height of the code tree, as well as another recent constant-competitive result, whose competitive ratio holds only under amortized analysis and whose constant is never determined. Finally, we also improve the lower bound of the problem from 3/2 to 5/3.
- 5B Online Algorithms | Pp. 452-463
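The orthogonality condition from the abstract reduces to an ancestor check on the code tree. The toy checker below (not the competitive algorithm from the paper) numbers the nodes heap-style, an assumption for illustration: root is 1 and the children of node i are 2i and 2i + 1, so an assignment is valid iff no assigned node is an ancestor of another.

```python
def mutually_orthogonal(assigned):
    """True iff no assigned node lies on another's leaf-to-root path
    (heap numbering: parent of node i is i // 2)."""
    s = set(assigned)
    for node in s:
        anc = node // 2
        while anc >= 1:
            if anc in s:
                return False
            anc //= 2
    return True

print(mutually_orthogonal({4, 5, 3}))   # True: the three codes cover disjoint subtrees
print(mutually_orthogonal({2, 4}))      # False: 2 is an ancestor of 4
```

The online difficulty is that requests arrive with different bandwidths (tree levels) and the algorithm may have to reassign codes to keep a valid placement; the competitive ratio counts those reassignments against an optimal offline assignment.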