Publications catalogue - books



Principles and Practice of Constraint Programming: CP 2007: 13th International Conference, CP 2007, Providence, RI, USA, September 23-27, 2007. Proceedings

Christian Bessière (ed.)

Conference: 13th International Conference on Principles and Practice of Constraint Programming (CP), Providence, RI, USA, September 23-27, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: none. Year of publication: 2007. Available via SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-74969-1

Electronic ISBN

978-3-540-74970-7

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

A Multi-engine Solver for Quantified Boolean Formulas

Luca Pulina; Armando Tacchella

In this paper we study the problem of yielding robust performances from current state-of-the-art solvers for quantified Boolean formulas (QBFs). Building on top of existing QBF solvers, we implement a new multi-engine solver which can inductively learn its solver selection strategy. Experimental results confirm that our solver is always more robust than each single engine, that it is stable with respect to various perturbations, and that such results can be partially explained by a handful of features playing a crucial role in our solver.

- Full Research Papers | Pp. 574-589
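
The core idea of a multi-engine solver with a learned selection strategy can be illustrated with a minimal sketch: for each new instance, run the engine that performed best on the most similar training instance. This is only an assumption-laden illustration (a 1-nearest-neighbour rule over hypothetical features and placeholder engine names); the paper's actual learning procedure and feature set are not shown here.

```python
import math

def select_engine(features, training):
    """training: list of (feature_vector, best_engine_name) pairs observed
    on past QBF instances; features: vector for a new instance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # 1-nearest-neighbour: copy the choice made on the closest known instance.
    best_pair = min(training, key=lambda pair: dist(features, pair[0]))
    return best_pair[1]

# Hypothetical usage: features might be clause counts, quantifier alternations, etc.
engine = select_engine([120.0, 3.0], [([100.0, 2.0], "engineA"),
                                      ([500.0, 7.0], "engineB")])
```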

Decomposing Global Grammar Constraints

Claude-Guy Quimper; Toby Walsh

A wide range of constraints can be specified using automata or formal languages. The Grammar constraint restricts the values taken by a sequence of variables to be a string from a given context-free language. Based on an AND/OR decomposition, we show that this constraint can be converted into clauses in conjunctive normal form without hindering propagation. Using this decomposition, we can propagate the constraint in () time. The decomposition also provides an efficient incremental propagator. Down a branch of the search tree of length , we can enforce GAC times in the same () time. On specialized languages, running time can be even better. For example, propagation of the decomposition requires just (||) time for regular languages where || is the size of the transition table of the automaton recognizing the regular language. Experiments on a shift scheduling problem with a constraint solver and a state-of-the-art SAT solver show that we can solve problems using this decomposition that defeat existing constraint solvers.

- Full Research Papers | Pp. 590-604
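
To give a concrete flavour of constraining a sequence of variables to spell a word of a context-free language, the sketch below runs a CYK-style feasibility check over the variables' domains. It is not the paper's CNF decomposition, only a toy test under the assumption that the grammar is in Chomsky normal form; all names are illustrative.

```python
def cfg_support(domains, terminal_rules, binary_rules, start):
    """domains: one set of admissible values per variable.
    terminal_rules: value -> set of nonterminals A with a rule A -> value.
    binary_rules: (B, C) -> set of nonterminals A with a rule A -> B C.
    Returns True if some assignment from the domains spells a word of the grammar."""
    n = len(domains)
    # table[i][l]: nonterminals deriving some length-l string over positions i..i+l-1
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, dom in enumerate(domains):
        for v in dom:
            table[i][1] |= terminal_rules.get(v, set())
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for b in table[i][split]:
                    for c in table[i + split][length - split]:
                        table[i][length] |= binary_rules.get((b, c), set())
    return start in table[0][n]

# Tiny example: S -> A B, A -> 'a', B -> 'b'; domains {'a','c'} x {'b'} admit "ab".
ok = cfg_support([{'a', 'c'}, {'b'}],
                 {'a': {'A'}, 'b': {'B'}},
                 {('A', 'B'): {'S'}},
                 'S')
```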

Structural Relaxations by Variable Renaming and Their Compilation for Solving MinCostSAT

Miquel Ramírez; Hector Geffner

Searching for optimal solutions to a problem using lower bounds obtained from a relaxation is a common idea in Heuristic Search and Planning. In SAT and CSPs, however, explicit relaxations are seldom used. In this work, we consider the use of explicit relaxations for solving MinCostSAT, the problem of finding a minimum cost satisfying assignment. We start with the observation that while a number of SAT and CSP tasks have a complexity that is exponential in the treewidth, such models can be relaxed into weaker models of bounded treewidth by a simple form of variable renaming. The relaxed models can then be compiled in polynomial time and space, so that their solutions can be used effectively for pruning the search in the original problem. We have implemented a MinCostSAT solver using this idea on top of two off-the-shelf tools: a d-DNNF compiler that deals with the relaxation, and a SAT solver that deals with the search. The results over the entire suite of 559 problems from the 2006 Weighted Max-SAT Competition are encouraging: the new solver solves 56% of the problems when the bound on the relaxation treewidth is set to 8, while Toolbar, the winner, solves 73% of the problems; Lazy, the runner-up, 55%; and MinCostChaff, a recent MinCostSAT solver, 26%. The relation between the proposed relaxation method and existing structural solution methods such as cutset decomposition and derivatives such as mini-buckets is also discussed.

- Full Research Papers | Pp. 605-619
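
The renaming step behind such a relaxation is easy to sketch: splitting a variable's occurrences among clause groups into fresh copies can only enlarge the set of satisfying assignments, so (assuming the fresh copies carry no cost of their own) the optimum of the relaxed MinCostSAT instance lower-bounds the original. The grouping policy below is arbitrary; choosing it so that the relaxed model actually has bounded treewidth is the paper's contribution and is not shown.

```python
def rename_variable(clauses, var, fresh, group_of):
    """clauses: list of clauses, each a list of DIMACS-style signed ints.
    var: variable whose occurrences are split; fresh: first unused variable index.
    group_of: clause index -> group id; each group gets its own copy of var."""
    copies = {}                       # group id -> variable standing in for var
    relaxed = []
    for idx, clause in enumerate(clauses):
        g = group_of(idx)
        if g not in copies:
            copies[g] = var if not copies else fresh  # first group keeps var itself
            if copies[g] == fresh:
                fresh += 1
        relaxed.append([lit if abs(lit) != var
                        else (copies[g] if lit > 0 else -copies[g])
                        for lit in clause])
    return relaxed, fresh

# Example: give variable 2 a separate copy in every clause.
relaxed, _ = rename_variable([[1, 2], [-2, 3], [2, -3]], 2, 4, lambda i: i)
# -> [[1, 2], [-4, 3], [5, -3]]
```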

Bound-Consistent Deviation Constraint

Pierre Schaus; Yves Deville; Pierre Dupont

Deviation is a recent constraint to balance a set of variables with respect to a given mean. We show that the propagators recently introduced are not bound-consistent when the mean is rational. We introduce bound-consistent propagators running in linear time with respect to the number of variables. We evaluate the improvement in terms of efficiency and pruning obtained with the new propagators on the Balanced Academic Curriculum Problem.

- Full Research Papers | Pp. 620-634
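
As a quick illustration of the quantity such a balancing constraint controls, the naive check below sums absolute deviations from the mean. The paper's exact definition of the Deviation constraint (in particular how it stays integral when the mean is rational) may differ, so treat this only as an informal sketch, not a propagator.

```python
def balanced(values, mean, max_deviation):
    """True if the summed absolute deviation of the values from `mean`
    does not exceed `max_deviation` (a naive check, not a bound-consistent filter)."""
    return sum(abs(v - mean) for v in values) <= max_deviation

# e.g. balanced([3, 4, 5], 4, 2) -> True; balanced([1, 4, 7], 4, 2) -> False
```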

Constructive Interval Disjunction

Gilles Trombettoni; Gilles Chabert

This paper presents two new filtering operators for numerical CSPs (systems with constraints over the reals) based on constructive interval disjunction, as well as a new splitting heuristic. The first operator (CID) is a generic algorithm enforcing constructive disjunction with intervals. The second one is a hybrid algorithm mixing constructive disjunction with another filtering technique already used with numerical CSPs. Finally, the splitting strategy learns from the CID filtering step the next variable to be split, with no overhead.

Experiments have been conducted with 20 benchmarks. On several benchmarks, the new operators produce gains in performance of orders of magnitude over a standard strategy. The hybrid operator compares advantageously to the existing operator it builds on while being simpler to implement. Experiments also suggest fixing the CID-related parameter in the hybrid operator, thus offering the user a promising variant.

- Full Research Papers | Pp. 635-650
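
The constructive-disjunction-with-intervals idea lends itself to a small sketch: slice one variable's interval, contract a copy of the box under each slice with any available contractor (an HC4-style propagator, for instance), and keep the interval hull of the surviving sub-boxes. This is a generic illustration under assumed data structures (boxes as dictionaries of interval pairs), not the paper's implementation.

```python
def cid_on_variable(box, var, n_slices, contract):
    """box: dict variable -> (lo, hi).  contract(box) returns a filtered box,
    or None if the box is proved inconsistent; any contractor can be plugged in."""
    lo, hi = box[var]
    width = (hi - lo) / n_slices
    hull = None
    for k in range(n_slices):
        sub = dict(box)
        sub[var] = (lo + k * width, lo + (k + 1) * width)
        filtered = contract(sub)
        if filtered is None:                 # this slice is inconsistent: discard it
            continue
        if hull is None:
            hull = dict(filtered)
        else:                                # interval hull (componentwise union)
            hull = {v: (min(hull[v][0], filtered[v][0]),
                        max(hull[v][1], filtered[v][1])) for v in hull}
    return hull                              # None means the whole box is inconsistent
```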

An LP-Based Heuristic for Optimal Planning

Menkes van den Briel; J. Benton; Subbarao Kambhampati; Thomas Vossen

One of the most successful approaches in automated planning is to use heuristic state-space search. A popular heuristic that is used by a number of state-space planners is based on relaxing the planning task by ignoring the delete effects of the actions. In several planning domains, however, this relaxation produces estimates that are too weak to guide search effectively. We present a relaxation using (integer) linear programming that respects delete effects but ignores action ordering, which in a number of problems provides better distance estimates. Moreover, our approach can be used as an admissible heuristic for optimal planning.

- Full Research Papers | Pp. 651-665
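
For intuition about how a linear program can yield an admissible estimate, the sketch below solves a much coarser relaxation than the paper's: fractional action counts of minimum total cost such that every goal fact not already true initially is added by at least one chosen action. Unlike the paper's model it ignores delete effects and preconditions as well as ordering, so it is strictly weaker; the function names and the scipy dependency are assumptions of this illustration.

```python
from scipy.optimize import linprog

def lp_goal_cover(actions, goals):
    """actions: list of (cost, set_of_added_facts); goals: facts to achieve that
    are not already true in the initial state.  Returns an admissible lower bound
    on plan cost, assuming non-negative action costs."""
    goals = list(goals)
    if not goals:
        return 0.0
    c = [cost for cost, _ in actions]
    # For every goal g: sum over actions adding g of x_a >= 1,
    # written as -sum x_a <= -1 to fit linprog's A_ub @ x <= b_ub form.
    A_ub = [[-1.0 if g in adds else 0.0 for _, adds in actions] for g in goals]
    b_ub = [-1.0] * len(goals)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(actions))
    return res.fun if res.success else float("inf")

# Example: actions adding {'g1'} and {'g1', 'g2'} with costs 1 and 3.
# lp_goal_cover([(1, {'g1'}), (3, {'g1', 'g2'})], {'g1', 'g2'}) -> 3.0
```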

A Cost-Based Model and Algorithms for Interleaving Solving and Elicitation of CSPs

Nic Wilson; Diarmuid Grimes; Eugene C. Freuder

We consider Constraint Satisfaction Problems in which constraints can be initially incomplete, where it is unknown whether certain tuples satisfy the constraint or not. We assume that we can determine such an unknown tuple, i.e., find out whether this tuple is in the constraint or not, but doing so incurs a known cost, which may vary between tuples. We also assume that we know the probability of an unknown tuple satisfying a constraint. We define algorithms for this problem, based on backtracking search. Specifically, we consider a simple iterative algorithm based on a cost limit on which unknowns may be determined, and a more complex algorithm that delays determining an unknown in order to estimate better whether doing so is worthwhile. We show experimentally that the more sophisticated algorithms can greatly reduce the average cost.

- Full Research Papers | Pp. 666-680
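
A minimal sketch of the cost-limit idea: unknown tuples are resolved, at their known cost, only while the accumulated elicitation cost stays within a limit, and are otherwise left open. The cheapest-first ordering is an arbitrary policy of this sketch, and the satisfaction probabilities and surrounding backtracking search from the paper are omitted; all names are illustrative.

```python
def elicit_within_budget(status, costs, resolve, cost_limit):
    """status: dict tuple -> True / False / None (None = still unknown).
    costs: dict tuple -> cost of determining that tuple.
    resolve: tuple -> True/False, performing the actual (costly) elicitation.
    Resolves cheap unknowns first and returns the total cost spent."""
    spent = 0.0
    for t in sorted((t for t, s in status.items() if s is None),
                    key=lambda t: costs[t]):
        if spent + costs[t] > cost_limit:
            break                      # remaining unknowns stay undetermined for now
        status[t] = resolve(t)
        spent += costs[t]
    return spent
```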

On Universal Restart Strategies for Backtracking Search

Huayue Wu; Peter van Beek

Constraint satisfaction and propositional satisfiability problems are often solved using backtracking search. Previous studies have shown that a technique called randomization and restarts can dramatically improve the performance of a backtracking algorithm on some instances. We consider the commonly occurring scenario where one is to solve an ensemble of instances using a backtracking algorithm and wishes to learn a good restart strategy for the ensemble. In contrast to much previous work, our focus is on universal strategies. We contribute to the theoretical understanding of universal strategies and demonstrate both analytically and empirically the pitfalls of non-universal strategies. We also propose a simple approach for learning good universal restart strategies and demonstrate the effectiveness and robustness of our approach through an extensive empirical evaluation on a real-world testbed.

- Full Research Papers | Pp. 681-695
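
For background, the best-known universal restart strategy is the Luby sequence 1, 1, 2, 1, 1, 2, 4, ..., whose expected total runtime is within a logarithmic factor of the optimal fixed-cutoff strategy for any (unknown) runtime distribution. The sketch below merely generates that sequence as context for the term "universal strategy"; it is not the learning approach proposed in the paper.

```python
def luby(i):
    """i-th term (1-indexed) of the Luby restart sequence 1,1,2,1,1,2,4,..."""
    k = 1
    while (1 << k) - 1 < i:          # smallest k with 2^k - 1 >= i
        k += 1
    if (1 << k) - 1 == i:
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)

# Cutoffs are usually the sequence scaled by a base number of conflicts:
# [32 * luby(i) for i in range(1, 8)] -> [32, 32, 64, 32, 32, 64, 128]
```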

Hierarchical Hardness Models for SAT

Lin Xu; Holger H. Hoos; Kevin Leyton-Brown

Empirical hardness models predict a solver’s runtime for a given instance of an NP-hard problem based on efficiently computable features. Previous research in the SAT domain has shown that better prediction accuracy and simpler models can be obtained when models are trained separately on satisfiable and unsatisfiable instances. We extend this work by training separate hardness models for each class, predicting the probability that a novel instance belongs to each class, and using these predictions to build a hierarchical hardness model using a mixture-of-experts approach. We describe and analyze classifiers and hardness models for four well-known distributions of SAT instances and nine high-performance solvers. We show that surprisingly accurate classifications can be achieved very efficiently. Our experiments show that hierarchical hardness models achieve higher prediction accuracy than the previous state of the art. Furthermore, the classifier’s confidence correlates strongly with prediction error, giving a useful per-instance estimate of prediction error.

- Full Research Papers | Pp. 696-711
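
At prediction time, the mixture-of-experts combination described in the abstract reduces to a very small computation, sketched below under assumed interfaces: two conditional hardness models (trained on satisfiable and unsatisfiable instances) and a classifier returning the probability of satisfiability. The exact combination used in the paper may differ; this only illustrates the idea.

```python
def hierarchical_prediction(features, sat_model, unsat_model, p_sat_classifier):
    """sat_model / unsat_model: feature vector -> predicted (log-)runtime,
    trained on satisfiable / unsatisfiable instances respectively.
    p_sat_classifier: feature vector -> probability the instance is satisfiable."""
    p_sat = p_sat_classifier(features)
    # Mix the two conditional predictions, weighted by the classifier's confidence.
    return p_sat * sat_model(features) + (1.0 - p_sat) * unsat_model(features)
```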

SATzilla-07: The Design and Analysis of an Algorithm Portfolio for SAT

Lin Xu; Frank Hutter; Holger H. Hoos; Kevin Leyton-Brown

It has been widely observed that there is no “dominant” SAT solver; instead, different solvers perform best on different instances. Rather than following the traditional approach of choosing the best solver for a given class of instances, we advocate making this decision online on a per-instance basis. Building on previous work, we describe SATzilla-07, a per-instance solver portfolio for SAT which uses so-called empirical hardness models to choose among its constituent solvers. We leverage new model-building techniques such as censored sampling and hierarchical hardness models, and demonstrate the effectiveness of our techniques by building a portfolio of state-of-the-art SAT solvers and evaluating it on several widely-studied SAT data sets. Overall, we show that our portfolio significantly outperforms its constituent algorithms on every data set. Our approach has also proven itself to be effective in practice: in the 2007 SAT competition, SATzilla-07 won three gold medals, one silver, and one bronze; it is available online.

- Full Research Papers | Pp. 712-727
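
Per-instance portfolio selection with empirical hardness models boils down to the step sketched here: predict every constituent solver's runtime from cheap instance features and run the predicted-fastest one. Pre-solvers, the cost of computing features, and backup solvers, which a production portfolio must also handle, are left out; all names are illustrative.

```python
def select_solver(features, runtime_models):
    """runtime_models: dict solver_name -> model, where model(features)
    returns the predicted runtime of that solver on the instance."""
    return min(runtime_models, key=lambda name: runtime_models[name](features))

# Hypothetical usage with two constant 'models':
# select_solver([0.3, 12.0], {'solverA': lambda f: 5.0,
#                             'solverB': lambda f: 2.0})  -> 'solverB'
```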