Publications catalog - books

Recent Advances in Optimization

Alberto Seeger (ed.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: none. Publication year: 2006. Available online via SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-28257-0

Electronic ISBN

978-3-540-28258-7

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2006

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

On the Asymptotic Behavior of a System of Steepest Descent Equations Coupled by a Vanishing Mutual Repulsion

Felipe Alvarez; Alexandre Cabot

We investigate the behavior at infinity of a special dissipative system, which consists of two steepest descent equations coupled by a non-autonomous conservative repulsion. The repulsion term is parametrized in time by an asymptotically vanishing factor. We show that under a simple slow parametrization assumption the limit points, if any, must satisfy an optimality condition involving the repulsion potential. Under some additional restrictive conditions, requiring in particular the equilibrium set to be one-dimensional, we obtain an asymptotic convergence result. Finally, some open problems are listed.

Part I - Optimization Theory and Algorithms | Pp. 3-17
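The dynamics studied in this chapter can be illustrated numerically. Below is a minimal sketch (the objective, the repulsion potential, and all parameter choices are illustrative assumptions, not taken from the paper): two explicit-Euler-discretized steepest descent equations for a quadratic objective, coupled by a conservative repulsion whose strength ε(t) = 1/(1+t) vanishes asymptotically.

```python
# Minimal numerical sketch (illustrative, not from the paper): two steepest
# descent equations for f(x) = (x - 1)^2 / 2, coupled by a repulsion whose
# strength eps(t) vanishes as t -> infinity.

def grad_f(x):
    return x - 1.0          # gradient of f(x) = (x - 1)^2 / 2

def grad_repulsion(u, v):
    # conservative repulsion pushing u away from v; derivative with
    # respect to u of the potential W(u, v) = -(u - v)^2 / 2
    return -(u - v)

def simulate(T=200.0, dt=0.01):
    u, v = -2.0, 2.0        # initial states of the two trajectories
    t = 0.0
    while t < T:
        eps = 1.0 / (1.0 + t)       # asymptotically vanishing factor
        du = -grad_f(u) - eps * grad_repulsion(u, v)
        dv = -grad_f(v) - eps * grad_repulsion(v, u)
        u, v = u + dt * du, v + dt * dv
        t += dt
    return u, v

u, v = simulate()
# As the repulsion fades, both trajectories approach the minimizer x = 1.
```

In this toy case the limit point of each trajectory is the unconstrained minimizer; the chapter's interest lies in the general situation, where the limit points must satisfy an optimality condition involving the repulsion potential.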

Inverse Linear Programming

Stephan Dempe; Sebastian Lohse

Let Ψ(b, c) be the solution set mapping of a linear parametric optimization problem with parameter b in the right-hand side and parameter c in the objective function. Then, given a point x̂, we search for parameter values b and c as well as for an optimal solution x ∈ Ψ(b, c) such that ‖x − x̂‖ is minimal. This problem is formulated as a bilevel programming problem. The focus of the paper is on optimality conditions for this problem. We show that, under mild assumptions, these conditions can be checked in polynomial time.

Part I - Optimization Theory and Algorithms | Pp. 19-28
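The bilevel structure described above can be sketched on a toy instance (the polytope, cost grid, and all names below are made-up illustrations, not the paper's formulation): the lower level solves a linear program over a fixed polytope, and the upper level searches for a cost vector whose optimal solution is as close as possible to a given target point.

```python
import itertools

# Toy inverse-LP sketch: lower level minimizes <c, x> over a polytope given
# by its vertices; upper level grid-searches the cost vector c to bring the
# lower-level solution close to a target point x_hat.

VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def lower_level(c):
    # a linear objective attains its minimum over a polytope at a vertex
    return min(VERTICES, key=lambda v: c[0] * v[0] + c[1] * v[1])

def inverse_lp(x_hat, grid=21):
    best = None
    for i, j in itertools.product(range(grid), repeat=2):
        c = (-1.0 + 2.0 * i / (grid - 1), -1.0 + 2.0 * j / (grid - 1))
        x = lower_level(c)
        dist = ((x[0] - x_hat[0]) ** 2 + (x[1] - x_hat[1]) ** 2) ** 0.5
        if best is None or dist < best[0]:
            best = (dist, c, x)
    return best

dist, c, x = inverse_lp((1.0, 1.0))
# A cost vector with both components negative makes (1, 1) optimal, so dist = 0.
```

The paper replaces this brute-force search with optimality conditions for the bilevel problem that can be checked in polynomial time.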

Second-Order Conditions in Vector Optimization with Inequality and Equality Constraints

Ivan Ginchev; Angelo Guerraggio; Matteo Rocca

The present paper studies the following constrained vector optimization problem: min_C f(x), g(x) ∈ −K, h(x) = 0, where f : ℝⁿ → ℝᵐ and g : ℝⁿ → ℝᵖ are given functions, h : ℝⁿ → ℝ^q is a given function, and C ⊂ ℝᵐ and K ⊂ ℝᵖ are closed convex cones with nonempty interiors. Two types of solutions are important for the consideration, namely w-minimizers (weakly efficient points) and i-minimizers (isolated minimizers). In terms of the second-order Dini directional derivative, second-order necessary conditions for a point to be a w-minimizer and second-order sufficient conditions for a point to be an i-minimizer of order two are formulated and proved. The effectiveness of the obtained conditions is shown on examples.

Part I - Optimization Theory and Algorithms | Pp. 29-44

Benson Proper Efficiency in Set-Valued Optimization on Real Linear Spaces

Elvira Hernández; Bienvenido Jiménez; Vicente Novo

In this work, a notion of cone-subconvexlikeness of set-valued maps on linear spaces is given and several characterizations are obtained. An alternative theorem is also established for this kind of set-valued maps. Using the notion of vector closure introduced recently by Adán and Novo, we also provide, in this framework, an adaptation of the proper efficiency in the sense of Benson for set-valued maps. The previous results are then applied to obtain different optimality conditions for this Benson-vectorial proper efficiency by using scalarization and multiplier rules.

Part I - Optimization Theory and Algorithms | Pp. 45-59

Some Results About Proximal-Like Methods

Alexander Kaplan; Rainer Tichatschke

We discuss some ideas for the improvement, extension and application of proximal point methods and the auxiliary problem principle to variational inequalities in Hilbert spaces. These methods are closely related and will be joined in a general framework, which admits a consecutive approximation of the problem data, including applications of finite element techniques and the ε-enlargement of monotone operators. Using a "reserve of monotonicity" of the operator in the variational inequality, the concepts of weak and elliptic proximal regularization are developed. Considering Bregman-function-based proximal methods, we analyze their convergence under a relaxed error tolerance criterion in the subproblems. Moreover, the case of variational inequalities with non-paramonotone operators is investigated, and an extension of the auxiliary problem principle with the use of Bregman functions is studied. To emphasize the basic ideas, we omit the proofs and, in places, the precise descriptions of the convergence results and approximation techniques; these can be found in the papers cited.

Part I - Optimization Theory and Algorithms | Pp. 61-86
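As background for the proximal regularizations surveyed above, the classical proximal point iteration can be shown in its simplest form (this example and its data are illustrative, not code from the chapter): for f(x) = |x| the proximal step has the closed soft-thresholding form, and the iterates reach the minimizer in finitely many steps.

```python
# Classical proximal point iteration for f(x) = |x| (background sketch,
# not from the chapter): each step solves
#   argmin_y  |y| + (1 / (2 * lam)) * (y - x)^2,
# which has the closed-form "soft-thresholding" solution below.

def prox_abs(x, lam):
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

x, lam = 5.0, 0.8
for _ in range(20):
    x = prox_abs(x, lam)
print(x)  # the iterates reach the minimizer x = 0 of |x| and stay there
```

The chapter's methods generalize this scheme to variational inequalities, with inexact subproblem solutions and Bregman distances in place of the squared Euclidean term.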

Application of the Proximal Point Method to a System of Extended Primal-Dual Equilibrium Problems

Igor V. Konnov

We consider a general system of equilibrium-type problems which can be viewed as an extension of Lagrangean primal-dual equilibrium problems. We propose to solve the system by an inexact proximal point method, which converges to a solution under monotonicity assumptions. In order to make the method implementable, we suggest using a dual descent algorithm and utilizing gap functions to ensure satisfactory accuracy in certain auxiliary problems. Some examples of applications are also given.

Part I - Optimization Theory and Algorithms | Pp. 87-102

On Stability of Multistage Stochastic Decision Problems

Alexander Mänz; Silvia Vogel

The paper considers a general multistage stochastic decision problem which contains Markovian decision processes and multistage stochastic programming problems as special cases. The objective functions, the constraint sets and the probability measures are approximated. Making use of the Bellman principle, (semi)convergence statements for the optimal value functions and the optimal decisions at each stage are derived. The considerations rely on stability assertions for parametric programming problems, which are extended and adapted to the multistage case. Furthermore, new sufficient conditions are presented for the convergence of objective functions which are integrals with respect to decision-dependent probability measures. The paper generalizes results by Langen (1981) with respect to the convergence notions, the integrability conditions and the continuity assumptions.

Part I - Optimization Theory and Algorithms | Pp. 103-118
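The Bellman principle invoked above can be sketched on a tiny finite problem (the states, actions, and stage costs below are made-up illustrations, not the paper's model): the optimal value functions are computed by backward recursion from the terminal stage.

```python
# Tiny backward-recursion sketch of the Bellman principle (illustrative
# two-state, three-stage problem with arbitrary data, not from the paper).

STATES = (0, 1)
ACTIONS = (0, 1)

def cost(stage, state, action):
    return (state - action) ** 2 + 0.5 * action   # arbitrary stage cost

def transition(state, action):
    return action                                  # next state = action taken

def solve(horizon=3):
    value = {s: 0.0 for s in STATES}               # terminal value function
    for stage in reversed(range(horizon)):
        # Bellman recursion: V_t(s) = min_a [ cost + V_{t+1}(next state) ]
        value = {s: min(cost(stage, s, a) + value[transition(s, a)]
                        for a in ACTIONS)
                 for s in STATES}
    return value

v = solve()
print(v)
```

The paper's stability question is what happens to these value functions and the minimizing decisions when the costs, constraint sets, and probability measures are only known approximately.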

Nonholonomic Optimization

Constantin Udrişte; Oltin Dogaru; Massimiliano Ferrara; Ionel Ţevy

This paper generalizes various types of constrained extremum problems, keeping the Lagrange or Kuhn-Tucker multiplier rule. The context which supports this development is nonholonomic optimization theory, which treats a holonomic or nonholonomic objective function subject to nonholonomic or holonomic constraints. We refine such problems using two new ideas: the replacement of the point or velocity constraints by a curve selector, and the geometrical interpretation of the Lagrange and Kuhn-Tucker parameters. The classical optimization theory is recovered as a particular case of extremum problems constrained by a curve selector.

Part I - Optimization Theory and Algorithms | Pp. 119-132
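The classical special case recovered by this theory is the ordinary Lagrange multiplier rule, which can be worked on a standard textbook example (the problem data below is made up, not from the paper): minimize f(x, y) = x² + y² subject to g(x, y) = x + y − 1 = 0.

```python
# Classical Lagrange multiplier computation (the special point-constraint
# case; example data is made up, not from the paper):
#   minimize f(x, y) = x^2 + y^2   subject to   x + y - 1 = 0.
# Stationarity grad f = lam * grad g gives 2x = lam and 2y = lam, so x = y,
# and the constraint then forces x = y = 1/2 with lam = 1.

def solve():
    lam = 1.0                 # multiplier from 2x = 2y = lam and x + y = 1
    x = y = lam / 2.0
    return x, y, lam

x, y, lam = solve()
print(x, y, lam)
```

The chapter's contribution is to extend this multiplier rule to nonholonomic objectives and constraints via the curve-selector idea.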

A Note on Error Estimates for some Interior Penalty Methods

Alexey F. Izmailov; Mikhail V. Solodov

We consider the interior penalty methods based on the logarithmic and inverse barriers. Under the Mangasarian-Fromovitz constraint qualification and appropriate growth conditions on the objective function, we derive computable estimates for the distance from the subproblem solution to the solution of the original problem. Some of those estimates are shown to be sharp.

Part I - Optimization Theory and Algorithms | Pp. 133-145
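The kind of distance estimate discussed above can be seen on a one-dimensional example (the problem, barrier, and bisection solver below are assumed illustrations, not the paper's analysis): minimize f(x) = x subject to x ≥ 1 via the logarithmic barrier. The barrier subproblem min x − μ·log(x − 1) has the exact solution x(μ) = 1 + μ, so the distance from the subproblem solution to the true solution x* = 1 equals μ.

```python
# Illustrative log-barrier sketch (assumed example, not from the paper):
#   minimize f(x) = x  subject to  x >= 1,
# via the barrier subproblem  min_x  x - mu * log(x - 1),  x > 1.
# Its derivative 1 - mu / (x - 1) vanishes at x(mu) = 1 + mu; here the
# subproblem is solved numerically by bisection on that derivative.

def barrier_solution(mu, lo=1.0 + 1e-12, hi=1e6, tol=1e-10):
    dphi = lambda x: 1.0 - mu / (x - 1.0)   # derivative of barrier objective
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0.0:
            lo = mid            # still descending: minimizer lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

for mu in (1.0, 0.1, 0.01):
    x = barrier_solution(mu)
    print(mu, abs(x - 1.0))     # the distance to x* = 1 shrinks with mu
```

The paper derives computable estimates of this distance in the general setting, under the Mangasarian-Fromovitz constraint qualification and growth conditions on the objective.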

L∞-Optimal Boundary Control of a String to Rest in Finite Time

Martin Gugat

In this paper, the problem of controlling a finite string to the zero state in finite time from a given initial state, by controlling the state at the two boundary points, is considered. The corresponding optimal control problem, in which the objective function is the L∞-norm of the controls, is solved in the sense that the controls that are successful and at the same time minimize the objective function are determined as functions of the initial state.

Part II - Optimal Control and Calculus of Variations | Pp. 149-162
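In the standard formulation of this kind of problem (the notation below is an assumed illustration, not taken from the catalog entry), the string is governed by the one-dimensional wave equation with Dirichlet boundary controls:

```latex
% Assumed standard formulation of the boundary control problem:
\begin{align*}
  u_{tt}(x,t) &= c^2\, u_{xx}(x,t), && 0 < x < L,\ 0 < t < T,\\
  u(x,0) &= y_0(x), \quad u_t(x,0) = y_1(x), && 0 \le x \le L,\\
  u(0,t) &= f(t), \quad u(L,t) = g(t), && 0 \le t \le T.
\end{align*}
% Controllability requirement: reach the zero state at time T,
\[
  u(x,T) = 0, \qquad u_t(x,T) = 0, \qquad 0 \le x \le L,
\]
% and, among all successful control pairs (f, g), minimize the norm
\[
  \max\bigl\{\,\|f\|_{L^\infty(0,T)},\ \|g\|_{L^\infty(0,T)}\,\bigr\}.
\]
```

The chapter solves this problem in the sense that the minimizing successful controls are given explicitly as functions of the initial state (y₀, y₁).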