An Adaptive Sequential Linear Programming Algorithm for Optimal ...
Step 2: LP Subproblem Solution

Any LP algorithm can be used to solve the subproblem LP(µ_X^k, ∆^k). A step vector s is obtained as the solution to the LP subproblem. The eligibility of the trial point, which is the current design point plus the step, is examined in the next step.

Step 3: Acceptance of Trial Point

Not all subproblem solutions are accepted as the next iterate. The criteria for acceptability are adopted from the filter algorithm introduced by Fletcher et al. [28]. In this algorithm, a filter is used in place of penalty functions to determine the acceptance of the trial point towards establishing global convergence.

Before proceeding with the definition of the filter, we define two terms for specific use in the present context: infeasibility and dominance. The infeasibility of the kth design µ_X^k is the maximum constraint violation at that design and is represented as h(g'(µ_X^k)):

    h(g'(µ_X^k)) = max_j (0, g'_j(µ_X^k))                         (22)

With this measure of infeasibility, a pair indicating the infeasibility and objective function values, {h(g'(µ_X^k)), f(µ_X^k)}, can be computed for every design point. Design k dominates design l if both h_k ≤ h_l and f_k ≤ f_l; the kth pair is then said to dominate the lth pair.

Global convergence arguments can be built by trading off reduction in the objective function against reduction in infeasibility at a given iteration point. If we consider a bi-objective optimization problem with f and h as the objectives, one way to trade off these reductions is to use a penalty function that combines objective and infeasibility using weights, essentially generating Pareto points, as shown in Eq. (23):

    c(µ_X) = f(µ_X) + w · h(g'(µ_X))                              (23)

Difficulties associated with the use of penalty functions, and the proper choice of the penalty parameter w in particular [31], motivate the introduction of the filter concept.
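The infeasibility measure of Eq. (22) and the dominance relation between {h, f} pairs can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's implementation; the function and variable names are assumptions, and the constraint values g'_j(µ_X) are taken to be available as a plain list of floats.

```python
# Sketch of the infeasibility measure h (Eq. 22) and the dominance
# test between {h, f} pairs. Names are illustrative, not from the paper.

def infeasibility(g_values):
    """Maximum constraint violation: h = max_j(0, g'_j(mu_X))."""
    return max(0.0, max(g_values))

def dominates(pair_k, pair_l):
    """Pair k = (h_k, f_k) dominates pair l = (h_l, f_l)
    if both h_k <= h_l and f_k <= f_l."""
    h_k, f_k = pair_k
    h_l, f_l = pair_l
    return h_k <= h_l and f_k <= f_l
```

For example, `infeasibility([-1.0, 0.2, -0.5])` returns 0.2 (only the violated constraint contributes), while a fully feasible design returns 0.0.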
A filter is defined as a set containing only non-dominated pairs; it treats f and h as separate objectives without the need for weights. Figure 4 illustrates the concept of using a filter to determine whether a trial point is acceptable. The line in the figure represents all current filter entries. Points A, B, and C are currently in the filter. A new trial point D is dominated by point B and is not accepted to the filter. The filter contains only the current Pareto points in the {h, f} bi-objective optimization. A point that is not dominated by any previous pair is accepted.

Figure 4. Non-dominated entries in the filter

In order to enhance convergence characteristics, Fletcher et al. [28] suggested that two non-zero parameters ω and γ be used, such that the condition for a point being acceptable to the filter is that its {h, f} pair satisfies either h ≤ ω h_l or f ≤ f_l − γ h_l for all l ∈ F, where F denotes the current set of filter entries and 1 > ω > γ > 0. An iteration is f-type if it is acceptable to the filter and has an actual reduction ∆f_k and a predicted reduction ∆l_k that satisfy the sufficient reduction criteria in Eq. (24) below, where h_k is the short form of h(g'(µ_X^k)) and ζ is a user-selected parameter with values within the range (γ, 1):

    ∆f_k = f(µ_X^{k+1}) − f(µ_X^k) ≥ ζ ∆l_k,  with ζ ∈ (γ, 1)     (24)
    ∆l_k = ∇f^T · s ≥ λ (h_k)^2,  with λ > 0

If an iteration acceptable to the filter has a pair {h, f} that satisfies Eq. (25), then it is called h-type:

    ∆l_k < δ (h_k)^2                                              (25)

An f-type iteration aims at reducing the objective function value while possibly increasing infeasibility. An h-type iteration, on the other hand, aims at decreasing infeasibility while possibly increasing the objective function. Following [28], not all acceptable {h, f} entries are added to the filter; only h-type iterations are added. This prevents loss of algorithm convergence due to entries that have h = 0 and f < f* [31].

Step 4: Trust Region Update

The trust region is reduced by half if the trial point (µ_X^k + s^k) is not accepted to the filter.

4 CONVERGENCE ARGUMENTS

In this section the convergence proof of the SLP-filter algorithm from [28] is extended to the proposed algorithm for NLP problems with probabilistic constraints. The original SLP filter algorithm is reviewed and the required extensions at each step are discussed.

The differences between the original algorithm and the proposed one are in (i) the creation of the initial adaptive trust region, (ii) the construction of the LP subproblem, and (iii) the evaluation of trial points. Since the LP subproblems are in both cases deterministic, we are able to apply the convergence proof from [28]. We will show that the convergence logic is not affected because: (i) The new algorithm converges as the trust re-

Copyright © 2005 by ASME
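The envelope acceptance test of Step 3 and the trust-region halving of Step 4 can be sketched together in Python. This is a minimal illustration of the filter mechanics under stated assumptions, not the paper's implementation; the parameter values for ω and γ and all names are illustrative.

```python
# Minimal sketch of the filter acceptance test (the omega/gamma envelope
# of Fletcher et al.) and the trust-region update of Step 4.
# Parameter values and names are illustrative, not from the paper.

OMEGA, GAMMA = 0.9, 0.1   # must satisfy 1 > omega > gamma > 0

def acceptable(h, f, filter_entries):
    """A trial pair {h, f} is acceptable to the filter if, for every
    entry (h_l, f_l), it satisfies h <= omega*h_l or f <= f_l - gamma*h_l."""
    return all(h <= OMEGA * h_l or f <= f_l - GAMMA * h_l
               for h_l, f_l in filter_entries)

def update_trust_region(delta, accepted):
    """Step 4: halve the trust-region size if the trial point is rejected."""
    return delta if accepted else 0.5 * delta

# Example with one filter entry (h_l=1.0, f_l=5.0): a trial point with
# sufficiently lower infeasibility is accepted; the entry itself is not,
# because it meets neither envelope condition.
filt = [(1.0, 5.0)]
print(acceptable(0.5, 6.0, filt))   # True: 0.5 <= 0.9 * 1.0
print(acceptable(1.0, 5.0, filt))   # False: fails both conditions
```

Rejection shrinks the trust region, so subsequent LP subproblems are solved over a smaller neighborhood of the current design until an acceptable step is found.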