Proceedings of the 1992 Winter Simulation Conference
ed. J. J. Swain, D. Goldsman, R. C. Crain, and J. R. Wilson

A TUTORIAL ON SIMULATION OPTIMIZATION

Farhad Azadivar
Department of Industrial Engineering
Kansas State University
Manhattan, KS 66502

ABSTRACT

This tutorial discusses the issues and procedures for using simulation as a tool for optimization of stochastic complex systems that are modeled by computer simulation. It is intended to be a tutorial rather than an exhaustive literature search. Its emphasis is mostly on issues that are specific to simulation optimization rather than on general optimization and mathematical programming techniques. Even though a lot of effort has been spent to provide a comprehensive overview of the field, there may still be methods and techniques that have not been covered and valuable works that have not been mentioned.

1 INTRODUCTION

Computer simulation has proved to be a very powerful tool for evaluating complex systems. These evaluations are usually in the form of responses to "what if" questions. In recent years the success of computer simulation has been extended to answering "how to" questions as well. "What if" questions demand answers on certain performance measures for a given set of values of the decision variables of the system. "How to" questions, on the other hand, seek optimum values for the decision variables of the system so that a given response or a vector of responses is maximized or minimized. In the past ten years a considerable amount of effort has been expended on simulation optimization procedures that deal with optimization of the quantitative decision variables of a simulated system. In addition, there seems to be an increasing need for procedures that address optimization of the structures of complex systems. These are problems where the performance of the system depends more on the operating policies of the system than on the values of the quantitative decision variables.

Comprehensive reviews of the literature on simulation optimization have been provided by Glynn (1986), Meketon (1987), Jacobson and Schruben (1989), and Safizadeh (1990). In this tutorial these citations will not all be repeated. Instead, issues that make simulation optimization distinct from generic optimization procedures will be addressed, various classifications of these problems will be presented, and solution procedures suggested in the literature and applied in practice will be explored.

2 ISSUES IN SIMULATION OPTIMIZATION

Using simulation as an optimization tool for complex systems presents several challenges. Some of these challenges are those involved in optimization of any complex and highly nonlinear function. Others are more specifically related to the special nature of simulation modeling. Simply stated, a simulation optimization problem is an optimization problem where the objective function (objective functions in the case of a multi-criteria problem), the constraints, or both are responses that can only be evaluated by computer simulation. As such, these functions are only implicit functions of the decision parameters of the system. In addition, these functions are often stochastic in nature as well. With these characteristics in mind, the major issues to address when comparing these problems to generic non-linear programming problems are as follows:
- There does not exist an analytical expression of the objective function or the constraints. This eliminates the possibility of differentiation or exact calculation of local gradients.

- The objective function(s) and constraints are stochastic functions of the deterministic decision variables. This presents a major problem in the estimation of even approximate local derivatives. Furthermore, it works against even using complete enumeration, because based on just one observation at each point the best decision point cannot be determined (a small numerical illustration of this follows the list).
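
To make the second issue concrete, the following is a minimal sketch, not part of the original tutorial, showing how a single noisy observation per candidate point can mislead complete enumeration. The response simulate() and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x, rng):
    """Toy stochastic "simulation": the true mean response -(x - 3)^2 is maximized at x = 3."""
    return -(x - 3.0) ** 2 + rng.normal(scale=2.0)

candidates = np.arange(0, 7)   # candidate values of the decision variable
true_best = 3                  # maximizer of the true expected response

misses = 0
for _ in range(1000):          # repeat the single-observation enumeration many times
    one_shot = [simulate(x, rng) for x in candidates]   # one noisy observation per point
    if candidates[int(np.argmax(one_shot))] != true_best:
        misses += 1

print(f"wrong point selected in {misses / 10:.1f}% of trials")
```

With a noise standard deviation comparable to the differences between the expected responses, the apparent best point differs from the true best in a large fraction of the trials, which is why multiple observations or statistical comparisons are needed.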


Most of the effort will be spent on exploring procedures applied to single objective problems with continuous or discrete quantitative decision variables subject to deterministic or stochastic constraints. Several approaches to solving multiple objective problems will be discussed next. Finally, a short discussion on non-parametric optimization problems will be presented.

5 SINGLE OBJECTIVE PROBLEMS

There have basically been four major approaches to solving these problems. These are:

- Gradient based search methods
- Stochastic approximation methods
- Response surface methods
- Heuristic search methods

5.1 Gradient Based Search Methods

These methods attempt to take advantage of the vast amount of literature available on search methods developed for non-linear programming problems. The major contribution of practitioners in simulation optimization to this field has been the various methods of efficient estimation of gradients. Two major factors in determining the success of these methods are reliability and efficiency. Reliability is important because simulation responses are stochastic, and a large error in gradient estimation may result in a movement in an entirely wrong direction. Efficiency is a major factor because simulation experiments are expensive and it is desirable to estimate gradients with a minimum number of function evaluations. The gradient estimation methods often employed in simulation optimization are as follows:

5.1.1 Finite Difference Estimation

This is the crudest method of estimating the gradient. Partial derivatives of f(X) in this case are estimated by:

∂f/∂x_i = [f(x_1, ..., x_i + Δx_i, ..., x_p) − f(x_1, ..., x_p)] / Δx_i        (5.1.1.1)

As a result, to estimate the gradient at each point at least p+1 evaluations of the simulation model will be required. Furthermore, to obtain a more reliable estimate of the derivatives there may be a need for multiple observations for each derivative. An example of applying this method in conjunction with the Hooke and Jeeves pattern search technique is presented by Pegden and Gately (1977).
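
As an illustration of equation (5.1.1.1) and of the p+1 evaluations it implies, here is a minimal sketch, not from the original paper, of a forward finite-difference gradient estimate that averages several replications per design point; simulate() stands in for an actual simulation model and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(x, rng):
    """Stand-in stochastic response; the true mean is -(x1 - 2)^2 - (x2 + 1)^2."""
    return -(x[0] - 2.0) ** 2 - (x[1] + 1.0) ** 2 + rng.normal(scale=0.5)

def fd_gradient(x, delta=0.25, reps=20, rng=rng):
    """Forward finite-difference gradient estimate (5.1.1.1) using p + 1 design points,
    each averaged over `reps` independent replications to tame the noise."""
    x = np.asarray(x, dtype=float)
    f0 = np.mean([simulate(x, rng) for _ in range(reps)])
    grad = np.zeros_like(x)
    for i in range(len(x)):
        x_plus = x.copy()
        x_plus[i] += delta
        f_plus = np.mean([simulate(x_plus, rng) for _ in range(reps)])
        grad[i] = (f_plus - f0) / delta
    return grad

print(fd_gradient([0.0, 0.0]))   # near the true gradient (4, -2), up to difference bias and noise
```
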
5.1.2 Infinitesimal Perturbation Analysis (IPA)

Perturbation analysis, when applied properly and to models that satisfy certain conditions, estimates all gradients of the objective function from a single simulation experiment. In the relatively short time since its introduction to the simulation field, a significant volume of work on this topic has been reported in the literature. A sample of these works can be found in Ho (1984), Ho et al. (1983), Ho et al. (1984), and Suri (1983). A complete discussion of all issues in IPA has been published in a recent book by Ho and Cao (1991).

The main principle behind perturbation analysis is that if a decision parameter of a system is perturbed by an infinitesimal amount, the sensitivity of the response of the system to that parameter can be estimated by tracing the pattern of propagation of the perturbation through the system. This will be a function of the fraction of the propagations that die before having a significant effect on the response of interest. The fact that all derivatives can be derived from the same simulation run represents a significant advantage of IPA in terms of efficiency. However, some restrictive conditions have to be satisfied for IPA to be applicable. For instance, if as a result of perturbation of a given parameter the sequence of events that govern the behavior of the system changes, the results obtained by perturbation analysis may not be reliable. Considering the complex nature of most simulation models, this condition may not be satisfied most of the time. Heidelberger (1986) presents a study of the deficiencies of IPA in estimating gradients. There are also reports that additional work done in this area in recent years may alleviate some of the problems in its application to simulation optimization.

One difficulty with the application of IPA to simulation optimization problems is that the modeler has to have a thorough knowledge of the simulation model and in some situations must have built it from scratch to be able to add the additional tracking capabilities that are needed by IPA. Most practitioners build their simulation models using some kind of simulation language. With the advance of object-oriented simulation methodology and languages, it will become even more difficult to build these additional tracking capabilities into a reusable simulation model.

5.1.3 Frequency Domain Analysis

Frequency domain analysis for estimating the sensitivity and gradients of the responses of simulation models was suggested by Schruben and Cogliano (1981). Additional work on the subject has been reported by Jacobson (1988) and Jacobson and Schruben (1988). The gradients are estimated by analyzing the power spectrum of the simulation output, which is affected by inducing specific sinusoidal oscillations in the input parameters. In a recent work, Jacobson and Schruben (1991) have used this approach in applying Newton's method to simulation optimization. Frequency domain analysis suffers from the same difficulty as IPA because of the complexity of incorporating it with independently built simulation models.
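
The following is a toy sketch of the basic idea only, not the procedure of the cited papers: each input parameter is oscillated at its own frequency while the run proceeds, and the magnitude of the output spectrum at each driving frequency is read off as a sensitivity measure. The response simulate(), the frequencies, the amplitude, and the rescaling are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(x, rng):
    """Stand-in stochastic response; the true mean is 3*x1 - 1.5*x2."""
    return 3.0 * x[0] - 1.5 * x[1] + rng.normal(scale=0.2)

n = 1024
t = np.arange(n)
x0 = np.array([1.0, 1.0])        # nominal operating point
amp = 0.1                        # oscillation amplitude
freqs = np.array([50, 120])      # distinct driving frequencies (cycles per n observations)

# Oscillate each input at its own frequency over the course of the run.
x1 = x0[0] + amp * np.sin(2 * np.pi * freqs[0] * t / n)
x2 = x0[1] + amp * np.sin(2 * np.pi * freqs[1] * t / n)
y = np.array([simulate((a, b), rng) for a, b in zip(x1, x2)])

# The output power at each driving frequency reflects the local sensitivity (the sign is lost).
Y = np.fft.rfft(y - y.mean())
sensitivity = 2.0 * np.abs(Y[freqs]) / (n * amp)
print(sensitivity)               # approximately [3.0, 1.5], the magnitudes of the true slopes
```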


5.1.4 Likelihood Ratio Estimators

Glynn (1987) presents an overview of likelihood ratio estimators and their potential use in simulation optimization. He provides two algorithms by which the gradient of a simulation response function with respect to its parameters can be estimated. Rubinstein (1989) suggests a variation of this method and shows how it can be used in the estimation of Hessians and higher order gradients to be incorporated in Newton's method.

Once the method of estimating the gradients is decided upon, one of the available search techniques can be employed to search for the optimum. For a recent work using a quasi-Newton method refer to Safizadeh (1992).

5.2 Stochastic Approximation Methods (SAM)

Stochastic approximation methods refer to a family of recursive procedures that approach the minimum or maximum of the theoretical regression function of a stochastic response surface using noisy observations made on the function. These are based on the original work by Robbins and Monro (1951) and Kiefer and Wolfowitz (1952). The original recursive formula is given for a single variable function and is stated as:

X_{n+1} = X_n + (a_n / (2 c_n)) [f(X_n + c_n) − f(X_n − c_n)]        (5.2.1)

where a_n and c_n are two series of real numbers that satisfy the following conditions:

Σ a_n = ∞,   lim (n→∞) c_n = 0,   and   Σ (a_n / c_n)^2 < ∞        (5.2.2)

It has been proven that as n approaches infinity, X_n approaches a solution such that the theoretical regression function of the stochastic response is maximized or minimized. This proof has been extended to multi-dimensional decision variables as well.

A neat characteristic of the stochastic approximation method when applied to simulation optimization is that the optimum of the expected value of the response can be reached using noisy observations. The difficulty is that a large number of iterations of the recursive formula will be required to obtain the optimum. Besides, for multi-dimensional decision vectors, p+1 observations will be needed for each iteration. Glynn (1986) has provided estimates of the speed of convergence for some variations of this method. The other difficulty with these methods is the incorporation of constraints into the optimization.

An earlier work on the application of the stochastic approximation method to simulation optimization is reported by Azadivar and Talavage (1980). In this work an automatic optimum seeking algorithm has been developed that can be interfaced with any independently built simulation model. In this algorithm the decision variables can be constrained by a set of linear deterministic constraints.
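
The following is a minimal sketch, not from the original paper, of the recursion (5.2.1) applied to a toy one-dimensional stochastic response; the particular gain and perturbation sequences are just one choice satisfying the conditions (5.2.2), and simulate() stands in for a simulation model.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(x, rng):
    """Stand-in stochastic response; the true mean -(x - 2)^2 is maximized at x = 2."""
    return -(x - 2.0) ** 2 + rng.normal(scale=1.0)

x = 0.0                             # starting point X_1
for n in range(1, 2001):
    a_n = 1.0 / n                   # the sum of a_n diverges
    c_n = 1.0 / n ** 0.25           # c_n -> 0, and the sum of (a_n / c_n)^2 converges
    # Kiefer-Wolfowitz recursion (5.2.1): step along a noisy central-difference slope.
    x = x + (a_n / (2.0 * c_n)) * (simulate(x + c_n, rng) - simulate(x - c_n, rng))

print(x)                            # approaches the maximizer x = 2 as the iterations accumulate
```
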
5.3 Response Surface Methodology (RSM)

Response surface methodology is the procedure of fitting a series of regression models to the responses of the simulation model evaluated at several points and trying to optimize the resulting regression function. The process usually starts with a first order regression function, and after reaching the vicinity of the optimum, higher degree regression functions are utilized. Among the earlier works in the application of RSM to simulation optimization are those of Biles (1974) and Smith (1976). Additional work has been reported by Daugherty and Turnquist (1980) and Wilson (1987). Smith developed an automatic optimum seeking program based on RSM that could be interfaced with independently built simulation models. This program was developed for both constrained and unconstrained problems. Compared to many gradient based methods, RSM is a relatively efficient method of simulation optimization in terms of the number of simulation experiments needed. However, Azadivar and Talavage (1980) show that for complex functions with sharp ridges and flat valleys it does not provide good answers.
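
The following is a minimal sketch, not Smith's program, of the first-order phase of RSM: a first order regression is fitted on a small factorial design around the current point and a fixed step is taken along the fitted steepest-ascent direction. The response simulate(), the design, the replication counts, and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(x, rng):
    """Stand-in stochastic response; the true mean -(x1 - 3)^2 - 2*(x2 - 1)^2 peaks at (3, 1)."""
    return -(x[0] - 3.0) ** 2 - 2.0 * (x[1] - 1.0) ** 2 + rng.normal(scale=0.3)

def first_order_step(center, spread=0.5, reps=5, step=0.5, rng=rng):
    """Fit a first order regression on a 2^2 factorial design around `center`
    and move a fixed distance along the fitted steepest-ascent direction."""
    design = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
    X = np.column_stack([np.ones(len(design)), design])     # intercept plus linear terms
    points = center + spread * design
    y = np.array([np.mean([simulate(p, rng) for _ in range(reps)]) for p in points])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]              # fitted [b0, b1, b2]
    direction = beta[1:] / np.linalg.norm(beta[1:])
    return center + step * direction

x = np.array([0.0, 0.0])
for _ in range(10):                  # repeated steepest-ascent moves of the first-order phase
    x = first_order_step(x)
print(x)                             # reaches the vicinity of the optimum near (3, 1)
```

In a full RSM procedure a higher degree model would then be fitted in this vicinity to locate the optimum more precisely, as described above.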


5.4 Heuristic Methods

There are two heuristic methods that have shown promise in application to simulation optimization. These are Box's (1965) Complex Search method and Simulated Annealing.

5.4.1 Complex Search

Complex search is an extension of Nelder and Mead's (1965) Simplex search that has been modified for constrained problems. The search starts with the evaluation of points in a simplex consisting of p+1 vertices in the feasible region. It proceeds by continuously dropping the worst point from among the points in the simplex and adding a new point determined by the reflection of this point through the centroid of the remaining vertices. The major issue in applying this procedure to simulation models is the determination of the worst point. Since the responses are stochastic, an apparently worst point may actually be one of the better points, and dropping it may take the search away from the optimum region.

Azadivar and Lee (1988) developed a program based on Complex Search that automatically applies this process to any given simulation model. The decision variables of these models can be constrained by deterministic as well as stochastic constraints that may be responses of the same or other simulation models. In order to avoid making a wrong decision regarding the worst point, the values of the responses at the vertices are compared statistically. If the result of the multiple comparison is conclusive and shows that one point is significantly worse than the others, it is dropped. Otherwise additional simulation runs are made to reduce the variance and the comparison is repeated.

Among the procedures that have been developed specifically for simulation optimization, Teleb and Azadivar (1992) use the stochastic nature of the responses to the advantage of the optimization. They use the Complex search method but suggest an alternative way of comparing the responses at the vertices. For each point in the complex they calculate a probability that the response vector belongs to the random vector representing the best value for all objective functions. The point with the lowest probability is dropped and its reflection with respect to the centroid of the rest of the points is added to the simplex.

5.4.2 Simulated Annealing

Simulated annealing is a relatively new method that could be utilized for simulation optimization. A description of this procedure is presented by Eglese (1990). Simulated annealing is a gradient search method that attempts to achieve a global optimum. In order not to be trapped in a locally optimum region, this procedure sometimes accepts movements in directions other than steepest ascent or descent. The acceptance of an uphill rather than a downhill direction is controlled by a sequence of random variables with a controlled probability.
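
The following is a minimal sketch, assuming a toy noisy response, of the acceptance mechanism just described; the neighborhood move, the cooling schedule, and simulate() are illustrative assumptions rather than any specific published procedure.

```python
import math
import random

random.seed(5)

def simulate(x):
    """Stand-in stochastic response to be minimized; the true mean is (x^2 - 8)^2 / 10."""
    return (x * x - 8.0) ** 2 / 10.0 + random.gauss(0.0, 0.5)

x = 10.0
fx = simulate(x)
temperature = 5.0
for _ in range(2000):
    candidate = x + random.uniform(-1.0, 1.0)          # random move in the neighborhood of x
    fc = simulate(candidate)
    # Metropolis-style rule: always accept improvements; accept uphill moves with
    # probability exp(-increase / temperature) so the search can leave local regions.
    if fc < fx or random.random() < math.exp(-(fc - fx) / temperature):
        x, fx = candidate, fc
    temperature *= 0.995                               # gradually reduce the temperature

print(x)                                               # settles near a minimizer, x = +/- sqrt(8)
```
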
6 MULTI-CRITERIA OPTIMIZATION

In addition to the common difficulties with all other multi-criteria optimization problems, multi-criteria simulation optimization possesses its own complexities, which are mostly due to the stochastic nature of the response functions. Most of the work done in this area consists of slight modifications of the techniques used in operations research for generic multi-objective optimization. Some of these approaches are:

- Using one of the responses as the primary response to be optimized subject to certain levels of achievement on the other objective functions. Biles (1975, 1977) uses this approach in conjunction with a version of Box's complex method and alternatively with a variation of the gradient and gradient projection methods (see the sketch after this list).

- Variations of the goal programming approach, such as those reported by Biles and Swain (1980), Clayton et al. (1982), and Rees et al. (1985).

- Multi-attribute value function methods, such as the one used by Mollaghasemi et al. (1991) and Mollaghasemi and Evans (1992).
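
The following is a minimal sketch, not Biles's procedure, of the first approach listed above: the primary response is optimized over a small candidate grid, and only points whose estimated secondary response meets the stated achievement level are considered. The two-response simulate(), the level 9.5, and the grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate(x, rng):
    """Stand-in two-response simulation: a primary response f1 and a secondary response f2."""
    f1 = 10.0 * x - x ** 2 + rng.normal(scale=0.5)   # primary response, to be maximized
    f2 = 2.0 * x + rng.normal(scale=0.5)             # secondary response, required to stay <= 9.5
    return f1, f2

candidates = np.linspace(0.0, 6.0, 13)
reps = 30
best_x, best_f1 = None, -np.inf
for x in candidates:
    out = np.array([simulate(x, rng) for _ in range(reps)])
    f1_hat, f2_hat = out.mean(axis=0)
    # Optimize the primary response only among points meeting the achievement level on f2.
    if f2_hat <= 9.5 and f1_hat > best_f1:
        best_x, best_f1 = x, f1_hat

print(best_x, round(float(best_f1), 2))   # x = 4.5: the best primary response among qualifying points
```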


7 NON-PARAMETRIC OPTIMIZATION

Many industrial, service, and other complex systems that are modeled by computer simulation need to be optimized in terms of their structural designs and operational policies. Mathematical programming techniques are not usually applicable in these situations. Examples of these systems are scheduling policies, layout problems, and part routing policies. In order to address these problems, each function evaluation requires a new configuration of the simulation model. Furthermore, since the decision variables are not quantitative, regular hill climbing, infinitesimal perturbation analysis, and stochastic approximation methods are not quite applicable. To deal with these problems, an automatic model generation capability and a new optimization procedure have to be developed.

Since this is a new area of attention, not much work has been reported in the literature. An example of this approach is the work by Prakash and Shannon (1989). We believe this is a very important topic for the future of simulation optimization. Developments in this area will provide the real answer to "how to" questions.

8 CONCLUSIONS AND RECOMMENDATIONS

The choice of the procedure to employ in simulation optimization depends on the analyst and the problem to be solved. However, we believe the modeler is often not a good mathematician and the mathematician is not necessarily a good simulation modeler. When it comes time to model a complex system, a team of experts will work on developing a valid simulation model. These models are usually rather complex and do not yield themselves to the type of tracking needed in perturbation analysis and frequency domain analysis. Until significant progress is made in these areas, practitioners will treat their simulation models as black boxes. The demands made by the optimization routines should therefore be such that they can directly interface with these black boxes and operate on them in an input-output mode, not putting too much demand on the modelers to modify them for each iteration.

We recommend that, parallel to additional efforts spent on advancing theoretical concepts such as IPA and frequency domain analysis, researchers work on making simulation optimization procedures more suitable to be interfaced with independently built models. We believe intelligent frameworks to perform these interfaces will make this task more feasible.

REFERENCES

Azadivar, F., and Talavage, J. J., 1980. Optimization of stochastic simulation models, Mathematics and Computers in Simulation, 22:231-241.
Azadivar, F., and Lee, Y. H., 1988. Optimization of discrete variable stochastic systems by computer simulation, Mathematics and Computers in Simulation, 30:331-345.
Biles, W. E., 1974. A gradient regression search procedure for simulation experimentation, Proceedings of the 1974 Winter Simulation Conference, 491-497.
Biles, W. E., 1975. A response surface method for experimental optimization of multiresponse processes, Industrial and Engineering Chemistry: Process Design and Development, 14:152-158.
Biles, W. E., 1977. Optimization of multiple objective computer simulation: a nonlinear goal programming approach, Proceedings of AIDS Conference, Chicago, 250-252.
Biles, W. E., and Swain, J. J., 1980. Optimization and Industrial Experimentation, Wiley Interscience, New York.
Box, M. J., 1965. A new method of constrained optimization and a comparison with other methods, The Computer Journal, 8:42-52.
Clayton, E. R., Weber, W. E., and Taylor, B. W., 1982. A goal programming approach to the optimization of multiresponse simulation models, IIE Transactions, 14:282-287.
Daugherty, A. F., and Turnquist, M. A., 1980. Simulation optimization using response surfaces based on spline approximations, Proceedings of the Winter Simulation Conference, 183-193.
Eglese, R. W., 1990. Simulated annealing: a tool for operational research, European Journal of Operational Research, 46:271-281.
Glynn, P. W., 1986. Optimization of stochastic systems, Proceedings of the 1986 Winter Simulation Conference, 52-59.
Glynn, P. W., 1987. Likelihood ratio gradient estimation: an overview, Proceedings of the 1987 Winter Simulation Conference, 366-375.
Glynn, P. W., 1986. Stochastic approximation for Monte Carlo optimization, Proceedings of the 1986 Winter Simulation Conference, 356-364.
Heidelberger, P., 1986. Limitations of infinitesimal perturbation analysis, IBM Research Report RC 11891, IBM Watson Research Labs, Yorktown Heights, NY.
Ho, Y. C., 1984. Perturbation analysis methods for discrete event dynamical systems and simulations, Proceedings of the Winter Simulation Conference, 171-173.
Ho, Y. C., and Cao, X. R., 1991. Perturbation Analysis of Discrete Event Dynamic Systems, Kluwer Academic Publishers.
Ho, Y. C., Cao, X. R., and Cassandras, C., 1983. Infinitesimal and finite perturbation analysis of queuing networks, Automatica, 19:439-445.
Ho, Y. C., Suri, R., Cao, X. R., Diehl, G. W., Dine, J. W., and Zazanis, M., 1984. Optimization of large multiclass (non-product-form) queuing networks using perturbation analysis, Large Scale Systems, 7:165-180.
Jacobson, S. H., and Schruben, L. W., 1989. Techniques for simulation response optimization, Operations Research Letters, 8:1-9.
Jacobson, S. H., and Schruben, L. W., 1991. A simulation optimization procedure using harmonic analysis, working paper.
Kiefer, J., and Wolfowitz, J., 1952. Stochastic estimation of the maximum of a regression function, Annals of Mathematical Statistics, 23:462-466.
Meketon, M. S., 1987. Optimization in simulation: a survey of recent results, Proceedings of the 1987 Winter Simulation Conference, 58-67.
Mollaghasemi, M., Evans, G. W., and Biles, W. E., 1991. An approach for optimizing multiple response simulation models, Proceedings of the 13th Annual Conference on Computers and Industrial Engineering.
Mollaghasemi, M., and Evans, G. W., 1992. A methodology for stochastic optimization of multiple response simulation models, working paper.


Nelder, J. A., and Mead, R., 1965. A simplex method for function minimization, The Computer Journal, 7:308-313.
Pegden, C. D., and Gately, M. P., 1977. Decision optimization for GASP IV simulation models, Proceedings of the 1977 Winter Simulation Conference, 125-133.
Prakash, S., and Shannon, R. E., 1989. Intelligent back end of a goal oriented simulation environment for discrete-part manufacturing, Proceedings of the 1989 Winter Simulation Conference, 883-891.
Rees, L. P., Clayton, E. R., and Taylor, B. W., 1985. Solving multiple response simulation models using response surface methodology within a lexicographic goal programming framework, IIE Transactions, 17:47-57.
Robbins, H., and Monro, S., 1951. A stochastic approximation method, Annals of Mathematical Statistics, 22:400-407.
Rubinstein, R. Y., 1989. Sensitivity analysis and performance extrapolation for computer simulation methods, Operations Research, 37(1).
Safizadeh, M. H., 1990. Optimization in simulation: current issues and the future outlook, Naval Research Logistics, 37:807-825.
Safizadeh, M. H., and Signorila, R., 1992. Optimization of simulation via quasi-Newton methods, working paper.
Schruben, L. W., 1986. Simulation optimization using frequency domain methods, Proceedings of the Winter Simulation Conference, 366-369.
Schruben, L. W., and Cogliano, V. J., 1981. Simulation sensitivity analysis: a frequency domain approach, Proceedings of the Winter Simulation Conference, 455-459.
Smith, D. E., 1976. Automatic optimum-seeking program for digital simulation, Simulation, 22:27-32.
Suri, R., 1983. Infinitesimal perturbation analysis of discrete event dynamic systems: a general theory, Proceedings of the Twenty-Second IEEE Conference on Decision and Control, 1030-1038.
Teleb, R., and Azadivar, F., 1992. A methodology for solving multi-objective simulation-optimization problems, European Journal of Operational Research, to appear.
Wilson, J. R., 1987. Future directions in response surface methodology for simulation, Proceedings of the 1987 Winter Simulation Conference, 378-381.

AUTHOR BIOGRAPHY

FARHAD AZADIVAR is a professor in the Department of Industrial Engineering at Kansas State University. He is also the director of the Advanced Manufacturing Institute of the university, which is a Center of Excellence in research on manufacturing for the state of Kansas. He received his B.S. degree in mechanical engineering and his M.S. degree in systems engineering. He received his Ph.D. in industrial engineering from Purdue University in 1980. His areas of research are in modeling and optimization of manufacturing systems.
