Development of Optimization Algorithms for the Solution of Problems Involving Computationally Expensive Analysis
Kalyan Shankar Bhattacharjee
Supervisor: Tapabrata Ray
Co-supervisor: Hemant Kumar Singh

Presentation overview
- What class of problems are these, and why is it important to solve them?
- Existing approaches and their limitations
- Approaches proposed in this research
- Preliminary results
- Future direction
- Timeline and deliverables

What is the problem?
- Optimization problems in which the objectives and/or constraints are evaluated using computationally expensive simulations define
the general scope of such problems.
- Examples include multidisciplinary design and models of complex natural systems or processes where the underlying analysis involves iterative solvers (finite element analysis, computational fluid dynamics, computational electromagnetics, weather models, bushfire models, etc.).
- Objective functions and constraints are highly nonlinear, often with function and slope discontinuities.
- Solvers are often in black-box form.
- Large number of variables.
- Evaluation of a single solution is computationally expensive: several CPU hours.
- To deal with such classes of problems, population-based stochastic algorithms are typically used, often with the aid of approximations.

Existing approaches
Several approaches have been proposed so far to deal with such classes of problems:
- Use of clusters, parallel computing infrastructure, GPUs, etc.: available to a small group of industries or elite research institutes; not within the scope of this research.
- Use of surrogates or computationally cheaper approximation models in lieu of computationally expensive analysis during the course of optimization: the well-established area of surrogate-assisted optimization. Not considered within this
scope of research directly.
- Identification of promising solutions, and evaluating only them, to reduce computational cost: rarely investigated in the field. Selective evaluation of the constraints and objectives of these promising solutions can lead to further savings.
- Use of multi-fidelity optimization strategies: fewer reports; less investigated in the field.

Existing approaches: Selective evaluation
- Identification of promising solutions based on SVM has been suggested for multi-objective problems.

Limitations: Selective evaluation
- There are no reports on whether evaluation of one solution is sufficient. Which one? And can one always evaluate just one?
- Partial evaluation has never been studied.
- There are no reports on classifier-guided constraint selection.
- Research question: can we design means to identify promising solutions and evaluate only their relevant objectives and constraints during the course of the search?
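The research question above can be made concrete with a small sketch. This is not the proposed algorithm; it is an illustrative toy in which a simple 1-nearest-neighbour rule stands in for the SVM-based classifiers discussed here, and a `sphere` function stands in for an expensive simulation (all names, bounds, and settings below are assumptions for illustration only).

```python
import random

# Illustrative "expensive" objective: a stand-in for a costly simulation.
def sphere(x):
    return sum(xi * xi for xi in x)

def nn_predict_promising(archive, x):
    """1-nearest-neighbour stand-in for an SVM classifier: a candidate
    inherits the 'promising' label of its closest evaluated neighbour."""
    nearest = min(archive,
                  key=lambda rec: sum((a - b) ** 2 for a, b in zip(rec[0], x)))
    return nearest[1]

def selective_search(dim=2, n_init=20, n_candidates=100, seed=0):
    rng = random.Random(seed)
    # Evaluate an initial sample expensively and label each point
    # 'promising' if it is no worse than the median objective value.
    pts = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_init)]
    vals = [sphere(p) for p in pts]
    median = sorted(vals)[n_init // 2]
    archive = [(p, v <= median) for p, v in zip(pts, vals)]
    evaluated = n_init
    best = min(vals)
    # Main loop: only candidates the classifier flags as promising
    # are sent to the expensive evaluator; the rest are skipped.
    for _ in range(n_candidates):
        cand = [rng.uniform(-5, 5) for _ in range(dim)]
        if nn_predict_promising(archive, cand):
            v = sphere(cand)
            evaluated += 1
            best = min(best, v)
            archive.append((cand, v <= median))
    return best, evaluated

best, cost = selective_search()
print(f"best objective: {best:.4f} after {cost} expensive evaluations")
```

Candidates whose nearest evaluated neighbour was unpromising are discarded without an expensive call, so the evaluation count stays below the naive budget of initial points plus all candidates.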
Existing approaches: Multi-fidelity optimization
- Correction/scaling/bridging MFO: misguided search; self-adaptive correction functions; nonlinearity handling in RSM.
- Knowledge-based algorithms (e.g. KBNN): dependent on many parameters; proper network-architecture identification; extrapolation beyond the limits of the dataset is impossible.
- Trust-region models and space mapping: constraint handling is not implemented; significant overhead calculations at each iteration for the model update; first-order consistency with the Lagrangian at the centre of the trust region.
- Gradient-based models: misguidance by low-fidelity estimates; expensive to build the metamodel at each iteration, especially when the number of data points is large.
- Fidelity-switching models: involve many parameters; the switching mechanism activates when there is no improvement based on low-fidelity estimates; misguided search.

Limitations: Multi-fidelity optimization
- Lack of an efficient constraint-handling mechanism in multi-fidelity optimization.
- Multi/many-objective constrained/unconstrained multi-fidelity problem solving has not been studied so far.
- Rank-based learning models to identify the worth of a solution, and selective fidelity evaluation of potential solutions, are still not well-explored areas.
- A benchmark test suite for multi-fidelity optimization does not exist.

Research objectives
- Development of classifier-guided selective evaluation strategies, i.e. ones where promising solutions are identified and/or only a selected set of objectives and constraints is evaluated.
- Development of practical and computationally efficient optimization algorithms to solve multi/many-objective constrained/unconstrained MFO problems.
- Development of a test suite for MFO (single/multi/many-objective, constrained/unconstrained, scalable, tunable problems).

Proposed approach: Selective evaluation of solutions and constraints
1. Selective evaluation of solutions and constraints (constrained single-objective problems).

[1] Suykens, J. A. K., Van Gestel, T., De Brabanter, J., De Moor, B., and Vandewalle, J., Least Squares Support Vector Machines. World Scientific, 2002, vol. 4.
[2] Chapelle, O. and Keerthi, S. S., "Efficient algorithms for ranking with SVMs," Information Retrieval, vol. 13, no. 3, pp. 201-215, 2010.

Proposed approach: Selective evaluation of objectives
2. Selective evaluation of objectives (unconstrained bi-objective problems).

Preliminary results: Selective evaluation of constraints
Problem definition (G5) [1]:
- Number of variables (x) = 4
- Number of constraints (g) = 4
- Number of objectives (f) = 1
- Best known optimum at x = (679.9453, 1026.067, 0.1188764, 0.3962336), f = 5126.4981 (sum of CV = 1e-4), fitness evaluations = 348,000 [2]
- Feasibility ratio = 0.0000%

Findings (within 1000 fitness evaluations):
- Performance of CGCSM is best in terms of convergence rate of mean objective function value with evaluation cost.
- IDEA and NSGA-II performance is the same.
- SRES performs worst in all aspects.

[Figure: mean sum-of-CV convergence plot (G5)]

[1] Koziel, S. and Michalewicz, Z., "Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization," Evolutionary Computation, vol. 7, no. 1, pp. 19-44, 1999.
[2] Mezura-Montes, E., Coello, C. A. C., and Tun-Morales, E. I., "Simple feasibility rules and differential evolution for constrained optimization," in MICAI 2004: Advances in Artificial Intelligence, pp. 707-716. Springer Berlin Heidelberg, 2004.
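As context for the two bookkeeping quantities quoted above, the sum of constraint violations (CV) and the feasibility ratio, a minimal sketch is given below. The constraints `g1` and `g2` are illustrative assumptions, not the actual G5 constraints; only the definitions (sum of positive violations; fraction of uniformly sampled points that satisfy all constraints) follow standard usage. The toy feasible region here has essentially zero area, so the estimated ratio comes out as 0, mirroring the 0.0000% ratio reported for G5.

```python
import random

# Toy inequality constraints in g(x) <= 0 form (illustrative, not G5).
def g1(x):
    return x[0] + x[1] - 1.0   # x1 + x2 <= 1

def g2(x):
    return 0.25 - x[0] * x[1]  # x1 * x2 >= 0.25

def sum_cv(x, constraints):
    """Sum of constraint violations: only positive parts count."""
    return sum(max(0.0, g(x)) for g in constraints)

def feasibility_ratio(constraints, bounds, n=50_000, seed=1):
    """Monte Carlo estimate of the fraction of the search space
    in which all constraints are satisfied."""
    rng = random.Random(seed)
    feasible = 0
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        if all(g(x) <= 0.0 for g in constraints):
            feasible += 1
    return feasible / n

cons = [g1, g2]
bounds = [(0.0, 1.0), (0.0, 1.0)]
print(sum_cv([0.9, 0.9], cons))        # g1 is violated by about 0.8
print(feasibility_ratio(cons, bounds))  # (near-)zero feasible fraction
```

A zero sum of CV marks a feasible solution; a vanishing feasibility ratio signals constraints that are hard to satisfy, the regime in which CGCSM is reported to do well.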
Summary: Selective evaluation of constraints
- Problems investigated: G series (G1-G11) and engineering design problems (Belleville spring, car side impact, disk clutch brake, helical spring, speed reducer, step-cone pulley, pressure vessel, welded beam).
- Performance of CGCSM is better in 13 out of 19 problems based on convergence of the sum of constraint violations with the evaluation budget.
- CGCSM seems to work better on problems with a very small feasible region, i.e. problems with constraints that are hard to satisfy, such as equality constraints.
- CGCSM sequentially tries to satisfy constraints in ascending order of the percentage ratio of each constraint's feasible region to the whole search space. For most of the problems, this brings down the sum of constraint violations at a faster rate than the other algorithms.
- Considering the evaluation cost at which the first feasible solution is obtained, CGCSM performs better in 5 problems, although in 8 problems the performance of the other algorithms is the same because feasible solutions are already present at the initialization stage.

Preliminary results: Selective evaluation of objectives
Problem definition (DTLZ2) [1]:
- Number of variables (x) = 11
- Number of constraints (g) = 0
- Number of objectives (f) = 2

Findings (within 20,000 units of cost; each objective evaluation = 1 unit):
- Performance of the X model is best in terms of convergence rate of mean hypervolume with evaluation cost.
- False positives are lowest for the XF models.
- False negatives are low for all the models.
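Since the findings above are reported as mean hypervolume versus cost, a minimal sketch of the hypervolume indicator for two minimization objectives may help; the reference point and the sample front below are illustrative assumptions, not the experimental settings.

```python
def hypervolume_2d(front, ref):
    """Area dominated by a bi-objective (minimization) front relative
    to a reference point, computed as a sum of disjoint rectangles."""
    # Keep only points that strictly dominate the reference point.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    # Sweep in increasing f1, keeping only non-dominated points:
    # f2 must strictly improve as f1 grows.
    nd = []
    for p in pts:
        if not nd or p[1] < nd[-1][1]:
            nd.append(p)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in nd:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# (2.5, 2.5) is dominated by (2.0, 2.0) and contributes nothing.
front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5)]
print(hypervolume_2d(front, (4.0, 4.0)))  # 6.0
```

A larger hypervolume means the front dominates more of the objective space below the reference point, which is why convergence is tracked as mean hypervolume against evaluation cost.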
[Figures: mean hypervolume vs. cost, false-positive misclassification vs. cost, and false-negative misclassification vs. cost (DTLZ2)]

[1] Deb, K., Thiele, L., Laumanns, M., and Zitzler, E., Scalable Test Problems for Evolutionary Multiobjective Optimization. Springer, 2005.

Summary: Selective evaluation of objectives
- Problems investigated: DTLZ1-5, as bi-objective optimization problems.
- Performance of the X model is best on DTLZ2 and DTLZ5; full evaluation offers the best results on DTLZ1 and DTLZ3.
- In terms of false positives, XF1 and XF2 tend to have lower values. This supports the argument that the models are cautious, i.e. they selectively evaluate promising solutions.
- It is interesting to note that the false negatives of all models are fairly low. This suggests that all the models are quite accurate in discarding, i.e. very few (around 1%) of the promising solutions are incorrectly eliminated.

Proposed approach: Multi-fidelity optimization
1. Generational fidelity-switching approach (constrained/unconstrained single-objective problems).
2. Steady-state selective fidelity evaluation approach (constrained/unconstrained single-objective problems).

Preliminary results: Test function
Constrained MFO problem definition:
- One-dimensional; number of fidelity levels: 5; number of constraints: 5.
- Global optimum at x = 3.67618, f5 = -0.9217.
- The feasible space in g5 lies between x = 2.122 to 2.452 and x = 3.554 to 3.844.

Findings:
- Use of Pr = 0.05 will result in evaluation of a large number of solutions (a cautious model); expected to deliver solutions with a low standard deviation.
- Use of Pr = 0.2 will result in evaluation of few solutions, and there is some possibility of misguidance.

[Figures: mean plot over 30 runs for different values of Pr; number of evaluations and copies at each Pr; misclassification plot for different Pr; feasible and infeasible solutions at the median run for Pr = 0.05, 0.1, 0.15, 0.2]

Comparison of results: single-fidelity, progressive-fidelity, generational-progressive multi-fidelity, and steady-state selective multi-fidelity approaches.

Summary: Multi-fidelity optimization
- It is interesting to note that the misclassification percentage increases with increasing evaluation cost. As the evolution process continues,
solutions in the population start to lose diversity due to convergence to an optimum, which results in poor training of the model in the context of the progressive-fidelity-switching mechanism.
- Due to the loss of diversity with increasing evaluation cost, the median distance among solutions in the population reduces to near zero, and the number of evaluations will increase with forced evaluation.
- Selective fidelity evaluation based on the multi-fidelity approach delivers better results in terms of convergence of objective value with evaluation cost.

Future direction
- Instead of using a decaying parameter to balance the exploration/exploitation trade-off, an adaptive scheme needs to be adopted in the context of constraint handling.
- Incorporate a multi/many-objective scheme in the context of constraint handling using the selective evaluation approach.
- In the context of multi/many-objective unconstrained optimization problems, incorporate selective objective evaluation of potential solutions with an efficient non-dominated ranking scheme.
- Replace the local linear ranking model with SVM ranking; the effects need to be studied.
- Incorporate a potential
child selection scheme within the progressive-fidelity-switching multi-fidelity optimization approach.
- Incorporate a multi/many-objective scheme in the context of the selective and progressive-fidelity-switching multi-fidelity approaches.
- Develop a generic, scalable, tunable multi/many-objective constrained/unconstrained test suite for multi-fidelity optimization problems.

Research outcome
[1] Bhattacharjee, K. S. and Ray, T., "Selective evaluation in multiobjective optimization: A less explored avenue," in Proceedings of the IEEE Congress on Evolutionary Computation, Sendai, Japan, 2015. In press (accepted 02/2015).
[2] Bhattacharjee, K. S. and Ray, T., "Selective constraint evaluation in multidisciplinary design optimization," in Proceedings of the Eleventh World Congress of Structural and Multidisciplinary Optimization, Sydney, Australia, 2015. In press (accepted 03/2015).
[3] Bhattacharjee, K. S. and Ray, T., "An Evolutionary Algorithm with Classifier Guided Constraint Evaluation Strategy for Computationally Expensive Optimization Problems," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Madrid, Spain, 2015. Submitted.
[4] Bhattacharjee, K. S. and Ray, T., "Cost to evaluate versus cost to learn? Performance of selective evaluation strategies in multiobjective optimization," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Madrid, Spain, 2015. Submitted.

Timeline

Thank you