Development of Optimization Algorithms for the Solution of Problems Involving Computationally Expensive Analysis

Kalyan Shankar Bhattacharjee
Supervisor: Tapabrata Ray
Co-supervisor: Hemant Kumar Singh

Presentation overview

- What class of problems do they represent, and why is it important to solve them?
- Existing approaches and their limitations
- Approaches proposed in this research
- Preliminary results
- Future direction
- Timeline and deliverables

What is the problem?

Optimization problems in which the objectives and/or constraints are evaluated using computationally expensive simulations define the general scope of such problems. Examples include multidisciplinary design and models of complex natural systems or processes where the underlying analysis involves iterative solvers (finite element analysis, computational fluid dynamics, computational electromagnetics, weather models, bushfire models, etc.).

- Objective functions and constraints are highly nonlinear, often with function and slope discontinuities.
- Solvers are often in black-box form.
- Large numbers of variables.
- Evaluation of a single solution is computationally expensive: several CPU hours.

To deal with such classes of problems, population-based stochastic algorithms are typically used, often with the aid of approximations.

Existing approaches

Several approaches have been proposed so far to deal with such classes of problems:

- Use of clusters, parallel computing infrastructure, GPUs, etc. Available only to a small group of industries or elite research institutes; not within the scope of this research.
- Use of surrogates, i.e. computationally cheaper approximation models, in lieu of computationally expensive analysis during the course of optimization. A well-established area (surrogate-assisted optimization); not considered directly within the scope of this research.
- Identification of promising solutions, and evaluation of only those, to reduce computational cost. Rarely investigated in the field. Selective evaluation of the constraints and objectives of these promising solutions can lead to further savings.
- Use of multi-fidelity optimization strategies. Fewer reports; less investigated in the field.

Existing approaches: Selective evaluation

- Identification of promising solutions based on SVM has been suggested for multi-objective problems.
- There are no reports on whether evaluation of one is sufficient, which one, or whether one can always evaluate just one.

Limitations: Selective evaluation

- Partial evaluation has never been studied.
- There are no reports on classifier-guided constraint selection.

Can we design means to identify promising solutions and evaluate only their relevant objectives and constraints during the course of the search?
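The control flow behind this question can be sketched in a few lines. Everything below is a hypothetical stand-in, not the thesis method: a 1-nearest-neighbour rule plays the role of the SVM classifier, `expensive_fitness` is just the sphere function standing in for a costly simulation, and the median-fitness labelling is an invented convention. Only the idea is kept: train a classifier on already-evaluated solutions, then spend expensive evaluations only on candidates predicted to be promising.

```python
import math
import random

def expensive_fitness(x):
    # Hypothetical stand-in for a costly simulation (sphere function).
    return sum(xi * xi for xi in x)

def predict_promising(x, labelled):
    # 1-nearest-neighbour rule: a cheap stand-in for the SVM classifier.
    _, label = min(labelled, key=lambda rec: math.dist(x, rec[0]))
    return label

def classifier_guided_search(dim=3, n_init=20, n_candidates=200, seed=1):
    rng = random.Random(seed)
    sample = lambda: [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    evaluated = [(x, expensive_fitness(x)) for x in (sample() for _ in range(n_init))]
    spent = n_init
    for _ in range(n_candidates):
        # Label archive members as promising if better than the median fitness.
        median = sorted(f for _, f in evaluated)[len(evaluated) // 2]
        labelled = [(x, f <= median) for x, f in evaluated]
        x = sample()
        if predict_promising(x, labelled):
            evaluated.append((x, expensive_fitness(x)))  # pay only for these
            spent += 1
    return min(f for _, f in evaluated), spent

best, spent = classifier_guided_search()
print(best, spent)  # spent stays well below the 220 a full evaluation would cost
```

The saving comes from `spent` growing only for candidates the classifier lets through; candidates classified as unpromising cost nothing.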

Existing approaches: Multi-fidelity optimization

- Correction/scaling/bridging MFO: misguided search; self-adaptive correction functions; nonlinearity handling in RSM.
- Knowledge-based algorithms (e.g. KBNN): dependent on many parameters; proper network architecture identification; extrapolation beyond the limits of the dataset is impossible.
- Trust-region models and space mapping: constraint handling is not implemented; significant overhead calculations at each iteration for model updates; only first-order consistency with the Lagrangian at the centre of the trust region.
- Gradient-based models: misguidance by low-fidelity estimates; expensive to build the metamodel at each iteration, especially when the number of data points is high.
- Switching-fidelity models: involve many parameters; the switching mechanism activates when there is no improvement based on low-fidelity estimates; misguided search.

Limitations: Multi-fidelity optimization

- Lack of efficient constraint-handling mechanisms in multi-fidelity optimization.
- Multi/many-objective constrained/unconstrained multi-fidelity problem solving has not been studied so far.
- Rank-based learning models to identify the worth of a solution, and selective fidelity evaluation of potential solutions, are still not well-visited areas.
- A benchmark test suite for multi-fidelity optimization does not exist.

Research objectives

- Development of classifier-guided selective evaluation strategies, i.e. ones where promising solutions are identified and/or a selected set of objectives and constraints is evaluated.
- Development of practical and computationally efficient optimization algorithms to solve multi/many-objective constrained/unconstrained MFO problems.
- Development of a test suite for MFO (single/multi/many-objective, constrained/unconstrained, scalable, tunable problems).

Proposed approach: Selective evaluation of solutions and constraints

1. Selective evaluation of solutions and constraints (constrained single-objective problems).

[1] Suykens, J. A. K., Van Gestel, T., De Brabanter, J., De Moor, B., and Vandewalle, J., Least Squares Support Vector Machines. World Scientific, 2002, vol. 4.
[2] Chapelle, O. and Keerthi, S. S., "Efficient algorithms for ranking with SVMs," Information Retrieval, vol. 13, no. 3, pp. 201-215, 2010.

Proposed approach: Selective evaluation of objectives

2. Selective evaluation of objectives (unconstrained bi-objective problems).
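Reference [2] above concerns ranking with SVMs. As a loose illustration of the pairwise-ranking idea only (a perceptron-style linear scorer, not the algorithm of [2] and not the thesis implementation), the sketch below learns a weight vector from "better/worse" pairs; the toy data and the rule that a smaller first coordinate ranks better are invented for the example.

```python
import random

def rank_perceptron(pairs, dim, epochs=50, lr=0.1, seed=0):
    # Pairwise-ranking learner: for each (better, worse) pair, nudge w so
    # that w . better > w . worse. A simplified linear analogue of the
    # ranking-SVM idea of reference [2]; not the thesis implementation.
    rng = random.Random(seed)
    w = [0.0] * dim
    for _ in range(epochs):
        rng.shuffle(pairs)
        for better, worse in pairs:
            diff = [b - c for b, c in zip(better, worse)]
            if sum(wi * di for wi, di in zip(w, diff)) <= 0:  # mis-ranked
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

# Toy data: solutions with a smaller first coordinate rank better.
pts = [(x / 10.0, 1.0) for x in range(10)]
pairs = [(pts[i], pts[j]) for i in range(10) for j in range(10) if i < j]
w = rank_perceptron(pairs, dim=2)
scores = [sum(wi * pi for wi, pi in zip(w, p)) for p in pts]
print(all(scores[i] > scores[j] for i in range(10) for j in range(10) if i < j))  # True
```

Once trained, the scorer orders unseen solutions without evaluating them, which is what makes rank-based promise prediction attractive here.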

Preliminary results: Selective evaluation of constraints

Problem definition (G5) [1]:
- Number of variables (x) = 4
- Number of constraints (g) = 4
- Number of objectives (f) = 1
- Best known optimum at X = (679.9453, 1026.067, 0.1188764, -0.3962336), f = 5126.4981 (sum of CV = 1e-4), fitness evaluations = 348,000 [2]
- Feasibility ratio = 0.0000%

Findings (within 1000 fitness evaluations):
- Performance of CGCSM is best in terms of convergence rate of the mean objective function value with evaluation cost.
- IDEA and NSGA-II performance is the same.
- SRES performs worst in all aspects.

[Figure: mean sum-of-CV convergence plot (G5)]

[1] Koziel, S. and Michalewicz, Z. (1999). Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. Evolutionary Computation, 7(1), 19-44.
[2] Mezura-Montes, E., Coello, C. A. C., and Tun-Morales, E. I. (2004). Simple feasibility rules and differential evolution for constrained optimization. In MICAI 2004: Advances in Artificial Intelligence (pp. 707-716). Springer Berlin Heidelberg.

Summary: Selective evaluation of constraints

- Problems investigated: G series (G1-G11) and engineering design problems (Belleville spring, car side impact, disk clutch brake, helical spring, speed reducer, step-cone pulley, pressure vessel, welded beam).
- Performance of CGCSM is better in 13 out of 19 problems based on convergence of the sum of constraint violations with the evaluation budget.
- CGCSM seems to work better on problems having a very small feasible region, i.e. problems with constraints that are hard to satisfy, such as equality constraints.
- CGCSM sequentially tries to satisfy constraints in ascending order of their respective percentage ratios of feasible region to the whole search space. This approach brings down the sum of constraint violations at a faster rate than the other algorithms for most of the problems.
- Considering the evaluation cost at which the first feasible solution is obtained, CGCSM performs better in 5 problems; in 8 problems the performance of the other algorithms is the same because a feasible solution is already present at the initialization stage.
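The ordering idea above can be sketched as follows. The constraints `g1` and `g2`, the sampling scheme, and the decision to stop a solution's constraint evaluation at the first violation are all illustrative assumptions, not the CGCSM algorithm itself; the sketch only shows one plausible way an ascending feasibility-ratio ordering yields evaluation savings.

```python
import random

def feasibility_ratio(g, samples):
    # Monte-Carlo estimate of the fraction of the space satisfying g(x) <= 0.
    return sum(1 for x in samples if g(x) <= 0) / len(samples)

def constraint_violation(x, constraints, order):
    # Evaluate constraints hardest-first and stop at the first violation,
    # saving the remaining (possibly expensive) constraint evaluations.
    calls = 0
    for k in order:
        calls += 1
        v = constraints[k](x)
        if v > 0:
            return v, calls      # violated: skip the rest
    return 0.0, calls            # feasible with respect to all constraints

# Hypothetical toy constraints on x in [0, 1]; g2 is far harder than g1.
g1 = lambda x: x - 0.9               # feasible for x <= 0.9
g2 = lambda x: abs(x - 0.5) - 0.01   # feasible only very near 0.5
constraints = [g1, g2]

rng = random.Random(0)
samples = [rng.random() for _ in range(1000)]
ratios = [feasibility_ratio(g, samples) for g in constraints]
order = sorted(range(len(constraints)), key=lambda k: ratios[k])  # ascending
print(order)   # [1, 0]: the hard constraint g2 is checked first

v, calls = constraint_violation(0.7, constraints, order)
print(calls)   # 1: g2 is already violated at x = 0.7, so g1 is never evaluated
```

Checking the hardest constraint first rejects most infeasible solutions after a single constraint call, which is consistent with the faster reduction of the sum of constraint violations reported above.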

Preliminary results: Selective evaluation of objectives

Problem definition (DTLZ2) [1]:
- Number of variables (x) = 11
- Number of constraints (g) = 0
- Number of objectives (f) = 2

Findings (within a cost of 20,000, where each objective evaluation = 1 unit of cost):
- Performance of the X model is best in terms of convergence rate of mean hypervolume with evaluation cost.
- False positives are lowest for the XF models.
- False negatives are low for all the models.

[Figures: mean hypervolume with cost; false-positive misclassification with cost; false-negative misclassification with cost (DTLZ2)]

[1] Deb, K., Thiele, L., Laumanns, M., and Zitzler, E., Scalable test problems for evolutionary multiobjective optimization. Springer, 2005.

Summary: Selective evaluation of objectives

- Problems investigated: DTLZ1-5, bi-objective optimization problems.
- Performance of the X model is best in DTLZ2 and DTLZ5; full evaluation offers the best result in DTLZ1 and DTLZ3.
- In terms of false positives, XF1 and XF2 tend to have lower values. This supports the argument that the models are cautious, i.e. they selectively evaluate promising solutions.
- It is interesting to note that the false negatives of all models are fairly low. This suggests that all the models are quite accurate in discarding, i.e. very few promising solutions (around 1%) are incorrectly eliminated.
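The results above are reported as mean hypervolume. For a bi-objective minimisation front this indicator has a simple closed form (the standard sweep construction, not code from the slides): sort the non-dominated points by the first objective and sum the rectangles each point adds against a reference point.

```python
def hypervolume_2d(front, ref):
    # Hypervolume (minimisation) of a non-dominated bi-objective front:
    # sum of rectangles swept between consecutive points sorted by f1.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

hv = hypervolume_2d([(0.0, 1.0), (1.0, 0.0)], ref=(2.0, 2.0))
print(hv)  # 3.0: two 2x1 rectangles whose 1x1 overlap is counted once
```

A larger hypervolume means the front dominates more of the objective space below the reference point, which is why its convergence rate is used as the quality measure above.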

Proposed approach: Multi-fidelity optimization

1. Generational fidelity-switching approach (constrained/unconstrained single-objective problems).
2. Steady-state selective fidelity evaluation approach (constrained/unconstrained single-objective problems).
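The generational fidelity-switching idea (approach 1) can be sketched as follows. The analytic objective, the sinusoidal noise standing in for lower-fidelity error, and all parameter values are invented for illustration; only the control flow reflects the approach named above: the whole population is evaluated at one fidelity level, which is raised on a fixed generational schedule so that the final generations use the exact, most expensive model.

```python
import math
import random

def fidelity_eval(x, level, max_level=5):
    # Hypothetical multi-fidelity test function: lower levels perturb the
    # true objective (x - 2)**2; the highest level is exact.
    return (x - 2.0) ** 2 + (max_level - level) * 0.1 * math.sin(10.0 * x)

def generational_fidelity_switching(generations=25, pop_size=8, levels=5, seed=3):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 4.0) for _ in range(pop_size)]
    switch_every = generations // levels
    for gen in range(generations):
        level = min(levels, 1 + gen // switch_every)  # raise fidelity on schedule
        ranked = sorted(pop, key=lambda x: fidelity_eval(x, level))
        parents = ranked[: pop_size // 2]             # truncation selection
        pop = parents + [p + rng.gauss(0.0, 0.2) for p in parents]
    return min(pop, key=lambda x: fidelity_eval(x, levels))

best = generational_fidelity_switching()
print(best)
```

Cheap low-fidelity generations do the coarse exploration, and only the late generations pay full price, which is the cost argument behind the approach.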

Preliminary results: Test function

Constrained MFO problem definition:
- One-dimensional.
- Number of fidelity levels: 5
- Number of constraints: 5
- Global optimum at X = 3.67618, f5 = -0.9217
- Feasible space in g5 lies between X = 2.122 to 2.452 and X = 3.554 to 3.844.

Findings:
- Use of Pr = 0.05 results in evaluation of a large number of solutions (a cautious model); it is expected to deliver solutions with a low standard deviation.
- Use of Pr = 0.2 results in evaluation of few solutions, and there is some possibility of misguidance.

[Figures: mean plot of 30 runs using different values of Pr; number of evaluations and copies at each Pr; misclassification plots for Pr = 0.05, 0.1, 0.15, 0.2; feasible and infeasible solutions at the median run]
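The slides do not spell out the exact role of Pr, so the sketch below is one loose reading used only for illustration: Pr is treated as a threshold on a cheap, invented estimate of a child's probability of improving. Children above the threshold are sent for expensive evaluation; the rest merely copy their low-fidelity estimate. A small Pr then lets almost every child through (the cautious setting), while Pr = 0.2 evaluates fewer solutions, matching the trend described above.

```python
import math
import random

def selective_fidelity_filter(children, promise_prob, pr):
    # Children whose estimated probability of improvement exceeds Pr are
    # evaluated expensively; the rest keep their cheap estimate ("copy").
    evaluate = [c for c in children if promise_prob(c) > pr]
    copy = [c for c in children if promise_prob(c) <= pr]
    return evaluate, copy

rng = random.Random(0)
children = [rng.uniform(-2.0, 2.0) for _ in range(200)]
# Invented surrogate: a smaller low-fidelity value x**2 -> higher probability.
promise = lambda x: 1.0 / (1.0 + math.exp(x * x - 1.0))
counts = {}
for pr in (0.05, 0.1, 0.15, 0.2):
    evaluate, copy = selective_fidelity_filter(children, promise, pr)
    counts[pr] = len(evaluate)
print(counts)  # evaluation count shrinks as Pr grows
```

The trade-off is exactly the one in the findings: a low threshold buys reliability at a high evaluation cost, while a high threshold saves evaluations but risks copying a misleading low-fidelity estimate.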

Comparison of results: single-fidelity, progressive-fidelity, generational-progressive multi-fidelity, and steady-state selective multi-fidelity approaches.

Summary: Multi-fidelity optimization

- It is interesting to note that the misclassification percentage increases with increasing evaluation cost. As the evolution process continues, solutions in the population start to lose diversity due to convergence to an optimum, which results in poor training of the model in the context of the progressive-fidelity-switching mechanism.
- Due to the loss of diversity with increasing evaluation cost, the median distance among solutions in the population reduces to near zero; the number of evaluations then increases with forced evaluation.
- Selective fidelity evaluation based on the multi-fidelity approach delivers better results in terms of convergence of objective value with evaluation cost.

Future direction

- Instead of using a decaying parameter to balance the exploration/exploitation trade-off, an adaptive scheme needs to be adopted in the context of constraint handling.
- Incorporate a multi/many-objective scheme in the context of constraint handling using the selective evaluation approach.
- In the context of multi/many-objective unconstrained optimization problems, incorporate selective objective evaluation of potential solutions with an efficient non-dominated ranking scheme.
- Replace the local linear ranking model with SVM ranking, and study the effects.
- Incorporate the potential-child selection scheme within the progressive-fidelity-switching multi-fidelity optimization approach.
- Incorporate a multi/many-objective scheme in the context of the selective and progressive-fidelity-switching multi-fidelity approaches.
- Develop a generic, scalable, tunable multi/many-objective constrained/unconstrained test suite for multi-fidelity optimization problems.

Research outcome

[1] Bhattacharjee, K.S. and Ray, T., "Selective evaluation in multiobjective optimization: A less explored avenue," in Proceedings of the IEEE Congress on Evolutionary Computation, (Sendai, Japan), 2015. In press (accepted 02/2015).
[2] Bhattacharjee, K.S. and Ray, T., "Selective constraint evaluation in multidisciplinary design optimization," in Proceedings of the Eleventh World Congress of Structural and Multidisciplinary Optimization, (Sydney, Australia), 2015. In press (accepted 03/2015).
[3] Bhattacharjee, K.S. and Ray, T., "An evolutionary algorithm with classifier guided constraint evaluation strategy for computationally expensive optimization problems," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), (Madrid, Spain), 2015. (Submitted)
[4] Bhattacharjee, K.S. and Ray, T., "Cost to evaluate versus cost to learn? Performance of selective evaluation strategies in multiobjective optimization," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), (Madrid, Spain), 2015. (Submitted)

Timeline

Thank you
