Designing HVAC Systems Using Particle Swarm Optimization

Ramiro H. Bravo, PhD
Forrest W. Flocker, PhD, Member ASHRAE

Ramiro H. Bravo and Forrest W. Flocker are associate professors of mechanical engineering in the Department of Engineering and Technology at the University of Texas of the Permian Basin, Odessa, Texas.

ABSTRACT

Many design and operating goals in heating, ventilating, and air conditioning (HVAC) systems are optimization problems. For example, building operators want to determine the combination of equipment and set points that yields the most energy-efficient operating conditions.
Similarly, HVAC system designers want to determine the best combination of equipment, components, and operating conditions that provides minimum lifecycle cost. In both problems, the goal is to find an optimal set of system design variables. The purpose of this work is to apply a relatively recent optimization technique known as particle swarm optimization (PSO) to the design and operation of HVAC systems. This algorithm is based on the idea of swarms of animals, such as flying birds or insects, searching for food. In these swarms, each member gains knowledge from the whole and in turn contributes its individual knowledge back to the swarm. The result is a very efficient way to find the best available food source. The PSO algorithm is well suited to computers with multiple processors and to HVAC problems because it does not require continuous or well-behaved mathematical functions. Another advantage of the method is that it is remarkably simple to implement. This paper illustrates the PSO algorithm by applying it to the design of an HVAC piping system.

INTRODUCTION

In engineering design, the designer frequently reaches a point where decisions must be made that involve tradeoffs between many design variables. The goal is to find the optimal solution and, frequently, optimization software is used to help in the process. This paper describes the particle swarm optimization (PSO) procedure and shows how it can be applied to heating, ventilating, and air conditioning (HVAC) systems. Traditional optimization programs using computationally efficient, gradient-based algorithms are backed by rigorous mathematical justification. In contrast, the PSO procedure is a search algorithm that attempts to mimic the swarming behavior observed in some animal species such as insects, birds, or fish. In the PSO procedure, many particles are sent out to search for the optimal solution. In the case of animals, the "particles" are individual members of the swarm and "the solution" is frequently a source of food. In the case of engineering design, the particles search through a design space, evaluating and comparing the merits of each position investigated. Here the word "position" is used to mean a candidate set of design variables. The motion of the particles through the design space incorporates three elements: randomness, knowledge gained by the individual particle, and knowledge gained by the entire swarm. Each particle remembers the best position that it has seen and communicates this to the swarm. The swarm then communicates the overall best position back to the individuals, who then move toward this position, all the while looking for better positions. The swarm finally converges to the optimal position. The PSO method was first introduced by Kennedy and Eberhart and by Eberhart and Kennedy in the area of neural networks. Poli, Kennedy, and Blackwell provide a recent survey article of the method, including many of the recent developments and applications. A particularly lucid description of the method applied to structural analysis can be found in Venter and Sobieszczanski-Sobieski. The main advantages of the PSO method are (1) it is easy to implement and (2) it can be used on a wide variety of mathematical functions, including nonlinear and discontinuous ones.

For example, suppose that we are pumping a liquid through a pipe loop of constant inside diameter. We know that large pipes have low pumping losses, but they also have higher initial cost. Here the tradeoff is between initial cost and operating cost. Therefore, we can write a lifecycle cost function as COST = f(d), where d is the inside diameter of the pipe, and hope to find the best value of d, the one that minimizes the overall lifecycle cost. The initial cost includes the installation of all components, including pumps, valves, and coils, and the operating cost includes energy and maintenance costs for all the alternatives. As in nearly all practical engineering designs, the function f(d) is not continuous. That is, we cannot choose from an infinite number of ds; instead, we must choose from a discrete set of standard pipe sizes, say, schedule 40 pipe. Similarly, we must choose from a discrete set of pump sizes and other components. In this example f(d) is both nonlinear and discontinuous. As we will see below, the PSO algorithm easily locates the optimum for these types of functions.
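To make the shape of f(d) concrete, the short Python sketch below builds a toy lifecycle cost over a discrete set of schedule 40 sizes. All coefficients in it are hypothetical, chosen only to illustrate the tradeoff; they are not data from this paper.

```python
# Toy lifecycle cost COST = f(d) over discrete pipe sizes.
# All coefficients are hypothetical and purely illustrative.
STANDARD_SIZES = [2.0, 2.5, 3.0, 4.0, 6.0, 8.0]  # nominal diameters, in.

def lifecycle_cost(d):
    initial = 40.0 * d**1.5   # larger pipe: higher purchase/installation cost
    pumping = 9000.0 / d**5   # turbulent friction loss scales roughly as 1/d^5
    return initial + pumping

best = min(STANDARD_SIZES, key=lifecycle_cost)
print(best, lifecycle_cost(best))
```

With a single variable, an exhaustive search over the standard sizes suffices; the point of PSO is that the same discontinuous cost structure remains tractable when many such variables interact.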

THE PSO ALGORITHM

The procedure starts by defining the design variables for a particular problem and the corresponding design space. The problem is quantified through an objective function F(x) that depends only on the design variables. The swarm then searches through the design space for the set of variables that gives the best solution to the problem, the one that minimizes the objective function. Let x denote the vector of design variables. For each variable in x, there is an upper and a lower value in the design space. Let x_min and x_max be the vectors of lower and upper bounds, respectively. The swarm is initially distributed randomly throughout the design space in accordance with

    x_0^i = x_min + R_1 (x_max - x_min)    (1)

Here R_1 is a vector of independent random numbers between 0 and 1. Subscripts on x indicate the search iteration number and superscripts indicate the particle number. Let Δ denote the displacement vector for the design variables. This is the amount that the design variables will change at the next search iteration. For each particle we assign an initial displacement

    Δ_0^i = x_min + R_2 (x_max - x_min)    (2)

Here R_2 is another vector of independent random numbers between 0 and 1, and the subscript and superscript convention on Δ is the same as that for x. After each iteration the objective function is evaluated at each position occupied by a particle. Let X^i denote the best position found by particle i and X^g the best position found by the swarm at the jth iteration. The displacement for iteration j+1 is calculated with

    Δ_{j+1}^i = ω Δ_j^i + C_p R_3 (X^i - x_j^i) + C_s R_4 (X^g - x_j^i)    (3)

where ω is a weighting parameter called the particle inertia; R_3 and R_4 are vectors of independent random numbers between 0 and 1; and C_p and C_s are weighting parameters that indicate how much confidence the particle has in itself and in the swarm, respectively. The selection of the three weighting parameters is discussed below. Once the new displacements are calculated, the particles move to their new positions given by

    x_{j+1}^i = x_j^i + Δ_{j+1}^i    (4)

The process of updating displacements and positions is repeated, with the individual particle motion being partially random but guided by the success of itself and the swarm. If successful, the entire swarm converges to the optimal solution, the one that minimizes the objective function.
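The whole procedure fits in a few dozen lines. The following is a minimal Python sketch of Equations (1) through (4), not the authors' code; the objective F, the bounds, and the default parameter values are caller-supplied assumptions.

```python
import numpy as np

def pso(F, x_min, x_max, n_particles=20, n_iter=200,
        omega=1.0, c_p=1.5, c_s=2.5, rng=None):
    """Minimal particle swarm implementing Equations (1)-(4)."""
    if rng is None:
        rng = np.random.default_rng()
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    n_var = x_min.size

    # Equations (1) and (2): random initial positions and displacements.
    x = x_min + rng.random((n_particles, n_var)) * (x_max - x_min)
    dx = x_min + rng.random((n_particles, n_var)) * (x_max - x_min)

    f = np.array([F(p) for p in x])
    x_best = x.copy()                   # X^i: best position of each particle
    f_best = f.copy()
    g = x_best[f_best.argmin()].copy()  # X^g: best position of the swarm

    for _ in range(n_iter):
        r3 = rng.random((n_particles, n_var))
        r4 = rng.random((n_particles, n_var))
        # Equation (3): inertia + confidence in self + confidence in swarm.
        dx = omega * dx + c_p * r3 * (x_best - x) + c_s * r4 * (g - x)
        x = x + dx                      # Equation (4)
        f = np.array([F(p) for p in x])
        better = f < f_best
        x_best[better] = x[better]
        f_best[better] = f[better]
        g = x_best[f_best.argmin()].copy()
    return g, f_best.min()
```

This sketch runs for a fixed number of iterations; the convergence criteria discussed below would replace the fixed count in practice.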

The PSO Algorithm and Parallel Processing

Aside from being easy to implement, an important advantage of the PSO method is that it is ideally suited for parallel processing using many computer processors simultaneously. Parallel processing requires a computational algorithm that divides the work into many smaller independent calculations. In PSO, the objective function of each particle can be evaluated independently of the other particles and calculated on a separate processor. Only at the completion of an iteration do the particles need to communicate with each other. Therefore, there is the potential for solving huge optimization problems with massively parallel computers.
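As a sketch of that division of labor, assuming Python's standard multiprocessing module and a picklable, module-level objective function:

```python
from multiprocessing import Pool

def evaluate_swarm(F, positions, n_workers=8):
    """Evaluate each particle's objective in a separate process.

    The evaluations are independent, so they parallelize trivially;
    results are gathered once per iteration, which is the only point
    at which the particles must communicate.
    """
    with Pool(n_workers) as pool:
        return pool.map(F, positions)
```

For an HVAC design study in which each evaluation is a full load or pressure-drop simulation, this per-iteration parallelism is where the method's practical speed comes from.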

PSO Parameters

In the PSO algorithm discussed here, there are three user-selected parameters: ω, C_p, and C_s. The inertia parameter, ω, gets its name because it tends to keep a particle moving in the same direction. Larger inertia values tend to give a more global search, whereas smaller ones give a more local search. Shi and Eberhart recommend an inertia parameter of around 1.0 and also propose a scheme to dynamically reduce the parameter from higher to lower values during searching. Venter and Sobieszczanski-Sobieski give a dynamic reduction algorithm that is tied to the convergence of the swarm. They recommend starting at 1.4 and reducing to 0.35 as the swarm converges. Kennedy and Eberhart recommend that the confidence parameters C_p and C_s be around 2.0. The random numbers in Equation (3) each have an average of 0.5; therefore, when they are multiplied by confidence parameters of 2.0, coefficients with an average of unity are obtained. Venter and Sobieszczanski-Sobieski found that values of C_p = 1.50 and C_s = 2.50 worked well for structural optimization, placing slightly more confidence in the swarm than in the individual particles.
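As one plausible reduction schedule, in the spirit of the strategies just cited but not reproducing either reference exactly, the inertia can be ramped down over the course of the search:

```python
def inertia(j, n_iter, omega_start=1.4, omega_end=0.35):
    """Linearly reduce the inertia from omega_start to omega_end.

    Illustrative only: Shi and Eberhart, and Venter and
    Sobieszczanski-Sobieski, key their reductions to search progress
    or swarm convergence rather than to a fixed iteration count.
    """
    return omega_start + (omega_end - omega_start) * j / max(n_iter - 1, 1)
```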

Global Convergence and Swarm Density Refinement

A robust convergence criterion is important for any optimization algorithm because it is essential to know when to stop the iteration process after an optimal solution is found. In this paper, two different methods to establish convergence were used. In the first method, as described by Venter and Sobieszczanski-Sobieski, the best value of the objective function obtained by the swarm was monitored; no change in the best value for a specified number of search iterations was taken to signal convergence. In the second method, developed in this work, the distance between particles was monitored. When the maximum distance between any two particles fell below a specified value, convergence was assumed. The distance between two particles was taken to be the usual Euclidean measure: the square root of the sum of the squares of the differences between the particles' coordinates in each dimension. The second method gives a better measure of the entire swarm converging to a single point in the design space.
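A sketch of the second criterion, assuming particle positions stored as rows of a NumPy array:

```python
import numpy as np

def swarm_converged(x, tol):
    """True when the largest Euclidean distance between any two
    particles (rows of x) falls below tol."""
    diff = x[:, None, :] - x[None, :, :]            # all pairwise differences
    max_dist = np.sqrt((diff**2).sum(axis=-1)).max()
    return max_dist <= tol
```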

Convergence of the PSO algorithm depends on the parameters ω, C_p, and C_s, as well as on the number of particles. With a huge number of particles the algorithm should be able to find the global optimum, but this is not practical. Conversely, with a small number of particles the PSO algorithm will converge, but it may not converge to the global optimum. The question of how many particles are required to ensure that the global optimum is found is critical, and a process of swarm density refinement should be carried out. This process is illustrated in Figure 1, where the optimal value of one of the variables found by the swarm is plotted as a function of swarm size. In this figure we can observe that the design variable does not change for ten or more particles. The data used to construct this figure were taken from a three-variable optimization problem described later in this paper. In this case, the design variables all had dimensions between three and twenty inches (0.076 and 0.50 m), and a value of the Euclidean distance less than 0.06 inch (1.5 x 10^-3 m) stopped the iteration process. The refinement process starts with two particles and increases until the swarm converges repeatedly to the same point. In this case, ten particles and above locate the global optimum. Typically, the number of iterations needed for convergence to the global optimum increases with the number of particles. Here, about 50 iterations were needed for convergence with 20 particles, and about 130 iterations with 1000 particles. Two points are crucial in this discussion: (1) for the illustrative problem considered here, swarms of fewer than ten particles converged, but not to the correct solution; and (2) one cannot determine the minimum number of particles needed to get convergence to the global optimum without doing a swarm density study.
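A sketch of such a study, assuming the pso helper sketched earlier; the doubling schedule and the tolerance are illustrative choices, not the authors' procedure:

```python
import numpy as np

def refine_swarm_density(F, x_min, x_max, tol=1e-3, max_particles=1024):
    """Grow the swarm until successive sizes return the same optimum."""
    previous, n = None, 2
    while n <= max_particles:
        best, _ = pso(F, x_min, x_max, n_particles=n)
        if previous is not None and np.abs(best - previous).max() <= tol:
            return n, best    # stable: same point as the smaller swarm
        previous, n = best, 2 * n
    raise RuntimeError("optimum still moving at max_particles")
```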

Figure 1   Swarm density study for the PSO method.

Discrete and Constrained Design Variables

Frequently in engineering design we do not have continuous design variables. Instead, we must choose from a set or collection of standard-sized parts or components. For example, when choosing structural members for a building, the engineer chooses from standard-size beams, channels, angles, etc. Similarly, the HVAC designer selects pumps, pipes, tubing, and the like from applicable standard components. Another common design problem is one in which a design variable is subject to a constraint. For example, the weight of a structural member may not be allowed to exceed a specified value. A common constraint in piping problems is that pipe diameters must be greater than zero. Venter and Sobieszczanski-Sobieski show how to handle both of these problems, which are frequently encountered in engineering design. With regard to discrete variables, after each position update, the particles are moved to a discrete point by rounding to the appropriate standard size. The user decides the rounding method appropriate for the design. For example, stressed structural members are usually rounded to the next higher standard size. For less critical applications, rounding to the nearest standard size may be appropriate.
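Both rounding rules are short; the sketch below assumes a sorted list of standard sizes:

```python
import bisect

def round_up(value, sizes):
    """Next higher standard size (suits stressed structural members)."""
    i = bisect.bisect_left(sizes, value)
    return sizes[min(i, len(sizes) - 1)]   # clamp at the largest size

def round_nearest(value, sizes):
    """Nearest standard size (suits less critical applications)."""
    return min(sizes, key=lambda s: abs(s - value))
```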

48、lized” by adding an arbitrarily large value to the objective function, thereby ensuring that the violated position will not be a minimum. TWO-VARIABLE ILLUSTRATIVE PROBLEM To illustrate the method, a two-variable problem is chosen because the objective function can be visualized on a three-dimension

TWO-VARIABLE ILLUSTRATIVE PROBLEM

To illustrate the method, a two-variable problem is chosen because the objective function can be visualized on a three-dimensional plot. In this problem we consider a function of two variables, x and y:

    F(x,y) = sin(x) cos(y) e^(-(x^2 + y^2)/20)    (5)

The function is shown in Figure 2, where a minimum exists at x = -1.4289, y = 0 with F(x,y) = -0.893875. The function has several local extrema, tapering off to zero far from the origin.
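As a spot check, evaluating Equation (5) at the reported minimizer reproduces the quoted minimum; the same function can also be fed to the pso sketch given earlier.

```python
import numpy as np

def F(p):
    """Equation (5): the two-variable test function."""
    x, y = p
    return np.sin(x) * np.cos(y) * np.exp(-(x**2 + y**2) / 20.0)

print(F(np.array([-1.4289, 0.0])))  # about -0.893875, as reported
# e.g., pso(F, x_min=[-5.0, -5.0], x_max=[5.0, 5.0]) should land near it
```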
