Analysis of Parallel Algorithms for Energy Conservation in Scalable Multicore Architectures
Vijay Anand Reddy and Gul Agha, University of Illinois

Overview
- Motivation
- Problem Definition & Assumptions
- Methodology & A Case Study
- Related Work & Conclusion

Energy and Multi-core
- 2% of the energy consumed in the US is consumed by computers.
- Efficiency = Performance / Watt.
- We want to optimize efficiency:
  - Low-power processors are typically more efficient.
  - Varying the frequency at which cores run balances performance against energy consumption.
Parallel Programming
- Parallel programming involves:
  - Dividing computation into autonomous actors.
  - Specifying the interaction (shared memory or message passing) between them.

Parallel Performance
- How many actors may execute at the same time (the concurrency index):
  - The number of available cores.
  - The speed at which they execute.
- How much and when they need to communicate (communication overhead):
  - Network congestion at memory affects performance.
- Performance depends on both the parallel application and the parallel architecture.
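These factors can be folded into a toy timing model; the sketch below is illustrative only (the function name and all constants are assumptions, not values from the talk): computation time shrinks with the number of cores, while communication time does not.

```python
# Toy parallel-performance model: running time is computation (which spreads
# across cores) plus communication (which does not).  All names and numbers
# here are illustrative assumptions, not values from the talk.

def parallel_time(work_cycles, num_cores, freq_hz, messages, cycles_per_msg):
    compute = work_cycles / (num_cores * freq_hz)      # perfectly divisible work
    communicate = messages * cycles_per_msg / freq_hz  # not sped up by more cores
    return compute + communicate

if __name__ == "__main__":
    for m in (1, 4, 16, 64, 256, 1024):
        t = parallel_time(work_cycles=1e9, num_cores=m, freq_hz=1e9,
                          messages=m - 1, cycles_per_msg=50_000)
        print(f"{m:5d} cores -> {t * 1e3:9.3f} ms")
```

With these made-up numbers the running time stops improving somewhere between 256 and 1024 cores, which is the communication-overhead effect listed above.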
Scalable Multicore Architectures
- We are interested in (energy) efficiency as the number of cores is scaled up.
- Can multicore architectures be scaled up?

Performance vs. Number of Cores
[Figure: performance vs. number of cores, taken from IEEE Spectrum magazine (Sandia National Labs)]
- Increasing cores may not benefit parallel programming applications if shared memory is maintained.

Message Passing, Performance, and Energy Consumption
- Parallel programming involves message passing between actors.
- Increasing the number of cores:
  - Leads to an increase in the number of messages communicated between them.
  - May reduce performance.
  - May lead to increased energy consumption.
  - Depends on the parallel application and on architectural parameters.
Energy versus Performance
- For a fixed performance target, increasing cores may decrease the energy consumed for computation: cores can be run at a lower frequency.
- But increasing cores will also increase the energy consumed for communication.
- Question: what is the trade-off?
  - Depends on the parallel application.
  - Depends on the network architecture.
  - Depends on the memory structure at each core.

Energy Scalability under Iso-Performance
- Given a parallel algorithm, an architecture model, and a performance measure, what is the appropriate number of cores required for minimum energy consumption, as a function of input size?
- Important for response time in interactive applications.
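A small numeric illustration of this trade-off; a sketch with made-up constants (not the talk's), assuming the work parallelizes perfectly so frequency can drop in proportion to the core count while dynamic energy scales with cycles times frequency squared.

```python
# Illustrative iso-performance trade-off: more cores allow a lower frequency,
# cutting dynamic computation energy (~ cycles * f^2), but add communication
# energy (~ one message per extra core).  Constants are assumed for illustration.

E_C = 1e-30     # J per cycle per Hz^2 (assumed hardware constant)
E_M = 1e-3      # J per message (assumed)
F_MAX = 2e9     # Hz
CYCLES = 1e12   # total computation cycles for the job

def total_energy(num_cores: int) -> float:
    freq = F_MAX / num_cores                # lower frequency, same finish time
    e_comp = E_C * CYCLES * freq ** 2       # dynamic energy falls as 1/M^2
    e_comm = E_M * (num_cores - 1)          # communication energy grows with M
    return e_comp + e_comm

if __name__ == "__main__":
    for m in (1, 2, 4, 8, 16, 32, 64):
        print(f"{m:3d} cores: {total_energy(m):.4f} J")
```

With these constants the total energy dips and then rises again as cores are added, which is exactly the trade-off the following slides formalize.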
Simplifying Architectural Assumptions
- All cores operate at the same speed.
- The speed of the cores can be varied by frequency scaling.
- Computation time on the cores can be scaled (by controlling their speed), but not communication time between cores.
- Communication time between cores is constant.
- There is no memory hierarchy at the cores.

Energy Model
- Energy: E = Ec × X^3 × T, where Ec is a hardware constant and X is the frequency of the processor.
- Running time: T = (number of cycles) / X.
- Hence E = Ec × (number of cycles) × X^2.

Constants
- Em: energy consumed per message.
- F: maximum frequency of a core.
- N: input size.
- M: number of cores.
- Kc: number of cycles, at maximum frequency, taken by a single message communication.
- Pidle: static power consumed per unit of time.
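The energy model translates directly into a few lines of code; a minimal sketch with placeholder numbers (the talk gives no concrete values here).

```python
# Energy model from the slides: power ~ Ec * X^3, run time T = cycles / X,
# so E = Ec * X^3 * T = Ec * cycles * X^2.  Numeric values are placeholders.

def run_time(cycles: float, freq_x: float) -> float:
    """T = (number of cycles) / X."""
    return cycles / freq_x

def dynamic_energy(cycles: float, freq_x: float, e_c: float) -> float:
    """E = Ec * X^3 * T, which simplifies to Ec * cycles * X^2."""
    return e_c * freq_x ** 3 * run_time(cycles, freq_x)

if __name__ == "__main__":
    e_c = 1e-30                      # hardware constant (placeholder)
    for x in (1e9, 2e9, 3e9):        # candidate core frequencies in Hz
        e = dynamic_energy(cycles=1e12, freq_x=x, e_c=e_c)
        print(f"X = {x / 1e9:.0f} GHz -> E = {e:.3f} J")
```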
Case Study: Adding N Numbers
- Example: N numbers, 4 actors; each actor performs N/4 additions.
- A communication period follows.
- In the end, actor 1 stores the sum of all N numbers.
[Figure: actors 1-4 computing partial sums, then communicating]
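A sequential simulation of this case study; it assumes the number of actors is a power of two and that partial sums are combined pairwise in log2(M) rounds, which is consistent with the log(M) term used in the methodology below.

```python
# Simulation of the "adding N numbers" case study: each actor sums its local
# N/M slice, then partial sums are combined in log2(M) message rounds until
# actor 0 holds the grand total.  Pure-Python simulation, not real message passing.

def parallel_sum(values, num_actors):
    assert (num_actors & (num_actors - 1)) == 0, "assume a power-of-two actor count"
    chunk = len(values) // num_actors
    partial = [sum(values[i * chunk:(i + 1) * chunk]) for i in range(num_actors)]
    partial[-1] += sum(values[num_actors * chunk:])     # leftover elements, if any

    messages = 0
    step = 1
    while step < num_actors:                            # log2(M) combining rounds
        for sender in range(step, num_actors, 2 * step):
            partial[sender - step] += partial[sender]   # one message per pair
            messages += 1
        step *= 2
    return partial[0], messages

if __name__ == "__main__":
    data = list(range(1, 1001))                         # N = 1000 numbers
    total, msgs = parallel_sum(data, num_actors=4)
    print(total, sum(data), msgs)                       # the message count is M - 1
```

The message counter comes out to M - 1, matching Step 4 of the methodology below.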
Methodology
- Step 1: Evaluate the critical path of the parallel algorithm.
- Step 2: Partition the critical path into communication and computation steps.
- Step 3: Scale the computation steps so that the parallel performance matches the sequential performance:
  F' = F × α(N/M - 1 + log M) / (α(N - 1) - Kc log M),
  where α is the number of cycles per addition.
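Coding the Step 3 rule makes it easy to see how quickly F' falls as cores are added; a minimal sketch assuming log means log base 2 and using placeholder values for N, F, α, and Kc.

```python
# Step 3 sketch: choose the reduced frequency F' so that the parallel critical
# path (local additions + log2(M) combine steps + log2(M) messages) finishes in
# the same time as the sequential sum of N numbers at full frequency F.

import math

def scaled_frequency(n, m, f_max, alpha, k_c):
    """F' = F * alpha*(N/M - 1 + log2 M) / (alpha*(N - 1) - Kc*log2 M)."""
    comp_par = alpha * (n / m - 1 + math.log2(m))   # parallel computation cycles
    comp_seq = alpha * (n - 1)                      # sequential computation cycles
    comm = k_c * math.log2(m)                       # communication cycles (at F)
    return f_max * comp_par / (comp_seq - comm)

if __name__ == "__main__":
    for m in (2, 4, 16, 64):
        fp = scaled_frequency(n=1e8, m=m, f_max=1e9, alpha=1, k_c=5)
        print(f"M = {m:3d}: F' = {fp / 1e9:.4f} GHz")
```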
Methodology (continued)
- Step 4: Evaluate the number of message sends in the parallel algorithm: M - 1 messages.
- Step 5: Frame an equation for the energy consumption of the parallel application:
  - Energy for communication: Ecomm = Em (M - 1).
  - Energy for computation: Ecomp = Ec α(N - 1) F'^2.
  - Energy for idle computation (static power).
- Step 6: Analyze the equation to obtain the appropriate number of cores required for minimum energy consumption, as a function of input size: differentiate with respect to the number of cores.
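Steps 5 and 6 can be checked numerically; the sketch below assembles Ecomm and Ecomp (the static-power term from Step 5 is omitted here) and searches for the minimizing M instead of differentiating. The constants are placeholders chosen so that Em / (Ec F^2) = 500, the ratio used on the plot slide that follows.

```python
# Steps 5-6 as a numeric search: total energy E(M) = Ecomp + Ecomm under the
# iso-performance frequency F' from Step 3, minimized over M by brute force.
# The static-power term is omitted and all constants are placeholders.

import math

ALPHA = 1        # cycles per addition
K_C = 5          # cycles per message at full frequency
F_MAX = 1e9      # Hz
E_C = 1e-21      # hardware constant (placeholder)
E_M = 0.5        # J per message, so that E_M / (E_C * F_MAX**2) = 500

def total_energy(n: float, m: int) -> float:
    comp_par = ALPHA * (n / m - 1 + math.log2(m))
    f_scaled = F_MAX * comp_par / (ALPHA * (n - 1) - K_C * math.log2(m))
    e_comp = E_C * ALPHA * (n - 1) * f_scaled ** 2
    e_comm = E_M * (m - 1)
    return e_comp + e_comm

if __name__ == "__main__":
    n = 1e8
    best = min(range(2, 201), key=lambda m: total_energy(n, m))
    print("energy-minimizing core count for N = 1e8:", best)
```

Under these placeholder constants the search lands in the same neighborhood as the roughly 70 cores the plot slide reports for N = 10^8.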
Plot: Energy vs. N and M
- Parameters: α = 1, Kc = 5 units, Em / (Ec F^2) = 500, Pidle / F = 1.
- About 270 cores are optimal at N = 10^10; about 70 cores at N = 10^8.

Sensitivity Analysis (k = Em / (Ec F^2))
- As k increases, the optimal number of cores decreases.

Naïve Quicksort
- Assume the input array resides on a single core.
- A single core partitions the array and sends part of it to another core.
- Recursively divide the array until all the cores are used (assume a static division).
- Merge the numbers.
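A toy sequential simulation of this naive scheme; the shipped-element counter stands in for the communication cost, and halving the core budget at each split is the assumed static division.

```python
# Simulation of the naive scheme: the whole array starts on one core, each
# partition step ships one side of the split to an idle core, and results are
# merged back.  Core "ownership" is tracked only to count the data shipped.

def naive_quicksort(arr, cores_available, shipped):
    if len(arr) <= 1 or cores_available == 1:
        return sorted(arr)                # base case: sort locally
    pivot = arr[0]
    lo = [x for x in arr[1:] if x < pivot]
    hi = [x for x in arr[1:] if x >= pivot]
    shipped.append(len(hi))               # one side is sent to another core
    left = naive_quicksort(lo, cores_available // 2, shipped)
    right = naive_quicksort(hi, cores_available // 2, shipped)
    return left + [pivot] + right         # merge (concatenation suffices here)

if __name__ == "__main__":
    import random
    data = [random.randrange(10**6) for _ in range(10**4)]
    shipped = []
    out = naive_quicksort(data, cores_available=8, shipped=shipped)
    assert out == sorted(data)
    print("elements shipped between cores:", sum(shipped))
```

Roughly half of the data gets shipped at every split regardless of M, consistent with the "no trade-off" finding on the case-study slide below.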
Naïve Quicksort Analysis
[Slide: analysis details (equations/figure) not reproduced]

Case Study: Naïve Quicksort
[Plot: energy vs. number of cores vs. input size]
- No trade-off: a single core is good enough.
Parallel Quicksort
- Data to be sorted is distributed across the cores (assume parallel I/O).
- A single pivot is broadcast to all cores.
- Each core partitions its own data.
- Data is moved so that the lessers are at cores in one region and the greaters are at cores in another.
- Recursively quicksort each region (see the sketch below).

Parallel Quicksort Analysis
[Slide: analysis details not reproduced]

Parallel Quicksort Algorithm
[Slide: algorithm details not reproduced]
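A sequential toy simulation of this scheme; the round-robin redistribution and the choice of the first element as the broadcast pivot are assumptions made for the sketch, not details from the talk.

```python
# Simulation of the parallel quicksort scheme above: data lives in per-core
# lists, a pivot is broadcast, every core partitions locally, the "less" and
# "greater" parts are redistributed to the two halves of the core group, and
# each half recurses.  This is a sequential model of the distributed algorithm.

def redistribute(parts, num_cores):
    """Spread a list of elements round-robin across num_cores local lists."""
    buckets = [[] for _ in range(num_cores)]
    for i, x in enumerate(parts):
        buckets[i % num_cores].append(x)
    return buckets

def parallel_quicksort(per_core):
    m = len(per_core)
    if m == 1:
        return [sorted(per_core[0])]              # lone core sorts locally
    flat = [x for core in per_core for x in core]
    if not flat:
        return [[] for _ in range(m)]
    pivot = flat[0]                               # broadcast a single pivot
    lo = [x for x in flat if x < pivot]
    hi = [x for x in flat if x >= pivot]
    left = parallel_quicksort(redistribute(lo, m // 2))
    right = parallel_quicksort(redistribute(hi, m - m // 2))
    return left + right                           # cores in the "low" region first

if __name__ == "__main__":
    import random
    data = [random.randrange(1000) for _ in range(64)]
    result = parallel_quicksort(redistribute(data, 8))
    assert [x for core in result for x in core] == sorted(data)
    print([len(core) for core in result])         # per-core data after sorting
```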
Comparing Quicksort Algorithms
- Recall: Parallel Quicksort has better scalability characteristics under iso-efficiency than Naïve Quicksort (Vipin Kumar et al.).
- Yet both quicksort algorithms have similarly poor energy scalability under iso-performance.
LU Factorization
- Given an N × N matrix A, find a unit lower triangular matrix L and an upper triangular matrix U such that A = L U.
- Use the coarse-grain 1-D column parallel algorithm (a sketch follows).

LU Factorization Analysis
[Slide: analysis details not reproduced]

Case Study: LU Factorization
[Slide: results plot not reproduced]
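As a companion to the coarse-grain 1-D column algorithm, here is a minimal sequential sketch of LU factorization (Doolittle, no pivoting) with the column-to-core ownership annotated in comments; the ownership map and the broadcast notes are assumptions about where communication would occur, not the talk's exact analysis.

```python
# Sketch of LU factorization (Doolittle, no pivoting) annotated with the 1-D
# column layout: column j is "owned" by core j % M.  The computation itself is
# sequential here; ownership only marks where the parallel version communicates.

def lu_1d_column(a, num_cores):
    n = len(a)
    owner = [j % num_cores for j in range(n)]        # static 1-D column mapping
    lower = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    upper = [row[:] for row in a]                    # U is carved out of a copy of A
    for k in range(n - 1):
        # In the parallel version, the core owning column k (owner[k]) computes
        # the multipliers below the pivot and broadcasts them to the other cores.
        for i in range(k + 1, n):
            lower[i][k] = upper[i][k] / upper[k][k]
            upper[i][k] = 0.0
        # Every core then updates the trailing columns it owns.
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                upper[i][j] -= lower[i][k] * upper[k][j]
    return lower, upper, owner

if __name__ == "__main__":
    a = [[4.0, 3.0, 2.0],
         [8.0, 7.0, 3.0],
         [4.0, 5.0, 6.0]]
    lower, upper, owner = lu_1d_column(a, num_cores=2)
    prod = [[sum(lower[i][k] * upper[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
    print("A == L*U:", prod == a, "| column owners:", owner)
```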
Related Work
- Hardware-simulation-based technique (J. Li and J.F. Martinez):
  - A runtime (online) adaptation technique.
  - Goal: find the appropriate frequency and number of cores for power-efficient execution.
  - Search space: O(L × M), where L is the number of available frequency levels and M is the number of cores.
- Prediction-based technique (Matthew et al.):
  - A performance-prediction model with low runtime overhead: dynamically adjusts L and M.
  - Statistically analyzes samples of hardware event rates (collected from performance monitors).
  - Based on profiled data collected from real workloads.

Conclusion and Future Work
- A theoretical methodology has been proposed to evaluate energy-performance trade-offs for parallel applications on multicore architectures, as a function of input size.
- We plan to analyze various genres of parallel algorithms for energy-performance trade-offs.
- We also plan to build on this methodology to consider various memory structures in the energy analysis of parallel applications.

References
1. Vipin Kumar et al., Introduction to Parallel Computing.
2. J. Li and J.F. Martinez, Dynamic Power-Performance Adaptation of Parallel Computation on Chip Multiprocessors, 2006.
3. Matthew et al., Prediction Models for Multi-dimensional Power-Performance Optimization on Many Cores, 2008.