Slide 1: Low Power Architecture Design
August 2, 1999
Prof. Jun-Dong Cho, Sungkyunkwan University
http://vada.skku.ac.kr
Slide 2: Architectural-Level Synthesis
Translate HDL models into sequencing graphs.
Behavioral-level optimization:
– Optimize abstract models independently of the implementation parameters.
Architectural synthesis and optimization:
– Create the macroscopic structure: data path and control unit.
– Consider area and delay information of the implementation.
Hardware compilation:
– Compile the HDL model into a sequencing graph.
– Optimize the sequencing graph.
– Generate gate-level interconnections for a cell library.
Slide 3: Architecture-Level Solutions
Architecture-driven voltage scaling: choose a more parallel architecture; lowering Vdd reduces energy but increases delay.
Regularity: minimize the power in the control hardware and the interconnection network.
Modularity: exploit data locality through distributed processing units, memories, and control.
– Spatial locality: an algorithm can be partitioned into natural clusters based on connectivity.
– Temporal locality: short average lifetimes of variables (less temporary storage; future accesses are likely to reference data used in the recent past).
Few memory references: references to memory are expensive in terms of power.
Precompute the physical capacitance of the interconnect and the switching activity (number of bus accesses). A sketch of the voltage-scaling trade-off follows.
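A minimal numeric sketch of architecture-driven voltage scaling, assuming the usual first-order models (delay proportional to Vdd/(Vdd − Vt)², energy proportional to C·Vdd²); the threshold voltage and the capacitance overhead of the parallel version are illustrative assumptions, not figures from the slides:

```python
# Architecture-driven voltage scaling, first-order sketch (illustrative).
VT, V_REF = 0.7, 5.0   # threshold and reference supply (V), assumed

def delay(vdd):
    """First-order CMOS gate delay, up to a constant factor."""
    return vdd / (vdd - VT) ** 2

def vdd_for_slowdown(n):
    """Bisect for the Vdd at which gates run n times slower than at V_REF."""
    target, lo, hi = n * delay(V_REF), VT + 0.05, V_REF
    for _ in range(60):
        mid = (lo + hi) / 2
        # delay() decreases with Vdd, so "too slow" means Vdd is too low
        lo, hi = (mid, hi) if delay(mid) > target else (lo, mid)
    return (lo + hi) / 2

def power_ratio(n, cap_overhead=1.15):
    """N-way parallel datapath vs. the reference, at equal throughput."""
    vdd = vdd_for_slowdown(n)
    cap = n * (cap_overhead if n > 1 else 1.0)  # extra routing/mux capacitance
    return (cap * vdd**2 * (1.0 / n)) / (1.0 * V_REF**2 * 1.0)

for n in (1, 2, 4):
    print(n, round(vdd_for_slowdown(n), 2), round(power_ratio(n), 2))
# In this model a 2-way datapath runs at ~3.1 V for ~0.44x the power.
```

The point of the sketch is the shape of the trade-off, not the exact numbers: parallel hardware buys delay slack, the slack is spent on a lower supply, and power falls quadratically with Vdd while capacitance grows only linearly (plus overhead).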
Slide 4: Power Measure of P
Slide 5: Architecture Trade-off: Reference Data Path
Slide 6: Parallel Data Path
Slide 7: Pipelined Data Path
Slide 8: A Simple Data Path: Result
Slide 9: Uni-Processor Implementation
Slide 10: Multi-Processor Implementation
Slide 11: Datapath Parallelization
Slide 12: FIR Parallelization
Mahesh Mehendale, Sunil D. Sherlekar, and G. Venkatesh, "Low-Power Realization of FIR Filters on Programmable DSP's," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 6, no. 4, December 1998.
Slide 13: Memory Parallelization
To first order, P = C · (f/2) · Vdd²; a two-bank sketch follows.
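A minimal sketch of the idea under assumed numbers (two-way interleaving; the lowered supply voltage is taken from the first-order delay model sketched earlier): splitting a memory into two banks accessed alternately halves each bank's access rate, so the supply can be lowered while the original bandwidth is maintained.

```python
# Two-way interleaved memory sketch (illustrative numbers): each bank is
# accessed at f/2, so a Vdd that doubles the access time still sustains
# the original bandwidth.
f = 100e6          # required access rate (Hz), assumed
c_bank = 10e-12    # switched capacitance per access (F), assumed
vdd_single = 5.0   # single-bank supply (V)
vdd_banked = 3.1   # supply at which access time doubles (first-order model)

p_single = c_bank * f * vdd_single**2            # one bank at full rate
p_banked = 2 * c_bank * (f / 2) * vdd_banked**2  # two banks at half rate each
print(p_banked / p_single)                       # ~0.38 with these numbers
```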
Slide 14: VLIW Architecture
Slide 15: VLIW (cont.)
The compiler takes responsibility for finding the operations that can be issued in parallel and packing them into a single very long instruction.
VLIW instruction decoding is easier than superscalar decoding because of the fixed format and the absence of run-time dependency checks; the fixed format, however, limits which operations can be combined.
Intel P6: CISC instructions are translated on chip into a set of micro-operations (in effect, a long instruction word) that can be executed in parallel.
As power becomes a major issue in the design of fast microprocessors, the simpler architecture is the better one. VLIW architectures, being simpler than N-issue superscalar machines, are promising for achieving high speed and low power simultaneously. A toy packing pass is sketched below.
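A toy sketch of the compiler's job, on a hypothetical three-address IR (the IR, issue width, and greedy policy are illustrative assumptions, not a real VLIW toolchain):

```python
# Greedily pack operations into long-instruction words, starting a new
# word whenever an operation depends on a result produced in the current
# word or the word is full.
def pack(ops, width=4):
    """ops: list of (dest, src1, src2) register names."""
    words, current, produced = [], [], set()
    for dest, *srcs in ops:
        if len(current) == width or any(s in produced for s in srcs):
            words.append(current)
            current, produced = [], set()
        current.append((dest, *srcs))
        produced.add(dest)
    if current:
        words.append(current)
    return words

prog = [("t1", "a", "b"), ("t2", "c", "d"), ("t3", "t1", "t2"), ("t4", "e", "f")]
for word in pack(prog):
    print(word)   # t1 and t2 issue together; t3 must wait for both
```

Because all of this scheduling happens at compile time, the hardware needs no dependency-checking logic at all, which is exactly where the decode-power argument on this slide comes from.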
Slide 16: Synchronous vs. Asynchronous
Synchronous system: a signal path starts at a clocked flip-flop, passes through combinational gates, and ends at another clocked flip-flop. The clock signals do not participate in computation but are required for synchronization. As technology advances, systems grow larger and the delay on the clock wires can no longer be ignored; clock skew thus becomes a bottleneck for many system designers. Many gates switch unnecessarily simply because they are connected to the clock, not because they have new inputs to process, and the biggest gate of all is the clock driver itself, which must switch every cycle.
Asynchronous (self-timed) system: an input signal (request) starts the computation in a module, and an output signal (acknowledge) signifies the completion of the computation and the availability of the requested data. Asynchronous systems can potentially respond to transitions on any of their inputs at any time, since they have no clock with which to sample their inputs. A simulated request/acknowledge handshake follows.
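A minimal simulation of the request/acknowledge protocol described above, written as a four-phase (return-to-zero) handshake; the class name and thread-based modeling are illustrative assumptions, not from the slides:

```python
import threading

class Handshake:
    """Four-phase request/acknowledge channel: no clock, only events."""
    def __init__(self):
        self.cv = threading.Condition()
        self.req = self.ack = False
        self.data = None

    def send(self, value):
        with self.cv:
            self.data = value
            self.req = True               # 1) raise request with valid data
            self.cv.notify_all()
            self.cv.wait_for(lambda: self.ack)
            self.req = False              # 3) drop request (return to zero)
            self.cv.notify_all()
            self.cv.wait_for(lambda: not self.ack)

    def recv(self):
        with self.cv:
            self.cv.wait_for(lambda: self.req)
            value = self.data             # 2) latch data, raise acknowledge
            self.ack = True
            self.cv.notify_all()
            self.cv.wait_for(lambda: not self.req)
            self.ack = False              # 4) drop acknowledge
            self.cv.notify_all()
            return value

ch = Handshake()
t = threading.Thread(target=lambda: [ch.send(v) for v in (1, 2, 3)])
t.start()
print([ch.recv() for _ in range(3)])      # [1, 2, 3] -- no clock involved
t.join()
```

Each module switches only when work actually arrives, which is the power argument: activity follows data, not a global clock.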
Slide 17: Asynchronous (cont.)
More difficult to implement: requires explicit synchronization between communicating blocks without clocks.
If a signal feeds directly into conventional gate-level circuitry, invalid logic levels can propagate through the system; glitches, which are filtered out by the clock in synchronous designs, may cause an asynchronous design to malfunction.
Asynchronous designs are not widely used because designers cannot find the supporting design tools and methodologies they need.
The DCC error corrector of a compact-cassette player saves 80% of the power of its synchronous counterpart.
Offers more architectural options/freedom: encourages distributed, localized control and offers more freedom to adapt the supply voltage.
S. Furber and M. Edwards, "Asynchronous Design Methodologies," 1993.
Slide 18: Asynchronous Design with Adaptive Scaling of the Supply Voltage
(a) Synchronous system. (b) Asynchronous system with adaptive scaling of the supply voltage.
Slide 19: Asynchronous Pipeline
Slide 20: Pipelined Self-Timed Microprocessor
Slide 21: Hazard-Free Circuits
6% more logic.
Slide 22: Wave Pipelining
Slide 23: Wave Pipelining on FPGAs
Problems with conventional pipelining:
– Balanced partitioning
– Delay-element overhead
– Tclk > Tmax − Tmin + clock skew + setup/hold time
– Increased area, power, and total latency
– Clock distribution problem
Wave pipelining = high throughput without such overhead = ideal pipelining. A numeric sketch of the clock constraint follows.
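For reference, the wave-pipelined period is bounded by the spread between the longest and shortest path delays plus timing overheads, not by the total path delay, which is why several data waves can be in flight at once. A small calculation with assumed numbers:

```python
# Wave-pipelining clock bound (illustrative delays in ns).
t_max, t_min = 12.0, 9.0      # longest / shortest path delay through the logic
t_skew, t_su_hold = 0.5, 0.8  # clock skew and register setup/hold margins

t_conventional = t_max + t_skew + t_su_hold        # one wave in flight
t_wave = (t_max - t_min) + t_skew + t_su_hold      # several waves in flight
print(t_conventional, t_wave)  # 13.3 vs 4.3: ~3x the throughput here
```

The sketch also shows why layout matters so much for wave pipelining: the benefit collapses as the Tmax − Tmin spread grows.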
Slide 24: Wave Pipelining on FPGAs
LUT delay is nearly the same across different logic functions, so equal-delay paths can be constructed.
FPGA element delays (wire, LUT, interconnect) are well characterized.
Powerful layout editor.
Fast design cycle.
Slide 25: WP Advantages
Area efficient: no registers, clock distribution network, or clock buffers needed.
Low power dissipation.
Higher throughput.
Low latency.
Slide 26: Disadvantages
Degraded performance in certain cases.
Difficult to achieve rise and fall times as sharp as in a synchronous design.
Layout is critical for balancing the delays.
Parameter variation: power-supply and temperature dependence.
Slide 27: Experimental Results
By Jaehyung Lee, SKKU.
Slide 28: Observations
The WP multiplier needed many extra LUTs to balance delays, so it gained little in power consumption.
On an FPGA, using dedicated delay elements for delay tuning, instead of LUTs or net delays, would be more effective.
Also, a multiplier designed with uniform logic levels would be easier to wave-pipeline and should yield much larger power and area savings than a pipelined structure.
Slide 29: Von Neumann versus Harvard
Slide 30: Power vs. Area of a Micro-coded Microprocessor
At 1.5 V and a 10 MHz clock rate, instruction and data memory accesses account for 47% of the total power consumption.
Slide 31: Memory Architecture
Slide 32: Exploiting Locality for Low-Power Design
Power consumption (mW) in the maximally time-shared and fully parallel versions of the QMF sub-band coder filter: an improvement by a factor of 10.5 at the expense of a 20% increase in area.
The interconnect elements (buses, multiplexers, and buffers) consume 43% and 28% of the total power in the time-shared and parallel versions, respectively.
A spatially local cluster is a group of algorithm operations that are tightly connected to each other in the flow-graph representation. Two nodes are tightly connected if the shortest distance between them, in terms of the number of edges traversed, is low; a sketch of this measure follows.
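The "tightly connected" test above is just a shortest-path distance on the flow graph; a breadth-first-search sketch (the example graph and node names are illustrative assumptions):

```python
# BFS shortest distance (in edges) between two operations of a flow graph,
# used to decide whether they belong in the same spatially local cluster.
from collections import deque

def distance(graph, src, dst):
    """Shortest edge count from src to dst in an undirected flow graph."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return float("inf")

flow = {"mul1": ["add1"], "add1": ["mul1", "add2"],
        "add2": ["add1", "out"], "out": ["add2"]}
print(distance(flow, "mul1", "add2"))   # 2: candidates for the same cluster
```

Operations whose pairwise distances fall under a chosen threshold are grouped into one cluster and mapped to nearby hardware, which is what cuts the bus/multiplexer power quoted above.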
Slide 33: Cascade Filter Layouts
(a) Non-local implementation from Hyper. (b) Local implementation from Hyper-LP.
Slide 34: Frequency Multipliers and Dividers
Slide 35: Low-Power DSP
Most of the execution time is spent in DO loops:
– VSELP vocoder: 83.4%
– 2D 8x8 DCT: 98.3%
– LPC computation: 98.0%
Minimizing DO-loop power ==> minimizing DSP power.
VSELP: Vector Sum Excited Linear Prediction. LPC: Linear Predictive Coding.
Slide 36: Low-Power DSP
Exploit the locality of an instruction buffer (or cache) to reduce program-memory accesses.
Decoded Instruction Buffer (DIB):
– Store the decoded result of a loop's first iteration in a RAM and reuse it.
– Eliminates the fetch/decode steps.
– 30-40% power saving.
Slide 37: Stage-Skip Pipeline
The power saving is achieved by stopping the instruction fetch and decode stages of the processor during loop execution, except for the first iteration.
DIB = Decoded Instruction Buffer.
About 40% power savings on a DSP or RISC processor.
Slide 38: Stage-Skip Pipeline
Selector: selects the output of either the instruction decoder or the DIB.
The decoded instruction signals for a loop are stored temporarily in the DIB and reused in each iteration of the loop.
The power wasted in a conventional pipeline is saved by stopping instruction fetching and decoding during loop execution; a toy count of the saved activity follows.
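A toy sketch of the DIB effect (the counts are hypothetical, not a processor model): treat each instruction fetch/decode as one unit of switching activity and compare a plain loop against one served from the buffer.

```python
# Fetch/decode activity for a loop body of `body` instructions executed
# `iters` times, with and without a decoded-instruction buffer.
def fetch_decode_events(body, iters, use_dib):
    if use_dib:
        return body            # decode only the first iteration
    return body * iters        # conventional: fetch and decode every time

body, iters = 8, 100
base = fetch_decode_events(body, iters, use_dib=False)
dib = fetch_decode_events(body, iters, use_dib=True)
print(1 - dib / base)          # 0.99: nearly all fetch/decode activity removed
```

The overall 40% figure on these slides is smaller than this per-stage number because the execute stages and memories keep running; only the front-end activity disappears.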
Slide 39: Stage-Skip Pipeline
The majority of execution cycles in signal-processing programs are spent on loop execution: a 40% reduction in power with a 2% increase in area.
Slide 40: Optimizing Power Using Transformations
Slide 41: Data-Flow-Based Transformations
Tree-height reduction.
Constant and variable propagation.
Common-subexpression elimination.
Code motion.
Dead-code elimination.
Application of algebraic laws such as commutativity, distributivity, and associativity.
Most of the parallelism in an algorithm is embodied in its loops: loop jamming, partial and complete loop unrolling, strength reduction, loop retiming, and software pipelining (an unrolling example follows after this list).
Retiming: maximizes resource utilization.
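A small partial-unrolling example on a toy dot product (illustrative code, not from the slides): unrolling halves the loop-control overhead and, by using two accumulators, exposes independent operations that a parallel datapath or VLIW slots can schedule together.

```python
def dot(a, b):
    s = 0.0
    for i in range(len(a)):
        s += a[i] * b[i]
    return s

def dot_unrolled2(a, b):
    s0 = s1 = 0.0                      # two accumulators: independent chains
    n = len(a) - len(a) % 2
    for i in range(0, n, 2):
        s0 += a[i] * b[i]
        s1 += a[i + 1] * b[i + 1]
    if n < len(a):                     # epilogue for odd-length vectors
        s0 += a[n] * b[n]
    return s0 + s1

x, y = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
assert dot(x, y) == dot_unrolled2(x, y) == 32.0
```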
Slide 42: Tree-Height Reduction
Example of tree-height reduction using commutativity and associativity.
Example of tree-height reduction using distributivity.
Slide 43: Subexpression Elimination
Logic expressions:
– Performed by logic optimization.
– Kernel-based methods.
Arithmetic expressions:
– Search for isomorphic patterns in the parse trees.
– Example:
  a = x + y; b = a + 1; c = x + y;
  becomes: a = x + y; b = a + 1; c = a;
Slide 44: Examples of Other Transformations
Dead-code elimination:
– a = x; b = x + 1; c = 2 * x;
– a = x; can be removed if not referenced.
Operator-strength reduction:
– a = x^2; b = 3 * x;
– becomes: a = x * x; t = x << 1; b = x + t;
Code motion:
– for (i = 1; i < a * b) { ... }
– becomes: t = a * b; for (i = 1; i < t) { ... }
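The same three rewrites as a runnable before/after sketch (toy functions, not from the slides):

```python
# Before: dead code, strength-reducible operations, and a loop-invariant
# product evaluated on every iteration.
def before(x, a, b):
    dead = x            # dead-code elimination: never used below
    sq = x ** 2         # strength-reduction target
    trip = 3 * x        # strength-reduction target
    total = 0
    for i in range(1, a * b):   # code-motion target: a*b is loop-invariant
        total += i
    return sq, trip, total

# After: the transformations applied by hand.
def after(x, a, b):
    sq = x * x          # x**2 -> x*x
    trip = x + (x << 1) # 3*x -> x + (x<<1); valid for integer x
    n = a * b           # hoisted out of the loop
    total = 0
    for i in range(1, n):
        total += i
    return sq, trip, total

assert before(5, 3, 4) == after(5, 3, 4)
```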
Slide 45: Control-Flow-Based Transformations
Model expansion:
– Expand a subroutine to flatten the hierarchy.
– Useful to widen the scope of other optimization techniques.
– Problematic when the routine is called more than once.
– Example: x = a + b; y = a * b; z = foo(x, y); with foo(p, q) { t = q - p; return t; }. By expanding foo: x = a + b; y = a * b; z = y - x;
Conditional expansion:
– Transform a conditional into parallel execution with the test at the end.
– Useful when the test depends on late-arriving signals.
– May preclude hardware sharing; always useful for logic expressions.
– Example: y = ab; if (a) x = b + d; else x = bd; can be expanded to x = a(b + d) + a'bd, and with y = ab this reduces to x = y + d(a + b). A quick exhaustive check follows.
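An exhaustive check of the conditional-expansion example, verifying over all 0/1 assignments that the flattened expression equals the if/else version and that reusing y = ab gives the same function:

```python
from itertools import product

for a, b, d in product((0, 1), repeat=3):
    branch = (b | d) if a else (b & d)            # if (a) x = b+d else x = bd
    flat = (a & (b | d)) | ((1 - a) & b & d)      # x = a(b+d) + a'bd
    y = a & b
    shared = y | (d & (a | b))                    # x = y + d(a+b)
    assert branch == flat == shared
print("all 8 cases agree")
```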
Slide 46: Strength Reduction
Slide 47: Strength Reduction
Slide 48: DIGLOG Multiplier

Iteration             1st    2nd    3rd
Worst-case error      -25%   -6%    -1.6%
Prob. of error < 1%   10%    70%    99.8%

With an 8x8 multiplier, the exact result is obtained in at most seven iteration steps (worst case).
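A sketch of the scheme as commonly formulated (my reconstruction; the slides give only the error table): write each operand as its leading power of two plus a remainder, keep the two shift-and-add cross terms as the approximation, and feed the dropped remainder product back in as the next iteration's correction.

```python
# DIGLOG-style iterative multiplier sketch (reconstruction; details assumed).
# a*b = (2^k + ra)*(2^m + rb) = 2^k*b + 2^m*ra + ra*rb; each iteration keeps
# the first two (shift-and-add) terms and recurses on the dropped ra*rb.
def diglog(a, b, iters):
    if a == 0 or b == 0 or iters == 0:
        return 0
    k, m = a.bit_length() - 1, b.bit_length() - 1
    ra, rb = a - (1 << k), b - (1 << m)
    return (b << k) + (ra << m) + diglog(ra, rb, iters - 1)

a, b = 255, 255                       # near the worst case for 8-bit operands
for it in (1, 2, 3):
    approx = diglog(a, b, it)
    print(it, round(100 * (approx - a * b) / (a * b), 1))
# -24.8, -6.1, -1.5: close to the slide's -25% / -6% / -1.6% column values
```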
Slide 49: Logarithmic Number System --> Significant Strength Reduction
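In a logarithmic number system, values are carried as fixed-point logarithms, so multiply, divide, and square root collapse to add, subtract, and shift; a minimal sketch (the representation width and positive-only restriction are assumptions for illustration):

```python
# LNS sketch: store log2(x) as a fixed-point integer (positive values only).
import math

FRAC = 8                                  # fractional bits of the exponent

def to_lns(x):
    return round(math.log2(x) * (1 << FRAC))

def from_lns(e):
    return 2 ** (e / (1 << FRAC))

a, b = to_lns(13.0), to_lns(7.0)
product = from_lns(a + b)                 # an adder replaces the multiplier
quotient = from_lns(a - b)                # a subtractor replaces the divider
sqrt_a = from_lns(a >> 1)                 # a shift replaces the square root
print(round(product, 1), round(quotient, 2), round(sqrt_a, 2))  # ~91, ~1.86, ~3.6
```

The strength reduction is paid for elsewhere: addition and subtraction of LNS values need lookup tables, so the win depends on the operation mix.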
Slide 50: Switching Activity Reduction
(a) Average activity in a multiplier as a function of the constant value.
(b) Parallel and serial implementations of an adder tree.
Slide 51: Pipelining
Slide 52: Associativity Transformation
Slide 53: Interlaced Accumulation Programming for Low Power
Slide 54: Two's-Complement Implementation of an Accumulator
Slide 55: Sign-Magnitude Implementation of an Accumulator
Slide 56: Number Representation Trade-off for Arithmetic
Slide 57: Signal Statistics for the Sign-Magnitude Implementation of the Accumulator Datapath, Assuming Random Inputs
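Slides 54-57 compare number representations for the accumulator; the usual argument is that for small-magnitude data that changes sign often, two's complement toggles all the sign-extension bits on every sign change, while sign-magnitude toggles only the sign bit. A counting sketch under that assumption (bus width, value range, and encoding helpers are illustrative):

```python
# Count bit transitions on a 16-bit accumulator input bus under two encodings.
import random

WIDTH = 16
MASK = (1 << WIDTH) - 1

def twos(x):
    return x & MASK                       # sign extension fills the high bits

def signmag(x):
    return (abs(x) | (1 << (WIDTH - 1))) if x < 0 else abs(x)

def toggles(seq, enc):
    words = [enc(x) for x in seq]
    return sum(bin(p ^ q).count("1") for p, q in zip(words, words[1:]))

random.seed(1)
data = [random.randint(-15, 15) for _ in range(10000)]  # small, sign-alternating
print(toggles(data, twos), toggles(data, signmag))
# Two's complement toggles far more: every sign change flips ~12 extension bits.
```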