High Performance Embedded Computing © 2007 Elsevier Chapter 7, part 1: Hardware/Software Co-Design High Performance Embedded Computing Wayne Wolf

© 2006 Elsevier Topics Platforms. Performance analysis. Design representations.

© 2006 Elsevier Design platforms Different levels of integration:  PC + board.  Custom board with CPU + FPGA or ASIC.  Platform FPGA.  System-on-chip.

© 2006 Elsevier CPU/accelerator architecture The CPU is sometimes called the host. The CPU and accelerator communicate via shared memory.  May use DMA to communicate. [Figure: CPU, memory, and accelerator on a shared bus.]

© 2006 Elsevier Example: Xilinx Virtex-4 System-on-chip:  FPGA fabric.  PowerPC.  On-chip RAM.  Specialized I/O devices. The FPGA fabric is connected to the PowerPC bus. A MicroBlaze CPU can be added in the FPGA fabric.

© 2006 Elsevier Example: WILDSTAR II Pro

© 2006 Elsevier Performance analysis Must analyze accelerator performance to determine system speedup. High-level synthesis helps:  Use as estimator for accelerator performance.  Use to implement accelerator.

© 2006 Elsevier Data path/controller architecture The data path performs regular operations and stores data in registers. The controller provides the required sequencing. [Figure: controller driving the data path.]

© 2006 Elsevier High-level synthesis High-level synthesis creates a register-transfer description from a behavioral description. Schedules and allocates:  Operators.  Variables.  Connections. A control step or time step is one cycle of the system controller. Components may be selected from a technology library.

© 2006 Elsevier Models Model the behavior as a data flow graph. The critical path is the set of nodes along the path that determines the schedule length.
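A minimal sketch of these two ideas (illustrative only; the graph, operation names, and unit latencies are assumed, not taken from the slides): the data flow graph as a successor map, and the critical-path length as the longest source-to-sink path.

```python
# Data flow graph as a successor map; critical path = longest path, found by
# propagating earliest finish times in topological order (Kahn's algorithm).
from collections import defaultdict

def critical_path_length(succ, latency):
    indeg = defaultdict(int)
    for u, vs in succ.items():
        for v in vs:
            indeg[v] += 1
    ready = [n for n in succ if indeg[n] == 0]
    finish = {n: latency[n] for n in ready}      # earliest finish of sources
    while ready:
        u = ready.pop()
        for v in succ[u]:
            finish[v] = max(finish.get(v, 0), finish[u] + latency[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return max(finish.values())

# Hypothetical DFG for (a*b + c*d) - e*f, unit-latency operators:
succ = {"m1": ["add"], "m2": ["add"], "m3": ["sub"], "add": ["sub"], "sub": []}
latency = {n: 1 for n in succ}
print(critical_path_length(succ, latency))       # 3: m1 -> add -> sub
```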

© 2006 Elsevier Schedules As-soon-as-possible (ASAP) scheduling pushes all nodes to the start of their slack regions. As-late-as-possible (ALAP) scheduling pushes all nodes to the end of their slack regions. Useful for bounding the schedule. [Figure: ASAP and ALAP schedules of the same data flow graph.]
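A sketch of both bounds, assuming unit-latency operators and an illustrative schedule length of 4 (same assumed graph as above): ASAP gives each operation its earliest start, ALAP its latest start for the given length, and the difference is its slack.

```python
# ASAP: earliest start = 1 + latest predecessor start (sources start at 1).
# ALAP: latest start = earliest successor start - 1 (sinks start at `length`).
def asap(pred):
    start = {}
    def earliest(n):
        if n not in start:
            start[n] = 1 + max((earliest(p) for p in pred[n]), default=0)
        return start[n]
    for n in pred:
        earliest(n)
    return start

def alap(succ, length):
    start = {}
    def latest(n):
        if n not in start:
            start[n] = min((latest(s) for s in succ[n]), default=length + 1) - 1
        return start[n]
    for n in succ:
        latest(n)
    return start

succ = {"m1": ["add"], "m2": ["add"], "m3": ["sub"], "add": ["sub"], "sub": []}
pred = {n: [] for n in succ}
for u, vs in succ.items():
    for v in vs:
        pred[v].append(u)

print(asap(pred))           # {'m1': 1, 'm2': 1, 'm3': 1, 'add': 2, 'sub': 3}
print(alap(succ, length=4)) # {'sub': 4, 'add': 3, 'm1': 2, 'm2': 2, 'm3': 3}
```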

© 2006 Elsevier First-come first-served, critical path FCFS walks through the data flow graph from sources to sinks. Schedules each operator in the first available slot, subject to available resources. Critical-path scheduling walks through the critical nodes first.
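A rough sketch in this spirit (assumptions: unit-latency operations, a single resource type with `limit` units, and a caller-supplied topological order; critical-path scheduling would differ only by putting critical nodes earlier in that order):

```python
# Walk the operations from sources to sinks, placing each one in the first
# time step where its predecessors are done and a resource unit is free.
from collections import defaultdict

def fcfs_schedule(order, pred, limit):
    used = defaultdict(int)              # resource units busy per time step
    start = {}
    for op in order:                     # `order` must respect dependencies
        t = 1 + max((start[p] for p in pred[op]), default=0)
        while used[t] >= limit:          # slide forward until a unit is free
            t += 1
        start[op] = t
        used[t] += 1
    return start

pred = {"m1": [], "m2": [], "m3": [], "add": ["m1", "m2"], "sub": ["add", "m3"]}
print(fcfs_schedule(["m1", "m2", "m3", "add", "sub"], pred, limit=2))
# {'m1': 1, 'm2': 1, 'm3': 2, 'add': 2, 'sub': 3}
```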

© 2006 Elsevier List scheduling An improvement on critical-path scheduling. Estimates the importance of nodes off the critical path.  Estimates how close a node is to being critical.  D, the number of descendants, estimates criticality.  A node with fewer descendants is less likely to become critical. Traverse the graph from sources to sinks.  For nodes at a given depth, order nodes by criticality.
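A sketch of the descendant-count priority D (graph assumed for illustration); feeding the sorted order into a resource-constrained scheduler like the FCFS sketch above gives a basic list scheduler.

```python
# D(n) = number of distinct descendants of n; a node with more descendants is
# more likely to become critical, so it is scheduled first among ready nodes.
def descendant_counts(succ):
    memo = {}
    def desc(n):
        if n not in memo:
            reach = set()
            for s in succ[n]:
                reach.add(s)
                reach |= desc(s)
            memo[n] = reach
        return memo[n]
    return {n: len(desc(n)) for n in succ}

succ = {"m1": ["add"], "m2": ["add"], "m3": ["sub"], "add": ["sub"], "sub": []}
D = descendant_counts(succ)
print(D)                                   # {'m1': 2, 'm2': 2, 'm3': 1, 'add': 1, 'sub': 0}
ready = ["m3", "m1", "m2"]                 # nodes at the same depth
print(sorted(ready, key=lambda n: -D[n]))  # ['m1', 'm2', 'm3']
```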

© 2006 Elsevier Force-directed scheduling Forces model the connections to other operators.  The forces on an operator change as the schedules of related operators change. Forces are a linear function of displacement. Predecessor/successor forces relate an operator to nearby operators. Place the operator at the minimum-force location in the schedule.

© 2006 Elsevier Distribution graph Bound the schedule using ASAP and ALAP. Count the number of operators of a given type at each point in the schedule.  Weight by how likely each operator is to be at that time in the schedule.
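A sketch of the distribution graph and the resulting self force (assumptions: one operator type, unit latencies, uniform probability over each operation's ASAP..ALAP range; the bounds reuse the earlier example). DG(t) is the expected number of operations active in step t, and the self force of fixing an operation at step t reduces to DG(t) minus the average DG over its slack range.

```python
from collections import defaultdict

def distribution_graph(asap_t, alap_t):
    dg = defaultdict(float)
    for n in asap_t:
        lo, hi = asap_t[n], alap_t[n]
        p = 1.0 / (hi - lo + 1)          # probability of n occupying each step
        for t in range(lo, hi + 1):
            dg[t] += p
    return dg

def self_force(n, t, asap_t, alap_t, dg):
    lo, hi = asap_t[n], alap_t[n]
    avg = sum(dg[i] for i in range(lo, hi + 1)) / (hi - lo + 1)
    return dg[t] - avg                   # negative: step t is below average

asap_t = {"m1": 1, "m2": 1, "m3": 1, "add": 2, "sub": 3}
alap_t = {"m1": 2, "m2": 2, "m3": 3, "add": 3, "sub": 4}
dg = distribution_graph(asap_t, alap_t)
print({t: round(v, 2) for t, v in sorted(dg.items())})    # {1: 1.33, 2: 1.83, 3: 1.33, 4: 0.5}
print(round(self_force("m3", 3, asap_t, alap_t, dg), 2))  # -0.17: a lightly loaded step
```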

© 2006 Elsevier Path-based scheduling Minimizes the number of control states in controller. Schedules each path independently, then combines paths into a system schedule. Schedule path combinations using minimum clique covering.

© 2006 Elsevier Accelerator estimation How do we use high-level synthesis, etc. to estimate the performance of an accelerator? We have a behavioral description of the accelerator function. Need an estimate of the number of clock cycles. Need to evaluate a large number of candidate accelerator designs.  Can’t afford to synthesize them all.

© 2006 Elsevier Estimation methods Hermann et al. used numerical methods.  Estimated incremental costs due to adding blocks to the accelerator. Henkel and Ernst used path-based scheduling.  Cut the CDFG into subgraphs: reduce loop iteration counts; cut at large joins; divide into equal-sized pieces.  Schedule each subgraph independently.

© 2006 Elsevier Henkel and Ernst path-based estimation [Hen01] © 2001 IEEE

© 2006 Elsevier Fast incremental evaluation Vahid and Gajski estimate controller and data path costs incrementally. Hardware cost:  FU = function units.  SU = storage units.  M = multiplexers.  C = control logic.  W = wiring. [Vah95] © 1995 IEEE

© 2006 Elsevier Vahid and Gajski estimation procedure Compile information on data path inputs and outputs, function and storage units, controller states, etc. Update algorithm changes tables based on incremental hardware changes. Executes in constant time for reasonable design characteristics. [Vah95] © 1995 IEEE

© 2006 Elsevier Single- vs. multi-threaded One critical factor is available parallelism:  single-threaded/blocking: CPU waits for accelerator;  multithreaded/non-blocking: CPU continues to execute along with accelerator. To multithread, CPU must have useful work to do.  But software must also support multithreading.

© 2006 Elsevier Total execution time [Figure: timing diagrams comparing single-threaded and multi-threaded execution of processes P1-P4 with accelerator call A1.]

© 2006 Elsevier Execution time analysis Single-threaded:  Count the execution times of all component processes. Multi-threaded:  Find the longest path through the execution.
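A toy comparison with made-up cycle counts (process and accelerator names follow the earlier timing figure; the dependence of P4 on both A1 and P3 is assumed): single-threaded time sums everything, multi-threaded time takes the longest path through the overlapped region.

```python
# Hypothetical execution times, in cycles.
p1, p2, a1, p3, p4 = 10, 5, 20, 15, 5

t_single = p1 + p2 + a1 + p3 + p4      # CPU blocks while A1 runs
t_multi = p1 + p2 + max(a1, p3) + p4   # P3 runs on the CPU while A1 runs
print(t_single, t_multi)               # 55 40
```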

© 2006 Elsevier Hardware-software partitioning Partitioning methods usually allow more than one ASIC. They typically ignore CPU memory traffic in bus utilization estimates. They typically assume that the CPU process blocks while waiting for the ASIC. [Figure: CPU and ASIC sharing a bus with memory.]

© 2006 Elsevier Synthesis tasks Scheduling: make sure that data is available when it is needed. Allocation: make sure that processes don’t compete for the PE. Partitioning: break operations into separate processes to increase parallelism, put serial operations in one process to reduce communication. Mapping: take PE, communication link characteristics into account.

© 2006 Elsevier Scheduling and allocation Must schedule/allocate:  computation;  communication. Performance may vary greatly with the allocation choice. [Figure: task graph P1-P3 allocated onto CPU1 and ASIC1.]

© 2006 Elsevier Problems in scheduling/allocation Can multiple processes execute concurrently? Is the performance granularity of available components fine enough to allow efficient search of the solution space? Do computation and communication requirements conflict? How accurately can we estimate performance?  Software.  Custom ASICs.

© 2006 Elsevier Partitioning example Before: a single process computes r = p1(a,b); s = p2(c,d); z = r + s. After: the code is split into two processes, one computing r = p1(a,b); s = p2(c,d) and the other computing z = r + s.
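One toy reading of this example (the bodies of p1 and p2 are hypothetical stand-ins, and running them concurrently is only one possible mapping): after partitioning, the two independent calls can execute in parallel, while z = r + s must wait for both results.

```python
from concurrent.futures import ThreadPoolExecutor

def p1(a, b): return a * b              # hypothetical stand-in operations
def p2(c, d): return c + d

with ThreadPoolExecutor() as pool:
    fr = pool.submit(p1, 2, 3)          # independent computations in parallel
    fs = pool.submit(p2, 4, 5)
    z = fr.result() + fs.result()       # serial combination: z = r + s
print(z)                                # 15
```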

© 2006 Elsevier Problems in partitioning At what level of granularity must partitioning be performed? How well can you partition the system without an allocation? How does communication overhead figure into partitioning?

© 2006 Elsevier Problems in mapping Mapping and allocation are strongly connected when the components vary widely in performance. Software performance depends on bus configuration as well as CPU type. Mappings of PEs and communication links are closely related.

© 2006 Elsevier Program representations CDFG: single-threaded, executable, can extract some parallelism. Task graph: task-level parallelism, no operator-level detail.  TGFF generates random task graphs. UNITY: based on parallel programming language.

© 2006 Elsevier Platform representations Technology table describes PE, channel characteristics:  CPU time.  Communication time.  Cost.  Power. Multiprocessor connectivity graph describes PEs, channels. Example technology table: Type / Speed / Cost: ARM7, 50E6, 10; MIPS, 50E6, 8. [Figure: connectivity graph of PE 1, PE 2, PE 3.]
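A minimal sketch of such a platform model (the ARM7/MIPS rows follow the slide's table; the PE instances and channel bandwidths are assumed): a technology table keyed by PE type, a connectivity graph of PE instances and channels, and helpers that turn cycle and byte counts into time.

```python
tech_table = {
    # type:  (clock speed in Hz, cost)
    "ARM7": (50e6, 10),
    "MIPS": (50e6, 8),
}

pes = {"PE1": "ARM7", "PE2": "MIPS", "PE3": "MIPS"}       # PE instances
channels = {("PE1", "PE2"): 1e6, ("PE2", "PE3"): 1e6}     # assumed bytes/s

def exec_time(cycles, pe):
    """Time for a task that needs `cycles` cycles on PE instance `pe`."""
    speed, _cost = tech_table[pes[pe]]
    return cycles / speed

def comm_time(nbytes, link):
    """Time to move `nbytes` over a channel in the connectivity graph."""
    return nbytes / channels[link]

print(exec_time(5e5, "PE1"))            # 0.01 s on the ARM7
print(comm_time(2e3, ("PE1", "PE2")))   # 0.002 s on the assumed channel
```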