OSCI TLM-2.0 The Transaction Level Modeling standard of the Open SystemC Initiative (OSCI)


OSCI TLM-2.0
Software version: TLM-2.0.1
Document version: ja8
This presentation authored by: John Aynsley, Doulos
Copyright © 2007-2009 by Open SystemC Initiative. All rights reserved.

Contents
Adapted by Harry Broeders for the HM-ES-th1 lessons, 9-1-2015
- Introduction
- Transport Interfaces
- DMI and Debug Interfaces
- Sockets (not covered)
- The Generic Payload (not covered)

Introduction
- Transaction Level Modeling 101
- Coding Styles

Transaction Level Modeling 101
- RTL: pin-accurate and cycle-accurate; simulates every event
- Functional model: transaction level; a bus operation becomes a function call, e.g. write(address, data)
- Result: 100-10,000x faster simulation than RTL
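The pin-level versus function-call contrast can be made concrete. The sketch below is plain C++ with no SystemC dependency; MemoryModel and its members are invented for illustration, not part of the TLM-2.0 API:

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Hypothetical transaction-level memory target. At this abstraction a bus
// write is one function call, not a cycle-by-cycle sequence of pin events;
// this is where the 100-10,000x simulation speed-up comes from.
struct MemoryModel {
    std::map<uint64_t, uint32_t> mem;
    void write(uint64_t address, uint32_t data) { mem[address] = data; }
    uint32_t read(uint64_t address) { return mem[address]; }
};
```

An RTL simulator would instead evaluate every clock edge and signal transition involved in the same transfer.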

Reasons for Using TLM
- Software development: firmware/software runs on the TLM platform; accelerates the product release schedule
- Hardware verification: the TLM model serves as the golden model for the RTL test bench
- Architectural modeling: TLM is fast enough, and ready before RTL

Typical Use Cases for TLM
- Represents key architectural components of the hardware platform
- Architectural exploration and performance modeling
- Software execution on a virtual model of the hardware platform
- Golden model for hardware functional verification
- Available before RTL ("Early!") and simulates much faster than RTL ("Fast!")

TLM-2 Requirements
- Transaction-level memory-mapped bus modeling
- Register-accurate, functionally complete
- Fast enough to boot a software O/S in seconds
- Loosely-timed and approximately-timed modeling
- Interoperable API for memory-mapped bus modeling
- Generic payload and extension mechanism
- Avoid adapters where possible
The two driving goals are speed and interoperability. See TLM_2_0_requirements.pdf.

Use Cases, Coding Styles and Mechanisms
- Use cases: software development, architectural analysis, hardware verification, software performance
- TLM-2 coding styles: loosely-timed, approximately-timed
- Mechanisms: blocking interface, non-blocking interface, DMI, quantum, sockets, generic payload, phases

Coding Styles
Loosely-timed
- Only sufficient timing detail to boot an O/S and run multi-core systems
- Processes can run ahead of simulation time (temporal decoupling)
- Each transaction has 2 timing points: begin and end
- Uses the direct memory interface (DMI)
Approximately-timed (not covered further)
- aka cycle-approximate or cycle-count-accurate
- Sufficient for architectural exploration
- Processes run in lock-step with simulation time
- Each transaction has 4 timing points (extensible)
These styles are guidelines only, not definitive.

Loosely-timed
- Each process runs ahead up to the next quantum boundary
- sc_time_stamp() advances in multiples of the quantum
- Deterministic communication requires explicit synchronization

Approximately-timed
- Each process is synchronized with the SystemC scheduler
- Transactions carry annotated delays
- Delays can be accurate or approximate

The TLM 2.0 Classes
- IEEE 1666™ SystemC
- TLM-1 standard and analysis interface (analysis ports)
- TLM-2 core interfaces: blocking transport interface, non-blocking transport interface, direct memory interface, debug transport interface
- Interoperability layer for bus modeling: generic payload, phases, initiator and target sockets
- Utilities: convenience sockets, payload event queues, quantum keeper, instance-specific extensions

Interoperability Layer
1. Core interfaces and sockets (initiator and target)
2. Generic payload: command, address, data, byte enables, response status, extensions
3. Base protocol: BEGIN_REQ, END_REQ, BEGIN_RESP, END_RESP
Together these give maximal interoperability for memory-mapped bus models.
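To make the payload attributes and base-protocol phases listed above concrete, here is a plain-C++ sketch of their shape. The names below are illustrative stand-ins, not the real tlm_generic_payload class, which lives in the SystemC/TLM-2.0 headers:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-ins for the generic payload's command and response status attributes.
enum class Command { READ, WRITE, IGNORE };
enum class ResponseStatus { INCOMPLETE, OK, ADDRESS_ERROR, GENERIC_ERROR };

// Base-protocol phases, as named on the slide.
enum class Phase { BEGIN_REQ, END_REQ, BEGIN_RESP, END_RESP };

// Field-by-field sketch of the attributes the slide lists for the
// generic payload. The real class also carries an extension array.
struct GenericPayloadSketch {
    Command command = Command::IGNORE;
    uint64_t address = 0;
    unsigned char* data_ptr = nullptr;   // points at the initiator's buffer
    unsigned int data_length = 0;
    std::vector<bool> byte_enables;      // empty means "all bytes enabled"
    ResponseStatus response = ResponseStatus::INCOMPLETE;
};
```

Because every interoperable model agrees on this one transaction shape and phase sequence, initiators and targets from different vendors can be connected without adapters.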

Transport Interfaces
- Initiators and Targets
- Blocking Transport Interface
- Timing Annotation and the Quantum Keeper
- Non-blocking Transport Interface (not covered)
- Timing Annotation and the Payload Event Queue (not covered)

Initiators and Targets
- An initiator socket connects to a target socket, possibly through 0, 1 or many interconnect components
- Calls travel from initiator to target along the forward path, and back along the backward path
- References to a single transaction object are passed along the forward and backward paths

TLM-2 Connectivity
- Components can be pure initiators, pure targets, or combined target/initiator components such as interconnects
- With multiple initiators and branching paths, transaction memory management is needed

Convergent Paths
- Transactions from multiple initiators can converge through an interconnect onto the same target

Blocking versus Non-blocking Transport
Blocking transport interface
- Includes timing annotation
- Typically used with the loosely-timed coding style
- Forward path only
Non-blocking transport interface
- Includes timing annotation and transaction phases
- Typically used with the approximately-timed coding style
- Called on forward and backward paths
Both share the same transaction type for interoperability, and the unified interface and sockets mean they can be mixed.

TLM-2 Core Interfaces - Transport

  tlm_blocking_transport_if
    void b_transport( TRANS& , sc_time& );

  tlm_fw_nonblocking_transport_if
    tlm_sync_enum nb_transport_fw( TRANS& , PHASE& , sc_time& );

  tlm_bw_nonblocking_transport_if
    tlm_sync_enum nb_transport_bw( TRANS& , PHASE& , sc_time& );

TLM-2 Core Interfaces - DMI and Debug

  tlm_fw_direct_mem_if
    bool get_direct_mem_ptr( TRANS& trans , tlm_dmi& dmi_data );

  tlm_bw_direct_mem_if
    void invalidate_direct_mem_ptr( sc_dt::uint64 start_range, sc_dt::uint64 end_range );

  tlm_transport_dbg_if
    unsigned int transport_dbg( TRANS& trans );

All of these may use the generic payload transaction type.

Blocking Transport

  template < typename TRANS = tlm_generic_payload >       // transaction type
  class tlm_blocking_transport_if : public virtual sc_core::sc_interface {
  public:
    virtual void b_transport( TRANS& trans ,              // transaction object
                              sc_core::sc_time& t ) = 0;  // timing annotation
  };
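A target's implementation of this interface can be sketched in plain C++ without SystemC installed. Payload, time_ns and RegisterTarget below are invented stand-ins for tlm_generic_payload, sc_time and a real target module:

```cpp
#include <cassert>
#include <cstdint>

// Stand-ins for tlm_generic_payload and sc_time (nanoseconds as a double).
struct Payload { bool is_write; uint64_t address; uint32_t data; };
using time_ns = double;

// Hypothetical single-register target. Instead of blocking with wait(),
// a loosely-timed target adds its access latency to the caller's delay
// argument - the timing-annotation idiom of b_transport.
struct RegisterTarget {
    uint32_t reg = 0;
    time_ns latency = 10.0;  // invented 10 ns access time

    void b_transport(Payload& trans, time_ns& delay) {
        if (trans.is_write) reg = trans.data;   // execute the command
        else                trans.data = reg;
        delay += latency;                       // annotate rather than wait
    }
};
```

The initiator decides when to actually consume the accumulated delay, which is what makes temporal decoupling (next slides) possible.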

Blocking Transport
- Simulation time = 100ns: the initiator calls b_transport(t, 0ns); the target returns immediately
- The initiator calls b_transport(t, 0ns) again; this time the target executes wait(40ns) before returning
- The call returns at simulation time = 140ns
- The initiator is blocked until the return from b_transport

Timing Annotation

Caller:

  b_transport( transaction, delay );
  // Behave as if transaction received at sc_time_stamp() + delay

Callee:

  virtual void b_transport ( TRANS& trans , sc_core::sc_time& delay ) {
    // Behave as if transaction received at sc_time_stamp() + delay
    ...
    delay = delay + latency;
  }

The recipient may:
- Execute transactions immediately, out-of-order (loosely-timed)
- Schedule transactions to execute at the proper time (approximately-timed)
- Pass on the transaction with its timing annotation

Temporal Decoupling
- Simulation time = 100ns; the initiator keeps a local time offset ahead of simulation time
- Each b_transport call carries the current offset as its delay argument, and the target increments the delay instead of waiting: a call with delay 0ns returns with delay 5ns (offset +5ns), a later call with delay 20ns returns with delay 25ns (offset +25ns)
- Eventually the initiator synchronizes with wait(40ns), after which simulation time = 140ns and the local offset starts accumulating again

The Time Quantum
- Quantum = 1us; simulation time = 1us
- The initiator runs ahead, passing its local offset as the b_transport delay: +950ns, +970ns, +990ns
- When the offset exceeds the quantum (here +1010ns), the initiator synchronizes with wait(1010ns); simulation time advances to 2010ns and the offset resets to 0ns

The Quantum Keeper (tlm_quantumkeeper)
- The quantum is user-configurable
- Processes can check their local time against the quantum
- Choosing the quantum is a trade-off: a small quantum favours accuracy and debug, a big quantum favours speed
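The quantum-keeper pattern can be sketched in a few lines of plain C++. The real utility class is tlm_utils::tlm_quantumkeeper; QuantumKeeperSketch below is an invented simplification that shows the mechanism without SystemC:

```cpp
#include <cassert>

// Minimal sketch of the quantum-keeper idea: the initiator accumulates a
// local time offset and only synchronizes with the scheduler (wait) once
// the quantum has been used up.
struct QuantumKeeperSketch {
    double quantum_ns;
    double local_time_ns = 0.0;  // how far this process runs ahead
    double sim_time_ns = 0.0;    // stand-in for sc_time_stamp()

    explicit QuantumKeeperSketch(double q) : quantum_ns(q) {}
    void inc(double t) { local_time_ns += t; }  // add an annotated delay
    bool need_sync() const { return local_time_ns >= quantum_ns; }
    void sync() {                               // models wait(local_time)
        sim_time_ns += local_time_ns;
        local_time_ns = 0.0;
    }
};
```

A loosely-timed initiator would call inc() after each b_transport and sync() only when need_sync() returns true, so with a large quantum many transactions execute per context switch - hence the speed/debug trade-off above.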

DMI and Debug Interfaces
- Direct Memory Interface
- Debug Transport Interface

DMI and Debug Transport
Direct Memory Interface
- Gives an initiator (e.g. an ISS) a direct pointer to memory in a target
- By-passes the sockets and transport calls
- Read or write access by default; extensions may permit other kinds of access, e.g. a security mode
- The target is responsible for invalidating the pointer
Debug Transport Interface
- Gives an initiator debug access to memory in a target
- Delay-free and side-effect-free
- May share transactions with the transport interface

Direct Memory Interface
- Forward path (tlm_fw_direct_mem_if): the initiator requests access with
    status = get_direct_mem_ptr( transaction, dmi_data );
- Backward path (tlm_bw_direct_mem_if): the target revokes access with
    invalidate_direct_mem_ptr( start_range, end_range );
- Transport, DMI and debug may all use the generic payload
- The interconnect may modify the address on the forward path and the invalidated range on the backward path

DMI Transaction and DMI Data
The DMI transaction requests read or write access for a given address, and permits extensions. The DMI data returned by the target:

  class tlm_dmi {
    unsigned char* dmi_ptr;      // direct memory pointer
    uint64 dmi_start_address;    // region granted for the given
    uint64 dmi_end_address;      //   access type
    dmi_type_e dmi_type;         // read, write or read/write
    sc_time read_latency;        // latencies to be observed
    sc_time write_latency;       //   by the initiator
  };

DMI Rules 1
- The initiator requests DMI from the target at a given address
- DMI is granted for a particular access type and a particular region
- The target can only grant a single contiguous memory region containing the given address
- The target may grant an expanded memory region
- The target may promote a READ or WRITE request to READ_WRITE
- The initiator may assume the DMI pointer is valid until invalidated by the target
- The initiator may keep a table of DMI pointers
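These rules can be illustrated with a plain-C++ sketch of a target granting DMI. DmiDataSketch and DmiTarget are invented names standing in for tlm_dmi and a real memory target:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for tlm_dmi, carrying the granted pointer, region and access type.
struct DmiDataSketch {
    unsigned char* dmi_ptr = nullptr;
    uint64_t start_address = 0;
    uint64_t end_address = 0;
    bool read_allowed = false;
    bool write_allowed = false;
};

// Hypothetical memory target applying the rules above: grant one contiguous
// region containing the requested address (here expanded to the whole
// memory), and promote the request to read/write.
struct DmiTarget {
    static constexpr uint64_t SIZE = 4096;
    unsigned char mem[SIZE] = {};

    bool get_direct_mem_ptr(uint64_t address, DmiDataSketch& dmi) {
        if (address >= SIZE) return false;            // address not mapped here
        dmi.dmi_ptr = mem;                            // direct pointer to storage
        dmi.start_address = 0;                        // expanded grant: the whole
        dmi.end_address = SIZE - 1;                   //   contiguous region
        dmi.read_allowed = dmi.write_allowed = true;  // promoted to READ_WRITE
        return true;
    }
};
```

Once granted, the initiator bypasses b_transport entirely and accesses memory as `dmi.dmi_ptr[address - dmi.start_address]`, which is what makes DMI so fast for an ISS.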

DMI Rules 2
- DMI request and invalidation use the same routing as regular transactions
- The invalidated address range may get expanded by the interconnect
- The target may grant DMI to multiple initiators (given multiple requests), so a single invalidate may knock out multiple pointers in multiple initiators
- Use the generic payload DMI hint (described later)
- DMI only makes sense with loosely-timed models

Debug Transport Interface

  tlm_transport_dbg_if
    num_bytes = transport_dbg( transaction );

- The transaction carries: command, address, data pointer, data length, extensions
- Uses the forward path only (initiator through interconnect to target)
- The interconnect may modify the address; the target reads or writes the data
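The debug-transport contract (delay-free, side-effect-free, returns the number of bytes actually transferred) can be sketched in plain C++. DebugTarget and its members are invented for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstring>

// Sketch of a target's transport_dbg: no wait(), no state changes beyond
// the requested data transfer, and a return value that may be less than
// the requested length (0 if the address is unmapped).
struct DebugTarget {
    static constexpr uint64_t SIZE = 256;
    unsigned char mem[SIZE] = {};

    unsigned int transport_dbg(bool is_read, uint64_t address,
                               unsigned char* data, unsigned int length) {
        if (address >= SIZE) return 0;                // unmapped address
        unsigned int n = static_cast<unsigned int>(
            std::min<uint64_t>(length, SIZE - address));  // clip at end
        if (is_read) std::memcpy(data, mem + address, n);
        else         std::memcpy(mem + address, data, n);
        return n;
    }
};
```

A debugger attached to a virtual platform would use exactly this kind of call to peek and poke memory without disturbing simulation time or device state.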