The VMEbus processor hardware and software infrastructure in ATLAS
Markus Joos, CERN-PH/ESS
11th Workshop on Electronics for LHC and future Experiments (LECC 2005), September 2005, Heidelberg, Germany

Talk outline
- VMEbus systems in ATLAS
- VMEbus controller H/W
  - Basic requirements for the H/W
  - Lessons learnt from evaluating SBCs
  - Final choice of VMEbus controller
- VMEbus S/W
  - Analysis of third-party VMEbus I/O packages
  - The ATLAS VMEbus I/O package
- Status
- Conclusions

VMEbus in ATLAS front-end DAQ
[Slide diagram: ~100 9U and ~55 6U VMEbus crates with RODs and other modules (e.g. ROD-busy) and with TTC and auxiliary modules, some RODs; event data and timing/trigger signal paths; LvL1/2; ROS PCs]
ATLAS (LHC) VMEbus crates:
- VME64x
- 6U or 9U (with 6U section)
- Air or water cooled
- Local or remote power supply
- Connected to DCS via CAN interface
VMEbus used (by ROD-crate DAQ) for:
- Initialisation of slave modules
- Status monitoring
- Event data read-out for monitoring and commissioning
More on TTC: talk by S. Baron

VMEbus controller: basic requirements
- Controllers have to be purchased by the sub-detector groups
- Decision to standardise on one type of controller
  - To bring down the price by economy of scale
  - To ease maintenance and the provision of spares
  - To avoid S/W incompatibilities
- Keep technology evolution in mind
Main technical requirements
- Mechanical: 6U, 1 slot, VME64x mechanics
- VMEbus protocol: support for single cycles, chained DMA and interrupts
- VMEbus performance: at least 50% of the theoretically possible bus transfer rates
- Software: compatibility with CERN Linux releases
Not required
- 2eVME & 2eSST

Basic choice: embedded SBC or link to PC
The basic requirements can be met by both an embedded SBC and an interface to a PC system.
SBC
- Space conserving (important in the ATLAS underground area)
- Typically better VMEbus performance (especially single cycles)
- Cheaper (system price)
Link to PC
- Computing performance can be increased by a faster host PC
- Possibility to control several VMEbus crates from one PC
- Vendors usually provide both a C library and LabVIEW VIs
An SBC is better suited to the requirements of ATLAS.

Finding an appropriate SBC
Type of CPU
- Intel
  - Higher clock rates
  - Better support for (CERN) Linux
- PowerPC
  - Big-endian byte ordering (matches VMEbus)
  - Vector processing capability
- Let the market decide
Other technical requirements (just the most important ones, formulated in 2002)
- 300 MHz (PowerPC) / 600 MHz (Intel) CPU
- 256 MB main memory
- One 10/100 Mbit/s Ethernet interface
- One PMC site (e.g. for additional network interfaces)
- VME64 compliant VMEbus interface
- 8 MB flash for Linux image (network-independent booting)
- Support for diskless operation

Issues with the Universe chip
Many of today's SBCs are based on the Tundra Universe chip, which was designed in ~1995. Evaluations of several SBCs have identified a few shortcomings that still apply to the current revision of the device (Universe II D):
- Lack of built-in endian conversion
  - Intel-based SBCs have to have extra logic to re-order the bytes
- Performance
  - Single cycles: ~1 μs (arbitration for and coupling of CPU bus, PCI and VMEbus)
  - Posted write cycles: ~500 ns
  - Block transfers: ~60% of the theoretical maximum (i.e. 25 MB/s for D32 BLT, 50 MB/s for D64 MBLT)
- VMEbus bus error handling
  - Errors from posted write cycles are not converted to an interrupt
- Lack of constant-address block transfers for reading out FIFOs
  - May be required to read out a FIFO-based memory
  - It is possible to have the Universe chip do it with (slow) single cycles (~13 MB/s)
- Danger of losing the last data word on BERR*-terminated transfers
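As a concrete illustration of the endian-conversion point, the following minimal C sketch shows how user code on a little-endian Intel SBC might correct the byte order of a D32 word read by programmed I/O through a PCI window mapped onto VMEbus. The function name and mapping are hypothetical and not taken from any particular library.

```c
/* Minimal sketch (hypothetical mapping, not a real library API):
 * VMEbus data is big-endian, so a bridge without built-in endian
 * conversion leaves the byte re-ordering to software on Intel SBCs. */
#include <stdint.h>

static inline uint32_t vme_read_d32(volatile uint32_t *mapped_addr)
{
    uint32_t raw = *mapped_addr;        /* programmed-I/O single cycle */
#if defined(__i386__) || defined(__x86_64__)
    return __builtin_bswap32(raw);      /* little-endian host: swap bytes */
#else
    return raw;                         /* big-endian host (e.g. PowerPC): no swap */
#endif
}
```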

Other issues
- Concurrent master and slave access
  - If an SBC is VMEbus master and slave at the same time, deadlock situations on PCI are possible. Some host bridges resolve them by terminating the inbound VMEbus cycles with BERR*
- Remote control and monitoring
  - Most SBCs do not support IPMI
  - Some vendors put temperature or voltage sensors on their SBCs, but there is no standard way of reading this information
  - Remote reset: via a 2-wire connector (in the front panel) or SYSRESET*
- Mechanics
  - The VME64x alignment pin and P0 are incompatible with certain crates. Most vendors provide alternative front panels or handles.
  - EMC gaskets can be "dangerous" for solder-side components on the neighboring card. Use solder-side covers

Choice of the operating system
Requirements:
- Unix-compatible environment (to leverage existing experience)
- No hard real time
  - Interrupts are used in some applications, but their latency is not critical
- SBC has to run under LynxOS (just in case...)
- Full local development and debugging environment
- Low (ideally no) license costs
Linux is the obvious choice
- The ATLAS default SBC has to work with the SLC3 release
- Only minor modifications to the kernel configuration to support diskless operation
- "Look and feel" as on a Linux desktop PC (X11, AFS, etc.)

Final choice of VMEbus SBC
- In 2002 a competitive Call for Tender was carried out
  - Non-technical requirements:
    - Controlled technical evolution
    - 3-year warranty
    - 10 years of support
    - Fixed price for repair or replacement
  - All major suppliers contacted
  - ~10 bids received
  - One supplier selected
- Features of the default SBC
  - 800 MHz Pentium III
  - 512 MB RAM
  - Tundra Universe VMEbus interface
  - ServerWorks ServerSet III LE host bridge
- Since 2005 a more powerful (software-compatible) SBC has been available as an alternative
  - 1.8 GHz Pentium M
  - 1 GB RAM
  - KVM ports
  - Two Gigabit Ethernet ports
  - Intel 855 GME host bridge

VMEbus S/W
- Analysis of third-party VMEbus I/O libraries
  - Design
    - There is (despite VISION, ANSI/VITA) no standard API for VMEbus access
    - Board manufacturers define their own APIs
    - Some libraries (unnecessarily) expose details of the underlying hardware to the user
  - Implementation
    - The trade-off between speed and functionality may not suit user requirements
  - Completeness
    - Sometimes less frequently used features are not supported
  - Support
    - Not always guaranteed (especially for freeware)
We decided to specify our own API and to implement a driver & library

The ATLAS VMEbus I/O package
- Linux allows the development of an I/O library (almost) fully in user space
  - Low S/W overhead (no context switches)
  - Multi-tasking and interrupt handling difficult
- Our solution:
  - Linux device driver
    - Support for multi-tasking
    - SMP not an issue (SBC has one CPU, no Hyper-Threading)
    - Interrupt handling
    - Block transfers
  - User library
    - C-language API
    - Optional C++ wrapper
  - Comprehensive S/W tools for configuration, testing, debugging and as coding examples
  - Independent package for the allocation of contiguous memory (for block transfers)
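To illustrate how a driver/library split of this kind is typically used for a block transfer, here is a minimal, purely hypothetical C sketch: the contiguous-memory package supplies a DMA-capable buffer and the library asks the driver to run the transfer. All names (cmem_alloc, vme_dma_read, AM_A32_MBLT) are invented for this example and do not reproduce the actual ATLAS API.

```c
/* Hypothetical usage sketch; function names do not reflect the real ATLAS library. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed to come from the (hypothetical) contiguous-memory and VMEbus libraries */
extern int cmem_alloc(size_t size, void **virt, uint64_t *phys);
extern int vme_dma_read(uint32_t vme_addr, uint64_t phys_dest,
                        size_t size, int addr_mod);        /* driver runs the DMA */

#define AM_A32_MBLT 0x08    /* example address modifier for a D64 MBLT read */

int read_fragment(uint32_t rod_base, size_t nbytes)
{
    void     *buf;
    uint64_t  phys;

    /* Block transfers need a physically contiguous buffer for the DMA engine */
    if (cmem_alloc(nbytes, &buf, &phys) != 0)
        return -1;

    /* The library builds the (possibly chained) descriptors and the driver starts the DMA */
    if (vme_dma_read(rod_base, phys, nbytes, AM_A32_MBLT) != 0)
        return -1;

    printf("first word: 0x%08x\n", ((uint32_t *)buf)[0]);
    return 0;
}
```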

Main features of the ATLAS VMEbus I/O package
- Single cycles
  - Static programming of the PCI to VMEbus mapping
  - Fast access (programmed I/O from user code)
  - Safe functions (full synchronous BERR* checking via the driver)
  - Special functions for geographical addressing (CR/CSR space access)
- Block transfers
  - Support for chained block transfers
  - Fixed-address single-cycle block transfers for FIFO readout
- Interrupts
  - Support for ROAK and RORA
  - Synchronous or asynchronous handling
  - Grouping of interrupts
- Bus error handling
  - Synchronous or on request
- Performance
  - Fast single cycles: 0.5 to 1 μs
  - Safe single cycles: 10 to 15 μs
  - Block transfers (S/W overhead): 10 to 15 μs
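The fast/safe distinction above can be made concrete with a short hedged sketch (hypothetical function names, not the real ATLAS API): the fast path dereferences a statically mapped window directly from user code, while the safe path goes through the driver so that a BERR*-terminated cycle is reported synchronously as an error code.

```c
/* Hypothetical sketch of fast vs. safe single cycles; not the real ATLAS API. */
#include <stdint.h>

extern volatile uint32_t *vme_master_map(uint32_t vme_addr, uint32_t size);
extern int vme_safe_read_d32(uint32_t vme_addr, uint32_t *value);  /* via driver, checks BERR* */

void poll_status(uint32_t csr_addr)
{
    /* Fast: direct programmed I/O, ~0.5-1 us, but a bus error only
       surfaces asynchronously or on request */
    volatile uint32_t *csr = vme_master_map(csr_addr, 4);
    uint32_t fast_val = *csr;

    /* Safe: one driver call per cycle, ~10-15 us, but BERR* is
       returned synchronously as a non-zero status */
    uint32_t safe_val;
    int status = vme_safe_read_d32(csr_addr, &safe_val);

    (void)fast_val; (void)safe_val; (void)status;
}
```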

Status
- Supply contract for ATLAS established in 2003
- So far ~170 SBCs purchased
  - Successfully used in the 2004 ATLAS combined test-beam
  - Many small test-benches and laboratory systems
  - Installation of SBCs in the final ATLAS DAQ system has started
- The CERN Electronics Pool is phasing out the previous generation of PowerPC/LynxOS based SBCs in favor of the ATLAS model
- The ALICE experiment will use the same model in the VMEbus-based part of the L1 trigger system

Conclusions
- ATLAS had no special requirements (such as 2eSST support) on the SBC
- The time spent on the evaluation of SBCs was well invested. Some important lessons were learnt and helped with the Call for Tender
- Specifying our own VMEbus API and implementing the software from scratch paid off in terms of flexibility and performance
- Both the SBC and the software have been used successfully in a large number of systems

Acknowledgements
I would like to thank:
- Chris Parkman, Jorgen Petersen and Ralf Spiwoks for their contributions to the technical specification of the ATLAS SBC and VMEbus API
- Jorgen Petersen for his assistance with the implementation of the software
- The members of the ATLAS TDAQ team for their contributions and feedback