Realtime Computing and Dark Energy Experiments Klaus Honscheid The Ohio State University Real Time 07, Fermilab.


Klaus Honscheid, Ohio State University, RT07, Fermilab A few words on Dark Energy The subtle slowing down and speeding up of the expansion of distances with time, a(t), maps out cosmic history like tree rings map out the Earth's climate history. (Courtesy of E. Linder)

Klaus Honscheid, Ohio State University, RT07, Fermilab 95% of the Universe is Unknown Experiments: Dark Energy Survey, SNAP, LSST, Pan-STARRS

Klaus Honscheid, Ohio State University, RT07, Fermilab LSST and SNAP LSST: 201 CCDs (4k x 4k) for a total of 3.2 gigapixels. SNAP: 36 visible CCDs (~443 megapixels) and 36 NIR detectors (~151 megapixels), plus a spectrograph port.

Klaus Honscheid, Ohio State University, RT07, Fermilab Outline: DE DAQ vs. HE DAQ, Front End Electronics, Dataflow, Control and Communication, Data Management, Summary

Klaus Honscheid, Ohio State University, RT07, Fermilab DE vs. HE Architecture [Block diagram comparing a telescope with a HEP detector (ALICE)] Observatory Control System (OCS): UIs, coordination, planning, services. Data Handling System (DHS): science data collection, quick look. Telescope Control System (TCS): enclosure, thermal, optics, mount, GIS. Camera Control System (ICS): virtual instruments.

Klaus Honscheid, Ohio State University, RT07, Fermilab Dataflow Architecture [Block diagrams comparing the DES camera dataflow with the CLEO III detector dataflow]

Klaus Honscheid, Ohio State University, RT07, Fermilab Typical Data Rates

  Telescope | Data Rate | Data Size (per night)    Detector | Data Rate (raw) | Data Size (per day)
  DES       | 0.7 Gb/s  | 0.3 TB                   BaBar    | 5 Gb/s          | 0.3 TB
  SNAP      | 300 Mb/s  | 0.3 TB                   CDF/D0   | 100 Gb/s        | ~0.8 TB
  LSST      | 26 Gb/s   | ~5 TB                    LHC      | 800 Gb/s        | ~8 TB

  (Telescopes rely on zero suppression; HEP detectors rely on a trigger.)
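The gap between raw rate and stored volume is what zero suppression or a trigger has to close. A minimal sketch of the implied reduction factors, assuming continuous operation for the HEP detectors and a hypothetical ~1 h of cumulative readout per night for DES (the duty cycles are assumptions, not figures from the talk):

```python
def implied_reduction(raw_gbps, stored_tb, active_hours):
    """Raw data produced while active, divided by data actually stored."""
    raw_tb = raw_gbps / 8 * active_hours * 3600 / 1000  # Gb/s -> GB/s -> TB
    return raw_tb / stored_tb

# LHC detectors run essentially continuously: 800 Gb/s raw vs ~8 TB stored/day.
print(f"LHC reduction needed: ~{implied_reduction(800, 8, 24):,.0f}x")  # O(1000x): trigger
# A camera produces data only during readout; assume ~1 h of cumulative
# readout per night for DES (a hypothetical duty cycle).
print(f"DES reduction needed: ~{implied_reduction(0.7, 0.3, 1):.1f}x")  # ~1x: no trigger
```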

Klaus Honscheid, Ohio State University, RT07, Fermilab Front End Electronics [Detector photographs] 4 SNAP CCDs (LBNL), 3.5k x 3.5k, 10.5 μm pixels. DES focal plane with simulated image (62 LBNL CCDs). Multiple output ports for faster read-out (LSST). Fully depleted, high-resistivity, back-illuminated CCDs with extended QE out to ~1000 nm (LBNL).

Klaus Honscheid, Ohio State University, RT07, Fermilab Orthogonal Transfer CCDs Orthogonal transfer CCDs remove image motion at high speed (~10 μs shifts), with better yield and lower cost. Used by Pan-STARRS and WIYN. Normal guiding: 0.73"; OT tracking: 0.50".

Klaus Honscheid, Ohio State University, RT07, Fermilab DECam Monsoon Block Diagram Developed by NOAO: a controller providing clocks, bias, and readout, with a data link to the DAQ. Typical readout: <10 e- rms noise.

Klaus Honscheid, Ohio State University, RT07, Fermilab LSST CCDs & Electronics A sensor raft is a 3 x 3 array of 16 MP sensors (Si CCD sensor packages, CCD carrier, thermal straps) with 16 output ports each, i.e. 144 video channels. FEE package: dual slope integrator ASICs, clock driver ASICs, support circuitry, flex cable interconnect, cold plate, cool plate. BEE package: 144 ADC channels, local data formatting, optical fiber to the DAQ. Full camera: 201 CCDs with 3.2 Gpixel; readout time < 2 seconds, data rate 3.2 GBytes/s.
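The quoted camera totals follow directly from the per-sensor numbers; a quick back-of-envelope check (assuming exactly 4096 x 4096 sensors and 16-bit pixels, so the results land slightly above the rounded figures on the slides):

```python
ccds        = 201
pixels_ccd  = 4096 * 4096        # "4k x 4k" sensors
bytes_pixel = 2                  # 16-bit samples (assumed)
readout_s   = 2.0                # "readout time < 2 seconds"

total_pixels = ccds * pixels_ccd
image_bytes  = total_pixels * bytes_pixel

print(f"{total_pixels / 1e9:.1f} Gpixel")               # ~3.4 ("3.2 Gpixel")
print(f"{image_bytes / 1e9:.1f} GB per image")          # ~6.7 ("6 GB image")
print(f"{image_bytes / readout_s / 1e9:.1f} GB/s out")  # ~3.4 ("3.2 GBytes/s")
```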

Klaus Honscheid, Ohio State University, RT07, Fermilab It's even harder in Space (SNAP) Custom chips for clocks, readout, and bias: a multi-range, dual slope integrator with correlated double sampling and a 14-bit ADC. Radiation tolerant: 10 krad ionizing dose (well shielded). Low power: ≈200 mJ/image/channel, i.e. ≈10 mW/channel. Space qualified and compact.

Klaus Honscheid, Ohio State University, RT07, Fermilab Data Flow Considerations Let's look at SNAP, because space makes it different. Data volume: the instrument contains 36 visible CCDs and 36 NIR detectors with 3.5k x 3.5k pixels at 16 bits, giving approx. 1 GByte per exposure; with a 300-second exposure time and a 30-second readout time, that is approx. 300 exposures or 300 GBytes per day. Uptime (for the down link): about 1-2 hours of contact every 24, subject to random disturbances (e.g. weather). Compressed data are sent to the ground; compression can reduce the data volume by a factor of 2, and the down-link bandwidth is ~300 Mbits/second.
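These figures tie together in a small budget check; a sketch assuming the ~1 GByte per exposure is dominated by the 36 visible CCDs (the NIR contribution is neglected here):

```python
# SNAP downlink budget sketch (assumption: exposure size dominated by the
# 36 visible CCDs at 3.5k x 3.5k, 16-bit pixels; NIR share neglected).
exposure_gb = 36 * 3500 * 3500 * 2 / 1e9   # ~0.9 GB ("approx. 1 GByte")
day_gb  = 300 * exposure_gb                # ~300 exposures per day
sent_gb = day_gb / 2                       # factor-2 compression
link_gb_per_hour = 300e6 / 8 * 3600 / 1e9  # 300 Mb/s downlink -> 135 GB/h

print(f"{exposure_gb:.1f} GB/exposure, {sent_gb:.0f} GB/day after compression")
print(f"needs {sent_gb / link_gb_per_hour:.1f} h of the 1-2 h daily contact")
```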

Klaus Honscheid, Ohio State University, RT07, Fermilab K a xmt Primary K a xmt Redundant Instrument Control Independent Electronics (Slice) Persistent Storage Down Link ICU Redundant ICU Primary Slice 1 Slice 2 Slice 3 Slice 80 SNAP Observatory After H. Heetderks Calibration CD&H ACS S-band Transponder Primary S-band Transponder Redundant NIR CCD Shutter Focus Thermal Power distribution 1553 Bus Spectrograph Guide Visible CCD Guide CCD Power Redundancy Avoid single point of failure Backup units for critical systems Uptime of data link Onboard persistent storage Data compression Error correction Power, Size Radiation Hardness

Klaus Honscheid, Ohio State University, RT07, Fermilab Flash Radiation Studies Radiation studies (200 MeV protons) suggest flash memory may be appropriate, and the implementation concept suggests the requirements can be met. Two beam exposures to 200 MeV protons at the Indiana University cyclotron explored a range of doses: quiescent current rises after ~50 krad, failures become significant above that dose, and SEUs were observed at a low rate. Future work includes finishing the radiation analysis and working towards other aspects of space qualification (see A. Prosser's talk this afternoon).

Klaus Honscheid, Ohio State University, RT07, Fermilab LSST Dataflow System (courtesy of A. Perazzo) [Block diagram]: 25 raft readout systems in the cryostat each serve 9 CCDs and send 281 MB/s over PGP on 300 m of fiber to Configurable Cluster Modules (CCE) with 10 Gb/s and 1 Gb/s links. An SDS crate with slow (sCIM) and fast (fCIM) cluster interconnect modules provides services to client nodes on the DAQ network and connects to the CCS network. The core module is a Virtex-4 FPGA on an ATCA backplane with power, memory chips, MGT lanes, and reset/bootstrap selection.

Klaus Honscheid, Ohio State University, RT07, Fermilab Controls and Communications [Diagram: typical architecture (LSST), with the OCS and Scheduler coordinating the TCS and other subsystems]

Klaus Honscheid, Ohio State University, RT07, Fermilab Expert Control: The Scheduler

Klaus Honscheid, Ohio State University, RT07, Fermilab Common Services The software infrastructure provides common services: logging, persistent storage, alarm and error handling, messages/commands/events, and configuration/connection, built on a messaging layer and following the component/container model (Wampler et al., ATST).
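As an illustration of the component/container idea, a minimal Python sketch (a hypothetical API, not the actual ATST framework): the container owns the shared services and injects them into every component it hosts.

```python
import logging

class Component:
    """A device or service hosted by the container."""
    def __init__(self, name, log):
        self.name, self.log = name, log
    def initialize(self):
        self.log.info("initialized")   # uses the container-supplied logger

class Container:
    """Owns the common services (here just logging) and deploys components."""
    def __init__(self):
        logging.basicConfig(level=logging.INFO)
        self.components = {}
    def deploy(self, name, component_cls=Component):
        comp = component_cls(name, log=logging.getLogger(name))
        self.components[name] = comp
        comp.initialize()              # lifecycle managed by the container
        return comp

container = Container()
filter_wheel = container.deploy("filterwheel")  # hypothetical component name
```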

Klaus Honscheid, Ohio State University, RT07, Fermilab Implementations (adapted from S. Wampler)

           Network    Transport  Interface  GUI
  ALMA     Ethernet   CORBA      IDL        Java/Swing
  Gemini   Ethernet   EPICS      -          Tcl/Tk
  SOAR     Ethernet   Sockets    SCLN       LabVIEW
  DES      Ethernet   Sockets    SML/SCLN   LabVIEW
  LSST     Ethernet   DDS        DDS/IDL    TBD
  SNAP     SPI-like   VxWorks    n.a.       -

Klaus Honscheid, Ohio State University, RT07, Fermilab Data Distribution Service A search for technology to handle the control and telemetry requirements in a distributed environment led to the publish-subscribe paradigm: subscribe to a topic and publish or receive data, with no need to make a special request for every piece of data. The OMG DDS standard has been released and is implemented by RTI as NDDS. [Diagram: a domain participant in a data domain; publishers with data writers and subscribers with data readers, connected through topics.]
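To make the paradigm concrete, a toy publish/subscribe bus modeled loosely on the DDS vocabulary of topics, data writers, and data readers (an illustration only, not the RTI NDDS API):

```python
class Topic:
    """A named data channel; the middleware matches writers and readers on it."""
    def __init__(self, name):
        self.name, self.readers = name, []

class DataWriter:
    def __init__(self, topic):
        self.topic = topic
    def write(self, sample):
        for reader in self.topic.readers:   # route to every matched reader
            reader.listener(sample)

class DataReader:
    def __init__(self, topic, listener):
        self.listener = listener
        topic.readers.append(self)          # subscribe once, receive every update

azimuth = Topic("tcs/azimuth")              # hypothetical telemetry topic
DataReader(azimuth, lambda deg: print(f"GUI:    az = {deg}"))
DataReader(azimuth, lambda deg: print(f"logger: az = {deg}"))
DataWriter(azimuth).write(42.5)             # no per-sample request needed
```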

Klaus Honscheid, Ohio State University, RT07, Fermilab Summary of Key DDS PS Features (courtesy of D. Schmidt) Efficient publish/subscribe interfaces; QoS suitable for real-time systems (deadlines, levels of reliability, latency, resource usage, time-based filters); listener- and wait-based data access to suit different application styles; support for content-based subscriptions, for data ownership, and for history & persistence of data modifications. Information producers publish, information consumers subscribe to the information of interest, and DDS matches and routes the relevant information to the interested subscribers.

Klaus Honscheid, Ohio State University, RT07, Fermilab Comparing CORBA with DDS (courtesy of D. Schmidt) CORBA: distributed objects, client/server, remote method calls, reliable transport; best for remote command processing, file transfer, and synchronous transactions. DDS: distributed data, publish/subscribe, multicast data, configurable QoS; best for quick dissemination to many nodes, dynamic nets, and flexible delivery requirements. DDS and CORBA address different needs; complex systems often need both.

Klaus Honscheid, Ohio State University, RT07, Fermilab Data Management (courtesy of T. Schalk) Mountain site (in Chile): data acquisition from the camera subsystem and the Observatory Control System, reading out a 6 GB image in 2 seconds and transferring data to the base at 10 gigabits/second. Base facility (in Chile): nightly data pipelines and products hosted on 25-teraflop-class supercomputers provide primary data reduction and transient alert generation in under 60 seconds. Long-haul communications (base to archive and archive to data centers): 10 gigabit/second protected clear-channel fiber optics, with protocols optimized for bulk data transfer. Archive/data access centers (in the United States): nightly data pipelines, data products, and the science data archive; supercomputers capable of 60 teraflops provide analytical processing, re-processing, and community data access via Virtual Observatory interfaces to a 7 petabyte/year archive.

Klaus Honscheid, Ohio State University, RT07, Fermilab Data Management Challenges Three tiers: the telescope on the mountain summit, the base camp (tens of km away), and the processing center (thousands of km away). At the telescope: acquire a 6 GB image in 2 seconds (DAQ), format and buffer the images to disk (image builder), and transmit the images to the base camp 80 km away in less than 15 seconds.
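A quick feasibility check of the mountain-to-base requirement (protocol overhead ignored; a sketch):

```python
image_bits = 6e9 * 8   # 6 GB image
link_bps   = 10e9      # 10 Gb/s mountain-to-base link
print(f"{image_bits / link_bps:.1f} s per image")  # ~4.8 s, within the 15 s budget
```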

Klaus Honscheid, Ohio State University, RT07, Fermilab Base Camp Data Management A teraflop system for image analysis; disk I/O rates could exceed 50 GBytes/sec. Scientific alerts are generated within 36 seconds of image acquisition, and software alert generation must be % correct.

Klaus Honscheid, Ohio State University, RT07, Fermilab Archive Center Data Management Reprocess each night's images within 24 hours to produce the best science results, with 3x-4x the processing power of the base camp system. All non-time-critical analysis is done here; the center supports all external user inquiries and is the ultimate repository of 10 years of images.

Klaus Honscheid, Ohio State University, RT07, Fermilab Summary and Conclusion Dark Energy science requires large-scale surveys, and large-scale surveys require dedicated instruments (gigapixel CCD cameras), large(r) collaborations, state-of-the-art computing and software technology, and good people. Post-doc positions available: a PhD in physics or astronomy and an interest in DAQ and real-time software required. Special thanks to A. Prosser, G. Schumacher, M. Huffer, G. Haller, and N. Kaiser.