Report on CHEP ’06 David Lawrence

The conference had many participants, but was clearly dominated by the LHC. The LHC has 4 major experiments: ALICE, ATLAS, CMS, and LHCb. Collaboration sizes: ~500, ~1000, ~1000, ~2000.

LHC Event and Data Rates

  Experiment   Event size (bytes)   L1 trigger rate   Data rate
  ATLAS        1.6 x 10^6           75 kHz            120 GB/s
  CMS          1.0 x 10^6           100 kHz           100 GB/s
  ALICE        1.0 x ...            ... kHz           20 GB/s
  LHCb         3.5 x 10^3           1 MHz             3.5 GB/s
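
As a quick consistency check on the table, the data-rate column is just the event size times the Level-1 accept rate; for the ATLAS row (the symbols below are my own notation, not from the talk):

\[
R_\mathrm{data} = S_\mathrm{event} \times f_\mathrm{L1}
               = 1.6\times10^{6}\,\mathrm{B} \times 75\times10^{3}\,\mathrm{s^{-1}}
               = 1.2\times10^{11}\,\mathrm{B/s} = 120\ \mathrm{GB/s}.
\]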

PHENIX The DAQ system used LZO to compress raw data after the event builder:
- Reduced bandwidth requirement
- Reduced disk space requirement
- Extended the time data are allowed to linger in the buffer boxes
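
The idea is simply to run a fast, block-level compressor on each built event before it is written out. Below is a minimal sketch of that step using the LZO1X-1 compressor from the liblzo2 C library; buffer names and sizes are illustrative and this is generic library usage, not the PHENIX event-builder code.

```cpp
// Illustrative only: compress one built event with LZO1X-1 (liblzo2).
// Buffer names/sizes are hypothetical; this is not the PHENIX code.
#include <lzo/lzo1x.h>
#include <cstdio>
#include <vector>

std::vector<unsigned char> compress_event(const std::vector<unsigned char>& raw)
{
    // Work memory required by the LZO1X-1 algorithm.
    std::vector<unsigned char> wrkmem(LZO1X_1_MEM_COMPRESS);

    // Worst case: LZO can expand incompressible data slightly.
    std::vector<unsigned char> out(raw.size() + raw.size() / 16 + 64 + 3);

    lzo_uint out_len = out.size();
    int rc = lzo1x_1_compress(raw.data(), raw.size(),
                              out.data(), &out_len, wrkmem.data());
    if (rc != LZO_E_OK) {
        std::fprintf(stderr, "LZO compression failed (rc=%d)\n", rc);
        return raw;                 // fall back to storing uncompressed
    }
    out.resize(out_len);
    return out;
}

int main()
{
    if (lzo_init() != LZO_E_OK) return 1;            // library must be initialized once
    std::vector<unsigned char> event(1 << 20, 0x42); // fake 1 MB "event"
    std::vector<unsigned char> packed = compress_event(event);
    std::printf("%zu -> %zu bytes\n", event.size(), packed.size());
    return 0;
}
```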

PHENIX Buffer boxes: the longer you can buffer data, the better you can take advantage of breaks in the data flow. Six reasonably cheap Linux-based buffer boxes (~20k each) allow 40 hours of data taking without tape access. The L3 trigger can be run on the disk-resident data, so it only needs to keep up with the average event rate, not the peak rate.
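
The peak-versus-average argument is easy to see with a toy occupancy model: while beam is on the boxes fill at the difference between the DAQ input rate and the L3 drain rate, and they empty during gaps. All rates, times, and capacities in the sketch below are made-up illustrative numbers, not PHENIX parameters.

```cpp
// Toy model of a disk buffer that decouples peak DAQ rate from average L3 rate.
// Every number here is hypothetical; this only illustrates the bookkeeping.
#include <algorithm>
#include <cstdio>

int main()
{
    const double daq_rate_mb_s  = 400.0;      // input while beam is on (hypothetical)
    const double l3_drain_mb_s  = 250.0;      // steady L3/tape drain rate (hypothetical)
    const double beam_on_hours  = 8.0;        // length of a store
    const double beam_off_hours = 4.0;        // gap between stores
    const double capacity_mb    = 6.0 * 4e6;  // six boxes, ~4 TB each (hypothetical)

    double occupancy_mb = 0.0;
    for (int cycle = 0; cycle < 5; ++cycle) {
        // Beam on: the buffer fills at (input - drain).
        occupancy_mb += (daq_rate_mb_s - l3_drain_mb_s) * beam_on_hours * 3600.0;
        occupancy_mb  = std::min(occupancy_mb, capacity_mb);
        std::printf("cycle %d: after store %7.0f GB buffered\n", cycle, occupancy_mb / 1024.0);

        // Beam off: L3 keeps draining at its average rate.
        occupancy_mb -= l3_drain_mb_s * beam_off_hours * 3600.0;
        occupancy_mb  = std::max(occupancy_mb, 0.0);
        std::printf("cycle %d: after gap   %7.0f GB buffered\n", cycle, occupancy_mb / 1024.0);
    }
    return 0;
}
```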

PHENIX Analysis Trains: improve overall use of resources by limiting random access to data files. About every 3 weeks a new train is started, and jobs register with the train ahead of time. Tapes are read in tape order and all registered jobs run over the data at once.
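
The pattern amounts to inverting the usual loop: instead of each analysis job pulling files in its own order, the train reads each file once, in tape order, and hands every event to all registered jobs. A minimal sketch of that structure follows; all class, file, and method names are invented for illustration, not the PHENIX framework.

```cpp
// Sketch of the "analysis train" pattern: one sequential pass over the data,
// every registered analysis module sees each event. All names are hypothetical.
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

struct Event { long id; /* ... detector data ... */ };

class AnalysisModule {                         // one passenger "wagon" on the train
public:
    virtual ~AnalysisModule() = default;
    virtual void process(const Event& evt) = 0;
};

class DimuonAnalysis : public AnalysisModule {
public:
    void process(const Event& evt) override { ++n_; (void)evt; }
    ~DimuonAnalysis() override { std::printf("DimuonAnalysis saw %ld events\n", n_); }
private:
    long n_ = 0;
};

class Train {
public:
    void register_module(std::unique_ptr<AnalysisModule> m) { modules_.push_back(std::move(m)); }

    // Files are processed in the order the tape system delivers them (tape order),
    // and each event is fanned out to every module: one read, many analyses.
    void run(const std::vector<std::string>& files_in_tape_order) {
        for (const auto& file : files_in_tape_order) {
            for (Event evt : read_events(file)) {
                for (auto& m : modules_) m->process(evt);
            }
        }
    }

private:
    static std::vector<Event> read_events(const std::string& file) {
        (void)file;                            // stand-in for real staging + decoding
        return {{1}, {2}, {3}};
    }
    std::vector<std::unique_ptr<AnalysisModule>> modules_;
};

int main()
{
    Train train;
    train.register_module(std::make_unique<DimuonAnalysis>());
    train.run({"run1234.prdf", "run1235.prdf"});   // tape-order file list (made up)
    return 0;
}
```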

PHENIX 270 TB of data was shipped to a computing center in Japan for processing. This was done by non-experts manning shifts.

ROOT ROOT is now 11 years old and represents ~500 man-years of effort

ROOT The projected 2016 laptop: 32 processors, 16 GB RAM, 16 TB of disk. Multi-core processors are at the doorstep and better multi-threading support is needed (a simple event-level threading sketch follows this list) in:
- PROOF
- I/O (read-ahead)
- Fitting/minimization
- …
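
One flavor of the multi-threading being asked for is event-level parallelism: many worker threads pulling events from a shared queue. The sketch below shows only that generic idea with std::thread; it is not ROOT's (or any framework's) actual implementation, and the queue and event types are invented.

```cpp
// Minimal event-level parallelism sketch with std::thread.
// Generic illustration only; queue and event types are invented.
#include <algorithm>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Event { long id; };

std::queue<Event> g_queue;          // shared work queue of events
std::mutex        g_mutex;          // protects the queue

void worker(int wid)
{
    for (;;) {
        Event evt;
        {
            std::lock_guard<std::mutex> lock(g_mutex);
            if (g_queue.empty()) return;      // no more events to process
            evt = g_queue.front();
            g_queue.pop();
        }
        // "Reconstruct" the event here; just a placeholder printout.
        std::printf("worker %d processed event %ld\n", wid, evt.id);
    }
}

int main()
{
    for (long i = 0; i < 100; ++i) g_queue.push({i});   // fill with fake events

    const unsigned n = std::max(2u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < n; ++w) pool.emplace_back(worker, static_cast<int>(w));
    for (auto& t : pool) t.join();
    return 0;
}
```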

ROOT STL and templates take significantly longer to compile. Shared libraries become less efficient as the number of packages implemented in them grows. The new ROOT will be BOOT.

GDML Witold Pokorski of the GDML development team presented. Repeated structures are now supported. The AGDD developers seem to think they have a better product and are still a bit bitter. I suggested GDML implement the ability to apply tweaks in a separate file from the default geometry (they’ll think about it).
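
For context, GDML is an XML geometry description that applications load through a parser. The fragment below shows one common route, Geant4's G4GDMLParser, for reading a GDML file and retrieving the world volume; the file name is a placeholder and this is just generic usage (meant to be called from an application's detector construction), not anything specific to the new repeated-structures support.

```cpp
// Generic example of loading a GDML geometry in a Geant4 application.
// "detector.gdml" is a placeholder file name.
#include "G4GDMLParser.hh"
#include "G4LogicalVolume.hh"
#include "G4VPhysicalVolume.hh"
#include <iostream>

G4VPhysicalVolume* LoadGeometry(const G4String& gdml_file)
{
    G4GDMLParser parser;
    parser.Read(gdml_file);                    // parse the XML and build the Geant4 geometry

    G4VPhysicalVolume* world = parser.GetWorldVolume();
    std::cout << "World volume: "
              << world->GetLogicalVolume()->GetName()
              << " with "
              << world->GetLogicalVolume()->GetNoDaughters()
              << " daughters" << std::endl;

    return world;   // hand back to the detector-construction / run-manager setup
}
```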

What do you know, people do develop and run physics analysis software on MS Windows!

Object Persistence Some LHC experiments use POOL, which uses Reflex outside of ROOT and then defines TTrees using the Reflex information.

Object Persistence ROOT I/O has many new features making it a more viable candidate for general use. ROOT I/O is based on the Reflex project, which uses gccxml.
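
The practical upshot is that, once dictionary/reflection information exists for a class, ROOT can stream arbitrary user objects to a TTree. A minimal, generic example is below; the class name and file name are invented, and dictionary generation (e.g. via rootcint or ACLiC) is assumed and not shown.

```cpp
// Minimal example of persisting a user class with ROOT I/O.
// "MyEvent" and "events.root" are invented; a ROOT dictionary for MyEvent
// (e.g. generated with rootcint or ACLiC) is assumed to be available.
#include "TFile.h"
#include "TTree.h"
#include <vector>

class MyEvent {
public:
    int                 run    = 0;
    long                number = 0;
    std::vector<double> hits;        // streamed automatically once a dictionary exists
};

void write_events()
{
    TFile file("events.root", "RECREATE");
    TTree tree("T", "example event tree");

    MyEvent* event = new MyEvent;
    tree.Branch("event", &event);    // ROOT uses the class's reflection info to split/stream it

    for (long i = 0; i < 1000; ++i) {
        event->run    = 1;
        event->number = i;
        event->hits.assign(3, 0.5 * i);
        tree.Fill();
    }

    tree.Write();
    file.Close();
    delete event;
}
```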

IEEE Transactions on Nuclear Science A refereed journal pushing for publications from CHEP-type work (software). “Hardware guys are very good at publishing. Software guys need to do better than they are.”

Conclusions
- An abstract really, really should have been submitted for DANA.
- We need to publish (e.g. IEEE TNS).
- ROOT is very strongly supported and will continue to be developed over at least the next 6 years.
- CHEP 2007 will be in Victoria, BC. I will be there.