Scale of Hall D Computing
Larry Dennis, FSU
05/14/04

CEBAF provides us with a tremendous scientific opportunity for understanding one of the fundamental forces of nature.

(Diagram: data rates of 100 MB/s and 1 GB/s.)
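For a sense of scale, here is a back-of-the-envelope sketch (not from the slide) of what the quoted rates imply for annual data volume. The 1e7 seconds of effective running per year, and reading the 1 GB/s figure as the online rate before event filtering, are assumptions rather than slide content.

```python
# Illustrative only: annual data volume implied by the rates on the title slide.
# The 1e7 s/year of effective beam time, and the interpretation of the 1 GB/s
# figure as the online rate before reduction, are assumptions.

RAW_TO_TAPE_MB_S = 100.0      # 100 MB/s, from the slide
ONLINE_MB_S = 1000.0          # 1 GB/s, from the slide (interpretation assumed)
BEAM_SECONDS_PER_YEAR = 1e7   # assumed effective running time per year

def annual_volume_tb(rate_mb_s, seconds=BEAM_SECONDS_PER_YEAR):
    """Convert a sustained rate in MB/s into TB accumulated per year."""
    return rate_mb_s * seconds / 1e6   # 1 TB = 1e6 MB

if __name__ == "__main__":
    print(f"To tape: ~{annual_volume_tb(RAW_TO_TAPE_MB_S):,.0f} TB/year")
    print(f"Online:  ~{annual_volume_tb(ONLINE_MB_S):,.0f} TB/year equivalent")
```

Under these assumptions the 100 MB/s stream alone corresponds to roughly a petabyte of raw data per year.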

Requirements Summary

Some Comparisons: Hall D vs. Other HENP Experiments

(Comparison table: people, CPU [SI95/year], disk cache [TB], data rates [MB/s], and data volume to tape [TB/year] for Hall D, JLab/CLAS at present, BaBar, D0/CDF Run II, STAR, US ATLAS Tier 1, and CMS; the individual entries are not legible in the transcript.)

This is not just an issue of equipment: these experiments all have the support of large, dedicated computing groups within the experiment and well defined computing models.

Hall D Computing Tasks

(Diagram of tasks: Acquisition, Monitoring, Slow Controls, Data Archival, Calibrations, First Pass Analysis, Data Mining, Simulation, Physics Analysis, Partial Wave Analysis, Planning, Publication.)

Hall D Software

We must accomplish all of the tasks that other experiments must accomplish: Data Acquisition, Slow Controls, Calibration, Reconstruction, Simulation, Physics Analysis, etc. We need to make a dramatic improvement in what we can do with limited resources.

Goals for the Computing Environment

1. The experiment must be easy to conduct (for the software people: the system must be automated sufficiently that only two people are required to run the experiment).
2. Everyone can participate in solving experimental problems, no matter where they are located.
3. The calibration effort can keep up with data acquisition.
4. Offline analysis can more than keep up with the online acquisition (factor of 2?).

(continued…)

Goals for the Computing Environment (continued)

5. Simulations can more than keep up with the online acquisition (factor of 10?); a rough sizing sketch of these headroom factors follows this list.
6. Production of tracks/clusters from raw data and simulations can be planned, conducted, monitored, validated, and used by a group.
7. Production of tracks/clusters from raw data and simulations can be conducted automatically, with group monitoring.
8. Subsequent analysis can be done automatically, with group participation if individuals so choose.
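To make the "factor of 2" (reconstruction) and "factor of 10" (simulation) goals concrete, here is a rough sizing sketch. The event rate and the per-event CPU times below are invented placeholders, not numbers from the talk; only the arithmetic is the point.

```python
# Purely illustrative sizing sketch: every number here is an assumed placeholder.

EVENT_RATE_HZ = 20_000        # assumed accepted event rate
RECO_CPU_S = 0.05             # assumed reconstruction CPU time per event (s)
SIM_CPU_S = 0.5               # assumed simulation CPU time per event (s)
RECO_HEADROOM = 2             # goal: offline keeps up with 2x acquisition
SIM_HEADROOM = 10             # goal: simulation keeps up with 10x acquisition

def cores_needed(rate_hz, cpu_s_per_event, headroom):
    """CPU cores required to sustain `headroom` times the live event rate."""
    return rate_hz * cpu_s_per_event * headroom

print(f"Reconstruction: ~{cores_needed(EVENT_RATE_HZ, RECO_CPU_S, RECO_HEADROOM):,.0f} cores")
print(f"Simulation:     ~{cores_needed(EVENT_RATE_HZ, SIM_CPU_S, SIM_HEADROOM):,.0f} cores")
```

The headroom factors multiply directly into farm size; the real inputs would come from the requirements summary above.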

Meeting the Hall D Computational Challenges

Moore's Law: computer performance increases by a factor of 2 every 18 months.
Gilder's Law: network bandwidth triples every 12 months.

Solving the information management problems requires people working on the software and developing a workable computing environment.

Dennis' Law: neither Moore's Law nor Gilder's Law will solve our computing problems.
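As an aside (not on the slide), the compounding behind these two laws is easy to tabulate; the four-year horizon in the sketch below is an assumed planning window.

```python
# Compounding implied by the two "laws" quoted above.
# The 4-year horizon is an assumed planning window, not from the slide.

HORIZON_YEARS = 4

def moore_factor(years):
    """Moore's Law as quoted: performance doubles every 18 months."""
    return 2 ** (years * 12 / 18)

def gilder_factor(years):
    """Gilder's Law as quoted: bandwidth triples every 12 months."""
    return 3 ** years

print(f"CPU performance over {HORIZON_YEARS} years: x{moore_factor(HORIZON_YEARS):.1f}")
print(f"Network bandwidth over {HORIZON_YEARS} years: x{gilder_factor(HORIZON_YEARS):.1f}")
```

Even growth factors of this size do not address the information management problems, which is the point of the third "law."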

Limiting Resources?

Which of the following resources is (are) likely to be the rate-limiting resource(s)?

A. Money
B. Data Acquisition
C. CPU Speed
D. Network Speed
E. Storage Capacity
F. People

How Can We Significantly Advance Computing Technology?

Provide an extremely efficient working environment. Focus on making physicists more efficient:
- Workflow and organization.
- Getting them the information they need when they need it.
- Automate, automate, automate! (See the sketch below.)

Do we need bleeding-edge software engineering? Do we need the most optimized codes? We need to make the case now, through software innovation, for more efficient performance later on.
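As a purely hypothetical illustration of the "automate" theme, the sketch below strings production steps into a pipeline that runs unattended and leaves a status log the whole group can inspect. All names and steps are invented for illustration; this is not an existing Hall D tool.

```python
# Hypothetical sketch of the kind of automation argued for above: a chain of
# production steps that runs unattended and records a status log for the
# collaboration to inspect. All names and steps are invented for illustration.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    name: str
    run: Callable[[], bool]   # returns True on success

@dataclass
class Pipeline:
    steps: List[Step]
    log: List[str] = field(default_factory=list)

    def execute(self) -> List[str]:
        for step in self.steps:
            ok = step.run()
            self.log.append(f"{step.name}: {'OK' if ok else 'FAILED'}")
            if not ok:
                break             # stop and flag for human follow-up
        return self.log

# Toy usage: calibrate -> reconstruct -> summarize, each step a stub here.
pipeline = Pipeline([
    Step("calibrate",   lambda: True),
    Step("reconstruct", lambda: True),
    Step("summarize",   lambda: True),
])
print("\n".join(pipeline.execute()))
```

The design point is simply that each step reports status to a shared log rather than to one operator's terminal, so anyone in the collaboration can monitor or take over.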

Critical Role for Computing in Hall D

The quality of Hall D science depends critically upon the collaboration's ability to conduct its computing tasks.

Efficient Information Access is Key to Hall D Success

(Diagram: data acquisition provides raw data and experimental conditions; together with information from researchers, these feed the Hall D experimental information used by calibrations, simulations, data reduction, physics analysis, and PWA.)