Run II Review Closeout 15 Sept., 2004 FNAL

Thanks!
…all the hard work from the reviewees
 –And all the speakers
…hospitality of our hosts
Good progress since the last review. Things ARE working.

Data Handling
CD has an impressive data archive with a few PB of Run II data.
Enstore seems to be quite mature and to work very well.
There is no agreed-upon strategy for data dispersal or duplication, nor agreement on whether such a strategy is needed.
CDF and D0 have demonstrated high-throughput data access through the use of dCache and SAM, respectively.
CD has successfully managed past instances of tape technology evolution and seems aware of the steps needed to bridge future evolution in this technology.
While the use of SAM in D0 is ubiquitous, CDF should accelerate its buy-in to this technology. The CDF decision to place its new data only in SAM is commendable.
We have not seen particular indications that SAM does not scale, though the need for widely distributed processes to update a database may present technical challenges (a sketch of one way to contain this follows below).
CDF, in particular, seems to be reaping the benefit of the substantial effort CD invested, together with DESY and CMS, to develop and deploy dCache.
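The scalability concern above is essentially about many remote jobs writing metadata back to one central database. As a purely illustrative sketch (the catalog endpoint and declare_files() helper are hypothetical, and this is not SAM's actual interface), a worker can batch its file records and retry with backoff rather than issuing one synchronous catalog update per file:

```python
# Hypothetical sketch only -- not the actual SAM interface.  Illustrates
# batching and retrying metadata updates from remote workers so the central
# catalog database is not hit once per file by every distributed process.
import json
import time
import urllib.request

CATALOG_URL = "https://catalog.example.org/declare"  # placeholder endpoint


def declare_files(records, max_retries=5):
    """Send a batch of file-metadata records to the central catalog,
    backing off and retrying if the catalog is busy or unreachable."""
    payload = json.dumps(records).encode()
    for attempt in range(max_retries):
        try:
            req = urllib.request.Request(
                CATALOG_URL, data=payload,
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=30):
                return True
        except OSError:
            time.sleep(2 ** attempt)  # exponential backoff keeps load off the DB
    return False  # caller keeps the batch and flushes it later


# A worker accumulates records locally and ships them in one round trip.
pending = [{"file": f"raw_{i:06d}.dat", "events": 25000} for i in range(100)]
if not declare_files(pending):
    print("catalog unavailable; batch kept for a later flush")
```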

Reconstruction Farms
The factor of 2 gain in D0 reconstruction speed was impressive.
Due to the increased luminosity this year, a further speed-up of d0reco is necessary.
 –We applaud the fast response of CD to the needs of the experiment by supplying people to the task force.
D0 plans to use SAMGrid in the p17 reprocessing, which is a step forward.
CDF appears to have enough headroom in reconstruction capacity given the move to 1-pass processing and limited reprocessing.
CDF is planning to merge CAF and reconstruction resources, which would help manage any fluctuating processing needs.
 –Is this in line with CD's overall plan? It probably needs closer consultation with CD.

Monte Carlo Production Farms
We congratulate both experiments on producing most of their MC samples offsite.
CDF needs to solve the problem of manual intervention in concatenation.
 –CDF is encouraged to use SAM to automate the bookkeeping and increase operational efficiency (see the sketch after this slide).
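What "automating the bookkeeping" of concatenation could look like, as a minimal sketch under stated assumptions: the merge_files() helper and the JSON parentage catalog below are hypothetical stand-ins, not SAM commands; the point is only that the merged output's parentage is recorded by the same step that performs the merge, with no manual ledger.

```python
# Hypothetical sketch: record which small MC files went into each merged
# output as part of the concatenation step itself.
import json
from pathlib import Path


def merge_files(inputs, output):
    """Placeholder concatenation: stands in for the experiment's real merge
    tool; here we simply byte-concatenate the inputs for illustration."""
    with open(output, "wb") as out:
        for name in inputs:
            out.write(Path(name).read_bytes())


def concatenate_with_bookkeeping(small_files, output, catalog="parentage.json"):
    merge_files(small_files, output)
    # Update the parentage record in the same step as the merge.
    book = json.loads(Path(catalog).read_text()) if Path(catalog).exists() else {}
    book[output] = sorted(small_files)
    Path(catalog).write_text(json.dumps(book, indent=2))


# e.g. concatenate_with_bookkeeping(["mc_001.dat", "mc_002.dat"], "mc_merged_000.dat")
```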

Remote Analysis/Production
1. We congratulate the experiments on developing and using offsite computing: 50% in the case of CDF next year, and significant resources in D0.
2. The Run II experiments have clearly bought into the fact that they need to be grid-enabled to continue to do analysis in the LHC era.
3. CDF has a clearly defined migration path which eventually leads to SAMGrid (SAM/JIM). In a year or two there will be no other options.
4. We look forward to seeing D0 do full raw-data reprocessing using SAMGrid on remote sites this winter. This requires access to the database, which is new functionality for SAMGrid.
5. We are happy to see the formation of the Run II computing group and hope that it leads to increased communication, joint development of SAMGrid, and work on interoperability with the various HEP Grids.
6. Event size is an important aspect of data handling and movement on Grids, and the experiments seem to be converging on a useful event size of KBytes. They should continue to work on this and seek further improvements (a rough transfer-time estimate is sketched after this list).
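Why event size matters so much for point 6: a back-of-the-envelope estimate. All numbers below are illustrative assumptions, not figures quoted in the review; the only point is that event size scales linearly into both sample volume and wide-area transfer time.

```python
# Back-of-the-envelope only: every number here is an assumption chosen for
# illustration, not a Run II figure.
event_size_kb = 100      # assumed event size in kilobytes
events        = 1.0e9    # assumed number of events in a sample
wan_rate_mb_s = 100      # assumed sustained wide-area transfer rate, MB/s

sample_tb    = event_size_kb * events / 1e9                # kB -> TB
days_to_ship = sample_tb * 1e6 / wan_rate_mb_s / 86400     # TB -> MB -> s -> days

print(f"sample size : {sample_tb:.1f} TB")                               # ~100 TB
print(f"transfer    : {days_to_ship:.1f} days at {wan_rate_mb_s} MB/s")  # ~11.6 days
# Halving the event size halves both numbers, which is why continued work on
# event-size reduction is encouraged.
```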

Networking
The overall plan seems good.
The plan to keep ESnet as the main provider is good.
The MAN needs to be put in place quickly.
Involvement in StarLight R&D is also good.

Planning/Management/Funding
CD seems to have solved its space/cooling problems with the moves to NMF and HDCF.
CD common projects and collaboration with other experiments:
 –Scientific Linux, for example; we encourage bringing in the larger scientific community.
Common projects of CDF/D0 and CD are progressing…
 –Communications between the two experiments need to improve.
 –CD should continue to press the two experiments toward these common projects.
Grid area:
 –The three experiments (CMS/D0/CDF) need to work on a common solution for sharing FNAL compute resources: interoperating CAB/CAF/…all FNAL resources.
 –CD needs to clarify its overall strategy for building this FNAL synergy, which includes "interoperating" on global grids.
SAM and CDF:
 –CDF management needs to push this and find the manpower to make it happen.
Planning for computing resources needs to be more formal.
 –Particularly for CDF, we have a hard time understanding how priorities are established.
Funding:
 –FNAL needs to be clearer on the budget numbers. Is there going to be a tax (like the UPS from last year)?
 –The experiments need to maintain contingency in their budgets.
 –We (the committee) are still unclear how tight the budget situation is…