Status of PRS/Muon Activities
D. Acosta, University of Florida
US CMS S&C and Physics Meeting, July 26, 2002
Work toward the “June” HLT milestones
US analysis environment

Slide 2: HLT Milestones
The June HLT milestones are:
- Complete the HLT selection for the high-luminosity scenario
- HLT results on B physics
- CPU analysis for the high-luminosity selection
- Repeat the on-line selection for low luminosity
We must have results in the DAQ TDR by September!
We do not have these results yet, but the current status and the L1 results were reported at the June CMS Week:
- The HLT muon code had severe crashes, infinite loops, and memory leaks that prevented collecting any statistics on our HLT algorithms.
- After a monumental debugging effort, the crashes were traced to incorrect use of “ReferenceCounted” objects: the user must never delete such an object, even if the user performed the new (see the sketch after this slide).
- In the L1 results, a rate spike appeared at η = 1.6. New holes in the geometry?
- The problems were “fixed” for the ORCA_6_2_0 release.
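The ownership rule behind those crashes can be illustrated with a minimal, self-contained sketch. The names below (RefCounted, RefPtr, MuonSeed) are hypothetical stand-ins, not the actual COBRA/ORCA classes; the only point is that once an object has been handed to a counting handle, an explicit delete by the user produces a double deletion.

```cpp
// Minimal sketch of intrusive reference counting, assuming a scheme similar
// to the one described on the slide. Hypothetical names, not ORCA/COBRA API.

class RefCounted {
public:
  RefCounted() : count_(0) {}
  virtual ~RefCounted() {}
  void addReference()    { ++count_; }
  void removeReference() { if (--count_ == 0) delete this; }  // the handle, not the user, deletes
private:
  unsigned int count_;
};

template <class T>
class RefPtr {  // simple intrusive smart pointer
public:
  explicit RefPtr(T* p = 0) : p_(p) { if (p_) p_->addReference(); }
  RefPtr(const RefPtr& o) : p_(o.p_) { if (p_) p_->addReference(); }
  ~RefPtr() { if (p_) p_->removeReference(); }
  RefPtr& operator=(const RefPtr& o) {
    if (o.p_) o.p_->addReference();
    if (p_)   p_->removeReference();
    p_ = o.p_;
    return *this;
  }
  T* operator->() const { return p_; }
private:
  T* p_;
};

class MuonSeed : public RefCounted { /* ... */ };

int main() {
  MuonSeed* raw = new MuonSeed;   // the user calls new ...
  RefPtr<MuonSeed> seed(raw);     // ... but the handle takes ownership
  // delete raw;                  // WRONG: double delete when 'seed' goes out of scope
  return 0;                       // correct: deleted when the last RefPtr disappears
}
```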

Slide 3: L1 CSC Rate Spike
- Contributes ~1 kHz to the L1 rate
- The spike shows up in both η and φ
- Region of the crack between barrel and endcap
- Traced to an ambiguity in the pT assignment for low-pT muons (or punch-through)
- Fixed in the CSC Track-Finder (but not sure why this is a problem only now)

Slide 4: Not out of the woods…
Tony reports that the muon RootTreeMaker has a massive memory leak (200–500 kB/event):
- Analysis stopped at CERN (batch nodes were dying)
- But the muon HLT code alone was shown to have a leak of “only” 16 kB/event when released
  - So is it because events have more occupancy with pile-up, or is it because of the jet/tau/pixel code?
- Still under investigation
  - At FNAL, I find a leak of 800 kB/event for Z→μμ, and it is luminosity dependent (600 kB/event at 2×10^33); the kind of per-event bookkeeping behind such numbers is sketched below
  - Nicola Amapane promises some results (fix?) this evening
Moreover, the DT reconstruction is known to have some deficiencies.
So we have a two-fold plan:
- Continue the current Root Tree production at remote sites to get a set of baseline results for the HLT milestone
  - We already had Pt1, Pt10, and Pt4 (low lumi) done before CERN shut down
  - Can Fermilab help? Run ~1000 short jobs on Pt4 if the leak is not fixed
- Push hard to get the new HLT reconstruction code written to solve the remaining problems in time for the September deadline
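For reference, per-event leak figures like those above can be obtained with very simple instrumentation. The sketch below is one way to do it on Linux, reading the resident set size from /proc/self/status around a dummy event loop; the event-processing call is a placeholder, not ORCA code.

```cpp
// Rough sketch of per-event memory bookkeeping (Linux-specific).
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

long residentKB() {
  std::ifstream status("/proc/self/status");
  std::string line;
  while (std::getline(status, line)) {
    if (line.compare(0, 6, "VmRSS:") == 0) {  // resident set size, reported in kB
      std::istringstream in(line.substr(6));
      long kb = 0;
      in >> kb;
      return kb;
    }
  }
  return -1;
}

int main() {
  const int nEvents = 1000;
  long before = residentKB();
  for (int i = 0; i < nEvents; ++i) {
    // processEvent();  // placeholder for the real per-event work
  }
  long after = residentKB();
  if (before >= 0 && after >= 0)
    std::cout << "growth: " << double(after - before) / nEvents
              << " kB/event" << std::endl;
  return 0;
}
```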

Slide 5: Status of New Reconstruction Code
Stefano Lacaprara has authored a new version of the DT reconstruction code:
- Corrects some naïve handling of the DT hits, incorrect pulls, new code organization, …
  - It turns out the DT reconstruction must know the drift velocity to ~1% (a rough illustration of why follows below)
This code has been examined (some bugs fixed) and cleaned up by Bart Van de Vyver (and also by Norbert and Nicola).
Aiming to get new results in August, hopefully with the new reconstruction code.
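A back-of-the-envelope calculation shows why ~1% knowledge of the drift velocity matters. The numbers below (drift velocity of about 55 μm/ns, half-cell of about 2.1 cm, single-hit resolution of about 250 μm) are typical values for the CMS drift tubes, used here only as assumptions to set the scale; they are not quoted on the slide.

```cpp
// Scale of the position bias induced by a 1% error on the drift velocity,
// compared with the intrinsic single-hit resolution of a DT cell.
#include <iostream>

int main() {
  const double vDrift     = 55e-4;   // cm/ns, assumed nominal drift velocity
  const double maxDriftCm = 2.1;     // cm, assumed half-cell width
  const double relError   = 0.01;    // 1% uncertainty on vDrift
  const double hitResCm   = 250e-4;  // cm, assumed single-hit resolution

  // x = vDrift * t, so a relative error dv/v gives dx = x * (dv/v) at drift distance x.
  double maxTimeNs = maxDriftCm / vDrift;
  double dxCm      = relError * maxDriftCm;

  std::cout << "max drift time        ~ " << maxTimeNs     << " ns\n"
            << "bias from 1% on vDrift ~ " << dxCm * 1e4   << " um at the cell edge\n"
            << "single-hit resolution  ~ " << hitResCm * 1e4 << " um\n";
  return 0;  // the bias is comparable to the resolution, hence the ~1% requirement
}
```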

Slide 6: Muon Analysis at Fermilab
A request has been made to get the Muon Federations copied from CERN:
- Pt4 single-muon sample at highest priority
  - Shafqat predicts it will be copied by Monday
- Pt1, Pt10, W, Z, and t-tbar to follow
- Z→μμ (on-peak and above) already available
Root Trees will be copied as well, when available.
US users thus have at least one local choice for an analysis center, in addition to CERN.
The mechanism to obtain an FNAL visitor ID and computer accounts remotely works well (thanks, Hans…).
When the Pt4 sample is ready, the PRS/Muon group is interested in running a large number of RootTreeMaker jobs at FNAL:
- INFN is still trying to copy the Pt4 tracker digis (0.7 TB)

Slide 7: Florida Prototype Tier-2 Center
We currently host the Z→μμ samples in Florida:
- but only for local accounts at the moment, I think; eventually they should be accessible world-wide
Limited by available disk space:
- Several TB of RAID ordered
Problematic analysis environment:
- Although the production environment is working quite well with DAR distributions, the analysis environment (where users can compile and run jobs) is a little unstable
  - Some difficulties building code in ORCA6, perhaps traced to using RH6.2 vs. RH6.1 (loader version)
- Need a more standard way to set up the analysis environment
  - I think INFN also had some initial difficulties getting ORCA_6_2_0 installed and working
Should be solved once key people come back from vacation.

Slide 8: Side Note…
For better or worse, we are working with a complex set of software for CMS:
- Definitely not easy for newcomers to contribute to development or to debugging (or to create a DB)
  - Case in point: how can a summer student plug a new L2 module into ORCA?
- Many layers to the ORCA software, difficult to navigate, little documentation of the “common” classes
  - Sometimes counterintuitive rules must be followed
- The complexity is probably partly intrinsic to ORCA/COBRA, and partly due to inexperienced physicists working in this environment
That being the case, we MUST have professional tools for development and debugging:
- Must be able to debug and profile the code, and check for memory leaks, corruption, etc.
- This is standard for CDF, and the reliability of production code has increased dramatically
- Requires analysis workstations with enough memory to handle these tools
We should start defining a set of validation plots to show problems early in production (one possible starting point is sketched below).
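As a concrete, if minimal, starting point for such validation plots, the sketch below fills a couple of coarse per-event distributions and writes them to a ROOT file so that production runs can be compared. The choice of quantities (reconstructed muons per event, muon pT) is illustrative only, not an agreed PRS/Muon list, and the dummy values stand in for real reconstruction output.

```cpp
// Minimal validation-histogram sketch using ROOT.
#include "TFile.h"
#include "TH1F.h"

int main() {
  TFile out("muonValidation.root", "RECREATE");
  TH1F hNMuons("hNMuons", "Reconstructed muons per event", 10, -0.5, 9.5);
  TH1F hPt("hPt", "Muon p_{T} [GeV]", 50, 0., 100.);

  // In a real job these values would come from the reconstruction output;
  // here two dummy events stand in for the event loop.
  double dummyPt[2][2] = { {4.2, 37.0}, {12.5, 0.0} };
  int    dummyN[2]     = { 2, 1 };
  for (int ev = 0; ev < 2; ++ev) {
    hNMuons.Fill(dummyN[ev]);
    for (int m = 0; m < dummyN[ev]; ++m) hPt.Fill(dummyPt[ev][m]);
  }

  out.Write();  // histograms attached to the open file are saved here
  out.Close();
  return 0;
}
```

Comparing these distributions run by run (or site by site) would catch problems like the CSC rate spike or the geometry holes mentioned earlier before a full production is committed.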