Data-coordinator report, Lian-You SHAN, ACC15, Jan 9, 2010, IHEP

Outline
- T3 resources
- Data quality (DQ) aware analyses

T3 resources
- Two UIs & 128 CPUs
  - 40 for the MC queue & 80 for analyses
  - Good batch-job performance with V14.x MC from T2
- 80 TB storage
  - data0 (1 TB) for frequent backups
  - data1 (Calo) & data2 (Muon): public real data
  - data3: private real data
  - data4: public Monte Carlo samples
  - data5: private MC samples
- Please DO NOT abuse the disks
See ErMing's talk for details

Known data on T3
- Via T2 ( dpns-ls /dpm/ihep.ac.cn/home/atlas ); see the sketch below
- Real data in atlasdatadisk
  - MuonswBeam, L1CaloEM ...
  - data09_900GeV, data09_2TeV AOD (DESD)
  - Geometry/conditions tags?
- Monte Carlo samples in atlasmcdisk
  - The topmix electron sample is still missing
  - mc09_7TeV for W/Z, top ( & mc08_10TeV samples? )
  - Check against AMI for time & energy
  - Eric Lancon? Thanks to L.W. YAO
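A quick way to see what is actually on the T2 is to walk the namespace with the DPM client tools. A minimal sketch, assuming dpns-ls is in PATH and that the atlasdatadisk subdirectory exists exactly as named on the slide:

    # Sketch: list dataset directories on the T2 DPM namespace.
    # Assumptions: the DPM client tool dpns-ls is installed, and the
    # atlasdatadisk path below follows the layout quoted on the slide.
    import subprocess

    base = '/dpm/ihep.ac.cn/home/atlas/atlasdatadisk'
    out = subprocess.Popen(['dpns-ls', base],
                           stdout=subprocess.PIPE).communicate()[0]
    for name in out.decode().splitlines():
        print(name)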

Known data on T3 (ctd.)
- On local disks
- data1/public
  - zhangdl, wanghl, MC ...
  - DY, Zmumu, tau, Wtau ...
  - No V12 reconstruction yet || 14 TeV samples obsolete?
- data2
  - baiy, yuanl, zhanghq, ruanxf, zhanzc
- data4 / data5
  - data4 for common/big backgrounds ( W/Z, top ... )
  - data5 for your signal / special backgrounds
  - Please use the physics channel name for continuity, e.g. WZ, RPV, ttH?
Would you please try DPD?

DQ aware analyses
- A report from the Dec '09 ATLAS PP meeting
  - More data than physics/analyses
  - Readiness papers
  - Data-driven (DD) background analyses
  - Subsystems with data
  - Trigger, performance ...
  - Data quality
  - Analyses aware of data quality
How can we get involved in this HOT scene?!

Data quality (offline)
- Assigned per lumi-block ( not trigger aware )
  - Status of the main detectors, trigger ... over ~60 s
  - Also qualities from the combined-performance groups
  - Physics working groups can refine their own DQ
- Release of DQ flags
  - 5-color traffic-light flags
  - Stored mainly in the conditions DB (COOL); see the sketch below
  - Also in (d)AOD via metadata
  - Evolve/update over a timescale of months
  - Controlled by the DQ group ( good-run list = GRL )
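For orientation, a minimal PyCool sketch of browsing the per-lumiblock flags; the folder name, connection alias, run number, and the 'Code' payload field are assumptions based on the usual ATLAS conditions layout of the time, not a checked recipe:

    # Sketch: read per-lumiblock DQ flags from COOL with PyCool.
    # Assumptions: the DQ summary lives in /GLOBAL/DETSTATUS/LBSUMM, the
    # COOLOFL_GLOBAL/COMP200 alias resolves at your site, and the 'Code'
    # payload field holds the traffic-light value; adjust all three.
    from PyCool import cool

    dbSvc = cool.DatabaseSvcFactory.databaseService()
    db = dbSvc.openDatabase('COOLOFL_GLOBAL/COMP200', True)   # read-only
    folder = db.getFolder('/GLOBAL/DETSTATUS/LBSUMM')

    run = 142193                        # hypothetical run number
    since = (run << 32) | 1             # IOV key = (run << 32) | lumiblock
    until = (run << 32) | 0xFFFFFFFF
    objs = folder.browseObjects(since, until, cool.ChannelSelection.all())
    while objs.goToNext():
        obj = objs.currentRef()
        lb = obj.since() & 0xFFFFFFFF
        print('channel %d  LB %d  flag %s'
              % (obj.channelId(), lb, obj.payload()['Code']))
    db.closeDatabase()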

Analyses aware of DQ
- Understand the DQ flags
  - DQ work/contributions are also welcome
  - Possibly re-define the good-run list
- Access DQ flags in AOD/NTP runs
  - Use the provided good-run list
  - The only path so far ( a standalone check is sketched below )
- Browse DQ flags via COOL
  - dumpFileMetaData.py: quick & simple, until it has to match athena
  - LumiBlockMetaDataTool: within athena, but too complicated and demands care
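Before wiring anything into athena, the good-run list itself is easy to inspect standalone. A minimal sketch, assuming the standard GoodRunsLists XML layout (<LumiBlockCollection> / <Run> / <LBRange Start End>); check the tag names against an actual official file:

    # Sketch: standalone (run, lumiblock) check against a good-run list.
    # Assumes the usual GoodRunsLists XML layout with
    # <LumiBlockCollection>, <Run> and <LBRange Start=".." End=".."> tags.
    import xml.etree.ElementTree as ET

    def load_grl(path):
        good = {}
        for lbc in ET.parse(path).getroot().iter('LumiBlockCollection'):
            run = int(lbc.find('Run').text)
            good.setdefault(run, []).extend(
                (int(r.get('Start')), int(r.get('End')))
                for r in lbc.findall('LBRange'))
        return good

    def is_good(good, run, lb):
        return any(lo <= lb <= hi for lo, hi in good.get(run, []))

    grl = load_grl('official_GRL_XXYYZZ.xml')
    print(is_good(grl, 142193, 120))    # hypothetical run / lumiblock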

official_GRL usage
- Given an official list
  - official_GRL_XXYYZZ.xml ( possibly privately prepared )
  - Trace its evolution across reconstructions
- Interface it to user AOD processing
  - Configure one's jobOptions ( a sketch follows below )
  - Check one's output
    - Ntuple containing only events from good runs
    - Full/reasonable luminosity estimation
- Possible procedure to merge many GRLs
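As a hedged example of the jobOptions side (the tool and property names are from the GoodRunsLists package as commonly configured in releases of this era; verify against your release):

    # jobOptions sketch: register a GRL selector tool inside athena.
    # Assumption: this release's GoodRunsLists package provides
    # GoodRunsListSelectorTool with a GoodRunsListVec property.
    from AthenaCommon.AppMgr import ToolSvc
    from GoodRunsLists.GoodRunsListsConf import GoodRunsListSelectorTool

    ToolSvc += GoodRunsListSelectorTool('GRLSelector')
    ToolSvc.GRLSelector.GoodRunsListVec = ['official_GRL_XXYYZZ.xml']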

Proxy servers?
- Frequently updated data
  - Calibration constants (d), DCS values (s) & DQ flags (m)
  - Sometimes needed in analyses
- Oracle servers?
  - Replicated to Oracle servers at each Tier-1
- Frontier or Squid proxy servers on T2?
  - Cache/mirror the T1 database dynamically ( see the sketch below )
- Prepare & use POOL data locally?
  - Effective, but invites chaos
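For the Frontier/Squid route, jobs typically just need the FRONTIER_SERVER variable pointed at a Frontier launchpad plus the local cache. A sketch with placeholder URLs (neither endpoint is a real IHEP or T1 address):

    # Sketch: route conditions access through a Frontier server and a
    # local Squid cache. Both URLs are hypothetical placeholders; a site
    # would substitute its T1 Frontier launchpad and its own Squid.
    import os

    os.environ['FRONTIER_SERVER'] = (
        '(serverurl=http://frontier-t1.example.org:8000/atlr)'
        '(proxyurl=http://squid.example.ihep.ac.cn:3128)'
    )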

Backup (I)
- AOD files in data09_900GeV / data09_2TeV
- L1Calo
  - f174_m268, f179_m283, f181_m293, f191_m315
  - f185_m298, f186_m298, f190_m310, f193_m320
  - f189_m304, r988_p62
- L1Calo & MuonswBeam
  - f175_m273, f180_m288, f183_m298
  - f187_m304, f189_m304, f196_m325, r988_p62

Backup for data4
- Wjet
- Zjet
- topT1
- Di-jets
- DrYn
- Minbias
- Latest simulation/reconstruction
- Complement of the T2 MC data sets

Backup for data5
- SM
  - Wjet, Zjet, WW, WZ, Zg ( only for special/small production )
- Top
  - Wpola, Spcol, single, Xsec
- Higgs
  - ttH, WH, Gam2
- SUSY
  - RPVemu, S2top, S4top, ZgMet
- XTC
  - W1wz
- DPD
  - AANT, DAOD, TAG, LOGs