Kanga
Tim Adye, Rutherford Appleton Laboratory
Computing Plenary, BaBar Collaboration Meeting
10th July 2002

Overview
- Kanga Production
- Recent Changes
  - Moving Kanga analysis from SLAC to RAL
  - ROOT3
  - skimData
- Kanga Development
  - Pointer skims
Contributions from: Ralph Mueller-Pfefferkorn, Kate Mackay, Simon George, Dieter Best, Nicole Chevalier, Alessandra Forti, Fabrizio Salvatore, Tilmann Colberg, Ulrik Egede, Tim Adye

Kanga Production
This is the current status of the Objectivity-to-Kanga conversion. All of these Kanga files (series-10 processings) are on disk at SLAC (for now) and at RAL.
- Original tag
  - AllEvents + 19 data streams: all complete
  - SPKanga: up to date with the last sweep into simuboot (19 July)
  - Generic MC streams: not done
- New tag ("reskim")
  - AllEvents data should be done in about a week; nearly all 2000-2001 data is available now
  - 15 data streams: to be done next
  - Generic MC AllEvents + 15 streams: hopefully ~2 weeks; 1106 runs available so far

Moving Kanga analysis to RAL
- Since 1 July, the RAL Tier A has all Kanga data + MC
  - RAL has kept the majority since Dec 99!
  - Newly converted Kanga is imported from SLAC within 1-2 days
- RAL is dedicated to Kanga and other non-Objectivity analysis
  - Currently lots of free CPU and disk
- See Manny's slides from Monday: "RAL Tier A", E. Olaiya

Removing Kanga analysis from SLAC
- Older (series-8) MC + data were removed from SLAC disk in May and June
  - First archived in HPSS
  - The RAL copy was checked against SLAC (cksum); so far no errors found
- The rest will be removed from SLAC disk at the end of July
  - May have to remove some beforehand to allow continued production
  - Keep an eye on the Kanga HyperNews (HN) just in case

ROOT3
- Migrated Kanga I/O from ROOT 2 to ROOT 3
  - Needed for secure access to the ROOT daemon (a minimal access sketch follows below)
  - Also allows us to upgrade to a recent version for other uses, e.g. fitting, user analysis
- ROOT 3.02-07 is used in releases >= 12.0.0
- The new version can read old Kanga files
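As an illustration of the kind of access the ROOT 3 migration enables, here is a minimal sketch of opening a Kanga file through the ROOT daemon from a ROOT/C++ macro. The host name, file path and tree name are invented placeholders, not real BaBar values.

// Minimal sketch (hypothetical host, path and tree name) of reading a Kanga
// file through the ROOT daemon.  A root:// URL is served by the daemon;
// ROOT 3 adds authenticated access, and it still reads files written with ROOT 2.
#include "TFile.h"
#include "TTree.h"
#include <iostream>

void readKangaSketch()
{
  TFile* f = TFile::Open("root://kanga.example.ac.uk//store/AllEvents/run01106.root");
  if (!f || f->IsZombie()) {
    std::cerr << "Could not open remote Kanga file" << std::endl;
    return;
  }
  TTree* events = (TTree*) f->Get("Events");   // "Events" is an assumed tree name
  if (events) std::cout << events->GetEntries() << " events in file" << std::endl;
  f->Close();
  delete f;
}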

skimData
- skimData has moved out of releases >= 11.12.1
  - Now called skimDataRel in the release, and symlinked from $BFROOT/bin/skimData (in the user's $PATH)
  - This allows the latest version to be used regardless of the user's release
  - Other sites should take care of this with the MySQL installation
- skimData now defines the conditions file according to the dataset selected
  - The generated tcl file includes RootConditionsFile and FixedFieldStrength specifications
  - (skimData can now be used for Objectivity too! Allows similar selections for Objy and Kanga analyses)

Pointer Skims
- RAL currently has to store 19 streams as well as AllEvents
  - Streams allow faster access to their component skims, and Tier C sites need only import the streams they want
  - But this entails a factor-3 overhead in disk and network: original-tag series-10 data streams use 2 x AllEvents on disk
  - Fortunately the size per event is still smaller than Objectivity
- The idea is to keep just AllEvents and generate "pointer collections" for each skim (see the sketch after this slide)
  - Pointer files use a negligible amount of disk space
  - Could be generated at RAL from AllEvents
  - Analyses can read only their favourite skims, without having to skip the other events in the stream
  - Tier C sites only need to import the required skims
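To make the pointer-collection idea concrete, here is a rough sketch of producing one from a single AllEvents file: instead of copying the selected events, the skim writes a small tree of entry numbers that refer back to the source file. The file paths, tree name and selection are illustrative assumptions, not the actual Kanga implementation.

// Hypothetical sketch: build a "pointer collection" for one skim from one
// AllEvents file.  Only entry numbers (plus the source file name, kept in the
// tree title) are stored, so the pointer file stays tiny.
#include "TFile.h"
#include "TTree.h"

// Stand-in for the real skim decision: here, simply keep every 100th event.
static bool passesSkimSelection(Int_t entry) { return entry % 100 == 0; }

void makePointerSkim(const char* allEventsFile, const char* pointerFile)
{
  TFile* in = TFile::Open(allEventsFile);
  if (!in || in->IsZombie()) return;
  TTree* events = (TTree*) in->Get("Events");      // assumed event tree name

  TFile out(pointerFile, "RECREATE");
  TTree ptr("PointerSkim", allEventsFile);         // remember the source file
  Int_t srcEntry = 0;
  ptr.Branch("srcEntry", &srcEntry, "srcEntry/I");

  Int_t n = (Int_t) events->GetEntries();
  for (Int_t i = 0; i < n; ++i) {
    if (!passesSkimSelection(i)) continue;         // a real skim would read the event
    srcEntry = i;
    ptr.Fill();                                    // store a reference, not the event
  }
  out.Write();
  out.Close();
  in->Close();
}

Reading back is the reverse: loop over the pointer tree and call GetEntry on the AllEvents tree for each stored srcEntry, so an analysis touches only its own skim instead of skipping through the rest of the stream.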

Pointer Skim Production
- Kanga I/O modules and skimData already understand pointer files
- Pointer skim production is now part of the standard output module
- Working on putting this into production

Exporting Pointer Skims
- Pointer skims depend on AllEventsKanga, which is not exported to Tier C sites
- The plan is to convert from pointer to event-data files as part of the export procedure (see the sketch after this slide)
  - A standalone conversion program is now working: a ROOT application that does not use the BaBar Framework, and is therefore faster
  - On-the-fly conversion will be incorporated into skimImport
    - Controlled from the Tier C site like current stream imports
    - ssh is used to run the conversion program at RAL
    - The output file is written to Tier C disk via the ROOT daemon
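A rough sketch of the standalone conversion step: read the pointer file, copy the referenced events out of the local AllEvents file into a new self-contained event-data file, and open that output through a root:// URL so it lands on the Tier C disk via the ROOT daemon. All paths, host names and tree names are assumptions for illustration; the real conversion program and its skimImport integration may differ.

// Hypothetical sketch of converting a pointer skim into an event-data file
// written to a Tier C site through the ROOT daemon, while the conversion
// itself runs at RAL next to the AllEvents data.
#include "TFile.h"
#include "TTree.h"

void convertPointerSkim()
{
  // Assumed inputs: local pointer file and AllEvents file at RAL.
  TFile* ptrFile = TFile::Open("PointerSkim_run01106.root");
  TFile* srcFile = TFile::Open("/kanga/AllEvents/run01106.root");
  if (!ptrFile || !srcFile) return;
  TTree* ptr = (TTree*) ptrFile->Get("PointerSkim");
  TTree* events = (TTree*) srcFile->Get("Events");          // assumed tree name

  // Assumed destination: Tier C disk, reached through the ROOT daemon.
  TFile* out = TFile::Open("root://tierc.example.ac.uk//data/skim_run01106.root",
                           "RECREATE");
  TTree* outEvents = events->CloneTree(0);                  // same structure, no entries

  Int_t srcEntry = 0;
  ptr->SetBranchAddress("srcEntry", &srcEntry);
  Int_t nRefs = (Int_t) ptr->GetEntries();
  for (Int_t i = 0; i < nRefs; ++i) {
    ptr->GetEntry(i);            // which event does this reference point to?
    events->GetEntry(srcEntry);  // read it from AllEvents...
    outEvents->Fill();           // ...and append it to the self-contained output
  }
  out->Write();
  out->Close();
  srcFile->Close();
  ptrFile->Close();
}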

Efficiency
- Pointer skim reading is less efficient (per event) than reading all events, especially for sparse skims
  - The effect is probably negligible compared to the physics code
  - Very sparse skims might be stored as event-data files at RAL as well
- Studying different configurations to optimise disk space vs. read rate
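As a rough illustration (assumed numbers, not measurements) of why sparse skims are the worst case: ROOT stores tree data in compressed baskets, so reading one event means decompressing the whole basket that contains it. If a skim selects a random fraction f of the events and a basket holds B events, the fraction of baskets that must be read is roughly 1 - (1 - f)^B. With f = 1% and B = 100 events per basket, that is about 63%: reading 1% of the events through pointers can still decompress well over half of the stream, so the cost per selected event is far higher than in a full sequential read, even though the total I/O is not much smaller.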

Summary
- All original-tag data is available in Kanga format
  - Some reskimmed Kanga data is available; the remainder in the next few weeks
- Kanga analysis must move to RAL this month
  - All Kanga data at SLAC is also available at RAL, plus some that is no longer available at SLAC
  - RAL is ready and waiting for your jobs!
- Migration to ROOT3 is done
- skimData improvements
- Developing the use of pointer skims