STAR COMPUTING Introduction
Torre Wenaus, BNL
STAR Computing Meeting, BNL, May 24, 1999


Recent & Expected Arrivals
New member of the core BNL computing team:
- Mei-Li Chen, online developer
  - From U Maryland (Super-K, Milagro)
  - Very experienced online/DAQ expert
  - Has the online event pool interface to DAQ working for full events over a socket connection
Systems support hire (online systems; general computing infrastructure) expected soon (…?)

Status
Infrastructure stabilized!
- The train has pulled into the station, so people can catch up!
- Core group now in 'consolidation' mode, focusing on commissioning run essentials
  - Bug fixing, documentation, examples, help with online integration, online monitoring needs, developing and responding to QA
Online system pre-production release
- Run control; subsystem control, configuration, slow control
- Essential components of the system now in place
Near-term production plan
- StMcEvent tests, peripheral signal and background, strangeness AuAu steady production, spectra AuAu
- Ongoing production with simulation data to meet PWG needs, exercise and debug codes, perform QA, and keep CRS well oiled (CRS became functional again Friday)
- Of course, second priority to real data when it comes!

Working with StEvent: doEvents.C
doEvents.C
- Standard means of reading DST files and feeding them to StEvent-based analysis
- Has handled XDF files OK, but hasn't handled ROOT files properly up to now
- Needs to handle all file types transparently and shield the distinctions from the user; Victor now has this implemented
- On Linux, all file types (MDC1 XDF, MDC2 XDF, MDC2 ROOT, post-MDC2 ROOT) work transparently; still a problem on Solaris that Victor is working on
- Multiple files handled
A minimal usage sketch follows below.
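To make the reading pattern concrete, here is a hedged usage sketch; the argument list and the file name are illustrative assumptions, not the confirmed doEvents.C interface, so check the macro header for the actual signature:

  // In a root4star session (illustration only; assumed arguments):
  .L doEvents.C                         // load the macro
  doEvents(100, "example.dst.root");    // hypothetical call: read ~100 events from one DST file;
                                        // XDF vs. ROOT input is meant to be handled transparently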

Using Persistent StEvent
StAnalysisMaker changes:
- Remove 'title' (second parameter) from the constructor
- #include StEvent/XXX.hh -> XXX.h
- Change the method of event pointer retrieval:

Former:
  StEventReaderMaker* evMaker = (StEventReaderMaker*) gStChain->Maker("events");
  if (! evMaker->event()) return kStOK; // If no event, we're done
  StEvent& ev = *(evMaker->event());

New:
  StEvent* mEvent = (StEvent *) GetInputDS("StEvent");
  if (! mEvent) return kStOK; // If no event, we're done
  StEvent& ev = *mEvent;

A sketch of the new retrieval in the context of a full Make() method follows below.
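For context, a minimal sketch of the new retrieval inside a Maker's Make() method; StMyAnalysisMaker is a hypothetical class name, and only the GetInputDS("StEvent") call and the kStOK return are taken from the slide above:

  // Illustrative sketch only (hypothetical Maker name):
  Int_t StMyAnalysisMaker::Make() {
    StEvent* mEvent = (StEvent*) GetInputDS("StEvent");  // new-style retrieval from the chain
    if (!mEvent) return kStOK;                           // if no event, we're done
    StEvent& ev = *mEvent;
    // ... analysis code working with ev goes here ...
    return kStOK;
  }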

This Meeting
Joint online/offline meeting focusing on real data readiness and the immediate needs of the commissioning run
- Data flow from DAQ to online and offline
- Online system status
- Processing of DAQ data in offline
- Event I/O, data cataloguing
- Conditions database
- Online monitoring
- Subsystem and global reconstruction for year 1 detectors
- RCF and production
Current offline software in production, with people doing analysis QA on the results
- In this and subsequent weeks, try to reproduce some of the very productive environment of MDC2

Goals for Commissioning Run Environment
- Stable offline environment as seen by users
- Maker structure and interface stable; changes affecting the user beyond those already made post-MDC2 are not foreseen
- StEvent as used by analysis codes has been stable; some functionality additions
- Adding persistency to StEvent while keeping it unchanged as seen by analysis codes has (subject to wider testing) been successful, and so can serve as a basis for uDST development and StEvent extension to reconstruction (something we hoped to achieve but did not promise); needs discussion at this meeting
- Macros/Makers provided to transparently handle all data file types (XDF, MDC2 ROOT files, new ROOT files)

Commissioning Environment (2)
- Capability to process raw DAQ data
- Stably functional data storage
- Post-MDC2 ROOT I/O improvements implemented in STAR address deficiencies in standard ROOT I/O: robustness against partially completed jobs, forward and backward read compatibility for data files, robust management of multiple event components on different files
- Victor's schema evolution approach has to be (if at all possible) fitted into overall ROOT I/O; long discussions are hopefully converging
- Capabilities satisfy commissioning and year 1 needs
- Populated conditions database in use via a stable interface

RCF Issues
CRS working again, as of yesterday, after a long absence
Manpower: supposed to plateau at 34 in '99; actual: 23 (1 offer is out)
- CCD: 4 of 8; Experiments: 4 of 8
MDC2 experience
- HPSS: similar level of problems to MDC1
  - More stable after fixes towards the end of MDC, but we still see instability
  - Latest version (4.1) installed; stress testing today
- Disk: major access problems (NFS, AFS translator on Linux)
  - Move to real AFS 3.5; supported on current Linux
- New CRS management software: major rewrite
  - Better monitoring and user interaction, improved HPSS interaction; worked well. Adequate for year 1; LSF not needed in CRS
- STAR program size!
  - Linux 2.2 kernel installed; swap increased to ~2 GB
  - STAR CRS machines will all be 512 MB (all CAS machines are 512 MB)

RCF Issues (2)
STAR executable size: 512 MB with all Makers (tfs), 587 MB (trs)
Security issues
- RCF switched off STAR machine access to RCF disks for new machines
  - Switching increasingly to AFS
  - Have a partial agreement with RCF to allow us access to the data disks; needs to be made official by RCF and implemented
Procurements
- New Linux (CRS, CAS), Sun (CAS), disk (all RAID), and MDS (drives, processors) procurements underway
ADSM batch system June 8
- Validate migrated OSM data by then
- Will need people with data in OSM to do spot checks
- Details due from RCF

Post Commissioning
- Extending data management tools (MySQL DB + disk file management + HPSS file management + multi-component ROOT files)
- Complete schema evolution, in collaboration with the ROOT team
- Management of CAS processing and data distribution, both for mining and for individual-physicist-level analysis
- Completion of the DB: integration of slow control as a data source, completion of online integration, implementation of the DB as the Geant geometry data source, populating the DB for detectors other than the TPC
- Extend and apply the OO data model (StEvent) to reconstruction, so that new reconstruction development can work with StEvent
- Completion of online and the online processing farm for monitoring
- Improving QA tools and procedures
- Improving and better integrating visualization tools

Post Commissioning (2)
- Reconstruction and analysis code development directed at addressing QA results and year 1 code completeness and quality
  - Following the priorities emerging from this meeting
  - Responding to the real-data outcomes of the commissioning run

New Package Creation
It is important to check in your code; submit your Makers as packages in the repository
- When bugs or changes affect existing code, we can identify the affected code
- Code sharing, collaboration, examples for others, integration into QA, etc.
When HyperNews is working again, we will set up a forum for requesting and creating new packages
- Awareness of and consultation on new package proposals
A hypothetical skeleton for a new Maker package is sketched below.
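As a rough illustration of what such a package might contain, here is a minimal Maker skeleton following the standard StMaker Init/Make/Finish pattern; the class name StExampleMaker and the file names are hypothetical, and the actual package layout, Makefile, and CVS procedures should be taken from the STAR software documentation:

  // StExampleMaker.h -- hypothetical skeleton of a new analysis Maker (sketch only)
  #ifndef StExampleMaker_hh
  #define StExampleMaker_hh
  #include "StMaker.h"

  class StEvent;

  class StExampleMaker : public StMaker {
  public:
    StExampleMaker(const char* name = "example") : StMaker(name) {}  // no 'title' argument, per the persistent StEvent slide
    virtual Int_t Init();     // once per job: book histograms, set up
    virtual Int_t Make();     // once per event
    virtual Int_t Finish();   // end-of-job cleanup and output
    ClassDef(StExampleMaker, 1)
  };
  #endif

  // StExampleMaker.cxx
  #include "StExampleMaker.h"
  #include "StEvent.h"
  ClassImp(StExampleMaker)

  Int_t StExampleMaker::Init()   { return StMaker::Init(); }
  Int_t StExampleMaker::Finish() { return kStOK; }

  Int_t StExampleMaker::Make() {
    StEvent* event = (StEvent*) GetInputDS("StEvent");  // retrieval pattern shown earlier
    if (!event) return kStOK;                           // nothing to do without an event
    // ... per-event analysis goes here ...
    return kStOK;
  }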

Tutorials
Wed (and Thurs?) am: presentation/discussion sessions as a group
Followed by individuals or workgroups trying things out, with core computing people and experienced users sitting in and roaming the halls to help
Wed am:
- Kathy on bfcread.C, histogram integration, general usage
- Writing analysis code and using StEvent
  - Gene (and others..? Torre? Victor? Thomas? Craig Ogilvie?)
Wed or Thurs am:
- Pavel on using GSTAR
Fri am:
- Thomas on using UML and Rational Rose
Fri pm:
- Issues that come up in the week?