BES III Computing at The University of Minnesota
Dr. Alexander Scott

Acknowledgements
- Cheng Ping Shen (Hawaii)
  – Traveled to Minnesota to help set up the BESIII environment
- Alexey Zhemchugov (Dubna)
  – Created the pacman installation of BOSS, advised on setup
- Pete Zweber, Nick Howell (Minnesota)
  – Extensive testing of the local installation

Talk Outline
- Release installation and testing
- Monte Carlo farm implementation
- Comparison of IHEP and MN Monte Carlo
- Analysis support
- Future issues

Installing Releases
- We use Alexey's pacman packaging of the BES environment (install sketch below)
  – Instructions for the install are located at Alexey's site
  – We installed both releases successfully
    - One release had some version drift; packages had to be updated manually
  – Download and installation takes ~5 hours
  – Directory structure is fully built by the package
- Execute shell scripts to set the environment
  – Release is then ready for use
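A minimal sketch of the install flow; the cache URL, package name, and setup script name are hypothetical, and the real ones are given in Alexey's instructions.

  mkdir bes-6.3.1 && cd bes-6.3.1
  pacman -get http://example.org/BESIII/cache:BOSS-6.3.1   # fetches and builds the full directory tree (~5 hours)
  source setup.sh                                          # shell script that sets the BES environment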

Testing Releases
- We verified that Alexey's install alone is sufficient to generate MC
  – Started over with an install from scratch on a workstation isolated from the network
  – From that workstation, we successfully:
    - installed the release
    - generated Monte Carlo
    - ran a sample analysis package
- We are developing documentation on doing remote installs

Monte Carlo Example (6.3.1)
- Generated 1M generic J/ψ decays
- Used the getacAlg analysis package to reconstruct J/ψ → γ η_c
- Selected candidates based on good tracks and showers
- Compared variable distributions between IHEP and MN
Thanks to Cheng Ping

Example
- Generated 1M J/ψ decays
- Used the ρπ analysis package to reconstruct J/ψ → ρπ
- Superimposed MN variable distributions on IHEP
- Great agreement
Thanks to Nick Howell

Monte Carlo Farm
- Used the CLEO-c farm implementation for BES III generation
- Only cosmetic changes required so far
  – Both have a “physics” and a “reconstruction” step
  – Easily modifiable job structure
  – Generic distributed job infrastructure

Physics Generation File

// KKMC generator configuration
#include "$ENV{KKMCROOT}/share/jobOptions_KKMC.txt"
KKMC.CMSEnergy = 3.770;
KKMC.CMSEnergySpread = 0.0014;
KKMC.InitializedSeed = {400081,1,0};
KKMC.NumberOfEventPrinted = 5333;
KKMC.GeneratePsi3770 = true;

// BesEvtGen decay configuration
EvtDecay.userDecayTableName = "psipp.dec";
#include "$ENV{BESEVTGENROOT}/share/BesEvtGen.txt"
BesRndmGenSvc.RndmSeed = ;

// Geant4 detector simulation
#include "$ENV{BESSIMROOT}/share/G4Svc_BesSim.txt"
G4Svc.RunID = ;

// Digitization output to ROOT
#include "$ENV{ROOTIOROOT}/share/jobOptions_Digi2Root.txt"
RootCnvSvc.digiRootOutputFile = "Test_633_BES3_PsiPP_1_217804_0.rtraw";

MessageSvc.OutputLevel = 5;
ApplicationMgr.EvtMax = 5333;
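For context, a job options file like this would typically be fed to the offline executable; the command below assumes the standard BOSS launcher boss.exe, and the file names are illustrative.

  boss.exe jobOptions_sim_psipp.txt   # "physics" step: generation + simulation, writes the .rtraw file
  boss.exe jobOptions_rec_psipp.txt   # "reconstruction" step: reads the .rtraw written above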

Farm Infrastructure
- 3 servers + 40 worker nodes
  – hyper-threaded dual Xeon 2.66 GHz processors
  – handle 4 jobs apiece
- 2 dual quad-core machines
  – 2.33 GHz processors
  – 3x as powerful as the other nodes
  – handle 8 jobs apiece
- Gigabit connections to all machines
- 10 TB of storage space
- Condor handles job distribution (see the sketch below)
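As an illustration of how jobs might be handed to Condor, here is a minimal submit description file; the executable, arguments, and queue count are hypothetical, not taken from the talk.

  universe   = vanilla
  executable = run_bes_mc.sh              # hypothetical wrapper running the physics + reconstruction steps
  arguments  = jobOptions_sim_$(Process).txt
  output     = mc_$(Process).out
  error      = mc_$(Process).err
  log        = mc_farm.log
  queue 100                               # one job per generated file segment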

Monte Carlo Samples
- MN farm samples generated using both releases
  – Statistics (per event; cross-check below)
    - ψ’: 3.5 s, 50 kB (5M/day)
    - ψ’’: 3.5 s, 50 kB (5M/day)
    - J/ψ: 2.7 s, 30 kB (6.5M/day)
- Monte Carlo generated
  – Generic samples: 2M J/ψ, 4.5M ψ’, 1.5M ψ’’
  – IHEP comparison samples: 1M each of J/ψ, ψ’, and ψ’’
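As a rough cross-check, assuming the per-event times above refer to the standard worker nodes, these daily rates are consistent with the farm described earlier: 40 nodes × 4 jobs = 160 slots, plus 2 machines × 8 jobs at roughly 3× the speed ≈ 48 standard-equivalent slots, about 208 slots in total.

  208 slots × 86400 s/day ÷ 3.5 s/event ≈ 5.1M ψ’(’) events/day
  208 slots × 86400 s/day ÷ 2.7 s/event ≈ 6.7M J/ψ events/day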

MN vs. IHEP Comparison
- Attempt to generate Monte Carlo identical to the IHEP sample
  – Use the same scripts, random seeds, and parameter files as IHEP
  – Generate 1M each of J/ψ, ψ’, ψ’’
  – Comparisons made at the track/shower level
  – Distributions match extremely well
- IHEP believes the remaining discrepancies are due to multiple random number engines

Sample Distributions

Analysis Support
- Documentation on analysis support is available for users
- Users can create analysis areas and compile packages (sketch below)
  – Set up the environment and your own workarea
  – Use cmt to create packages within the workarea
  – Check out the TestRelease package to run your own packages
- Developing our own MC testing package
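A minimal sketch of that workflow with cmt; the workarea path, package name, and version tag are hypothetical.

  cd $MY_WORKAREA                                   # hypothetical workarea location
  cmt create MyAnalysisAlg MyAnalysisAlg-00-00-01   # create the package skeleton
  cd MyAnalysisAlg/MyAnalysisAlg-00-00-01/cmt
  cmt config                                        # generate the setup scripts and build files
  source setup.sh
  make                                              # build the package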

Issues for the Future

Network Transfers
- Transfer speeds differ between IHEP→MN and MN→IHEP
  – 30 vs. 300 Mbps using iperf with a 5 MB window (see the sketch below)
- Require TCP/IP windows to be opened up, or transfers are limited by latency
  – Investigating dynamic window sizing for scp operations
  – Would like a better transfer method
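A sketch of the kind of iperf measurement described above; hostnames are placeholders.

  # on the receiving end (e.g. at IHEP):
  iperf -s -w 5M
  # on the sending end (e.g. at MN):
  iperf -c <ihep-host> -w 5M
  # then swap roles to compare the IHEP→MN and MN→IHEP directions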

Actual Run Simulations
- Some questions remain before we can simulate actual data runs
  – Based on the CLEO-c experience, some components are missing
    - Random triggers? Constants? Run-by-run info?
  – Triggers and run info can be handled by existing infrastructure
  – Manual installation requires setting up the constants database
    - Not part of the pacman installation
    - Already present, or not yet integrated?