Preparation for the Di-Jet Study @ Tsukuba


Preparation for the Di-Jet Study @ Tsukuba
T. Horaguchi, University of Tsukuba
April 8, 2009, for the ALICE PostQM09 @ LBNL

Local Cluster @ University of Tsukuba
- The batch job system at the University of Tsukuba was established on April 6.
- Batch job system: Condor (http://www.cs.wisc.edu/condor/)
  - Worker nodes (wn001-006): 2 CPUs x 4 cores = 8 cores per node, 48 cores in total, to be extended to ~80 cores
  - Condor server: wn001
- ALICE library server (aa003): ALICE full simulation and reconstruction software, provided via NFS mount
- Storage server (aa003): ~10 TB of disk space, provided via NFS mount
- Production status:
  - A test of the full ALICE simulation was completed.
  - Event generation for the di-jet study has started.
  - Calibration and physics simulation for J-Cal will be carried out on the local cluster.
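The slides do not spell out how the simulation jobs are steered on the worker nodes. As an illustration only, a minimal AliRoot-style ROOT macro like the sketch below (C++) could be what each Condor job executes; the detector list, the event count, and the presence of a standard Config.C generator macro in the working directory are assumptions, not details taken from the slides.

// sim.C -- minimal AliRoot simulation macro (sketch only; the actual Tsukuba
// production settings are not given in the slides). Assumes a standard
// Config.C generator/geometry macro in the working directory and that the
// AliRoot libraries are available, e.g. from the NFS-mounted library server.
// Typical invocation on a worker node: aliroot -b -q 'sim.C(10)'
void sim(Int_t nEvents = 10)
{
  AliSimulation simulator;            // AliRoot steering class for MC production
  simulator.SetMakeSDigits("EMCAL");  // detector list here is illustrative only
  simulator.SetMakeDigits("EMCAL");
  simulator.Run(nEvents);             // generate, transport, and digitize events
}

Under such a scheme, Condor would dispatch one macro of this kind per job to wn001-006, with both the AliRoot installation and the output area reached through the NFS mounts described above.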

Summary & Future Plan Local Cluster @ University of Tsukuba for J-Cal study was established on this April ! This provides a baseline for the study of J-Cal ! Di-Jet & photon-Jet study has been started using this Local Cluster ! This Local Cluster will be strongly helpful to study of physics performance of J-Cal and many calibration tasks ! 2009/4/7 ALICE PostQM09 @ LBNL