Computing Board Report CHIPP Plenary Meeting

Presentation transcript:

Computing Board Report CHIPP Plenary Meeting
Derek Feichtinger, PSI

PSI CMS Tier-3 Hardware
8 Worker Nodes: SUN X4150, 2*Xeon E5410, 16 GB RAM, 2*146 GB SAS disk
6 File Servers: SUN X4500, 2*Opteron 290, 16 GB RAM, 48*500 GB SATA disk
6 Service Nodes: SUN X4150 servers adapted to the services: CE, SE, database, UI, NFS, admin

PSI CMS T3 Working Style (diagram): the user connects to the User Interface and submits jobs either to the Grid or to the local batch master (SGE), which runs them on the Tier-3 Worker Nodes. Data sets are brought in from the Grid to the dCache SE via PhEDEx data set transport; both the local and the Grid Worker Nodes access the SE through dcap and gsiftp transfers.
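The diagram maps onto a simple end-user workflow: log into the User Interface, stage input from the dCache SE over dcap or gsiftp, and hand the work to the SGE batch master. The sketch below illustrates that pattern, assuming the standard dCache and SGE client tools (dccp, qsub) are available on the UI; the SE host name, PNFS path, queue name, and analysis command are hypothetical placeholders, not the actual PSI configuration.

```python
# Minimal sketch of the T3 working style shown above: write an SGE job
# script that stages one file from the dCache SE over the dcap protocol
# and then runs an analysis step, and submit it from the User Interface
# with qsub. Host name, PNFS path, queue and analysis command are
# hypothetical placeholders.
import subprocess
import tempfile

SE_HOST = "t3se.example.ch"                                   # hypothetical dCache SE
INPUT = "/pnfs/example.ch/cms/store/user/dataset/file.root"   # hypothetical input file
QUEUE = "all.q"                                               # hypothetical SGE queue

JOB_SCRIPT = f"""#!/bin/bash
#$ -q {QUEUE}
#$ -cwd
# Stage the input from the SE to the job's local scratch area over dcap.
dccp dcap://{SE_HOST}{INPUT} $TMPDIR/input.root
# Run the user analysis on the staged local copy (placeholder command).
./analyse $TMPDIR/input.root
"""

def submit() -> str:
    """Write the job script to a temporary file and submit it with qsub."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as fh:
        fh.write(JOB_SCRIPT)
        path = fh.name
    result = subprocess.run(["qsub", path], check=True,
                            capture_output=True, text=True)
    return result.stdout.strip()   # SGE prints a confirmation with the job id

if __name__ == "__main__":
    print(submit())
```

Grid submission from the same User Interface would go through the experiment's usual grid job tools rather than qsub; only the local branch of the diagram is sketched here.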

PSI CMS Tier-3