Slide 1 – DPM performance tests
Massimo Biasotto – INFN Legnaro

Slide 2 – Test system
● 1 DPM host (DPM server, SRM, metadata database):
 – Dual PIII 1.26 GHz, 1 GB RAM
● 2 identical disk servers:
 – Dual Xeon 2.8 GHz, 4 GB RAM
 – 2 3ware 9500-S12 arrays, 12 Maxtor 250 GB HDs, RAID-5
● DPM pool made up of two 2 TB filesystems, one from each server (see the configuration sketch below)
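A pool like the one described on this slide (one pool spanning one filesystem on each disk server) is normally defined with the DPM admin tools on the head node. The following is a minimal sketch, assuming the usual dpm-addpool/dpm-addfs commands; the pool name, server hostnames and filesystem paths are hypothetical placeholders, not the ones used in the test.

```python
# Minimal sketch: define a two-filesystem DPM pool from the head node.
# Pool name, server hostnames and filesystem paths are hypothetical examples.
import subprocess

POOL = "cmspool"  # hypothetical pool name
FILESYSTEMS = [
    ("disksrv1.example.infn.it", "/data1"),  # first 2 TB filesystem (server S1)
    ("disksrv2.example.infn.it", "/data1"),  # second 2 TB filesystem (server S2)
]

def run(cmd):
    """Run a DPM admin command and fail loudly if it returns non-zero."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# Create the pool, then attach one filesystem from each disk server.
run(["dpm-addpool", "--poolname", POOL])
for server, fs in FILESYSTEMS:
    run(["dpm-addfs", "--poolname", POOL, "--server", server, "--fs", fs])
```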

Slide 3 – Test configuration
[Network diagram: a GE backbone switch connects the blade-center switches (N1–N14, client nodes), the DPM server, and the two disk servers S1 and S2 that provide the DPM pool.]

Slide 4 – Test operations
● Read and write tests of a sequence of files via rfcp from N concurrent clients
 – Sequence of 20 files, ~400 MB each (chosen at random from a sample of 1000)
 – Number of clients from 1 to 8 (distributed across 4 blade centers, each with a 1 Gb/s uplink)
 – Measurement of the aggregate rate in DPM (see the driver sketch below)
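Below is a single-client sketch of the write half of this test, under the assumption that each client drives rfcp from a local sample directory into a DPM path, with DPM_HOST/DPNS_HOST already set in the environment; the directory names and the /dpm/... target path are hypothetical. The real tests ran the same loop concurrently on 1 to 8 client nodes and measured the aggregate rate across them.

```python
# Sketch (one client) of the write test: copy a random sequence of 20 ~400 MB
# files into DPM with rfcp and report this client's average rate.
# Paths and file naming are hypothetical placeholders.
import os
import random
import subprocess
import time

SAMPLE_DIR = "/data/testfiles"                    # local sample of 1000 files
DPM_TARGET = "/dpm/example.infn.it/home/cms/dpmtest"  # hypothetical DPM directory
N_FILES = 20

files = random.sample(os.listdir(SAMPLE_DIR), N_FILES)

start = time.time()
nbytes = 0
for name in files:
    src = os.path.join(SAMPLE_DIR, name)
    nbytes += os.path.getsize(src)
    # rfcp copies the local file into the DPM namespace (DPM_HOST/DPNS_HOST
    # are assumed to be set in the environment).
    subprocess.run(["rfcp", src, f"{DPM_TARGET}/{name}"], check=True)
elapsed = time.time() - start

print(f"wrote {nbytes / 1e6:.0f} MB in {elapsed:.0f} s "
      f"-> {nbytes / 1e6 / elapsed:.1f} MB/s")
```

For the read test the same loop runs in the opposite direction (rfcp from the DPM path back to local or to /dev/null), and the per-client rates are summed to obtain the aggregate DPM rate shown on the later slides.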

Slide 5 – Disk performance
● Baseline disk performance measured with bonnie (see the microbenchmark sketch below)
 – Server:
  ● Read: 90 MB/s
  ● Write: 50 MB/s
 – Client:
  ● Read: 23 MB/s
  ● Write: 20 MB/s
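The figures above come from bonnie; as an illustration of the kind of measurement involved, here is a plain Python sequential write/read microbenchmark (not the bonnie tool itself). The test path is hypothetical, and the file size is chosen larger than the 4 GB of RAM on the disk servers so the read pass is not served entirely from the page cache.

```python
# Rough sequential-throughput check in the spirit of the bonnie numbers above.
# TEST_FILE is a hypothetical path on the disk array.
import os
import time

TEST_FILE = "/data1/throughput.tmp"
SIZE_MB = 8192                  # 8 GB, larger than the servers' 4 GB of RAM
CHUNK = b"\0" * (1 << 20)       # 1 MB blocks

# Sequential write
start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())        # make sure the data reaches the array
print(f"write: {SIZE_MB / (time.time() - start):.0f} MB/s")

# Sequential read
start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(1 << 20):
        pass
print(f"read:  {SIZE_MB / (time.time() - start):.0f} MB/s")

os.remove(TEST_FILE)
```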

Slide 6 – DPM read performance
[Plot not preserved in the transcript.]

Slide 7 – DPM write performance
[Plot not preserved in the transcript.]