StoRM+Lustre Performance Test with 10Gbps Network
YAN Tian
Distributed Computing Group Meeting, Nov. 4th, 2014

Testbed with 10Gbps Network
StoRM server configuration:
– Model: Dell PowerEdge R620
– CPU: Xeon E GHz
– CPU cores: 8 (HT disabled)
– Memory: 64 GB
– HDD: SCSI, 300 GB
– hostname/IP: cream.ihep.ac.cn ( )
– Network: 10 Gbps
WebDAV access: 10 GB / 39.148 s ≈ 262 MB/s
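The WebDAV figure is simply file size over elapsed transfer time; a minimal sketch of that arithmetic (assuming binary units, 1 GB = 1024 MB; decimal units would give ~255 MB/s):

```python
# Throughput of the WebDAV test: 10 GB transferred in 39.148 s.
# Assumption: binary units (1 GB = 1024 MB).
size_mb = 10 * 1024          # 10 GB expressed in MB
elapsed_s = 39.148           # measured transfer time
speed = size_mb / elapsed_s  # throughput in MB/s
print(f"{speed:.1f} MB/s")   # ~261.6 MB/s
```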

Test 1: Single Thread Download
– Test time: Nov. 2nd, 15:30-15:32 and 15:37-15:40
– Lustre was a little busy (load ~40%, outbound ~800 MB/s)
– Downloaded a 1 GB file 20 times
– Download destination: badger01
– Average download speed: MB/s (via eth5, 10 Gbps)
– Load on the SE: 0.8~1.4, 2~3% I/O wait
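A repeated-download measurement like Test 1 reduces to averaging per-run speeds; a self-contained sketch of that bookkeeping (real runs would time wget/curl against the SE; the per-run durations here are stand-in values, not the measured data):

```python
# Sketch of the Test 1 methodology: download a fixed-size file N times
# and report the mean speed. Durations are illustrative stand-ins.
FILE_MB = 1024                               # 1 GB test file
durations_s = [9.8, 10.2, 10.0, 9.9, 10.1]   # stand-in per-run timings (s)

speeds = [FILE_MB / t for t in durations_s]  # per-run speed in MB/s
mean_speed = sum(speeds) / len(speeds)
print(f"mean: {mean_speed:.1f} MB/s over {len(speeds)} runs")
```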

Test 2: Multi-Thread Download
– Multi-thread download tool: mytget
– Test result: it can start in multi-thread mode, but it can't improve performance
– 1 GB file download with 8 threads
– 10 GB file download with 8 threads
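A multi-thread downloader like mytget typically splits the file into HTTP byte ranges, one per thread; when the bottleneck is the SE's disk or the single network path, extra threads add nothing, which is consistent with the result above. A sketch of the range-splitting step (the helper is illustrative, not mytget's actual code):

```python
def split_ranges(size: int, threads: int):
    """Partition [0, size) into per-thread inclusive (start, end) byte ranges."""
    chunk = size // threads
    ranges = []
    for i in range(threads):
        start = i * chunk
        # The last thread absorbs any remainder bytes.
        end = size - 1 if i == threads - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges

# 8 threads over a 1 GB file, as in Test 2; each range would become an
# HTTP "Range: bytes=start-end" request.
parts = split_ranges(1024**3, 8)
print(parts[0], parts[-1])
```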

Test 3: Multi-Process Download (1)
Multi-process wget download:
– Test time: Nov. 2nd, 17:48~18:02
– 8 processes, each downloading a 10 GB file
– Peak speed: 250 MB/s
– Estimated average speed: ~200 MB/s
– Lustre was very busy

Test 4: Multi-Process Download (2)
Multi-process wget download:
– Test time: Nov. 3rd, 17:50~18:03
– 8 processes, each downloading a 10 GB file
– Peak speed: 275 MB/s
– Estimated average speed: ~240 MB/s
– Real gross speed: 98 MB/s
– Lustre was a little busy; outbound traffic rose to ~600 MB/s
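The gap between the ~240 MB/s per-process estimate and the 98 MB/s "real gross speed" comes down to measurement: gross speed is total bytes over total wall-clock time, so it is dragged down by the slowest finisher and by stalls, while per-process averages are not. A sketch of the distinction (the finish times are stand-ins chosen to be consistent with 8 × 10 GB at a 98 MB/s gross rate, i.e. ~836 s wall time; they are not the measured data):

```python
# Each of 8 processes downloads a 10 GB (10240 MB) file, as in Tests 3-4.
FILE_MB = 10 * 1024
finish_times_s = [400, 450, 500, 600, 700, 750, 800, 836]  # stand-ins

per_proc = [FILE_MB / t for t in finish_times_s]   # per-process MB/s
# Gross speed: total bytes moved over total wall-clock time,
# i.e. until the last process finishes.
gross = len(finish_times_s) * FILE_MB / max(finish_times_s)
print(f"fastest process: {max(per_proc):.1f} MB/s, gross: {gross:.0f} MB/s")
```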

Issue: srm Symbolic Link Problem
– Modifying namespace.xml is being tried as a fix.

StoRM+Lustre Test: To Do
– Solve the symbolic link problem
– Dataset transfer test with IHEPD-USER
– Open ports 50000:55000
– Dataset transfer tests with WHU-USER and USTC-USER
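Opening the 50000:55000 range from the to-do list would typically mean both a firewall rule on the SE and telling the transfer tools to use that range; a sketch, assuming an iptables-based firewall (site-specific details such as the interface and rule persistence are omitted):

```shell
# Allow the data-port range on the StoRM SE (assumption: iptables firewall).
iptables -A INPUT -p tcp --dport 50000:55000 -j ACCEPT

# Have GridFTP-based transfer tools use the same range for data channels.
export GLOBUS_TCP_PORT_RANGE=50000,55000
```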