Local IBP Use at Vanderbilt University Advanced Computing Center for Research and Education

Introduction
- Current IBP Installation
- IBP Scalability Testing
- Current Enstore Installation
  - Production (Stewart Lab)
  - Testing
- Enstore Goals
- Planned IBP / Enstore Installation

Current IBP Installation

- Registered with Public L-Bone
- 1 IBM x335
  - Vlad01: ibp.accre.vanderbilt.edu:6714
  - Vlad02: ibp.accre.vanderbilt.edu:6715
- 1 Infotrend EonStor
  - Vlad01: 1.6 TB
  - Vlad02: 1.4 TB
- Two of the three largest individual depots on the public L-Bone
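For a quick sanity check that both depot instances are listening, something like the following can be run from a campus host. This is only a TCP reachability test of the two ports listed above, not an IBP protocol exchange, and it is not part of the original slides:

# TCP connect test against the two depot ports (no IBP traffic)
nc -z ibp.accre.vanderbilt.edu 6714 && echo "Vlad01 depot reachable"
nc -z ibp.accre.vanderbilt.edu 6715 && echo "Vlad02 depot reachable"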

IBP Scalability Testing
- Depot Testing
  - 326 Active Depots
  - Single File (259 MB)
  - Uploaded to each Depot
  - Results sorted to determine 50 fastest depots
  - Multiple Test Runs to see variations over time
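A minimal sketch of how this survey loop could be scripted, modeled on the bwTest / upload scripts shown later in this deck. The per-depot xndrc files (depot_$i.xndrc), the output locations, and the timing method are assumptions, not part of the original test:

#! /bin/sh
# depotSurvey.sh -- hedged sketch, not the actual ACCRE survey script.
# Assumes one xndrc config per depot (depot_$i.xndrc is hypothetical),
# each pointing lors_upload at a single depot, and the same 259 MB file.
for i in `seq 1 326`; do
    start=`date +%s`
    /klm/lors/bin/lors_upload -o /tmp/test$i.xnd \
        --xndrc ~/bw/depot_$i.xndrc --threads=10 --timeout='5m' \
        --copies=1 --duration=300 /mnt/rd/small.tar > /dev/null 2>&1
    end=`date +%s`
    echo "$i `expr $end - $start`" >> survey.out
done
# Sort by elapsed seconds and keep the 50 fastest depots
sort -n -k2 survey.out | head -50 > fastest50.out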

IBP Scalability Testing
- Campus Capacity
  - AT&T: 45 Mb/s
  - Qwest: 180 Mb/s
  - Sox: 442 Mb/s
- One Workstation
  - 267 MB RAM Disk
  - 2 GB Total RAM
  - Dual CPU (AMD Opteron 246)
  - Gigabit Ethernet
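One way the 267 MB RAM disk could have been set up for the test. The tmpfs mount and the staging copy below are assumptions; only the /mnt/rd mount point and the small.tar file name come from the upload script on a later slide:

# Hedged sketch: create a ~267 MB tmpfs RAM disk and stage the test file.
# The source path ~/bw/small.tar is hypothetical.
mkdir -p /mnt/rd
mount -t tmpfs -o size=267m tmpfs /mnt/rd
cp ~/bw/small.tar /mnt/rd/small.tar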

IBP Scalability Testing
- Ten 259 MB Files (multiple uploads)
- Test Results
  - Short Run
    - Throughput: 49.98 MB/s
    - Duration: 8.6 minutes
    - Used 90% of Internet2
  - Long Run
    - Throughput: 42.26 MB/s
    - Duration: 29.36 minutes
    - Used 76% of Internet2

IBP Scalability Testing

#! /bin/sh
# bwTest.sh
for i in `seq 1 10`; do
    ./upload.sh $i &> upload.out.$i &
done
wait

############

#! /bin/sh
# upload.sh
time /klm/lors/bin/lors_upload -o ~/bw/test$1.xnd --depot-list \
    --xndrc ~/bw/bwTest$1.xndrc --threads=10 --timeout='5m' \
    --copies=1 --none --duration=300 /mnt/rd/small.tar
rm -f ~/bw/test$1.xnd
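Read together: bwTest.sh starts ten copies of upload.sh in parallel and waits for all of them. Each copy runs a single lors_upload with ten threads against the depot list in its own bwTest$1.xndrc file, requests one copy with a 300-second allocation, reads the 259 MB small.tar from the RAM disk, logs the time output to upload.out.$1, and finally deletes the resulting test$1.xnd (exNode) file.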

Python Integration Layer
- Uses Python.h
- Limited IBP changes
- Called on IBP store / load
- Proof of concept to disk
- Next step: integration with encp

Current Enstore Installation: Production (Stewart Lab)

Current Enstore Installation
- Production (Stewart Lab)
  - 2 IBM x335 (1 Enstore Server / 1 Enstore Client)
  - WhiteBox Linux
  - 2.4 Kernel
  - 1 Infotrend EonStor attached to Enstore client
  - 1 Overland Storage PowerLoader
  - 1 SDLT Drive
  - 15 Tapes
  - Total Capacity: 2.4 TB Native (4.8 TB Compressed)
  - Current Usage: 0.81 TB
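The slides do not show how files move in and out of this installation. As orientation only, a hedged sketch of a typical Enstore transfer using encp (the copy client mentioned on the Python slide); the pnfs namespace path /pnfs/accre/stewart and the local file names are hypothetical, not taken from the ACCRE setup:

# Archive a local file to tape through Enstore (paths are hypothetical)
encp /data/stewart/run0001.dat /pnfs/accre/stewart/run0001.dat
# Retrieve the same file back to local disk
encp /pnfs/accre/stewart/run0001.dat /scratch/run0001.dat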

Current Enstore Installation
- Testing
  - 2 IBM x335 (1 Enstore Server / 1 Enstore Client)
  - CentOS Linux
  - 2.6 Kernel
  - 1 Infotrend EonStor
  - 2 Overland Storage Neo-2000
  - 2 LTO-2 Drives each
  - 30 Tapes each
  - Total Capacity: 12 TB Native (24 TB Compressed)

Enstore Goals
- Obtain latest code base
- Simplified installation
- Add ability to mark read-only tapes writable
- Incorporate Vanderbilt changes into CVS

Planned IBP / Enstore Installation

- Private L-Bone
- Multiple IBP Depots around campus
- IBP Depots attached to “cache” mechanism
  - SRM-dCache
  - LoDN-Cache
- Integrated with Enstore for tape archival