Eos Center-wide File Systems
Chris Fuson

Presentation transcript:

Slide 1: Eos Center-wide File Systems
Chris Fuson

Outline
1. Available Center-wide File Systems
2. New Lustre File System
3. Data Transfer

Slide 2: Eos Center-wide File Systems
[Diagram: Eos login nodes and compute nodes both mount Spider II (/tmp/work, /tmp/proj); only the login nodes mount NFS (/ccs/home, /ccs/proj, /ccs/sw). Compute nodes cannot see the NFS areas.]

Slide 3: NFS (/ccs)
– OLCF-wide; the same areas are mounted on other OLCF systems
– Backed up
– For frequently used items such as source code, binaries, and scripts
– Not available from compute nodes
– /ccs/home/[userid]: login directory ($HOME); 10 GB default quota
– /ccs/proj/[projid]: sharing within a project; 50 GB default quota
– /ccs/sw/xc30: 3rd-party software (also reachable as /sw/xc30)
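A minimal shell sketch of this intended usage, with [userid], [projid], and the script name as placeholders and plain du used for usage reporting (the exact quota-reporting tool may differ):

  # Keep source and scripts in the backed-up NFS areas and check usage
  # against the default quotas noted above.
  du -sh /ccs/home/[userid]              # $HOME, 10 GB default quota

  # Share build scripts with project members via the project area.
  mkdir -p /ccs/proj/[projid]/scripts
  cp build_app.sh /ccs/proj/[projid]/scripts/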

Slide 4: Spider
– Center-wide scratch space*
– Temporary; not backed up
– Available from compute nodes
– Fast access to job-related temporary files and for staging large files to and from archival storage
– Contains multiple Lustre file systems
* Eos mounts only the new Spider II file system
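Since the scratch areas are Lustre file systems, the standard Lustre lfs utility can be used to inspect them; a minimal sketch, using the /tmp/work link from these slides:

  # Show capacity and usage of the Lustre file system behind the scratch area.
  lfs df -h /tmp/work/$USER

  # Report this user's usage (and any per-user limits) on the same file system.
  lfs quota -u $USER /tmp/work/$USER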

Slide 5: Spider I vs. Spider II

Spider I (widow[1-3]): 240 GB/s aggregate bandwidth, 10 PB capacity, 3 MDS, 192 OSS, 1,344 OST. The current center-wide scratch; not mounted on Eos; scheduled to be decommissioned in early 2014.

Spider II (atlas[1-2]): 1 TB/s aggregate bandwidth, 32 PB capacity, 2 MDS, 288 OSS, 2,016 OST. Currently mounted only on Eos; available on additional OLCF systems soon.

Slide 6: Spider II Change Overview
Before using Spider II, please note the following:
1. New directory structure
   – Organized by project
   – Each project is given a directory on one of the atlas file systems
   – $WORKDIR is now within the project areas; you may have multiple $WORKDIRs
   – Requires changes to existing scripts
2. Quota increases
   – The larger file system allows for increased quotas
3. All areas are purged
   – Helps ensure space remains available for all projects
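A quick sketch of inspecting the new project-organized layout from a login node (the environment variables come from these slides; the per-project subdirectory names depend on your allocations):

  echo $MEMBERWORK          # base of your per-user (Member Work) areas
  echo $PROJWORK            # base of the shared (Project Work) areas

  ls $MEMBERWORK/           # one subdirectory per project you belong to
  ls /tmp/work/$USER/       # the same areas via the /tmp/work link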

Slide 7: Spider II Directory Structure – Member Work
[Diagram: each project's directory contains Member Work, Project Work, and World Work areas]
Purpose: batch job I/O
Paths:
– $MEMBERWORK/[projid]
– $WORKDIR/[projid]
– /tmp/work/[userid]/[projid]
Quota: 10 TB
Purge: 14 days
Permissions:
– Users may change permissions to share within the project
– No automatic permission changes
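Since the Member Work area is meant for batch job I/O, a job script would typically change into it before launching the application. A minimal sketch, assuming a PBS-style scheduler and the Cray aprun launcher on Eos, with [projid] and the executable name as placeholders:

  #!/bin/bash
  #PBS -A [projid]                    # project allocation to charge
  #PBS -l nodes=1,walltime=00:30:00

  # Run from the Member Work area for this project so all job output
  # lands on the high-performance (but purged) Spider II scratch.
  cd $MEMBERWORK/[projid]

  aprun -n 16 ./my_app > run.log 2>&1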

Slide 8: Spider II Directory Structure – Project Work
Purpose: data sharing within the project
Paths:
– $PROJWORK/[projid]
– /tmp/proj/[projid]
Quota: 100 TB
Purge: 90 days
Permissions:
– Read, write, and execute access for project members
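A small sketch of using the Project Work area to hold a dataset shared by all project members ([projid] and the file names are placeholders):

  # Stage a large shared input once where every project member can read it,
  # instead of duplicating it in each user's Member Work area.
  mkdir -p $PROJWORK/[projid]/shared_inputs
  cp big_mesh.dat $PROJWORK/[projid]/shared_inputs/
  ls -l $PROJWORK/[projid]/shared_inputs/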

Slide 9: Spider II Directory Structure – World Work
Purpose: data sharing with users who are not members of the project
Path:
– $WORLDWORK/[projid]
Quota: 10 TB
Purge: 14 days
Permissions:
– Read and execute for world
– Read, write, and execute for project members
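A sketch of publishing results where a collaborator outside the project can read them (file names are placeholders; the world read and execute access is the default noted above):

  # Copy results into the World Work area; users outside the project can
  # read and traverse it, while only project members can write.
  mkdir -p $WORLDWORK/[projid]/results
  cp summary.tar.gz $WORLDWORK/[projid]/results/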

Slide 10: Spider II Directory Structure
New directory structure, organized by project.
[Diagram: each ProjectID directory contains Member Work, Project Work, and World Work areas]

Slide 11: Before Using the New File System on Eos
1. Modify scripts to point to the new directory structure:
   – /tmp/work/$USER (i.e., $WORKDIR)  →  /tmp/work/$USER/[projid], $WORKDIR/[projid], or $MEMBERWORK/[projid]
   – /tmp/proj/[projid]  →  no change required
2. Migrate data:
   – Eos does not mount Spider I (widow)
   – You will need to transfer needed data onto Spider II (atlas)
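One hedged way to update an existing batch script in place (the script name and [projid] are placeholders; [projid] must be replaced with the real project ID before running):

  # Back up the script, point its scratch paths at the new
  # project-organized area, and verify the result.
  cp my_job_script.pbs my_job_script.pbs.bak
  sed -i "s|/tmp/work/$USER|/tmp/work/$USER/[projid]|g" my_job_script.pbs
  grep "/tmp/work" my_job_script.pbs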

Slide 12: Data Migration from Spider I to Spider II
Currently only Eos mounts Spider II (atlas), and Eos does not mount Spider I (widow). Two routes are available:
1. Common file system or archive
   – NFS (/ccs): small amounts of data
   – HPSS: retrieve already-archived data
     » Please do not transfer data to HPSS solely to retrieve it onto Spider II
2. Remote data transfer
   – Copy data from a remote system that mounts Spider I
   – scp: small amounts of data
   – bbcp: multi-streaming; larger amounts of data
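A minimal sketch of the scp route for small amounts of data, run from an Eos login node; dtn04 (a data transfer node that mounts Spider I, as used on the next slide) and the file name are placeholders:

  # Copy a small file from Spider I, via a DTN that mounts it, into the
  # new Spider II project area visible on Eos.
  scp dtn04:/tmp/work/[userid]/small_input.dat /tmp/work/[userid]/[projid]/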

Slide 13: Data Migration from Spider I to Spider II
[Diagram: bbcp transfer from a DTN mounting Spider I (/tmp/work/[userid]/file.tar) to Eos mounting Spider II (/tmp/work/[userid]/[projid]/file.tar)]
From the Eos command line:
$ bbcp -P 2 -v -w 8m -s 16 dtn04:/tmp/work/[userid]/file.tar /tmp/work/[userid]/[projid]/file.tar
Note: environment variables are not available through remote copy utilities; use full paths or the /tmp/work and /tmp/proj links instead.
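For reference, -P 2 prints progress roughly every 2 seconds, -v enables verbose output, -w 8m sets an 8 MB window size, and -s 16 requests 16 parallel streams. After a large transfer it is worth verifying the copy; a sketch with the same placeholder paths:

  # Compare checksums of the source (on the DTN, which mounts Spider I)
  # and the destination (on Spider II).
  ssh dtn04 "md5sum /tmp/work/[userid]/file.tar"
  md5sum /tmp/work/[userid]/[projid]/file.tar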

Slide 14: Questions?
More information: