Katie Antypas, User Services Group, Lawrence Berkeley National Lab. JGI Training Series, 17 February 2012.

Until all users are migrated to NERSC, we plan to hold weekly Friday sessions:
- More on file and data management
- Open office hours
- Review of batch system policies
- Introduction to NIM
(Slide graphic: the former JGI clusters Crius, Rhea, Theia, Kronos?, Hyperion, Oceanus, Iapetus, and Themis.)

On NIM you can change your password, change your shell, and set security questions. Log in to nim.nersc.gov and look under the Actions menu to perform these tasks.

File systems best practices: unfortunately, disk is still expensive, and not all of the JGI's data can be stored on disk within the current budget. Archive and delete data you no longer need. Disk usage will be controlled through quotas in some cases and through purging in others.

Initially, only the “house” file system will be available on both JGI and NERSC systems. (Slide graphic: “house” spans JGI space and NERSC space; the Netapps, some submit hosts, and most web servers sit in JGI space, while the compute cluster and “projectb” sit in NERSC space.) If your data needs to be reachable from both the servers in JGI space and the compute cluster, it MUST go into “house”. In other words, move your data out of the Netapps.

But “house” is 90% full. File systems above 90% capacity perform worse and are at higher risk of failure. We need your help deleting data from “house” and moving data from the Netapps to “house”.
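
To find candidates for deletion or archiving, generic commands like these can help (not NERSC-specific; paths are hypothetical):

> du -sh /house/your/group/dir/*                    # size of each subdirectory
> find /house/your/group/dir -type f -atime +180    # files not read in roughly six months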

NERSC has set up two fast “data transfer nodes” just for JGI users. Log in to dtn03.nersc.gov or dtn04.nersc.gov and type df to see all the mounted file systems. Back up data to HPSS (you authenticated at last week's training; don't remember? Type hsi and then enter your NIM password):

> cd /house/path/to/your/data
> hsi put filename

Or archive an entire directory:

> htar -cvf tarname.tar directory/
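
Retrieval works the same way in reverse, using standard hsi and htar operations (file names hypothetical):

> hsi ls                  # list what you have archived
> hsi get filename        # retrieve a single file
> htar -tvf tarname.tar   # list an archive's contents
> htar -xvf tarname.tar   # extract an archive back to disk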

There are two areas of storage within the “projectb” file system:

/projectb/
  projectdirs/   (group directories: not purged, subject to quota)
    PI/  RD/  fungal/  metagenome/  micro/  plant/  comparative/
  scratch/       (user directories: purged after 12 weeks; 1 TB and 500,000-inode quota; reach yours with cd $SCRATCH)
    user/

Request a projectb directory for your group, or a larger /scratch quota, through the Jira ticket system. (ssh phoebe.nersc.gov)
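
The slides do not show how to check your current usage; generic commands such as these work on most systems (not NERSC-specific):

> du -sh $SCRATCH          # total space used in your scratch directory
> find $SCRATCH | wc -l    # rough file count against the 500,000-inode quota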

Use the fast data transfer nodes to move data between file systems. Log in to dtn03.nersc.gov or dtn04.nersc.gov and type df to see all the mounted file systems. You can move data to three file systems: $HOME, “project”, and “scratch”.

> mv /old/path/filename /new/path/filename
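
For large directory trees, a copy-then-delete pattern is often safer than a bare mv, since an interrupted mv across file systems can leave a partial copy behind (a generic pattern; paths hypothetical):

> rsync -av /old/path/dir/ /new/path/dir/
> rsync -av /old/path/dir/ /new/path/dir/   # re-run to verify: should report nothing left to copy
> rm -rf /old/path/dir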

It is important for every group to come up with a data retention policy. How long should we keep the raw data? Can the data be deleted, or should it be archived? Can we set up an automated way to archive and delete data?
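
For the last question, here is a minimal sketch of an automated policy using the hsi/htar setup from the earlier slide (the path and the 90-day age are hypothetical, and directory names must not contain spaces):

> cd /house/path/to/your/data
> for d in $(find . -mindepth 1 -maxdepth 1 -type d -mtime +90); do
>     htar -cvf "$(basename $d).tar" "$d" && rm -rf "$d"
> done

The && ensures a directory is deleted only if its htar archive was created successfully.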

The JGI compute clusters have been consolidated into Crius with the following shares. (Slide chart: shares for the former Rhea, Theia, Kronos?, Hyperion, Oceanus, Iapetus, and Themis clusters.)
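
If the consolidated cluster uses Grid Engine fair-share scheduling (an assumption; the slides do not name the scheduler), the configured share tree can be inspected with:

> qconf -sstree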

Users should submit jobs to the normal queue. Jobs running longer than 12 hours or requesting large amounts of memory could see longer wait times.
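
Assuming a Grid Engine-style scheduler (again, the slides do not name one), a submission to the normal queue might look like this, with h_rt requesting a 12-hour runtime limit:

> qsub -q normal -l h_rt=12:00:00 myjob.sh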

Useful commands
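
The transcript ends at this slide title; as a recap, drawn only from commands that appeared in the slides above:

> df                          # list mounted file systems
> mv /old/path /new/path      # move data between file systems
> cd $SCRATCH                 # go to your scratch directory
> hsi                         # open an interactive HPSS session
> hsi put filename            # back a file up to HPSS
> htar -cvf name.tar dir/     # archive a whole directory to HPSS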