Preparation for the TeraGrid: Account Synchronization and High Performance Storage
Bobby House


About the TeraGrid

TeraGrid Integration
- Integration Challenges
  - Accounting
  - Network
  - Security
  - TeraGrid Software
- Local Accounting System Interface
  - Account Creation
  - Tracking User Allocations
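The local accounting interface described above can be sketched in miniature. This is a hedged illustration only: the packet types, field names, and `Project` record here are invented for the example and do not reflect the actual AMIE packet schema or the ACC8 database.

```python
# Minimal sketch (hypothetical field names): applying AMIE-style
# packets against a local allocation table, covering the slide's
# "Account Creation" and "Tracking User Allocations" duties.
from dataclasses import dataclass, field

@dataclass
class Project:
    project_id: str
    allocation_su: float            # service units granted
    used_su: float = 0.0            # service units charged so far
    users: set = field(default_factory=set)

    def remaining_su(self) -> float:
        return self.allocation_su - self.used_su

def apply_packet(projects: dict, packet: dict) -> dict:
    """Dispatch one incoming packet; names are illustrative, not the AMIE spec."""
    kind = packet["type"]
    if kind == "project_create":
        projects[packet["project_id"]] = Project(
            packet["project_id"], packet["allocation_su"])
    elif kind == "account_create":
        projects[packet["project_id"]].users.add(packet["username"])
    elif kind == "usage":
        projects[packet["project_id"]].used_su += packet["charge_su"]
    return projects

projects = {}
for pkt in [
    {"type": "project_create", "project_id": "TG-MCA001", "allocation_su": 10000.0},
    {"type": "account_create", "project_id": "TG-MCA001", "username": "jdoe"},
    {"type": "usage", "project_id": "TG-MCA001", "charge_su": 250.0},
]:
    apply_packet(projects, pkt)
```

In the real system this state lives in the local accounting database and usage is reported back upstream as usage packets; the sketch only shows the bookkeeping shape.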

NCSA/TG Allocations (architecture diagram): AMIE usage and user/project packets flow between the TGCDB and the local accounting system (ACC8) via tgactsync; ReSET and usersync propagate users, projects, and DNs to the service node and the Frost (IBM BG/L) front-end nodes (FEN 1-4), producing /etc/passwd, /etc/project, and grid-mapfile entries; job data flows back as usage packets (DBST, ReSET).
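The last step of the synchronization flow above writes local authorization files, including the grid-mapfile. A minimal sketch of that emission step, assuming synced (DN, local username) records; the record layout and the example DN/user are hypothetical, not the actual usersync output:

```python
# Hedged sketch: emit grid-mapfile lines from synced (DN, username)
# pairs. Each grid-mapfile line maps a quoted certificate DN to a
# local account name.
def grid_mapfile_lines(records):
    return ['"%s" %s' % (dn, user) for dn, user in records]

# Illustrative record; not a real DN from the deployment.
records = [("/C=US/O=NCSA/CN=Jane Doe", "jdoe")]
print("\n".join(grid_mapfile_lines(records)))
```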

High Performance Storage
- Redeploying Bluesky storage
  - Special projects and extra storage for the TeraGrid
  - Redistributed 5 racks of storage for use with Frost
  - Re-architected for performance and high availability
- The FAStT storage system
  - Capacity: TB raw (~20 TB usable)
  - Performance: up to 1 GB/s
  - Design and implementation
  - Full hardware redundancy
  - Full software redundancy with Linux multipath and GPFS
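The software-redundancy layer mentioned above pairs GPFS with Linux multipathing. As a rough illustration of what that configuration looks like, here is a minimal device-mapper multipath fragment; the values are illustrative defaults, not the actual Frost configuration:

```
# Sketch of a multipath.conf fragment (illustrative, not the deployed config)
defaults {
    path_grouping_policy  multibus   # spread I/O across both FC paths
    failback              immediate  # fail back as soon as a path recovers
}
```

With both Fibre Channel paths in one multibus group, the loss of a switch or HBA degrades bandwidth rather than taking a LUN offline, which is what lets GPFS ride through single-component failures.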

Storage fabric (diagram): dual Fibre Channel switches (FC Switch A, FC Switch B), with redundant A/B paths connecting the FENs and Frost to each storage controller.

Continuing Work
- MPI-IO testing and performance analysis:
  - GPFS on Frost storage subsystems (FAStT 500 and FAStT 900)
  - GPFS, Lustre, PVFS2 on a dedicated storage system (Maelstrom)
- Semi-automated performance analysis
  - Benchmark configuration
  - Data collection
  - Plotting and analysis
- Incorporating these efforts into an upcoming paper on high performance storage
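The data-collection step above can be sketched as a small log-reduction pass: scrape per-run bandwidth figures from benchmark output and reduce them to summary statistics for plotting. The log format here is made up for illustration; real MPI-IO benchmark output differs.

```python
# Sketch of the "data collection" step: reduce per-run bandwidth
# samples from a (hypothetical) benchmark log to summary stats.
import re
import statistics

log = """\
run=1 write_MBps=812.4
run=2 write_MBps=798.1
run=3 write_MBps=805.0
"""

# Pull every write-bandwidth sample out of the log text.
samples = [float(m.group(1))
           for m in re.finditer(r"write_MBps=([\d.]+)", log)]
mean = statistics.mean(samples)
peak = max(samples)
print(f"runs={len(samples)} mean={mean:.1f} MB/s peak={peak:.1f} MB/s")
```

Feeding the reduced numbers to a plotting tool is then a separate, equally mechanical step, which is what makes the pipeline semi-automatable.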

Questions?