Nick LeRoy & Jeff Weber Computer Sciences Department University of Wisconsin-Madison Managing Storage with NeST

Overview of NeST
› NeST: Network Storage Technology
› Lightweight: configuration and installation can be performed in minutes.
› Multi-protocol: supports Chirp, GridFTP, NFS, HTTP
  • Chirp is NeST's internal protocol
› Secure: GSI authentication
› Allocation: NeST negotiates "mini storage contracts" between users and server.
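To make the multi-protocol point concrete, here is a minimal sketch of reading the same file from a NeST server over two of those protocols. The host, port, and paths are hypothetical placeholders; globus-url-copy is the standard Globus transfer client.

    # Hypothetical host/port/paths -- adapt to a real NeST deployment.
    import subprocess
    import urllib.request

    # Plain HTTP read (where server policy permits anonymous access):
    urllib.request.urlretrieve(
        "http://nest.example.edu:8080/data/results.dat", "results.dat")

    # GSI-authenticated GridFTP transfer via the standard globus-url-copy tool:
    subprocess.run(
        ["globus-url-copy",
         "gsiftp://nest.example.edu/data/results.dat",
         "file:///tmp/results.dat"],
        check=True)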

Why storage allocations?
› Users need both temporary storage and long-term guaranteed storage.
› Administrators need a storage solution with configurable limits and policy.
› Administrators will benefit from NeST's autonomous reclamation of expired storage allocations.

Storage allocations in NeST
› Lot: the abstraction for a storage allocation, with an associated handle
  • The handle is used for all subsequent operations on the lot
› A client requests a lot of a specified size and duration; the server accepts or rejects the request.

Lot types
› User / Group
  • User: single user (the user controls the ACL)
  • Group: shared use (users control the ACL)
› Best effort / Guaranteed
  • Best effort: the server may purge data if necessary; a good fit for derived data.
  • Guaranteed: the server honors the requested duration.
› Hierarchical: lots within lots ("sub-lots")
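Pulling the previous two slides together, here is a self-contained sketch of the request/accept negotiation with the type choices expressed as request parameters. Every name in it (request_lot, the enums, the fields) is invented for illustration and is not NeST's actual client API.

    # Invented, self-contained sketch of lot negotiation -- not NeST's real API.
    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional
    import uuid

    FREE_MB = 10_000  # stand-in for the server's free space

    class Owner(Enum):
        USER = "user"     # single user controls the ACL
        GROUP = "group"   # shared use; users control the ACL

    class Durability(Enum):
        BEST_EFFORT = "best-effort"  # server may purge data if necessary
        GUARANTEED = "guaranteed"    # server honors the requested duration

    @dataclass
    class Lot:
        handle: str       # names this lot in every subsequent operation
        size_mb: int
        duration_s: int
        owner: Owner
        durability: Durability
        parent: Optional[str] = None  # set when carving a sub-lot out of a lot

    def request_lot(size_mb, duration_s, owner, durability, parent=None) -> Lot:
        """The server accepts (returning a handle) or rejects the request."""
        # Only guaranteed lots hard-reserve space in this toy model;
        # best-effort requests are always accepted.
        if durability is Durability.GUARANTEED and size_mb > FREE_MB:
            raise RuntimeError("lot request rejected: insufficient space")
        return Lot(str(uuid.uuid4()), size_mb, duration_s,
                   owner, durability, parent)

    lot = request_lot(500, 24 * 3600, Owner.GROUP, Durability.GUARANTEED)
    print("granted lot", lot.handle)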

Lot operations
› Create, Delete, Update
› MoveFile
  • Moves files across lots
› AddUser, RemoveUser
  • Lot-level access control
  • List of users allowed to request sub-lots
› Attach / Detach
  • Performs NeST lot-to-path binding
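Continuing the invented sketch above, the stubs below mirror the operation names on this slide; they are illustrative placeholders, not real NeST calls. In a real deployment each call would be a chirp request to the server, naming the lot's handle.

    # Illustrative stubs keyed by lot handle -- invented names, not real API.

    def attach(handle: str, path: str) -> None:
        """Bind the lot to a filesystem path (the lot-to-path binding)."""
        print(f"attach lot {handle} -> {path}")

    def add_user(handle: str, user: str) -> None:
        """Lot-level access control: admit a user (who may request sub-lots)."""
        print(f"grant {user} access to lot {handle}")

    def move_file(src: str, dst: str, filename: str) -> None:
        """Move a file's accounting from one lot to another."""
        print(f"move {filename}: lot {src} -> lot {dst}")

    handle = "example-handle"  # would come from request_lot() in the sketch above
    attach(handle, "/nest/experiments/run42")
    add_user(handle, "alice")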

Functionality: GT4 GridFTP
[Diagram: a sample application invokes globus-url-copy, which speaks GSI-FTP to the GT4 GridFTP server; the server's disk module reads and writes the disk storage.]

Functionality: GridFTP + NeST
[Diagram: globus-url-copy performs file transfers over GSI-FTP to the GridFTP server; the server's NeST module relays them over chirp to the NeST server, which manages the disk storage. A separate NeST client issues lot operations, etc. over chirp to the NeST server's chirp handler.]

GridFTP with NeST (cont.)
[Diagram: a single custom application drives both sides, performing file transfers over GSI-FTP through the GT4 GridFTP server's NeST module and issuing lot operations, etc. over chirp directly to the NeST server that owns the disk storage.]

Stork NeST Sample Work DAG
[Diagram: a Stork/Condor DAG against a NeST server: Allocate (lot), Xfer In, Job, Xfer Out, Release; Stork manages the data-placement nodes and Condor runs the job.]
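As a toy rendering of this workflow's shape (node names taken from the slide), the Python below lists the dependencies and derives the run order. In practice the ordering would live in a Condor DAGMan file, with Stork handling the data-placement nodes, not in Python.

    # Toy model of the slide's DAG; edges read parent -> child.
    from collections import deque

    dag = {
        "allocate_lot": ["xfer_in"],      # Stork: reserve a lot at the NeST server
        "xfer_in":      ["job"],          # Stork: stage input data into the lot
        "job":          ["xfer_out"],     # Condor: run the compute job
        "xfer_out":     ["release_lot"],  # Stork: stage output data back out
        "release_lot":  [],               # Stork: give the storage back
    }

    def topological_order(dag):
        """Kahn's algorithm: a run order that respects the dependencies."""
        indeg = {n: 0 for n in dag}
        for children in dag.values():
            for c in children:
                indeg[c] += 1
        queue = deque(n for n, d in indeg.items() if d == 0)
        order = []
        while queue:
            n = queue.popleft()
            order.append(n)
            for c in dag[n]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    queue.append(c)
        return order

    print(topological_order(dag))
    # ['allocate_lot', 'xfer_in', 'job', 'xfer_out', 'release_lot']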

Release Status
› v0.9.7 expected soon
  • v0.9.7 Pre 1 just released, and in the VDT
  • Just bug fixes as found -> v0.9.7
› v1.0 expected later this year
› Currently supports Linux; other operating systems will be supported in the future.

Roadmap
› Performance testing with Stork
› Continue hardening the code base
› Expand supported platforms
  • Solaris & other UNIXen
› Add an SRM front-end
› Bundle with Condor

Questions?
› Demo on Wednesday
  • Room 4289, CS building, 1:00pm – 4:00pm
  • More information available at 