
Common User Environments – Update
Shawn T. Brown, PSC
CUE Working Group Lead
TG Quarterly

Team Members
Shawn Brown (PSC, Lead)
Kevin Colby (Purdue)
Dan Lapine (NCSA)
David McWilliams (NICS)
Derek Simmel (PSC)
Rich Raymond (PSC, Managing Lead)
Jerry Greenberg (SDSC)
Roberto Gomez (PSC)
John Lockman (TACC)
Jim Lupo (LONI)
Diana Diehl (SDSC, TG Documentation, volunteer)

Philosophy
Create commonality without destroying diversity.
Focus on user requirements and experience.
We are not developing a gateway.
We are not catering to hero users.

TeraGrid Resources
CUED (CUE – Documentation): a centrally located, clearly itemized area for documentation of resources, with both web- and CLI-based access.
CUEMS (CUE Management System): a single common command-line system for managing one's environment, with a single entry point to load the CUE.
CUETP (CUE – Testing Platform): a simple program or set of programs that can be compiled and executed through the CUE and that helps illustrate its use.
CUBE (Common User Build Environment): an effort to make the tools needed for building usable scientific code common across resources.
CUEVC (CUE Variable Collection): a set of environment variables common across the TeraGrid, making job submission and resource discovery easier.


How did we proceed?
Targeted RP "liaisons" to work on implementation.
Developed implementation documents outlining the "rules" of the implementation, in consultation with:
  RP liaisons
  the Software Integration working group
  Campus Champions
Worked to implement the CUEMS and CUEVC portions on current TG machines.

The Machines We are Working With
Abe, Queen Bee, Steele, Lonestar, Ranger, Kraken, Pople, Dash, and future systems.

CUEMS – Environment Management
Implementation of the Modules software environment manager on all systems.
Five basic modules:
  cue-login-env – contains the CUEVC definitions for environment variables
  cue-math – a wrapper for the modules cue-mkl, cue-fftw, cue-lapack, and cue-scalapack
  cue-build – a wrapper for the module cue-compile
  cue-comm – a wrapper for the default MPI stack
  cue-tg – contains already-defined TG variables for the site
Application modules: cue-namd, cue-gamess, cue-hdf5, etc.
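As a minimal sketch of how this looks in a session (the module commands below are standard Modules operations; which optional cue-* modules exist on a given system is an assumption beyond the list above):

    # Show what is available; the cue-* modules should appear alongside site modules.
    module avail

    # Load the CUE entry point, then the wrappers you need.
    module load cue-login-env     # defines the CUEVC environment variables
    module load cue-math          # pulls in cue-mkl, cue-fftw, cue-lapack, cue-scalapack
    module load cue-comm          # wraps the site's default MPI stack

    # Verify the resulting environment.
    module list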

CUEVC – Variable Collection

Proposed CUE Variable Collection:

CUE_HOME
  Definition: Path to the current user's home directory, visible on login nodes and compute nodes.
  Example values: /usr/users/0/janedoe, /nics/j/home/janedoe, /home/ncsa/janedoe, /home/janedoe

CUE_DOCS
  Definition: URL for documentation specific to the current system.
  Example value: http:// en.php

CUE_APPS
  Definition: Path to the directory on the current system containing common software applications.
  Example values: /usr/local/apps, /sw/xt5, /usr/local/packages/tg, /software/linux-rhel4-ia64

CUE_COMMUNITY
  Definition: Path to the directory containing subdirectories for specific user communities, in which their applications are installed.
  Example values: /usr/projects, /usr/local/packages/tg, /soft/community

CUE_EXAMPLES
  Definition: Path to the directory containing example files for user tools.
  Example values: /usr/local/packages/tg/examples, /usr/local/examples, /soft/community/examples

CUE_NODE_SCRATCH
  Definition: Path on a compute node to local scratch file space for that node (not necessarily visible to other compute nodes); scratch filesystems local to the node may be deleted upon job completion.
  Example values: /scr, /lustre/scratch/johndoe, /bessemer/johndoe

CUE_NODE_SCRATCH_TYPE
  Definition: Filesystem type of the node-local scratch filesystem.
  Example values: lustre, ext3, gpfs, posix

CUE_SCRATCH
  Definition: Path to the user's scratch directory on a shared filesystem visible to all compute nodes.
  Example values: /gpfs_scratch1/janedoe, /lustre/scratch/janedoe, /scratcha/janedoe, /scratch/gpfs/local/janedoe

CUE_SCRATCH_TYPE
  Definition: Filesystem type of the scratch filesystem visible to all compute nodes.
  Example values: lustre, ext3, gpfs, posix
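To show the intended payoff, here is a hypothetical, portable job-script fragment built only from CUEVC variables; the application name, input files, and directory layout are placeholders, not part of the proposal:

    #!/bin/bash
    # Hypothetical portable fragment: every path comes from CUEVC variables,
    # so the same script runs unchanged on any CUE system.

    # Stage input from the shared examples area into shared scratch.
    WORKDIR="$CUE_SCRATCH/myrun"
    mkdir -p "$WORKDIR"
    cd "$WORKDIR"
    cp "$CUE_EXAMPLES"/namd/* .        # placeholder example directory

    # Per-node temporary files go to node-local scratch when it exists.
    export TMPDIR="${CUE_NODE_SCRATCH:-$WORKDIR}"

    # Community-installed application, found without hard-coding a site path.
    "$CUE_COMMUNITY"/myapp/bin/myapp input.conf > output.log

    # Results go back to the home directory, which is visible everywhere.
    cp output.log "$CUE_HOME"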

CUEMS – Environment Management: Current Policy
Opt-in approach – provide users a clear and simple procedure for making the CUE their default environment:
  .nosoft – tells the system that you want Modules as your default environment management.
  .modules – contains commented-out cue modules that can be enabled at login.
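A sketch of the opt-in steps under this policy, assuming both dotfiles live in the user's home directory and that creating .nosoft is sufficient to select Modules (the exact site semantics may differ):

    # Tell the system to use Modules rather than SoftEnv as the default manager.
    touch ~/.nosoft

    # Enable CUE modules at login by uncommenting the corresponding lines,
    # e.g. "# module load cue-login-env" becomes "module load cue-login-env".
    ${EDITOR:-vi} ~/.modules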

CUED – Documentation
Working with the documentation group to add Modules documentation to TG Docs.
A getting-started guide on how to activate Modules.

Rolling out
Announce to the TG User Services group at its next meeting – ask for feedback and testing.
Ask Campus Champions to test out the implementation.
Incorporate into the QA testing procedures – already underway; the current implementation: the Jerry Test.
Announcement and opening to the public.

Not stopping…
Discussion of common queue names.
Continued work on CUED incorporation.
Finish fitting this into the TG SW Integration Kits – Derek Simmel (PSC).