CTSS 4 Strategy and Status

General Character of CTSSv4

To meet project milestones, CTSS changes must accelerate in the coming years.

Process
– Process will be the focus of CTSSv4.
– Significant changes in who and how, not so much in what.
– Process changes now will enable us to more effectively manage content changes in the future.

Content
– Newer component versions that include features we need
– More allowable versions
– Support for more platforms

CTSS 4 Process Goals

Change the focus from software packages to capabilities.
– Software should be deployed to meet user capability requirements, not to satisfy abstract agreements.
– Which capabilities ought to be coordinated, and why?

Be explicit about which capabilities are expected to be on which systems.
– The CTSS core (mandatory capabilities) is radically smaller.
– Each RP explicitly decides which additional capabilities it will provide, based on the intended purpose of each system.

Make the process of defining CTSS more open and inclusive, and more reflective of the TeraGrid project structure (GIG + multiple RPs, working groups, RATs, etc.).
– GIG/RP working groups and areas have an open mechanism for defining, designing, and delivering CTSS capabilities.
– Expertise is distributed, so the process should be distributed as well.

Improve coordination significantly.
– Changes are coordinated more explicitly with more TeraGrid sub-teams.
– Each sub-team has a part in change planning.

CTSS 4 Strategy

– Break the CTSS monolith into multiple capability modules.
– Employ a formal change planning process.

CTSS “Kits”

Reorganize CTSS into a series of kits.
– A kit provides a small set of closely related capabilities (job execution service, dataset hosting service, high-performance data movement, global filesystem, etc.).
– A kit is, as much as possible, independent of other kits.

Each kit includes:
– a definition of the kit that focuses on purpose, requirements, and capabilities, including a problem statement and a design;
– a set of software packages that RP administrators can install on their system(s) in order to implement the design;
– documentation for RP administrators on how to install and test the software;
– Inca tests that check whether a given system satisfies the stated requirements;
– SoftEnv definitions that allow users to access the software.
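The five elements of a kit listed above could be captured in a small manifest. The following is a minimal sketch, not an actual CTSS format: the field names and the example data-movement kit (its package names, file paths, and SoftEnv key) are all illustrative assumptions.

```python
# Sketch of a kit manifest mirroring the five kit elements above.
# All field names and values here are illustrative assumptions.
REQUIRED_FIELDS = {"definition", "packages", "admin_docs", "inca_tests", "softenv_keys"}

data_movement_kit = {
    "name": "data-movement",  # hypothetical kit name
    "definition": "High-performance data movement between TeraGrid resources",
    "packages": ["gridftp-server", "gridftp-client"],
    "admin_docs": "docs/data-movement-install.html",
    "inca_tests": ["gridftp-transfer-test"],
    "softenv_keys": ["+gridftp"],
}

def validate_kit(kit):
    """Return the required manifest fields the kit is missing, sorted."""
    return sorted(REQUIRED_FIELDS - set(kit))

print(validate_kit(data_movement_kit))  # → []
```

A manifest like this would let the software WG check mechanically that every kit ships all five elements before it is offered to RPs.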

The “Core” Kit

Provides the capabilities that are absolutely necessary for a resource to meet the most basic integrative requirements of the TeraGrid:
– Common Authentication, Authorization, Auditing, and Accounting capabilities
– A system-wide registry of capabilities and service information
– A Verification & Validation mechanism for capabilities
– System-wide Usage Reporting capabilities

This is much smaller than the current set of “required” CTSSv3 components. Unlike the other capability kits, the Core Kit is focused on TeraGrid operations, as opposed to user capabilities.

Core Kit Provides Integrative Services

Authentication, Authorization, Auditing, Accounting Mechanisms
– Supports TeraGrid allocation processes
– Allows coordinated use of multiple systems
– Supports TeraGrid security policies
– Goal: forge a useful link between campus authentication systems, science gateway authentication systems, and TeraGrid resources

Service Registry
– Goal: provide a focal point for registering the presence of services and capabilities on TeraGrid resources
– Goal: support documentation, testing, automatic discovery, and automated configuration for distributed services (tgcp)

Verification & Validation
– Independently verifies the availability of capabilities on each resource
– Goal: focus more clearly on the specific capabilities each resource is intended to offer

Usage Reporting
– Goal: support the need to monitor and track usage of TeraGrid capabilities

CTSS Capability Kits

Each CTSS capability kit is an opportunity for resource providers to deploy a specific capability in coordination with other RPs.
– Focal point for collecting and clarifying user requirements (via a RAT)
– Focal point for designing, documenting, and implementing a capability (via a WG)
– Focal point for deploying the capability (via the software WG)

RPs can explicitly decide and declare which capabilities they intend to provide on each resource.
– What is appropriate for each resource?
– What is the RP’s strategy for delivering service to the community?

TeraGrid’s system-wide registry tracks which CTSS capabilities are provided by each resource.
– By declaration, by registration, and by verification
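The declaration/registration/verification progression above amounts to three per-resource sets, and the registry's job is to expose the gaps between them. A hedged sketch, with the resource name and capability names invented for illustration:

```python
# Sketch: track declared, registered, and verified capabilities per resource.
# The resource hostname and capability names are invented for illustration.
registry = {
    "bigben.example.teragrid.org": {
        "declared":   {"remote-compute", "remote-login", "data-movement"},
        "registered": {"remote-compute", "remote-login", "data-movement"},
        "verified":   {"remote-compute", "remote-login"},
    },
}

def unverified(resource):
    """Capabilities a resource declares but has not yet verified."""
    entry = registry[resource]
    return sorted(entry["declared"] - entry["verified"])

print(unverified("bigben.example.teragrid.org"))  # → ['data-movement']
```

In this shape, "which capabilities are on which systems" becomes a query rather than a standing agreement, which is the point of the registry.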

CTSS Capability Kits

Kits may be defined, designed, implemented, packaged, documented, and supported by a broad range of people:
– RATs
– working groups
– GIG areas
– resource providers
– other communities

The key feature of a CTSS capability is that its deployment is coordinated among RPs.

CTSS 4 Capability Kits

Led by the GIG SI team and the Software Working Group:
– TeraGrid Core Capabilities
– Application Development & Runtime
– Remote Compute
– Remote Login
– Science Workflow Support

Led by the GIG DIVS team and the Data Working Group:
– Data Management
– Data Movement
– Wide Area Filesystem

CTSS 3 Mapped to Capability Kits

Led by the GIG SI Team and Software WG:
– TeraGrid Core: AMIE Resource Toolkit, gx-map, Inca, Pacman, MDS4 Index, tg-policy, tgresid
– App Devel & Runtime: Ant, BLAS, gcc, Globus clients & libs, gsissh client, HDF4, HDF5, HIS, Intel compiler & MKL, Java, MPICH-G2, MPIs (local), PHDF5, Python, SoftEnv, SRB client, Tcl, TGCP, XLF
– Remote Compute: Pre-WS GRAM, WS GRAM
– Remote Login: MyProxy client, SSH/GSISSH, tgusage, UberFTP
– Science Workflow Support: Condor, GridShell, Condor-G

Led by the GIG DIVS Team and Data WG:
– Data Management: RLS
– Data Movement: GridFTP, GridFTP SRB, RFT
– Wide Area Filesystem: GPFS

Change Coordination Process

Motivation
– The new CTSS kit structure results in more potential sources of change.
– Scaling of resources (in both number and diversity) results in more potential points of confusion and coordination failure.
– In general, we’d like to do this better.

Goals
– Clarity of purpose for changes
– Help with documentation
– Help in identifying points requiring coordination
– Tracking of deployment steps and progress
– Easy to use for small changes, helpful for large changes

Change Coordination Structure

Change Description Data Sheet
– Collects the basic facts about a proposed change
– Provides everyone with the information needed to understand what is planned and who is involved, and to identify potential risks
– Will provide a record of changes

Change Planning Checklist
– Helps the planning team brainstorm about the necessary points of coordination
– Provides opportunities for recording the coordination plans for each sub-team (docs, user services, RP admins, security, etc.)
– Helps get coordination started early
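The Change Description Data Sheet could be modeled as a simple record whose fields must all be filled before a change proceeds. This is a sketch only: the field names below are assumptions about what "basic facts" covers, not the actual data sheet, and the example change is invented.

```python
# Sketch of a Change Description Data Sheet as a simple record.
# The field names are illustrative assumptions, not the actual data sheet.
DATA_SHEET_FIELDS = [
    "change_title",    # what is being changed
    "proposer",        # who is proposing it
    "affected_kits",   # which capability kits are touched
    "affected_teams",  # sub-teams needing coordination (docs, security, ...)
    "risks",           # potential risks identified up front
    "schedule",        # planned deployment window
]

def missing_fields(sheet):
    """List data-sheet fields left empty, to flag incomplete proposals."""
    return [f for f in DATA_SHEET_FIELDS if not sheet.get(f)]

sheet = {
    "change_title": "Update Remote Compute kit (invented example)",
    "proposer": "Software WG",
    "affected_kits": ["remote-compute"],
}
print(missing_fields(sheet))  # → ['affected_teams', 'risks', 'schedule']
```

An incomplete-fields check like this is what makes the data sheet "easy to use for small changes" while still forcing the coordination questions to be asked for large ones.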

(Introduce Examples)

A Note on Deployment Schedules

Each kit can have its own schedule.
– The Software WG will serve as the coordination point for schedules (load management, schedule conflicts, etc.).
– The Software WG will also manage dependency issues between kits.

The expectation is that each kit rollout (updates, etc.) will follow the change management process.
– Coordination with RPs, the docs team, user services, operations, etc.
– Staff and friendly-user testing

A Note on Documentation

The kit design results in early documentation:
– a coherent story about the kit’s capabilities;
– what it is and why we have it;
– who it is for.

The documentation team can use this to build plans for user documentation. User services can use it to build plans for testing.
– Test plans can be created before the software is deployed.
– Work can begin on identifying friendly-user candidates.

The change coordination process reinforces this.

A Note on Inca

Inca tests only the capabilities that are provided by each resource.
– Which capabilities are declared for each resource?
– Which capabilities are registered on each resource?

Inca can provide a piece of the system-wide registry.
– Which capabilities have been verified on each resource?

Inca tests are provided by kit owners, so they are more closely linked to the intended capabilities.
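The key behavior here — run verification only for what a resource declares, and skip everything else — can be sketched generically. This is not Inca's actual reporter API; the capability names and stand-in test commands below are invented for illustration.

```python
# Sketch: run verification tests only for declared capabilities.
# Capability names and test commands are invented stand-ins, not Inca reporters.
import subprocess
import sys

TESTS = {
    "remote-login":   [sys.executable, "-c", "print('login check ok')"],
    "data-movement":  [sys.executable, "-c", "print('transfer check ok')"],
    "remote-compute": [sys.executable, "-c", "raise SystemExit(1)"],  # simulated failure
}

def verify(declared):
    """Run the test for each declared capability; undeclared ones are skipped."""
    results = {}
    for cap in declared:
        if cap not in TESTS:
            results[cap] = "no-test"
            continue
        proc = subprocess.run(TESTS[cap], capture_output=True)
        results[cap] = "verified" if proc.returncode == 0 else "failed"
    return results

if __name__ == "__main__":
    print(verify(["remote-login", "remote-compute"]))
```

Because each result is keyed by capability rather than by package, the "verified" column of the registry falls out of the test run directly.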

GIG Software Integration Team Role

Changing from sole integrator to integrator plus integration service provider.

GIG SI still has ownership of several kits. For other kit owners, we provide services:
– consultation;
– published guidelines and recommendations;
– relationships with software developers (Globus, VDT, NMI, etc.);
– a multiplatform software build (and testing) service;
– a multiplatform packaging service;
– a software clearinghouse (website, CVS, Pacman cache, etc.).

CTSS 4 Timeline - June

Develop the list of capability kits that spans CTSS 3.
– What distinct capabilities are provided by CTSS?
– Which kit should each CTSS 3 component belong to?

This work has been done.

CTSS 4 Timeline - July

Assign each capability kit to a team.
– Done (see previous slide)

Define capability kit purposes.
– What capabilities does each kit provide?
– In progress (delayed)

Identify kits that will be updated sooner rather than later (new versions, etc.).
– In progress

Draft the change coordination process and test it.
– In progress

CTSS 4 Timeline - August

Draft change plans for kits requiring updates.
– Coordination documentation delivered

Begin executing change plans for updated kits.
– Begin implementing packages
– Begin test plans, documentation plans, security reviews…
– Decide which resources will have which capabilities
– Schedule deployment activities with each RP