TeraGrid's Common User Environment: Status, Challenges, Future
Annual Project Review, April 2008

The Pros and Cons of Uniformity

We want uniformity for:
- Things that are visible to users and that users expect to be uniform (e.g., allocations process, remote compute interfaces, user portal)
- Things that make cross-site operations more efficient (e.g., accounting, usage monitoring)

We do not want uniformity for:
- Things that inhibit specialization of service offerings (e.g., system architectures, operational models)
- Things where uniformity does not benefit users

These categories are not mutually exclusive. Our solution is by necessity a balance between uniformity and diversity.

A Taxonomy of Commonality

Adjectives:
- Common: we make it the same everywhere (aka uniform)
- Coordinated: we define a common method, establish goals, and communicate where the common method is available
- Uncoordinated: we make little or no attempt to define, enforce, or communicate commonalities and differences

Nouns:
- User Experience: allocations process, single sign-on, accounting, ticket system, CTSS, user portal, knowledge base, user documentation
- Execution/Runtime Environment: compilers, processors, libraries, command-line tools, shells, file systems
- Environment Discovery/Manipulation Mechanism: capability registry (and other TG-wide catalogs), softenv/modules
- Operational Practices: behind-the-scenes operation of TeraGrid as a distributed system
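The taxonomy lends itself to a data-driven view. Below is a minimal Python sketch, not a TeraGrid artifact, pairing each noun with the adjective the project applies to it; the classifications are paraphrased from the slides that follow.

```python
# Illustrative sketch only: the slide's adjectives applied to its nouns,
# based on the classifications described in the following slides.
COORDINATION_LEVELS = ("common", "coordinated", "uncoordinated")

taxonomy = {
    "user experience": "common",                      # allocations, SSO, portal, docs
    "execution/runtime environment": "coordinated",   # compilers, shells, filesystems
    "environment discovery/manipulation": "common",   # capability registry, softenv
    "operational practices": "coordinated",           # behind-the-scenes operations
}

for noun, adjective in taxonomy.items():
    assert adjective in COORDINATION_LEVELS
    print(f"{noun}: {adjective}")
```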

Community Capabilities

Common definitions:
- Allow the TeraGrid community to define standard capabilities, for example: compilers, remote login mechanism, remote compute mechanism, data movement mechanism, shared filesystems, data collections, metascheduling mechanism
- Provide a terminology for use in directories (what's available where) and agreements (what ought to be available where)
- Also useful in discussions with other service providers (e.g., OSG, EGEE), verification/validation, documentation, and allocation proposals

Common registration:
- A uniform mechanism for TeraGrid service providers to advertise the availability and configuration (what, where, how) of capabilities
  - Supports provider autonomy and automation
  - Mirrors the highly successful WWW/Google publish/index model
- Openly accessible to users, providers, and third parties
  - High-availability service (99.5% uptime) including commercial hosting
  - Supports many access modes (HTML, Web 2.0, Web services)
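To make the registration model concrete, here is a minimal sketch of how a user or third-party tool might query such a registry through one of its Web service access modes. The endpoint URL, JSON encoding, and field names are all hypothetical, not the actual TeraGrid interface.

```python
# A sketch of the "openly accessible" publish/index access pattern the slide
# describes. Endpoint and field names are hypothetical.
import json
from urllib.request import urlopen

REGISTRY_URL = "https://registry.example.org/capabilities"  # hypothetical

def find_capability(name):
    """Return (resource, access details) pairs advertising the named capability."""
    with urlopen(REGISTRY_URL) as response:
        records = json.loads(response.read())
    # Each record is assumed to carry the what/where/how from the slide.
    return [(r["resource"], r["how"]) for r in records
            if r["capability"] == name]

if __name__ == "__main__":
    for resource, how in find_capability("data-movement"):
        print(f"{resource}: {how}")
```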

User Experience

What are the administrative elements of a TeraGrid user's experience?
- Obtaining and managing an allocation
- Tracking usage, accounting system
- User portal features
- TeraGrid documentation
- TeraGrid knowledge base
- TeraGrid-wide credentials and single sign-on
- Help desk (800 number, ticket system)
- User support (consulting and advanced support)
- Education, outreach, training

There has been tremendous consolidation toward common mechanisms in all of the areas above via the GIG, Core, and HPCOPS programs.

Execution/Runtime Environment

The NSF solicitation process encourages diversity:
- The competitive environment rewards specialization and innovation
- There are no standard architectural requirements
- TeraGrid includes N processor types, M vendors, and L unique platforms
- We believe this is the right approach for TeraGrid, given the high rate of innovation in HPC and the diversity of user requirements

TeraGrid does not define or enforce a common (uniform) runtime environment:
- Shell environment, login ID/password, filesystems, compilers, debuggers/profilers, libraries

TeraGrid does provide a coordination mechanism:
- The CTSS application development & runtime support kit focuses on registration (publishing what, where, and how) rather than on commonality
- This information is important to users before, during, and after their allocations
- Automation and autonomy are key to making this work, plus a dash of standardization (schema, controlled vocabulary)
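To illustrate the "dash of standardization" point, the sketch below shows a hypothetical registration record and a check against a controlled vocabulary. The field names, vocabulary terms, hostname, and endpoint are invented for illustration; the real CTSS schema differs.

```python
# A sketch of what a capability registration might contain, and of the role
# a controlled vocabulary plays in making automation possible.

CONTROLLED_VOCABULARY = {"remote-login", "remote-compute", "data-movement"}  # hypothetical terms

registration = {
    "capability": "remote-login",                       # the "what"
    "resource": "example.teragrid.org",                 # the "where" (hypothetical host)
    "endpoint": "gsissh://example.teragrid.org:2222/",  # the "how" (hypothetical)
    "version": "ctss-4",
}

def validate(record):
    """Automation only works if providers publish terms everyone recognizes."""
    if record["capability"] not in CONTROLLED_VOCABULARY:
        raise ValueError(f"unknown capability term: {record['capability']!r}")
    return record

validate(registration)
```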

CTSS: Coordinated TeraGrid Software and Services

In the DTF era, CTSS stood for "Common TeraGrid Software Stack":
- ETF's transition to diverse resources required a change of model, from a software stack to coordinated capabilities

One mandatory capability:
- TeraGrid Core Integration covers the pieces necessary to integrate a resource into the operational environment

Many optional capabilities:
- E.g., data movement, remote login, workflow support
- In practice, most capabilities are ubiquitous and heavily used (e.g., remote login)
- A small number of capabilities are not widely available or not heavily used (e.g., data management)

Open definition process, merit-based deployment process:
- Anyone can propose a capability definition
- Resource providers deploy only what their users need or want
- The result is a merit-based selection/evolution process
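Here is a small sketch of the one-mandatory/many-optional structure described above. The capability names are paraphrased from the slide and the attribute names are invented; this is not the actual CTSS data model.

```python
# Sketch of the CTSS capability catalog described on this slide.
from dataclasses import dataclass

@dataclass
class Capability:
    name: str
    mandatory: bool   # only Core Integration is mandatory

catalog = [
    Capability("core-integration", mandatory=True),
    Capability("remote-login", mandatory=False),     # ubiquitous in practice
    Capability("data-movement", mandatory=False),
    Capability("data-management", mandatory=False),  # not widely deployed
]

def compliant(deployed_names):
    """A resource integrates correctly if it deploys every mandatory capability."""
    required = {c.name for c in catalog if c.mandatory}
    return required <= set(deployed_names)

print(compliant(["core-integration", "remote-login"]))  # True
print(compliant(["remote-login"]))                      # False
```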

Manipulating the Runtime Environment

Common mechanism:
- Since the DTF, TeraGrid has offered softenv to users for manipulating their runtime environment
- The collaboration is currently exploring an alternate mechanism: modules

Provides vital flexibility:
- Resource providers can offer alternate versions of tools and libraries, and users can select the ones they need
- This is a long-standing best practice among Unix hosts

Integration:
- TeraGrid's capability registry details the defaults and how to access alternate capabilities on each system
- The Inca system also uses softenv/modules
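To show how the registry and the two mechanisms fit together, here is a sketch of translating a hypothetical registry record into the command a user would run on each system. The hostnames, package key, and version are invented; `soft add` and `module load` are the tools' usual verbs.

```python
# Sketch: a tool (or the user portal) turning registry data into the
# environment-manipulation command for a given system.

def selection_command(record, package, version):
    """Build the command that selects a software version on one system."""
    if record["mechanism"] == "softenv":
        return f"soft add +{package}-{version}"    # or add the key to ~/.soft
    if record["mechanism"] == "modules":
        return f"module load {package}/{version}"
    raise ValueError(f"unknown mechanism: {record['mechanism']!r}")

# Hypothetical registry entries for two systems offering the same compiler.
systems = [
    {"resource": "alpha.example.org", "mechanism": "softenv"},
    {"resource": "beta.example.org",  "mechanism": "modules"},
]

for s in systems:
    print(s["resource"], "->", selection_command(s, "gcc", "4.1"))
```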

TeraGrid "Coordination"

Definition:
- What are we making common, and for what purpose? Definitions must focus on users and user capabilities.
- What are the key technical requirements? The essential elements needed to support the users.

Registered participation:
- Which parts of the system have this commonality? The places designed to support the target users and use patterns.
- What are the local configuration details? The details needed to use the capability on a specific system.
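The checklist above maps naturally onto a record structure. The following sketch, with all field names and values invented, shows one way to capture a coordinated capability's definition and registered participation together.

```python
# Sketch: the slide's coordination checklist as a record structure.
from dataclasses import dataclass, field

@dataclass
class CoordinatedCapability:
    purpose: str                 # what we make common, and for which users
    requirements: list           # key technical requirements
    participants: dict = field(default_factory=dict)  # resource -> local config

cap = CoordinatedCapability(
    purpose="remote login for users moving between systems",   # illustrative
    requirements=["GSI-based authentication", "documented port"],
    participants={
        "alpha.example.org": {"port": 2222},   # local configuration details
        "beta.example.org":  {"port": 22},
    },
)

print(sorted(cap.participants))  # which parts of the system have this commonality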

Key Moments - Runtime Environment

- First DTF software stack (CTSS 1) defined
  - Common runtime: OS, compilers, libraries, software stack
  - Common mechanism for runtime environment customization
- ETF stresses common software stack
  - Diverse runtime, still attempting a common software stack (painfully)
- Operational TG
  - Common software stack is characterized by many exceptions
- 2006/2007 - Transition to coordination model
  - Software stack model becomes obsolete
  - First formal capability definitions drafted
  - Capability registry deployed and populated by RPs

Key Moments - Environment Discovery/Manipulation

- DTF chooses softenv
  - Common mechanism for describing options and selecting from alternatives within each system
- 2006/2007 - Capability registry added
  - Data about capability availability is now accessible system-wide (local registries, central index)
- 2008? - Is softenv a requirement?
  - Re-examining user requirements

Key Moments - User Experience

- DTF
  - Participation in existing allocation process
  - Common accounting (with technical issues)
  - Mostly separate user support and EOT activities
- ETF
  - Integrated allocation process
  - Better central accounting (still with issues)
- Operational TG
  - Integrated user support
  - Coordinated user documentation
  - Coordinated EOT
  - Addition of User Portal and Knowledge Base

Coordinated Operational Practices

- Usage tracking
- V&V (verification and validation)
- Issue resolution
- Incident response
- Network
- Education/Outreach/Training
- Vulnerability analysis