Contract Year 1 Review Computational Environment (CE) Shirley Moore University of Tennessee-Knoxville May 16, 2002.

13-17 May 2002, Contract Year 1 Review

CE Strategy

Top priorities:
- A computational environment that is consistent, well documented, and easy to use across the SRCs
- Debugging and performance analysis tools that are scalable and easy to use in the SRC environment

Focus on:
- Enabling DoD users to determine what performance they are getting, and to improve that performance, on SRC platforms
- Parallelization strategies and programming practices that enhance application portability across platforms
- Tools and strategies for efficient file management and I/O
- Use of COTS and/or freely available tools; interaction with tool developers on improvements and new tool features
- Training on HPC architectures, tools, and methodologies (Core)

CE Efforts for Contract Year 1

- Recruitment and hiring of the CE onsite, Thomas Cortese
- Establishment of and interaction with the CE User Advisory Panel
- Development of a comprehensive CE training curriculum and coordination of CE training
- Collaborations with tool developers
- Establishment of contacts and working relationships with SRC systems and user support staff

User Contacts and Assistance

- Assisted EQM code developer Victor Parr with parallel I/O and with use of PAPI (ongoing and phone contacts)
- Assisted ARL MSRC user Dale Shires with IBM and SGI architecture and MPI questions (Jan 2002)
- Assisted ARL MSRC user Marshall Cory with ScaLAPACK questions (Feb 2002)
- Assisting USNA user Reza Malek-Malani with scheduling a seminar on cluster computing
- Installed the Vampir-GuideView (VGV) beta version at NAVO MSRC at the request of CWO onsite Tim Campbell (March 2002)

Tools Introduced

- Repository in a Box (RIB) toolkit, used for ERDC MSRC CTA repositories
- PAPI, cross-platform interface to hardware performance counters
- Vampir-GuideView (VGV), combined MPI/OpenMP performance analysis tool (beta version for evaluation)
- MPE Logging/Jumpshot, freely available MPI performance analysis tool (under evaluation)
- TAU, for MPI and/or OpenMP program analysis (under evaluation)
- Vprof, basic block profiler (under evaluation)

CE Training

- Advanced MPI (advanced), ERDC MSRC, Sep 2001, David Cronk (UTK)
- Introduction to MPI (beginning), ASC MSRC, Jan 2002, David Cronk (UTK); 6 attendees, 40 CDs distributed; evaluation: 4.2/5.0
- MPI Tips and Tricks (intermediate), Fort Monmouth, 30 Jan - 1 Feb 2002, David Cronk (UTK); 10 attendees
- Compaq AlphaServer SC System, ERDC MSRC, 31 Jan - 1 Feb 2002, David Ennis (OSC)
- Advanced MPI (advanced), ARL MSRC, Mar 2002, David Cronk (UTK); 13 attendees; evaluation: 10 Excellent, 3 Good

CE Training (cont.)

- Cross-Platform Performance Analysis Tools (intermediate), ERDC MSRC, Mar 2002, Shirley Moore (UTK); 7 attendees; evaluation:
- C Programming, AEDC, Mar 2002, David Ennis (OSC)
- Performance Optimization for Vector Processors, NAVO MSRC, 3-4 Apr 2002, James Giuliani (OSC)
- Debugging Parallel Code Using TotalView, NAVO MSRC, 7-9 May 2002, David Cronk and Thomas Cortese (UTK)

Conference Presentations

- David Cronk, "MPI-I/O for EQM Applications", DoD HPC UGC 2001, Biloxi, MS, June 2001
- David Cronk, "Metacomputing Support for the SARA3D Structural Acoustics Application", DoD HPC UGC 2001, Biloxi, MS, June 2001
- Shirley Moore, David Cronk, Kevin London, and Jack Dongarra, "Review of MPI Performance Analysis Tools", EuroPVM/MPI 2001, Santorini, Greece, April 2002 (rescheduled from September 2001)
- Shirley Moore, "A Comparison of Counting and Sampling Modes of Using Performance Monitoring Hardware", International Conference on Computational Science (ICCS 2002), Amsterdam, April 2002

CE009: A Consistent, Well-Documented Computational Environment

Collaborate with SRC systems and user support staff to implement, document, and support a consistent computational environment across the SRCs.

Deliverables:
- Information in SRC user guides and/or the OKC about all installed components of the computational environment
- Checklists for testing tool installation
- Traveling onsite support to SRCs
- Workshops to evaluate new tool technologies (ready to schedule)
- Quarterly report on the status of the computational environment at the SRCs

Explanation for yellow status: change in plans from commercial to freely available tools due to budget constraints; OKC not yet in production.

$207,756; 1 Oct 2001 - 30 Sep 2002
PI: Shirley Moore

CE010: PAPI Deployment, Evaluation, and Extensions

Deploy and support the PAPI cross-platform interface to hardware performance counters on all SRC platforms. Investigate the accuracy of hardware counter data. Implement memory utilization extensions.

Deliverables:
- Installation, testing, and documentation of PAPI and related tools (TAU, Vprof) on all MSRC platforms (Compaq Alpha substrate in progress)
- Microbenchmarks for measuring the accuracy of hardware performance data (analyzing data and devising additional benchmarks)
- Design of memory utilization extensions and implementation on ASC platforms (in design phase)

$235,064 (UTK, UTEP, PSC); 1 Oct 2001 - 30 Sep 2002
PI: Shirley Moore
MSI participation: Patricia Teller, UTEP

CE012: Metacomputing Support for SIP Image Processing

Use NetSolve to improve the performance and scalability of Tannenbaum's image segmentation algorithm. Previous work: SARA3D. Also looking at XPatch.

Deliverables:
- Implementation of portions of Tannenbaum's algorithm as NetSolve services
- Implementation of coarse-grained parallelism
- Persistent storage of intermediate results
- Deployment on a grid computing testbed (must satisfy security requirements)

$70K; 1 Oct 2001 - 30 Sep 2002
PI: Shirley Moore

CE019: SPMD Collective Communication Model

Develop an Open Source Fortran 90 module for the most commonly used SPMD collective communication operations.

Deliverables:
- An API for SPMD collective communications based on a Fortran 90 module (specification and documentation started)
- Open Source reference implementation of the module for MPI (near completion)
- Open Source reference implementation of the module for SHMEM (not started)
- Test and timing suite (near completion)

$74,923; 1 Mar 2002 - 31 Jan 2003
PI: Timothy Kaiser, Ph.D., Coherent Cognition

Core Financial Summary

Staffing

Onsite CE position at NAVO MSRC filled in February 2002: Dr. Thomas Cortese

Summary

Strategic priorities determined by the CE User Advisory Panel (ranked from 1 to 4, 1 being highest):
- Performance evaluation (1.7)
- Data management and I/O (1.7)
- Application portability (1.7)
- Consistent computational environment (2.1)
- Documentation and user support (1.7)
- Debugging (2.3)
- Dynamic monitoring and control of application execution (2.4)
- Grid or "meta"-computing (2.9)

The above priorities are being addressed by CE core support (including training) and by current and planned projects.

Backup

Metrics - Technical (FYI: do not include in presentation)

Metrics - Programmatic (FYI: do not include in presentation)