Data, Visualization and Scheduling (DVS)
TeraGrid Annual Meeting, April 2008
Kelly Gaither, GIG Area Director, DVS
Data
The TeraGrid handles data through a combination of activities in data movement, data management, and data collections.
Deployed both the data movement and GPFS-WAN kits.
All activities are coordinated through two working groups:
– Data Working Group (led by Chris Jordan – was at SDSC, now at TACC)
– Data Collections Working Group (led by Natasha Balac – SDSC)
Data Movement
Deployed new GridFTP services, including instrumentation for improved monitoring in support of efforts in the Operations group.
The data movement performance group at PSC performed significant testing of GridFTP and RFT releases (base functionality and new features).
Worked with Science Gateways/LEAD/Globus on improving GridFTP reliability on striped and non-striped servers.
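As an illustration only (not taken from the slides), the kind of striped, parallel GridFTP transfer tested here is typically driven with the Globus Toolkit's globus-url-copy client; the hostnames and paths below are placeholders:

```shell
# Placeholder hosts/paths; -p sets parallel TCP streams, -stripe requests
# a striped transfer across the server's data nodes, -tcp-bs sets the TCP
# buffer size, and -vb prints performance markers for monitoring.
globus-url-copy -vb -p 4 -stripe -tcp-bs 4M \
    gsiftp://source.example.org/scratch/input.dat \
    gsiftp://dest.example.org/scratch/input.dat
```

This is a command-line fragment requiring live GridFTP endpoints, so it is shown for orientation rather than as a runnable script.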
Data Management
Completed the storage expansion and redesign of the GPFS-WAN service.
Expanded GPFS-WAN production availability to include IU BigRed and NCAR Frost.
Continued work on Lustre-WAN, including initial testing of cross-site authentication.
Explored a TeraGrid-wide licensing structure for GPFS-WAN.
Data Collections
Created and deployed a TeraGrid-wide allocation policy for long-term storage of data on disk (led by IU and SDSC).
Completed the initial definition of services for data collections (primarily interface-oriented, e.g., JDBC).
Visualization
Activities are coordinated through the Visualization Working Group (led by Mike Papka – ANL, and Kelly Gaither – TACC).
Work specifically focused on the TeraGrid Visualization Gateway:
– Added an improved data-management portlet.
– Enabled full access for TeraGrid users.
– Made progress on dynamic accounting for community allocations (not yet complete).
– Incorporated volume rendering capability.
– Working with RPs to include software being developed locally.
Scheduling
All activities are coordinated through the Scheduling Working Group (led by Warren Smith – TACC).
General approach and methodology taken by the working group:
– Identify capabilities needed by users
– Deploy tools that provide these capabilities in test beds for evaluation
– Select tools, then document, deploy, and support them in production
Current areas of work:
– Urgent or on-demand computing: SPRUCE
– Advance reservation and co-scheduling: GUR, HARC, GARS
– Automatic resource selection: Moab, GridWay, Condor-G, GRMS, MCP, BQPS
SPRUCE
Achievements
– Coordinated a CTWatch edition on Urgent Computing to raise awareness (http://www.ctwatch.org/quarterly)
Roadmap
– LEAD/SPRUCE real-time spring urgent runs (April 14 – June 6, 2008) on TeraGrid
– Make SPRUCE WS-GRAM compatible on all resources
– Move SPRUCE into production as a CTSS component
Challenges
– Experimenting with differentiated pricing: need more active participation from the resource providers with regard to urgency policies and differentiated pricing
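SPRUCE grants urgent jobs elevated priority via "right-of-way" tokens. The toy sketch below (not SPRUCE's actual implementation; the token store, urgency levels, and policy are invented for illustration) shows the core idea: a job only receives an elevated scheduling rank if it presents a valid token along with an urgency level.

```python
import heapq
import itertools

# Urgency levels loosely modeled on SPRUCE-style tokens; lower rank runs sooner.
URGENCY_RANK = {"red": 0, "orange": 1, "yellow": 2}
DEFAULT_RANK = 3  # jobs without a valid token run at normal priority

class UrgentQueue:
    def __init__(self, valid_tokens):
        self._valid = set(valid_tokens)    # activated right-of-way tokens
        self._heap = []
        self._order = itertools.count()    # FIFO tie-break within a level

    def submit(self, name, token=None, urgency=None):
        # Elevated priority requires a valid token; otherwise fall back to normal.
        if token in self._valid:
            rank = URGENCY_RANK.get(urgency, DEFAULT_RANK)
        else:
            rank = DEFAULT_RANK
        heapq.heappush(self._heap, (rank, next(self._order), name))

    def next_job(self):
        return heapq.heappop(self._heap)[2]

q = UrgentQueue(valid_tokens={"TOKEN-123"})
q.submit("routine-sim")
q.submit("tornado-forecast", token="TOKEN-123", urgency="red")
q.submit("urgent-but-bogus-token", token="BOGUS", urgency="red")
print(q.next_job())  # -> tornado-forecast
```

Note that the invalid-token job is not rejected, only demoted to normal priority; a real resource provider's policy (the open challenge above) decides whether urgent jobs preempt, drain, or merely jump the queue.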
Advance Reservation and Co-Scheduling
Reserve resources on one or more systems at specific times.
Evaluating three tools:
– GUR (SDSC): supports reservation and co-scheduling
– HARC (LSU): supports reservation and co-scheduling
– GARS (UC/ANL): supports reservation
Major accomplishments in 2007:
– GUR deployed in production at SDSC and NCSA, and in test-bed form at UC/ANL
– HARC deployed in production at LSU, and in test-bed form at NCSA and SDSC
Plans for 2008:
– Deploy tools on additional systems as a test bed
– Evaluate and select tools (input from users and system administrators)
– Deploy selected tools in production
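The core co-scheduling problem tools like GUR and HARC solve is finding a time window that is simultaneously free on every system. A minimal sketch of that search, with invented system names and availability data (times in hours):

```python
def earliest_common_slot(free_windows, duration):
    """Earliest start time t such that [t, t + duration] is free on every system.

    free_windows: {system_name: [(start, end), ...]} of free intervals.
    Returns None if no common window exists.
    """
    # The earliest feasible start always coincides with the start of some
    # system's free window, so those starts are the only candidates to check.
    candidates = sorted(s for ws in free_windows.values() for s, _ in ws)
    for t in candidates:
        if all(any(s <= t and t + duration <= e for s, e in ws)
               for ws in free_windows.values()):
            return t
    return None

# Hypothetical free windows on two systems (names are placeholders).
windows = {
    "sdsc-ia64": [(0, 4), (10, 24)],
    "ncsa-abe":  [(2, 6), (12, 20)],
}
print(earliest_common_slot(windows, 2))  # -> 2 (both systems free during 2-4)
```

A production co-scheduler must additionally hold tentative reservations on each system while the others confirm (HARC uses a transactional commit protocol for this); the sketch covers only the window-intersection step.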
Automatic Resource Selection
Select which system to use for a job automatically.
Evaluating several tools:
– Moab, GridWay, GRMS, Condor-G, MCP
Major accomplishments in 2007:
– Moab deployed across systems at NCSA
– Initial GridWay and Condor-G deployments on a GIG server
Plans for 2008:
– Deploy more of the tools, and deploy them on additional systems
– Evaluate and select tools (input from users and system administrators)
– Deploy selected tools in production
Challenges:
– Obtaining the information needed to make good resource selections (e.g., static resource descriptions, dynamic load information)
– Working with the Software Working Group to make more information available about TeraGrid systems
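The challenge above, combining static resource descriptions with dynamic load information, can be sketched as a simple matchmaking step. The scoring rule, system records, and core counts below are invented for illustration and are not how Moab, GridWay, or Condor-G actually rank resources:

```python
def select_resource(job, systems):
    """Return the name of the best system for the job, or None if none fits.

    Static description (arch, cores) filters eligibility; dynamic load
    information ranks the survivors by estimated idle capacity.
    """
    def idle_cores(sys):
        return sys["cores"] * (1.0 - sys["load"])

    eligible = [s for s in systems
                if s["arch"] == job["arch"] and s["cores"] >= job["cores"]]
    if not eligible:
        return None
    return max(eligible, key=idle_cores)["name"]

# Hypothetical snapshot of system descriptions and current load.
systems = [
    {"name": "ncsa-abe",    "arch": "x86_64", "cores": 9600,  "load": 0.95},
    {"name": "tacc-ranger", "arch": "x86_64", "cores": 62976, "load": 0.90},
    {"name": "sdsc-ia64",   "arch": "ia64",   "cores": 524,   "load": 0.30},
]
job = {"arch": "x86_64", "cores": 256}
print(select_resource(job, systems))  # -> tacc-ranger
```

In practice the hard part is exactly what the slide names: keeping the `load` field fresh and the static descriptions accurate across sites, which is why the working group coordinated with the Software Working Group on publishing system information.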