Gayathri Namala Center for Computation & Technology Louisiana State University Representing the SURA Coastal Ocean Observing and Prediction Program (SCOOP)

Presentation transcript:

NSF DynaCode: An Urgent Computing Workflow for Realtime Hurricane Forecasts
Gayathri Namala, Center for Computation & Technology, Louisiana State University
Representing the SURA Coastal Ocean Observing and Prediction Program (SCOOP)

SURA Coastal Ocean Observing and Prediction (SCOOP)
Integrating data from regional observing systems for realtime coastal forecasts in the Southeast.
Coastal modelers work closely with computer scientists to couple models, provide data solutions, deploy ensembles of models on the Grid, and assemble realtime results with GIS technologies.
Three scenarios: event-driven ensemble prediction, retrospective analysis, and 24/7 forecasts.

3 Urgent Coastal Scenarios
– Emergency preparedness
– Oil spill behaviour
– Sea rescue
– Military operations
– Hypoxia ("Dead Zone")
– Algae blooms
– Hurricane forecasts

4 Hurricane tracks

5

6 SCOOP Ensemble Modeling
– Wind Forcing: ensemble wind fields from varied and distributed sources (synthetic wind ensembles, NCEP, MM5, NCAR); the workflow selects a region and time range from the regional archives, then transforms and transports the data.
– Wave and/or Surge Models: an ensemble of models (ADCIRC, ELCIRC, WAM, or SWAN) is run across distributed resources.
– Result Dissemination: analysis, storage, cataloging, verification, and visualization of the output in the archive.
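
A minimal DAGMan sketch of one ensemble member, assuming hypothetical node names and submit files (the actual SCOOP DAGs are not shown in the slides): stage the wind forcing, run the surge model, then archive and visualize the results.

    # Hypothetical DAG for a single ensemble member
    JOB  StageWind  stage_wind.sub
    JOB  RunSurge   run_surge.sub
    JOB  Archive    archive.sub
    JOB  Visualize  visualize.sub
    PARENT StageWind CHILD RunSurge
    PARENT RunSurge  CHILD Archive Visualize

DAGMan releases a node to Condor-G only after its parents complete, which is what lets the wind staging, model run, and post-processing steps execute as one hands-off pipeline per ensemble member.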

7 Configure Ensembles

8 Ensemble Description File (EDF)

9 Urgency & Priority Urgency Level: –Emergency: run on- demand (e.g. preemption) –Urgent: run in priority queue (e.g. next to run) –Normal: best effort, e.g. guess best queue Priority: –Order in which jobs should be completed

10 What is SPRUCE?
SPRUCE (Special PRiority and Urgent Computing Environment) is a specialized software system that provides computational resources quickly for time-critical, emergency decision-support applications. It was developed by the University of Chicago and Argonne National Laboratory. The system issues users "right-of-way" tokens; when a token is activated, the user gains higher-priority access to the resource.

11 SPRUCE User Workflow
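(The slide presents the SPRUCE user workflow as a diagram. Broadly, per the SPRUCE documentation, a team obtains a right-of-way token in advance, a team member activates it when an urgent event begins, and jobs can then be submitted at elevated urgency until the token's session expires.)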

12 Handling Urgent Job Submission

13 SCOOP Job Submission Process
User requests from the SCOOP workflow/scheduler are turned into a submit file; DAGMan and Condor-G submit the job through GRAM to the job manager, which places it in the local job queues on the resource.
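
The submit file itself is not reproduced in the slides; a minimal Condor-G submit description for one model run, with a hypothetical host name, job manager, and executable, might look like the following (grid universe, pre-WS GRAM):

    # Condor-G job routed through GRAM to the site's PBS job manager
    universe      = grid
    grid_resource = gt2 compute.example.edu/jobmanager-pbs
    executable    = run_adcirc.sh
    output        = adcirc.out
    error         = adcirc.err
    log           = adcirc.log
    queue

Condor-G handles credential delegation and resubmission on transient failures, while GRAM's job manager translates the request into a submission to the local queues.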

14 SCOOP Workflow with SPRUCE
The submission path is the same (user requests, submit file, DAGMan, Condor-G, GRAM), but the submit file carries an additional RSL parameter for urgency. On the resource, the SPRUCE-enabled job manager authenticates the token with a filter and, according to the local policies, routes the job to the appropriate job queues.
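
As a sketch only: the extra urgency hint could ride along in the same submit file as an additional RSL attribute. SPRUCE expresses urgency as levels (e.g. red for the most critical work); the attribute name and the job manager contact string below are assumptions, since the exact RSL used by SCOOP is not shown in the slides.

    # Same job as before, plus an RSL attribute for the SPRUCE-enabled
    # job manager, which validates the active token and applies local
    # policy before choosing a queue.
    universe      = grid
    grid_resource = gt2 compute.example.edu/jobmanager-spruce
    globusrsl     = (urgency=red)
    executable    = run_adcirc.sh
    output        = adcirc.out
    error         = adcirc.err
    log           = adcirc.log
    queue

If the token has not been activated, or the requested urgency exceeds what the token allows, the filter can reject the job before it ever reaches the local scheduler.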

15 Why use SPRUCE?
– Token-based authentication is simple and less time consuming than signed certificates or proxies.
– Access can be granted dynamically to any user during an urgent situation via the tokens.
– The administrator controls which resources the user may access by specifying the list of resources during token activation.
– The queue to which the job is submitted need not be selected manually; the appropriate queue is chosen according to the policies implemented at the resource.

16 Credits
The entire SCOOP team. Special thanks to Suman Nadella for her support throughout the project.
For more information: –