The impacts of climate change on global hydrology and water resources
Simon Gosling and Nigel Arnell, Walker Institute for Climate System Research, University of Reading
Dan Bretherton and Keith Haines, Reading e-Science Centre, University of Reading

Summary of current research

This research explores how uncertainties associated with climate change propagate through to estimated changes in the global hydrological cycle and in water resources stresses. The application of both pattern-scaling and High Throughput Computing (HTC) is central to the research.

Changes in the global hydrological cycle

Different modelling institutes use different plausible representations of the climate system within their global climate models (GCMs), giving a range of climate projections for a single emissions scenario. One way of accounting for this "climate model structural uncertainty" in climate change impacts assessment is to use this range of projections from the ensemble of plausible GCMs to produce an ensemble of impacts projections. First, the patterns of climate change associated with globally averaged warmings of 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0 and 6.0°C (relative to the baseline period) are identified for each of 21 GCMs, giving 189 patterns in all. These patterns are pre-generated from the GCM output by the ClimGen model developed at UEA. This greatly reduces the volume of input data for the impacts modelling, although the approach assumes that the pattern of climate change simulated by a given GCM is relatively constant under a range of rates and amounts of global warming; using GCM output directly in future work would avoid this assumption. The patterns are then applied to Mac-PDM.09, a global hydrological model (GHM). A Grid computing solution is used to run the GHM ensemble with the different patterns of climate change. Figure 1 shows some of the results.

Figure 1. (A) Ensemble-mean change from present in average annual runoff. (B) Number of GCMs showing an increase in average annual runoff. Both are for a global-average temperature rise of 2°C.

Effects on water resources

A water-resources model, which assumes that watersheds with less than 1000 m3/capita/year are water-stressed, is used to assess global water resources stresses under different assumptions about future population change and global warming. Figure 2 shows some of the results.

Figure 2. Percentage of the global population that experiences increases in water stress for different degrees of global warming.

Running models on the Reading Campus Grid

The Campus Grid is a Condor pool containing library and lab computers. It enables many model runs to take place simultaneously, an example of High Throughput Computing, and reduced the time taken for the 189 runs from 32 days on a single computer to 9 hours.

There were two main challenges in running the pattern-driven Mac-PDM.09 simulations on the Grid:
1. Only minimal changes to the models could be made for Grid execution.
2. The large amount of input and output: 160 GB of storage is required for the 189 runs, and this would increase greatly if GCM forcing were used directly for the GHM simulations. Total Grid storage is only 600 GB, shared by all users, so 160 GB is not always available.

The solution chosen was the SSH File System (SSHFS), as shown in Figure 3. The scientist's own file system was mounted on the Grid server via SSH, and data were transferred on demand to and from the compute nodes via Condor's remote I/O mechanism. The model remained unmodified, accessing its data through the ordinary file system interface. Mounting remote data with SSHFS requires only a single Linux command, as illustrated below.
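As an illustration only (the host name and paths are placeholders, not the actual Reading systems), mounting a scientist's data directory on a Grid machine with SSHFS, and unmounting it afterwards, might look like this:

    # Mount the remote data directory over SSH (host and paths are illustrative)
    sshfs scientist@data-server.example.ac.uk:/data/macpdm /mnt/macpdm

    # Model runs can now read forcing data and write output under /mnt/macpdm
    # exactly as if it were a local directory.

    # Unmount once the runs have finished
    fusermount -u /mnt/macpdm

Because SSHFS works over an ordinary SSH connection, nothing needs to be installed on the data server beyond a standard SSH account; the FUSE-based SSHFS client only has to be installed on the machine doing the mounting.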
Figure 3. Using SSHFS to run models on the Grid with I/O to a remote file system: the scientist's data server in Reading is mounted on the Grid server using SSHFS; data travel to the Grid server via SSH and on to the compute nodes via Condor; no large Grid-side file store is needed.

SSHFS limitations and alternatives

The maximum number of simultaneous model runs was 60 for our models, implemented using a Condor Group Quota. This allowed us to submit all of the jobs at once, while only 60 were allowed to run at any one time, a limit set by the load on the Grid and on the data server. The approach also requires SSH access to the data server, which is not always possible for other institutes' data, and the software requires a system administrator to install. We are now experimenting with Parrot, following earlier work by CamGrid at the University of Cambridge. Parrot is another way to mount remote data: it talks to HTTP, FTP, GridFTP and other remote I/O services, so SSH access to the data is not required.

Further work

We would like to run the hydrological model with climate data forcing stored in repositories at various institutes. Running climate simulations locally would then not be necessary, but the amount of data transfer involved would be much larger. We would like the running models to access the forcing repositories directly, to avoid storing copies of all the forcing data sets locally; data transfer would then be over much larger distances, with slower network connections in some cases. Current e-research effort is focussed on these challenges, and we also plan to apply the techniques to other models and to other grids. A sketch of how Parrot could expose such remote repositories to an unmodified model is given below.
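As a hedged sketch (the repository URLs and the model invocation are hypothetical, and the exact command name depends on the installed version of the cctools/Parrot software), running an unmodified model under Parrot so that it reads forcing data directly from a remote HTTP or GridFTP repository might look like this:

    # Run the (hypothetical) model binary under Parrot; remote services appear
    # as paths in a virtual file system (/http/..., /ftp/..., /gridftp/...),
    # so the model just performs ordinary file reads.
    parrot_run ./mac_pdm /http/forcing-archive.example.ac.uk/climgen/pattern_2.0C.nc run_2.0C_output.nc

    # A GridFTP-hosted data set could be reached in the same way, e.g.
    # parrot_run ./mac_pdm /gridftp/gridftp.example.ac.uk/climgen/pattern_2.0C.nc run_2.0C_output.nc

Parrot intercepts the model's file system calls and translates them into the appropriate remote protocol, so, as with SSHFS, no changes to the model code are needed; unlike SSHFS, it does not require SSH accounts on the data servers.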