The Montage Image Mosaic Service: Custom Mosaics on Demand


The Montage Image Mosaic Service: Custom Mosaics on Demand
John Good, Bruce Berriman, Mihseh Kong and Anastasia Laity
IPAC, Caltech
[Figure: M51, 2MASS J-band, 0.2 x 0.2 deg; wall clock time 26 s]

What Is Montage?
- Montage is a portable, scalable toolkit for producing science-grade image mosaics from input FITS images:
  - Preserves the astrometry and flux of the input images
  - Delivers mosaics according to the user's specifications
  - Rectifies background radiation to a common level
  - Provides utilities for, e.g., creating 3-color images and tiling images
- Code available for download; over 300 downloads by astronomers
- In active use supporting processing pipelines and data product generation, E/PO, quality assurance and science analysis
- Design: see Berriman et al. 2004, ASP Conf. 314, ADASS XIII, 593; Berriman et al. 2003, ASP Conf. 295, ADASS XII, 343
- The classic pipeline flow behind the toolkit is sketched below
[Figures: Ubeda and Pellerin 2007, ApJ Lett., 3-color IRAC images of GRSMC; Ogle et al. 2007, ApJ, 3-color IRAC mosaic of 3C 326]
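Since the slide only names the pipeline stages, here is a minimal sketch of the classic Montage flow (scan metadata, reproject, fit and rectify backgrounds, co-add), assuming the standard Montage modules are installed and on the PATH. Directory and file names are illustrative, and exact flags can vary between Montage versions; this is not the service's actual driver script.

# Minimal sketch of the classic Montage pipeline, driven from Python.
# Assumes the Montage executables (mImgtbl, mMakeHdr, mProjExec, ...)
# are on the PATH; directory names are illustrative.
import os
import subprocess

def run(*cmd):
    """Run one Montage module, raising on failure."""
    subprocess.run(cmd, check=True)

for d in ("projdir", "diffdir", "corrdir", "final"):
    os.makedirs(d, exist_ok=True)

run("mImgtbl", "rawdir", "images.tbl")                        # scan input FITS metadata
run("mMakeHdr", "images.tbl", "template.hdr")                 # header covering all inputs
run("mProjExec", "-p", "rawdir", "images.tbl",
    "template.hdr", "projdir", "stats.tbl")                   # reproject every image
run("mImgtbl", "projdir", "pimages.tbl")                      # table of reprojected images
run("mOverlaps", "pimages.tbl", "diffs.tbl")                  # find overlapping pairs
run("mDiffExec", "-p", "projdir", "diffs.tbl",
    "template.hdr", "diffdir")                                # difference each pair
run("mFitExec", "diffs.tbl", "fits.tbl", "diffdir")           # fit planes to differences
run("mBgModel", "pimages.tbl", "fits.tbl", "corrections.tbl") # global background model
run("mBgExec", "-p", "projdir", "pimages.tbl",
    "corrections.tbl", "corrdir")                             # rectify backgrounds
run("mAdd", "-p", "corrdir", "pimages.tbl",
    "template.hdr", "final/mosaic.fits")                      # co-add into the mosaic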

Mosaic Service Front End
- Data sets supported:
  - 2MASS All Sky (IRSA)
  - SDSS DR6 (FermiLab)
  - DSS (Space Telescope)
- Usage restrictions for the first release:
  - 1 degree on a side maximum
  - 10 simultaneous jobs
  - Results kept for 72 hours
- A hypothetical submission request is sketched below
[Screenshots: account set-up and monitoring options; wall clock time 184 s]
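To make the submission step concrete, here is a hypothetical request to the service. The endpoint URL and the parameter names (locstr, survey, band, size) are invented for illustration; they are not the service's documented interface.

# Hypothetical job submission; endpoint and parameter names are
# illustrative only, not the service's actual API.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "locstr": "M51",     # object name or coordinates
    "survey": "2MASS",   # 2MASS, SDSS or DSS
    "band": "J",
    "size": "0.2",       # degrees on a side (1.0 max in the first release)
})
with urllib.request.urlopen("https://montage.example.edu/cgi-bin/submit?" + params) as resp:
    page = resp.read().decode()  # HTML page carrying the new job ID
print(page[:200])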

Job Control & Monitoring Completed Jobs Jobs Running. Status refreshed every15s... and/or notification Bookmark and return later...

Sample Results Page

Design Drivers for Mosaic Service
- Cluster housed at IRSA/IPAC
- Rapid response and high throughput
- Pathfinder for projects such as LSST
Design goals:
- Inexpensive, commodity hardware
- Highly fault-tolerant
- Scalable, extensible and distributable
- Portable, open-source software
- Modular software for maintainability and extensibility to other applications
Hardware choices:
- 15 dual-processor, dual-core 3.2-GHz Xeon Dell PowerEdge 2650 servers (60 threads)
- Aberdeen Technologies 6-TB RAID-5 disk farm for staging files
- Total cost: US$60K

Throughput Specifications
- 15 dual-processor, dual-core compute nodes → 60 simultaneous independent jobs
- Throughput: 15 square degrees of 2MASS mosaics a minute, or 21,000 square degrees a day (at arcsecond resolution), or almost 2 TB of image data a day (see the arithmetic check below)
- Data transfer is the ultimate performance limitation; distributed processing overcomes this problem

Wall clock timing comparisons, NGC 5584, 0.4 deg x 0.4 deg:
- 2MASS-J: 52 images, 638 s
- SDSS-g: 8 images, 184 s
- DSS-R: 1 image, 166 s
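The headline numbers can be checked with simple arithmetic. The assumptions here (32-bit FITS pixels at 1 arcsec resolution, plus one same-size "area" coverage image per mosaic, as Montage's mAdd writes) are mine, not the slide's.

# Back-of-the-envelope check of the throughput claims.
# Assumptions: 32-bit pixels at 1 arcsec resolution, and one same-size
# "area" coverage image per mosaic (as Montage's mAdd produces).
PIX_PER_SQ_DEG = 3600 * 3600            # 1 arcsec pixels per square degree
BYTES_PER_PIX = 4                       # 32-bit floating point

sq_deg_per_day = 15 * 60 * 24           # 15 sq deg/minute, around the clock
print(sq_deg_per_day)                   # 21600 -> the "21,000 sq deg a day"

mosaic_tb = sq_deg_per_day * PIX_PER_SQ_DEG * BYTES_PER_PIX / 1e12
print(f"{mosaic_tb:.1f} TB/day")        # ~1.1 TB of mosaic pixels alone
print(f"{2 * mosaic_tb:.1f} TB/day")    # ~2.2 TB with area images, the
                                        # order of "almost 2 TB a day"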

System Architecture
Built on ROME, the Request Object Management Environment (Kong, Good and Berriman, ASP Conf. Ser. 347, ADASS XIV, 213)
[Figure: system architecture diagram]

Program Interface
- Underlying ROME functionality is program-friendly by design; the release wraps it with forms and HTML output for typical users
- Prototype evaluated by AstroGrid
- Functionality includes:
  - Request authentication (ID and password), easily extended to use certificates, etc.
  - Polling, notification, and asynchronous (socket) messages
  - Requests and responses in HTML or XML, including status information (such as job status filtering)
  - Asynchronous aborts: ROME processors can accept control input via socket (via requests through the ROME server)
  - Distributed, heterogeneous operations
  - Dedicated processing: processors can limit jobs to specific applications or users
- A hypothetical XML status response is sketched below
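Since the slide says responses can be returned as XML with status information, here is a hypothetical example of such a response and how a client might parse it. The element names are invented; the actual ROME message format is not shown on the slide.

# Hypothetical XML status response; element names are invented for
# illustration and do not reflect the actual ROME message format.
import xml.etree.ElementTree as ET

sample = """\
<response>
  <jobid>12345</jobid>
  <status>RUNNING</status>
  <message>Reprojecting image 14 of 52</message>
</response>"""

root = ET.fromstring(sample)
print(root.findtext("jobid"), root.findtext("status"))  # -> 12345 RUNNING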

Future Plans
- Upload table of sources: build cutouts/mosaics for multiple sources
- User-defined WCS: the service already supports arbitrary (user-supplied) FITS headers, but this is not yet deployed
- Three-color images
- User data: by uploading an image list (URLs), users can mosaic their own data or data lists from IVO SIA (Simple Image Access) services (a staging sketch follows below)
- Standard plates: cutouts from large (~5 degree) pre-built plates; a second cluster is currently being set up to handle such "production" runs
- These upgrades require only wrappers around the "core" service
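The "user data" upgrade amounts to staging a list of image URLs in front of the core pipeline. Here is a rough sketch of that staging step; the directory layout and file naming are assumptions.

# Sketch of staging a user-supplied URL list for the core mosaic
# pipeline; directory layout and file names are assumptions.
import os
import urllib.request

def stage_images(image_urls, rawdir="rawdir"):
    """Download each FITS image into the pipeline's input directory."""
    os.makedirs(rawdir, exist_ok=True)
    for i, url in enumerate(image_urls):
        dest = os.path.join(rawdir, "input_%04d.fits" % i)
        urllib.request.urlretrieve(url, dest)
    return rawdir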