12/19/01 MODIS Science Team Meeting, slide 1: MODAPS Status and Plans
Edward Masuoka, Code 922
MODIS Science Data Support Team
NASA’s Goddard Space Flight Center

12/19/01 MODIS Science Team Meeting, slide 2
[Diagram: MODIS Level 2+ data flows among MODAPS, the Science Computing Facilities (SCFs), and the DAACs. The GES DAAC archives and distributes L1+ (ESDIS/ECS) along with DAO and ancillary data; the EDC DAAC archives and distributes Land data (ESDIS/ECS); the NSIDC DAAC archives and distributes Snow and Ice data (ESDIS/ECS). Products also flow to science applications, education, and QA metadata updates. Average volumes at the 1X processing rate, with 2/96 baseline volumes in parentheses where they differ from current production, label the individual links: 154 GB/day, 392 GB/day, 15 GB/day, 239 GB/day (154 GB/day), 175 GB/day (58 GB/day), 11 GB/day (15 GB/day), 368 GB/day, 210 GB/day.]

12/19/01 MODIS Science Team Meeting, slide 3
[Chart of planned processing-rate milestones, as multiples of the 2/96 and 6/00 baseline volumes; labels recoverable from the transcript:
- [?]X 2/96 (193 GB/day)
- 2.25X 2/96 (347 GB/day)
- 4X 2/96 (616 GB/day) by 3/1/02
- 3X 6/00 + 0.5X 2/96 (1,028 GB/day) by 5/15/[date truncated]]
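For readers decoding the multipliers, a minimal sketch of the arithmetic: it assumes the 154 GB/day 2/96 baseline shown on slide 2, and the 6/00 baseline it prints is back-solved from the 1,028 GB/day total rather than stated anywhere on the slide.

```python
# Arithmetic behind the milestone labels.  The 154 GB/day 2/96 baseline
# is taken from slide 2; the 6/00 baseline is back-solved from the
# 1,028 GB/day total, not stated on this slide.

BASELINE_2_96 = 154.0   # GB/day at 1X (slide 2)

for mult in (2.25, 4.0):
    print(f"{mult}X of 2/96 = {mult * BASELINE_2_96:.1f} GB/day")
# -> 346.5 GB/day (rounded to 347 on the slide) and 616.0 GB/day

# 3X 6/00 + 0.5X 2/96 = 1,028 GB/day implies a 6/00 baseline of:
implied_6_00 = (1028 - 0.5 * BASELINE_2_96) / 3
print(f"implied 6/00 baseline: {implied_6_00:.0f} GB/day")   # 317
```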

12/19/01 MODIS Science Team Meeting, slide 4: MODAPS
- MODAPS produces 2.4 TB/day and ships 1.2 TB to the DAACs for archive and 1.1 TB to the Science Team when running at 3X
  - 1X forward processing for Terra and 2X reprocessing
- Current processing systems are:
  - 80-processor SGI Origin 2000 for forward processing
  - 64-processor SGI Origin 3000 for reprocessing
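As a quick consistency check on these totals, a minimal sketch; all inputs are the slide's own numbers, and the per-1X production figure is derived, not stated.

```python
# Quick consistency check on slide 4's totals.  All inputs come from the
# slide; the per-1X production figure is derived, not stated.

total_produced = 2.4   # TB/day at 3X (1X Terra forward + 2X reprocessing)
to_daacs       = 1.2   # TB/day shipped to the DAACs for archive
to_sci_team    = 1.1   # TB/day shipped to the Science Team

print(f"~{total_produced / 3:.1f} TB/day produced per 1X")           # ~0.8
print(f"{(to_daacs + to_sci_team) / total_produced:.0%} of daily "
      f"production leaves MODAPS")                                   # 96%
```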

12/19/01 MODIS Science Team Meeting, slide 5: MODIS Production Rates
[Chart of MODIS production rates; the data are not recoverable from the transcript.]

12/19/01 MODIS Science Team Meeting, slide 6: Archived Volume (GB/day)
[Chart of archived volume in GB/day; the data are not recoverable from the transcript.]

12/19/01 MODIS Science Team Meeting, slide 7
[Diagram of MODAPS production hardware; components recoverable from the transcript:
- SGI Origin with [?] MHz R12K processors and 32 GB memory
- 8 Fibre Channels at 100 MB/sec; HiPPI channel at 800 MB/sec
- 100 TB ADIC tape library with 24 drives (10 MB/sec per drive)
- 30 TB RAID in 30 file systems; 4 Fibre Channel to SCSI bridges (3 SCSI and 1 Fibre Channel per bridge)
- 110 Linux systems (dual Pentium 1.1 GHz, 4 GB RAM, 200 GB storage) for Level 2 to Level 3 Daily Land and Atmosphere processing, on Gigabit Ethernet to 100baseT
- Gigabit Ethernet at 100 MB/sec to Science Team, development, and test systems
- Database server: 8 Pentium III CPUs running Linux
- Reprocessing string (mtvs2) with 30 TB
- Links to the DAACs]
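The per-component figures support a rough aggregate-bandwidth check. A minimal sketch, assuming every channel and drive can stream concurrently; how the diagram actually shares these paths is not recoverable from the transcript.

```python
# Aggregate-bandwidth check from slide 7's per-component figures.
# Assumes every channel and drive can stream concurrently; the diagram's
# actual path sharing is not recoverable from the transcript.

fc_aggregate   = 8 * 100    # MB/sec: 8 Fibre Channels at 100 MB/sec each
hippi          = 800        # MB/sec: HiPPI channel
tape_aggregate = 24 * 10    # MB/sec: 24 tape drives at 10 MB/sec each

print(f"Fibre Channel aggregate: {fc_aggregate} MB/sec")    # 800
print(f"Tape library aggregate:  {tape_aggregate} MB/sec")  # 240

# At 240 MB/sec the tape library is the narrowest aggregate path, so a
# full restage of the 100 TB archive would take at least:
print(f"{100e6 / tape_aggregate / 86400:.1f} days")          # ~4.8
```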

12/19/01 MODIS Science Team Meeting, slide 8
- MODAPS dual-processor systems running Linux will be added to the SGI Origins in March 2002
  - Linux processors will run Level 2 through Level 3 Daily products (70% of the processing load)
  - PGEs have been ported and products are being verified by the Science Team (due date 1/31/02)
  - In testing, 32 Linux systems = 1X production (see the sizing sketch after this slide)
- Disk increased by 60 TB to handle the extra volume
- Can meet the required 4X (3X Terra, 1X Aqua) for 2002; the Aqua volume is still at the 2/96 baseline
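Taken at face value, those figures allow a rough sizing estimate. A minimal sketch, assuming the 32-systems-per-1X ratio scales linearly and that the Linux farm carries only its 70% share; both are assumptions, not claims from the slide.

```python
# Back-of-the-envelope sizing from slide 8: 32 Linux systems deliver 1X,
# and the Linux farm carries 70% of the load.  Linear scaling is an
# assumption, not a claim from the slide.

SYSTEMS_PER_1X = 32
LINUX_SHARE    = 0.70    # Level 2 through Level 3 Daily share

target_rate = 4.0        # required 4X (3X Terra + 1X Aqua) for 2002
needed = target_rate * LINUX_SHARE * SYSTEMS_PER_1X
print(f"~{needed:.0f} Linux systems for the Linux share of 4X")
# -> ~90, consistent with the 110 Linux systems on slide 7 (with headroom)
```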

12/19/01 MODIS Science Team Meeting, slide 9: Upgrade Test System
- Adding 32 Linux processors to the existing 16-processor SGI Origin 2000
- Adding 9 TB of disk to the test system (11 TB total)
- Allows science testing at 1X
- Test system will be used to test both PGE and MODAPS system changes
- Global test data sets are being defined to allow better science testing of 8-, 16-, and 32-day products

12/19/01 MODIS Science Team Meeting, slide 10: Future
- Finish Linux product verification, 1/31
- Install redundant power for production, 2/02
- Improve Export error handling, 3/31
- Machine-to-Machine Ordering for reprocessing, 7/02