CPU Sizing vs. Latency Analysis: FTS EDR Latency Simulation
5 March 2008, Doug Shannon


Contents
- FTS Latency – Simulation & Analyses
  - IDPS NPP Status
  - ATDS/FTS Simulation Overview
  - Example Simulation Results
  - ATDS/FTS Demo
- FTS HRD/LRD Latency Requirements:
  - SYS: The LRD Field Terminal software, when installed on NPOESS-representative hardware, shall produce Imagery EDRs within 2 minutes and all other EDRs specified in Appendix G within 15 minutes of receipt of mission data. (Class 2)
  - SYS: The HRD Field Terminal software, when installed on NPOESS-representative hardware, shall produce Imagery EDRs within 2 minutes and all other EDRs specified in Appendix E, except for EDRs , , , and , within 15 minutes of receipt of mission data. (Class 2)
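A minimal sketch of checking simulated EDR production times against these two thresholds; the record layout and field names are assumptions for illustration, not ATDS structures:

```python
# Hypothetical latency check against the 2-min Imagery / 15-min EDR
# requirements quoted above; the record layout is assumed, not from ATDS.
REQ_MINUTES = {"IMAGERY": 2.0, "OTHER": 15.0}

def check_latency(edrs):
    """edrs: list of dicts with 'name', 'kind' ('IMAGERY'/'OTHER'),
    'receipt_min', and 'produced_min' on a common clock (minutes)."""
    violations = []
    for edr in edrs:
        latency = edr["produced_min"] - edr["receipt_min"]
        if latency > REQ_MINUTES[edr["kind"]]:
            violations.append((edr["name"], round(latency, 1)))
    return violations

print(check_latency([
    {"name": "VIIRS IMG", "kind": "IMAGERY", "receipt_min": 0.0, "produced_min": 3.3},
    {"name": "SST",       "kind": "OTHER",   "receipt_min": 0.0, "produced_min": 7.7},
]))  # -> [('VIIRS IMG', 3.3)]  (imagery exceeds the 2-minute threshold)
```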

IDPS NPP Status
- IDPS NPP Build 1.5
  - One orbit of NPP data (101 min) processed in 53 min
  - Meets EDR latencies (117.2 min against the 140 min requirement)
  - Major speedups in DMS performance
  - Algorithm development & integration "95% complete"
- Future builds: 1.5.x.1 (3Q 08), 1.5.x.2 (2Q 09)
  - OMPS, NHF, combined Albedo, Bright Pixel
  - Move LSA granulation out of the VIIRS SDR (1.5.x.1) to improve IMG latency
- ATDS/FTS is getting new benchmarks on Build 1.5 algorithms
  - Faster processing? Less algorithm sensitivity to scene content?
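As a back-of-envelope check, the margins implied by the Build 1.5 figures above work out as plain arithmetic on the quoted numbers:

```python
# Throughput margin implied by the Build 1.5 figures quoted above:
# one 101-minute orbit of NPP data processed in 53 minutes.
orbit_min, processing_min = 101.0, 53.0
print(f"processing duty factor: {processing_min / orbit_min:.2f}")  # ~0.52, ~48% headroom

# EDR latency margin against the quoted requirement.
latency_min, requirement_min = 117.2, 140.0
print(f"latency margin: {requirement_min - latency_min:.1f} min")   # 22.8 min
```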

Algorithm Timing & Dependency Simulation: Field Terminal Latency Analyses
- ATDS supports NPP, NPOESS/NPP & NPOESS performance analyses
- FTS latency simulation differences:
  - Receives C1/C2 LRD or HRD in real time; no stored data
  - Sensors collect at 9.1 & 5.0 Mbps (average day/night)
  - Various FTS locations and weather/terrain conditions
  - Smaller EDR granules (NPP 85.7 s, NPOESS 42.9 s)
  - Processing architecture: split the SDR to generate IMG sooner, after SDR Cal/Geo and before granulation
  - Pre-load SDR static ancillary/auxiliary tiles (TBD) to reduce latency
  - Assume no/minimal cross-granule dependency
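A back-of-envelope sketch of per-granule data volume from the quoted collection rates and granule lengths (packaging overhead ignored, which is an assumption):

```python
# Granule volumes from the figures quoted above: 9.1 / 5.0 Mbps
# average day/night rates; 85.7 s NPP and 42.9 s NPOESS granules.
RATES_MBPS = {"day": 9.1, "night": 5.0}
GRANULE_SEC = {"NPP": 85.7, "NPOESS": 42.9}

for mission, sec in GRANULE_SEC.items():
    for phase, mbps in RATES_MBPS.items():
        mbytes = mbps * sec / 8.0  # megabits/s * s / 8 = megabytes
        print(f"{mission} {phase}: {mbytes:.0f} MB per granule")
```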

VIIRS Cross-Granule Latency Tiers (SDR diagram)

FTS Simulation (e.g., Omaha): 19 passes with the NPOESS S/C over 2 days
- FTS contacts with the NPOESS S/C (1440 minutes = 1 day)
- Contact durations: max 13.1 min, avg 10.5 min, min 2 min (?); 2.3% of contacts are shorter than 4 min
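A sketch of how the contact-duration statistics above could be derived from a pass list; the durations here are illustrative placeholders, not the actual STK pass data:

```python
# Contact-duration statistics from a list of pass durations (minutes).
# Placeholder values; the real 2-day Omaha set gives max 13.1, avg 10.5,
# and 2.3% of contacts under 4 min.
durations = [13.1, 12.4, 11.0, 10.5, 9.8, 8.2, 3.9, 2.0]  # hypothetical

max_d, min_d = max(durations), min(durations)
avg_d = sum(durations) / len(durations)
short = sum(1 for d in durations if d < 4.0) / len(durations)
print(f"max {max_d:.1f}  avg {avg_d:.1f}  min {min_d:.1f}  <4 min: {short:.1%}")
```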

Orbital Position Defines Dynamic Scene Content in Sensor Data (diagram)
- Orbital position defines the sensor nadir point
- NCEP weather database gives the scene in the VIIRS view: ocean, cloudy, snow/ice
- Scene content drives dynamic processing

Impact of Weather/Terrain on FTS Data
- Algorithm loading for clear-ocean scenes is heaviest, 21% over average
- NCEP weather database for Spring 2003:
  - 90-100% ocean: 41% of scenes
  - 90-100% clear: 8% of scenes
  - Clear & ocean together: 3% of scenes
- The user can't select the weather/terrain
  - ATDS can analyze user FTS locations & help size for field conditions
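Illustrative sizing arithmetic from the figures above, assuming non-clear-ocean scenes load at exactly the average (an assumption made for the sketch):

```python
# Illustrative sizing arithmetic from the slide's figures: clear-ocean
# scenes load the algorithms 21% above average and occur 3% of the time
# in the NCEP Spring-2003 database.
p_clear_ocean = 0.03      # 90-100% clear AND ocean
load_clear_ocean = 1.21   # 21% over average load
load_other = 1.00         # assumed: everything else loads at the average

expected = p_clear_ocean * load_clear_ocean + (1 - p_clear_ocean) * load_other
print(f"expected load factor: {expected:.3f}")  # ~1.006 x average
print(f"worst-case sizing:    {load_clear_ocean:.2f} x average")
# A site sized to the average falls ~21% short during clear-ocean passes,
# which is why per-location sizing for field conditions matters.
```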

Algorithm, Timing & Dependency Simulator: FTS IDPS and algorithm models (diagram: S/W, H/W, science algorithms)

Example ATDS Simulation Results – Omaha FTS Scenario
- Peak demand (17 CPUs at 2.6 GHz) is not equal to the CPU requirement
- CPU resources are driven by contact length & S/C sensors (no ATMS & CrIS on C2)
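A sketch of how peak concurrent CPU demand could be tallied from simulated task intervals, illustrating why a brief 17-CPU spike need not set the sustained requirement; the task data is hypothetical:

```python
# Peak concurrent CPU demand from simulated algorithm task intervals
# (start_min, end_min, cpus). Task data is hypothetical, not ATDS output.
def peak_demand(tasks):
    events = []
    for start, end, cpus in tasks:
        events += [(start, cpus), (end, -cpus)]
    demand = peak = 0
    for _, delta in sorted(events):
        demand += delta
        peak = max(peak, demand)
    return peak

tasks = [(0, 5, 6), (2, 9, 7), (4, 6, 4), (8, 12, 5)]  # hypothetical
print(peak_demand(tasks))  # 17 -> a brief spike, not the sustained need
```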

Example ATDS Simulation Results – Omaha FTS Scenario
- EDR latencies are dynamic as scene content varies (chart shows the last VIIRS EDR for multiple granules)

Example ATDS Simulation Results – Omaha FTS Scenario
- Latencies varied 1.5–7.7 min
- Imagery (FTS IMG) latency ~3.3 min

On-going ATDS/FTS Trades
- Variable number of CPUs & processor speeds
- Smaller VIIRS/CrIMSS granules
  - Science implications for processing areas and adjacency
- Weather/terrain impact on IDPS latency
  - Various FTS locations
  - Various weather & terrain conditions
- SDR architectural trades
- Selectable EDR configurations
  - HRD vs LRD algorithms
  - Generate high-priority EDRs only
  - Generate Imagery only

VIIRS HRD vs LRD Algorithm Processing (per-algorithm percentage breakdown chart)

Summary
- Due to algorithm scene sensitivity, highly variable weather/terrain is a significant factor for latency and the number of CPUs required
  - Some new IDPS benchmarks show less sensitivity than expected
- Ongoing IDPS algorithm optimizations are improving FTS latencies
  - Improvements to IDPS infrastructure (DMS) are substantial but don't apply directly to the FTS
- We continue to add fidelity to our ATDS simulations, bounding nominal performance against worst-case scenarios in order to quantify system processor needs

Backups: 2005 back-to-back S/C contacts and gap analysis

Back-to-back S/C Contacts
- Overlapping S/C contacts don't occur, due to spacecraft orbital phasing
- The smallest gap, 10.2 minutes, has minimal impact on FTS latency
- The maximum gap is 2.1 orbits, at the equator
- Above 60N there is a large increase in contacts and EDRs
- Analyzed STK 1330/1730/2130 contact data (chart: gap time between contacts)
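A sketch of the gap computation behind this analysis; the contact times are hypothetical, not STK output:

```python
# Gaps between successive contact windows (start_min, end_min).
# Times are hypothetical placeholders, not the STK 1330/1730/2130 data.
def contact_gaps(contacts):
    contacts = sorted(contacts)
    gaps = [s2 - e1 for (_, e1), (s2, _) in zip(contacts, contacts[1:])]
    assert all(g >= 0 for g in gaps), "overlapping contacts"
    return gaps

contacts = [(0, 11), (21.2, 33), (135, 146)]  # hypothetical pass times (min)
gaps = contact_gaps(contacts)
print(f"min gap {min(gaps):.1f} min, max gap {max(gaps):.1f} min")
# min gap 10.2 min here, matching the smallest gap quoted above
```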