Page 1 GS CDR May 2005: JSOC SDP Agenda
Significant Level 4 requirements
SDP Architecture Decomposition
–Data Capture System
–Data Capture System Components
–Pipeline Processing System
–Pipeline Processing System Components
–Software Configuration Items
–Hardware Configuration Items
–Data Archive
–Data Distribution
–Performance Analysis
–Heritage
SDP Network Design & Security
Test approach
Implementation plan
Deliverables: hardware/software/documentation
Development environment / CM process
Sustaining support pre-launch and post-launch
–Identify organization(s) providing support
–Identify types of support: vendor licenses, h/w, etc.
Trade Studies and Prototyping Efforts since PDR
Risks & Mitigations
Procurement status

Page 2 GS CDR May 2005: DMR – JSOC SDP Requirements
Science Data Processing, Archiving and Distribution: Each SOC shall provide the necessary facility, software, hardware and staff to receive, process, archive and distribute the science data generated by its instruments.
(My Documents\DMR_SOC_Req.ppt)

Page 3 GS CDR May 2005: JSOC (Stanford Science Data Processing) Configuration
[Hardware configuration diagram (jim\My Documents\hardware_config3.vsd): Data Capture System plus SDP, showing the IDC, Pipeline Processor, and Analysis Cluster; active/passive Pipeline Processing Database Servers linked by a heartbeat; File Server; LAN and ethernet switches; Fibre Channel switch; disk arrays; Tape Robot Server and Tape Robot; and the connection to the DDS.]

Page 4 GS CDR May 2005: JSOC Data Capture System Components
[Component diagram: Database Server; Tape Server with Tape Drive; DSDS (Data Storage & Distribution System); Primary Storage Disks; lev0 "module" with its Record Cache; GUI Data Capture Interface used by the Operator; Processing History Log; FTP Disks fed by the DDS; and Utility Libraries.]
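To make the flow between the DDS-fed FTP disks, the lev0 "module", and the Processing History Log more concrete, a minimal sketch of a capture-side ingest loop is shown below. The directory paths, log file, and the ingest_tlm_file stub are hypothetical placeholders for illustration, not the actual capture code behind this diagram.

/* Hypothetical sketch: scan the FTP disk area for new telemetry (.tlm)
 * files delivered by the DDS, hand each to a lev0 ingest stub, and record
 * the action in a processing-history log.  Paths and names are invented. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define FTP_DIR  "/dds/ftp_disks"              /* hypothetical drop area   */
#define HISTORY  "/dcs/processing_history.log" /* hypothetical history log */

/* Stub: real code would decode the VCDUs and write lev0 records. */
static int ingest_tlm_file(const char *path) {
    printf("ingesting %s\n", path);
    return 0;                                   /* 0 = success */
}

int main(void) {
    DIR *dir = opendir(FTP_DIR);
    if (!dir) { perror(FTP_DIR); return 1; }

    FILE *log = fopen(HISTORY, "a");
    if (!log) { perror(HISTORY); closedir(dir); return 1; }

    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        const char *dot = strrchr(ent->d_name, '.');
        if (!dot || strcmp(dot, ".tlm") != 0)
            continue;                           /* only telemetry files */

        char path[4096];
        snprintf(path, sizeof path, "%s/%s", FTP_DIR, ent->d_name);

        int status = ingest_tlm_file(path);
        fprintf(log, "%ld %s %s\n", (long)time(NULL), ent->d_name,
                status == 0 ? "INGESTED" : "FAILED");
    }
    fclose(log);
    closedir(dir);
    return 0;
}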

Page 5 GS CDR May 2005: JDAT network

Page 6 GS CDR May 2005: JSOC Pipeline Processing System Components
[Component diagram: Database Server; SUMS (Storage Unit Management System) with the SUMS Tape Farm and SUMS Disks; DRMS (Data Record Management System); the DRMS Library providing Record Management, Keyword Access, Data Access, and Link Management, together with Utility Libraries, JSOC Science Libraries, and a Record Cache used by each pipeline program ("module"); PUI (Pipeline User Interface) driven by a pipeline processing plan and a processing script ("mapfile") listing the pipeline modules with the datasets needed for input and output; the Pipeline Operator; and the Processing History Log.]
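The calling pattern this diagram implies (a pipeline "module" reaching records through the DRMS library's record, keyword, and data access layers, with bulk data staged from the SUMS disks) can be sketched as follows. The types and functions below are self-contained stand-ins invented for the example; they are not the real DRMS library API or its signatures.

/* Hypothetical sketch of a pipeline "module" using a DRMS-style interface
 * (record management, keyword access, data access) as outlined above. */
#include <stdio.h>

typedef struct { const char *series; int recnum; } Record;

/* --- stand-in "DRMS library" calls --------------------------------- */
static Record open_record(const char *series, int recnum) {
    Record r = { series, recnum };          /* record management */
    return r;
}
static double get_keyword(Record r, const char *key) {
    (void)r; (void)key;                     /* keyword access */
    return 6173.0;                          /* dummy keyword value */
}
static const float *get_segment(Record r, const char *seg, int *n) {
    static float img[4] = {1, 2, 3, 4};     /* data access: staged from SUMS */
    (void)r; (void)seg;
    *n = 4;
    return img;
}

/* --- the pipeline "module" itself ----------------------------------- */
int main(void) {
    /* Record management: open an input record from a made-up lev0 series. */
    Record in = open_record("hmi.lev0", 12345);

    /* Keyword access: read a (made-up) keyword from the record. */
    double wavelen = get_keyword(in, "WAVELNTH");

    /* Data access: fetch the image segment that SUMS has staged on disk. */
    int npix = 0;
    const float *img = get_segment(in, "image", &npix);

    /* Trivial stand-in for a real lev1 processing step. */
    double sum = 0.0;
    for (int i = 0; i < npix; i++) sum += img[i];

    printf("record %s:%d  WAVELNTH=%.1f  mean=%.2f\n",
           in.series, in.recnum, wavelen, sum / npix);
    return 0;
}

In the architecture on this slide, such a module would be launched by the PUI according to the processing plan, with the Record Cache avoiding repeated database lookups.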

Page 7 GS CDR May 2005: S/W Configuration Items
–SUMS
–DRMS
–PUI
–Event Mgr
–Science Libs
–Pipeline Util Libs
–Export Support
–Lev0
–Lev1
–Standard Products

Page 8 GS CDR May 2005: H/W Configuration

Page 9 GS CDR May 2005: H/W Configuration (cont)

Page 10 GS CDR May 2005: Telemetry Data Archive
–Telemetry data is archived twice
–The Data Capture System archives tlm files for offsite storage
–Archive tapes are shipped to the offsite location and verified for reading
–A feedback mechanism will be established to ack the SDP that a tape is verified or that another copy needs to be sent
–The Data Capture System copies tlm files to the Pipeline Processing System
–The Pipeline Processing System archives tlm data for local storage and acks the SDP when it is successful
–Only when the SDP has received positive acks on both archive copies does it inform the Data Capture System to include the tlm file in the .arc file to the DDS, which is then free to remove the file from its tracking logic (see the sketch below)
–The lev0 data on the Pipeline Processing System is also archived
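A minimal sketch of the double-archive bookkeeping described above: a tlm file is released to the DDS (via the .arc file) only after both the offsite tape copy is verified and the pipeline copy is acknowledged. The structure, field names, and sample file names are illustrative assumptions, not the actual SDP tracking code.

/* Hypothetical sketch of the "archive twice before acknowledging" rule. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    const char *tlm_file;
    bool offsite_tape_verified;   /* feedback from the offsite location */
    bool pipeline_copy_archived;  /* ack from the Pipeline Processing System */
} TlmStatus;

/* Only doubly-archived files may be released back to the DDS. */
static bool ready_for_arc_file(const TlmStatus *s) {
    return s->offsite_tape_verified && s->pipeline_copy_archived;
}

int main(void) {
    TlmStatus files[] = {
        { "example_0001.tlm", true, true  },
        { "example_0002.tlm", true, false },  /* still waiting on pipeline ack */
    };
    for (int i = 0; i < 2; i++)
        printf("%s -> %s\n", files[i].tlm_file,
               ready_for_arc_file(&files[i]) ? "include in .arc file"
                                             : "hold; DDS keeps tracking it");
    return 0;
}

In the real system this state would presumably live in the capture database rather than in memory, but the acceptance rule is the same.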

Page 11 GS CDR May 2005: JSOC Data Export System
[Export system diagram: selected data records are drawn from DRMS and packaged for Researcher A, Researcher B, the General Public, Space Weather users, and script access. The API offers drilldown, overview, new/available, statistics, keywords, range search, and browse functions. Package formats: FITS, VOTable, plain, (FITSz), (CDF), (JPEG), with custom keywords, utilities, and WCS-x conventions following SOHO and GONG heritage. Pre-processing options: filter, mask, track; compression options: gzip, jpeg, low-resolution; plus auto update, redistribute, and notify services. Adaptors connect the export system to the Grid, VSO, and CoSEC. Filename schema examples: _1958.fits, AR12040_171.vot, magCR2801:lon064lat20S.txt.]

Page 12 GS CDR May 2005: Performance Analysis
AIA/HMI combined data volume: 2 PB/yr = 60 MB/s
–read + write: x 2
–quick look + final: x 2
–one reprocessing: x 2
–25% duty cycle: x 4
–result: 2 GB/s (disk), 0.5 GB/s (tape)
NFS over gigabit ethernet (50-100 MB/s/channel): 1 – 4 GB/s
–4 – 8 channels per server, 5 servers (today)
SAIT-1 native transfer rate (25-30 MB/s/drive): 0.5 – 0.6 GB/s
–10 SAIT-1 drives per library, 2 libraries (today)
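As a cross-check of how the factors above combine: 2 PB/yr is about 2x10^15 bytes / 3.15x10^7 s ≈ 63 MB/s sustained, consistent with the 60 MB/s figure, and 60 MB/s x 2 x 2 x 2 x 4 ≈ 2 GB/s of required disk bandwidth. The 0.5 GB/s tape figure appears to be the same product without the 25% duty-cycle burst factor (60 MB/s x 2 x 2 x 2 ≈ 0.5 GB/s), since tape can stream continuously.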

Page 13 GS CDR May 2005: Heritage
–MDI ground system design and implementation
–MDI production processing for 9+ years
–MDI sustaining engineering for h/w and s/w upgrades
–MDI "lessons learned" folded into JSOC SDP design

Page 14 GS CDR May 2005: Stanford/Lockheed Connections
[Network diagram: the Stanford JSOC (with its disk array) and LMSAL are connected by a 1 Gb private line, with links to the DDS, the MOC, and the NASA Ames "White" Net.]

Page 15 GS CDR May 2005: Test Approach
–Telemetry data is simulated with MDI data packaged in SDO VCDUs
–Various telemetry pathologies can be created to validate telemetry processing
–The DDS is simulated by packaging the VCDUs into files and injecting them into the SOC front end using the DDS/SOC protocol (see the sketch below)
–The lev0 processing is validated by comparing input and output images
–Timing analysis and throughput studies will be performed to evaluate h/w and s/w tradeoffs
–Instrument sunlight tests will be used to validate the lev1 processing
–Each s/w library and subsystem has a regression test suite that is run to verify each new CM release
–Full end-to-end testing is accomplished via the mission I&T plan
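A rough sketch of the kind of test-data generator this implies: wrap archived instrument data into fixed-size "VCDU-like" frames and write them to a tlm file for injection into the capture front end. The 1024-byte frame size and 8-byte header used here are toy placeholders, not the actual CCSDS/SDO VCDU layout or the DDS/SOC protocol.

/* Hypothetical sketch: package an input file into fixed-size frames. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define FRAME_SIZE   1024
#define HEADER_SIZE  8
#define PAYLOAD_SIZE (FRAME_SIZE - HEADER_SIZE)

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <mdi_image_file> <output.tlm>\n", argv[0]);
        return 1;
    }
    FILE *in = fopen(argv[1], "rb");
    FILE *out = fopen(argv[2], "wb");
    if (!in || !out) { perror("fopen"); return 1; }

    uint8_t frame[FRAME_SIZE];
    uint32_t counter = 0;
    size_t n;
    while ((n = fread(frame + HEADER_SIZE, 1, PAYLOAD_SIZE, in)) > 0) {
        memset(frame, 0, HEADER_SIZE);
        frame[0] = 0x40;                       /* toy "virtual channel" id */
        frame[1] = (counter >> 16) & 0xFF;     /* toy frame counter */
        frame[2] = (counter >> 8)  & 0xFF;
        frame[3] = counter & 0xFF;
        if (n < PAYLOAD_SIZE)                  /* zero-fill the last frame */
            memset(frame + HEADER_SIZE + n, 0, PAYLOAD_SIZE - n);
        fwrite(frame, 1, FRAME_SIZE, out);
        counter++;
    }
    fclose(in);
    fclose(out);
    fprintf(stderr, "wrote %u frames\n", (unsigned)counter);
    return 0;
}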

Page 16 GS CDR May 2005: Integrated Schedule
JSOC Science Data Processing (SDP) / DDS I&T start dates:
–Delivery of Flight EGSE SDP: June 2005
–Prototype SDP System Ready: Dec 2005
–JSOC Network Ready: Dec 2006
–DDS-JSOC Testing: Dec 2006
–GSRT#2 – Science Data Processing Test (Ka-band): Jan 2007
–HMI Connectivity, Dataflow, Retransmissions Test: Feb 2007
–AIA Connectivity, Dataflow, Retransmissions Test: Feb 2007
–GSRT#3 – Mission Operations & RF Communications Test: Mar 2007
–GSRT#4 – Fully Integrated Ground System: Mar 2007
–Ground System Freeze: Jan 2008
–GSRT#4 – Launch Readiness Test: Feb 2008

Page 17 GS CDR May 2005: Implementation Plan
TBD – Get from Carl

Page 18 GS CDR May 2005: Deliverables
–EGSE hardware
–EGSE software
–TBD – Get from Carl (from contract)

Page 19 GS CDR May 2005: Configuration Management & Control
Capture System
–Managed by JSOC CCB
–Controlled in CVS
SUMS, DRMS, PUI, etc. Infrastructure
–Managed by JSOC CCB after launch
–Controlled in CVS
PUI Processing Tables
–Managed by HMI and/or AIA Instrument Scientist
–Controlled in CVS
Level 0,1 Pipeline Modules
–Managed by HMI and/or AIA Instrument Scientist
–Controlled in CVS
Science Analysis Pipeline Modules
–Managed by program author
–Controlled in CVS

Page 20 GS CDR May 2005: CM with CVS
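As a concrete illustration of the CVS-based control listed on the previous slide, a typical cycle for one of the controlled subsystems might look like the following (the module path, tag name, and commit message are invented for the example):

cvs checkout jsoc/sums              # obtain a working copy of the subsystem
cvs update                          # merge in other developers' commits
cvs commit -m "SUMS: fix tape-unit bookkeeping"
cvs tag Ver_1_2                     # label the tree for a CM release
cvs update -r Ver_1_2               # reproduce exactly the released version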

Page 21 GS CDR May 2005: Prototype/Trade Study Sequence
–"MDI Heritage" software for EGSE Science W/S (currently running)
–Study to separate keywords from image data in a keyword database
–MDI database optimization study
–MDI internal data set representation study
–Storage management trade study (commercial HSM vs. MDI heritage)
–Archive media study to replace MDI Ampex robotic units
–Storage Area Network (SAN) / Network Attached Storage (NAS) trade study
–Location of offsite media storage

Page 22 GS CDR May 2005: Risks and Mitigations
Physical location of JSOC
–Tracking through Stanford management
–Rent space off-campus
Cost risk in assumed rate of computer technology development
–Advances already made
–Wide margin of costing allowed
Database performance
–Prototyping underway
–Design alternatives
–Higher performance equipment

Page 23 GS CDR May 2005: Development & Test Plans and Procurement Schedule
HMI and AIA Data EGSE installed
–Prototype for I/F testing with GS: Mar 2005 onward
–Version 2 to support flight inst.: June 2005
JSOC Capture System
–Purchase computers: Summer 2006
–Support DDS testing: Fall 2006
–Final system installed: Spring 2007
JSOC SDP Infrastructure, SUMS, DRMS, PUI
–Prototype testing of core system: June 2005
–Fully functional: Dec 2005
Purchase computers for JSOC: Jan 2007
Infrastructure operational: April 2007
Data product modules: Jan 2008
Test in I&T and with DDS, MOC as called for in SDO Ground System schedule
During Phase-E
–Add media and disk farm capacity in staged plan, half-year or yearly increments
–First two years of mission: continue Co-I pipeline testing support

Page 24 GS CDR May 2005

Page 25 GS CDR May 2005: Test Approach
–Unit test
–Subsystem integration
–Regression tests
–Performance test
–System integration
–I&T with DDS/MOC
–On-line test results and history

Page 26 GS CDR May 2005: Procurement Status
–: First 2 Years
–Procure development system with most likely components (e.g. tape type, cluster vs SMP, SAN vs NAS, etc.)
–Modify pipeline and catalog infrastructure and implement on prototype system.
–Modify analysis module API for greater simplicity and compliance with pipeline.
–Develop calibration software modules.
–Complete level 0 processing code to support HMI/AIA instrument testing.
–: Two years prior to launch
–Complete Level-1 analysis development, verify with HMI/AIA test data.
–Populate prototype system with MDI/TRACE data to verify performance.
–Procure, install, verify computer hardware.
–Implement higher-level pipeline processing modules with Co-I support.
–During Phase-E
–Add media and disk farm capacity in staged plan, half-year or yearly increments
–First two years of mission: continue Co-I pipeline testing support