Camera Control System and Data Flow. Oct 14, 2008 internal review. Stuart Marshall, Mike Huffer*, Terry Schalk, Jon Thaler

Internal review Oct 14 ‘08 2 Outline: CCS Definition; CCS Scope; Control Architecture; Data Flow Architecture

Internal review Oct 14 ‘08 3 Camera Control System The CCS maintains the state of the camera as a whole, and thus is able to orchestrate the sequence of operations that enables data collection. It receives status information from the camera subsystems and is able to detect and report faults. (This does not include safety-related fault detection, which is the responsibility of the individual subsystems.) It provides the camera interface to the observatory control system (OCS), responding to commands from and reporting status information to the OCS; it is the interface between the human operators and the instrument. It can respond to reconfiguration commands, ensuring orderly transitions between different modes of camera operation. It provides the hardware and software necessary to receive the data streams generated by the camera and transmit them to downstream clients, such as the Data Management system (DM).

Internal review Oct 14 ‘08 4 Camera Systems Locations [Facility diagram: test/staging, mirror coating, data/control/office areas, cable wrap, platform lift, shaft, telescope pier, entry mech., clean room, white room, camera utilities.] Suite of camera maintenance rooms on east side of service & ops building. Approximate routing of utility runs from camera on telescope: flex and hard runs. (Nordby)

Internal review Oct 14 ‘08 5 CCS Baseline Architecture [Architecture diagram: subsystems (S) and their managers (M) communicate over an intelligent CCS network carrying messages {cmd, telemetry, alarms}, with mediation to the OCS, telemetry logging, and alarms. A “framework” supplies a library of classes, documentation, Makefiles, and code management. Engineer and developer consoles use the same GUI tools, with user scripting.]

Internal review Oct 14 ‘08 6 Every arrow has an interface at each end. Red means it’s a CCS group responsibility. Subsystems that do not produce data (only status info). OCS SASSDS CommandResponse Data DM FCS MCM Subsystems that produce data. Similar for WFS/WDS and GSS/GAS (see next slide) Similar for TSS, RAS, SCU, VCS, and L2U Subsystem managers Subsystems mapping to Managers

Internal review Oct 14 ‘08 7 Camera buses Camera Body Thermal (T5U)Science DAQ (SDS) Guide Analysis (GAS) WF DAQ (WDS) TCM Thermal (T3U,T4U) Shutter (SCU) Filters (FCS) Vacuum (VCS) Power/Signal (PSU) Cryostat Thermal (T1U,T2U) FP actuation (FPU) Guide array (GSS) Wave Front (WFS) Science array (SAS) Raft Alignment (RAS) Camera Control (CCS) Command Status Auxiliary systems Control Room Observatory buses Camera Control Architecture

Internal review Oct 14 ‘08 8 CCS architecture The CCS architecture is described in DocuShare (Document-4083). In a nutshell, the CCS consists of a Master Control Module (MCM) and a collection of devices. This is very similar to the TCS SW, which will hopefully simplify the system and lessen the maintenance burden. Ground rules / requirements: * The control system is built on nodes in a tcp/ip network (ethernet?). * Each device (eg, shutter) must be capable of supporting an API to the CCS network. The CCS will provide: device APIs that support a limited number of language bindings, and MCM adaptors to the device controllers that enable device diagnostics using the developer's language binding. * All devices must be individually addressable.
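
As an illustration of these ground rules, here is a minimal device-side sketch in Python: a subsystem endpoint listening on the private camera tcp/ip net and answering commands with ack/nak. The wire format and all names (DeviceHandler, STATUS/OPEN, the port) are hypothetical, not the actual CCS API.

```python
# Minimal sketch of a CCS device endpoint, assuming a line-oriented
# text protocol over TCP. Names and wire format are hypothetical.
import socketserver

class DeviceHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Devices must always be listening: loop over incoming commands.
        for raw in self.rfile:
            cmd = raw.decode().strip()
            if cmd == "STATUS":
                self.wfile.write(b"ACK idle\n")
            elif cmd == "OPEN":            # e.g., a shutter command
                self.wfile.write(b"ACK opening\n")
            else:
                self.wfile.write(b"NAK unknown-command\n")

if __name__ == "__main__":
    # Each device is individually addressable by its (host, port)
    # pair on the private camera net.
    with socketserver.TCPServer(("0.0.0.0", 5000), DeviceHandler) as srv:
        srv.serve_forever()
```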

Internal review Oct 14 ‘08 9 CCS architecture II * One device per physical and logical connection; multiple connections mean multiple devices. * Devices must always be listening (no dead time). * Each device must have a watchdog timer and a well-defined response to communications loss. * Every device must implement at least German's minimal state machine model (CTIO talk): [Offline] [Active] [Idle] [Error] (not all transitions shown here). * There will be a standardized packet format, accommodating: commands (ack/nak), responses, and asynchronous publication (status, config params, ...).
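
To make the minimal model concrete, a sketch of the four-state machine and the packet roles. German's model fixes the four states and the slide fixes the packet roles, but the transition table and field names below are illustrative assumptions, not a specified encoding.

```python
# Sketch: four-state device model plus a standardized packet.
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    OFFLINE = auto()
    IDLE = auto()
    ACTIVE = auto()
    ERROR = auto()

# Assumed legal transitions (the slide notes not all are shown).
TRANSITIONS = {
    State.OFFLINE: {State.IDLE},
    State.IDLE:    {State.ACTIVE, State.OFFLINE, State.ERROR},
    State.ACTIVE:  {State.IDLE, State.ERROR},
    State.ERROR:   {State.IDLE, State.OFFLINE},
}

class Kind(Enum):
    COMMAND = auto()    # acknowledged with ACK/NAK
    ACK = auto()
    NAK = auto()
    RESPONSE = auto()
    PUBLISH = auto()    # asynchronous status/config publication

@dataclass
class Packet:
    kind: Kind
    source: str         # every device is individually addressable
    payload: dict

def step(current: State, requested: State) -> State:
    # A device's well-defined response to an illegal request.
    if requested not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {requested}")
    return requested
```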

Internal review Oct 14 ‘08 10 CCS architecture III * Devices must know the time; the MCM is the CCS time server (see the sketch below). * Except for primary data flow, only the MCM has an external interface: the MCM must interface with the OCS/FDB; the human interface to the CCS is through the MCM; GUI implementations must support scripting; the CCS control API must be able to support the same GUI that the OCS/TCS operators use. * Long term maintenance: collaborators will develop, deliver, and debug their SW. Long term (eg, for 10 years), the SW must be widely maintainable by people who did not participate in the original code writing.
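
The slides do not say how devices discipline their clocks against the MCM time server; one conventional possibility is an NTP-style exchange. A toy sketch under that assumption:

```python
# Toy clock-offset estimate against the MCM time server (hypothetical
# mechanism). Classic NTP-style calculation, where t0/t3 are device
# timestamps and t1/t2 are server receive/transmit timestamps.
def clock_offset(t0: float, t1: float, t2: float, t3: float) -> float:
    return ((t1 - t0) + (t2 - t3)) / 2.0

# Example: device asks at t0=100.0, MCM stamps 103.2/103.3, reply heard
# at t3=100.3 -> offset ~ +3.1 s, i.e., the device clock is slow.
print(clock_offset(100.0, 103.2, 103.3, 100.3))
```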

Internal review Oct 14 ‘08 11 CCS architecture IV Implementation issues: * Implementation decisions must take into consideration possible incompatibilities and the desire for homogeneity: hardware platforms, operating systems, programming languages. * Code repository = Subversion. * Bug tracking and wiki: * Code reuse: * Test stand needs: the CCS needs to control various environmental quantities that the OCS will handle on the mountain; we will need mock-up OCS, FDB, etc.; this year's test stand SW needs to work, but is not necessarily the final implementation. * How many CCSs will we really need on the mountain (eg, clean room)? How capable must they be? * How will we implement alarms? We should have a single person responsible for this. * How will we implement displays? How many different types?

Internal review Oct 14 ‘08 12 Data Flow: Science DAQ System (SDS) The SDS processes, buffers & publishes science data to both telescope and DM systems: the Science Array (21 center rafts, ~3.2 Gpixels read out in 1-2 seconds; see the estimate below), the Wave-Front System (WFS, 4 corner rafts), and the Guider (4 corner rafts). [Diagram: camera to SDS over a camera wire protocol optimized for small footprint, low latency, and robust error correction; SDS to data management & telescope systems, and to the Camera Control System (CCS), over a commodity wire protocol (Ethernet).] An equally important requirement is to support camera development & commissioning, which means support for test-stand activity. Why is this support necessary? It enables reuse, allows early field testing, and preserves across development, commissioning & operation both the camera team’s software investment and camera operational experience.
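
For scale, a back-of-envelope throughput estimate for the science readout. The 2-bytes-per-pixel sample size is an assumption for illustration, not a number from the slides.

```python
# Rough sustained science-array throughput, assuming 16-bit pixels.
pixels = 3.2e9            # ~3.2 Gpixels per readout (from the slide)
bytes_per_pixel = 2       # assumption: 16-bit samples
readout_s = 2.0           # slower end of the 1-2 s readout window
gbit_per_s = pixels * bytes_per_pixel * 8 / readout_s / 1e9
print(f"{gbit_per_s:.1f} Gb/s sustained")   # ~25.6 Gb/s at 2 s readout
```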

Internal review Oct 14 ‘08 13 Irreducible functions Convert read-out representation to physical representation. Manage and perform electronics cross-talk correction for the transient analysis (see the sketch below). Mount the server side of the client image interface –Stream data to DM. Mount the CCS interface. Manage and perform image simulation. Support in-situ monitoring & diagnostics. Provide image buffering, used for… –“3 day” store –flow-control (client interface) –simulation –monitoring & diagnostics
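
The cross-talk correction named above is, in the usual formulation, a linear unmixing across amplifier channels. A sketch of that idea; the coupling matrix below is an illustrative stand-in, not measured camera data.

```python
# Sketch of linear cross-talk correction across amplifier channels:
# solve C @ corrected = observed, where C[i][j] is the coupling of
# channel j into channel i. Coefficients here are illustrative only.
import numpy as np

def correct_crosstalk(observed: np.ndarray, coupling: np.ndarray) -> np.ndarray:
    """observed: (n_channels, n_pixels); coupling: (n_channels, n_channels)."""
    return np.linalg.solve(coupling, observed)

n = 4
coupling = np.eye(n) + 1e-3 * (np.ones((n, n)) - np.eye(n))  # ~0.1% leakage
observed = coupling @ np.random.default_rng(0).normal(1000.0, 10.0, (n, 256))
recovered = correct_crosstalk(observed, coupling)
```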

Internal review Oct 14 ‘08 14 SDS Summary Architecture is now relatively mature –Functional prototypes are in hand –Performance and scaling have been demonstrated to meet the requirements. Focus has now shifted to the definition & development of the client interface. However: –A significant amount of interface work remains between the camera electronics and the input side of the SDS –The specification (& implementation!) of necessary diagnostics is a work in progress; a camera-centric activity, although the observatory could well add to these requirements. There is more than one SDS: –support for “test-stands” –WFS & Guider activities –full camera commissioning & operation (“Building 33”, mountain-top). The SDS design is intended to encompass and support all these activities.

Internal review Oct 14 ‘08 15 The SDS & its data interfaces [Diagram: the SDS receives data from the WFS array, science array, and guider array, and serves the camera-to-telescope WFS data processing system, the telescope guider data processing system, the image data processing system, and the data management system.]

Internal review Oct 14 ‘08 16 The test-stand crate & its interfaces [Diagram: sensor/raft/electronics etc. connect to the SDS crate over 1-8 channels on Gbit-class fiber; the crate connects through a router/switch at 1-2 Gbit (copper) to a Linux host and test-jig for control, analysis & long-term archival.]

Internal review Oct 14 ‘08 17 Using the development board [Photo: development board with FPGA (FX020), USB, 1 GE, fiber/transceiver, and general-purpose I/O (LVDS).]

Internal review Oct 14 ‘08 18 Interface ICDs actively being developed We have had several workshops to develop a consistent set of expectations across the connected systems: TCS, in Chile in 2007; electronics, in Boston and at SLAC in 2007 & 2008; DM and OCS, in Tucson in 2008; the French group (filter subsystem). Coming up: –DM 2-day workshop (Nov) –TCS workshop (Nov–Dec)

Internal review Oct 14 ‘08 19 SDS-Computer room to Telescope SDS- to Staging area, Clean room etc Sci. image flow 4 x 10 Gb fibers COMPUTER ROOM - Mountain Cisco Layer 3 switch To Base 10 GigE lines For replicating control rooms, TCS/OCS and all other services, videos, phones etc TCS and OCS Switch has 1.4 Tb backbone Second image ???? 48 port 100/1000 baseT blade Image or Part image To Central Mountain router

Internal review Oct 14 ‘08 20 Backup Slides

Internal review Oct 14 ‘08 21 Defining Subsystems Master Control Module The MCM controls all other CCS subsystems via messages passed over tcp/ip networking on a private camera net. The MCM is the only "device" visible to the OCS and represents the camera as a whole. Power control - utility room This subsystem enables, monitors, and sequences power sources in the ground utility room. It includes switches, relays, and logic to provide appropriate sequencing and interlocks between power sources. In particular, we might not power the heat exchanger if the insulating vacuum system is not functioning (see the interlock sketch below). Power control -- camera This subsystem enables, monitors, and sequences power sources in the utility trunk volume. It includes switches, relays, and logic to provide appropriate sequencing and interlocks between power sources. Filter Changer (FCS) The filter changer is a self-contained set of 5 filters, electromechanical motion devices, and a controller.
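
The heat-exchanger example above amounts to an interlock predicate over subsystem status. A minimal sketch; the status fields and the pressure threshold are hypothetical placeholders.

```python
# Sketch of a power-sequencing interlock: refuse to energize the heat
# exchanger unless the insulating vacuum is healthy. Field names and
# the 1e-3 Torr threshold are placeholders, not camera numbers.
def heat_exchanger_permitted(status: dict) -> bool:
    vac = status.get("insulating_vacuum", {})
    return vac.get("pump_on", False) and vac.get("pressure_torr", 1.0) < 1e-3

status = {"insulating_vacuum": {"pump_on": True, "pressure_torr": 5e-5}}
assert heat_exchanger_permitted(status)
```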

Internal review Oct 14 ‘08 22 Defining Subsystems (cont’d) Shutter Assuming a two-blade shutter instrumented for position vs. time measurements. Focal Plane Thermal Control Controls the temperature of the focal plane assembly (see the sketch below). It must monitor temperatures and heater settings in each of the 25 rafts. Inputs are temperatures reported by the science array via the TCM (or possibly the SDS), and output is to the RCMs via the TCM (or again possibly the SDS). Cryogen control (camera) Cryogen is delivered from the utility room to the valve box in the utility trunk. Actuated valves (at least 4) control the flow of cryogen to the cold plate (BEE heat sink) and the cryoplate (CCD/FEE heat sink). This system controls and monitors this flow. Sensors for air pressure (for actuated valves) and nitrogen gas purge may be needed. Temperature sensors in the lines, cold plates, and valve boxes are also likely. The output temperatures from this system would be visible to the cryogen delivery system on the ground and could be used to bring the CCD temperature control system within the limits of the focal plane heaters.
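
Regulation loops like this focal-plane control are conventionally PI(D) controllers, one per raft heater. A minimal sketch; the gains and setpoint are placeholders, not camera numbers.

```python
# Minimal PI regulator sketch for one raft's heater. Gains and the
# setpoint below are illustrative placeholders.
class HeaterPI:
    def __init__(self, setpoint_k: float, kp: float = 2.0, ki: float = 0.1):
        self.setpoint, self.kp, self.ki = setpoint_k, kp, ki
        self.integral = 0.0

    def update(self, temp_k: float, dt_s: float) -> float:
        err = self.setpoint - temp_k          # positive when too cold
        self.integral += err * dt_s
        power = self.kp * err + self.ki * self.integral
        return max(0.0, power)                # heaters can only add heat

loop = HeaterPI(setpoint_k=173.0)             # illustrative setpoint only
print(loop.update(temp_k=172.5, dt_s=1.0))    # small heater demand
```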

Internal review Oct 14 ‘08 23 Defining Subsystems (cont’d) Cryogen control (ground) Comprises 3 Polycold refrigeration systems, a pumping system, heat exchanger, storage reservoir, and round-trip transport lines. This subsystem provides monitoring and control of the ground cryogen plant as a whole. It must be able to start/stop and adjust these devices during operation. Cryostat temperature monitoring This system provides monitoring of all temperature sensors in the cryostat which are not part of the cryogen flow or science array systems. Camera body Thermal Control This is environmental control and monitoring of the camera body, including all areas outside of the cryostat; in particular, the L1/L2 chamber, the L3/shutter/filter zone, and the carousel zone. The system consists of a temperature-regulated supply (from the ground) of dry nitrogen gas feeding a fan coil unit with a supply/return line for chilled water. The fan coil circulates the dry N2 throughout the volume, absorbing heat in the chilled-water coils. Operation depends upon the supply of dry N2 (low pressure) and chilled water (telescope utility). Temperature sensors in the camera body skin and components will be inputs, with the system adjusting either fan speeds or water flow rates to regulate the skin temperature of the system.

Internal review Oct 14 ‘08 24 Defining Subsystems (cont’d) Utility Trunk Thermal Control This is environmental control and monitoring of the utility trunk. The system consists of a temperature-regulated supply (from the ground) of dry nitrogen gas feeding a fan coil unit with a supply/return line for chilled water. The fan coil circulates the dry N2 throughout the volume, absorbing heat in the chilled-water coil. Operation depends upon the supply of dry N2 (low pressure) and chilled water (telescope utility). Temperature sensors in the utility trunk components will be inputs, with the system adjusting either fan speeds or water flow rates to regulate the skin temperature of the system. Cryostat vacuum control Two turbo-pumps with backing from the utility room. Operation would assume the backing pumps are on (with verification). Two or more vacuum gauges and possibly an RGA would be used to monitor the vacuum quality. The interface to the turbo pumps is unknown at this time but may include health/safety data from the pump itself.

Internal review Oct 14 ‘08 25 Defining Subsystems (cont’d) Utility room vacuum control The on-ground utility room vacuum system provides three distinct vacuum systems: backing vacuum for the cryostat, backing vacuum for the valve box, and insulating vacuum for the lower portion of the cryogen supply system. These systems will include some number of gauges and valves (manual) and also an N2 (nitrogen) gas purge supply with some monitoring. Valve box vacuum control This system comprises one turbo pump and some gauges and is backed by the utility room backing pump system (shared with the cryostat vacuum, but they can be manually valved off). Sensors/gauges for monitoring both the vacuum and the pump/valves would also be included in this system. Science array (TCM) The Timing and Control Module (TCM) distributes commands and timing to the ~21 raft controllers. The TCM delivers raft status to the CCS.

Internal review Oct 14 ‘08 26 Defining Subsystems (cont’d) Science Data Acquisition (SDS) The SDS is the data acquisition and distribution system for science data. It receives the data stream from the CCDs and does preliminary processing, buffering, and temporary storage of the science data. While under the control of the CCS, the SDS also provides services beyond, and asynchronous with, the camera readout. Guide sensor (TCM) The Timing and Control Module (TCM) distributes commands and timing to the 4 guide controllers. The subsystem module (this object) instantiates the states that represent the guider system. Guider Data Acquisition This is the portion of the SDS devoted to guiding. It nominally operates while the science array is integrating. The real-time analysis of the guider data may also take place in the SDS, which then outputs offset parameters that the telescope control system uses to control the telescope tracking.
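
At its simplest, the real-time guider analysis just described reduces to centroiding a guide star and reporting its drift as offset parameters. A sketch of that computation; the stamp and reference position are toy values.

```python
# Sketch: flux-weighted centroid of a guide-star postage stamp, and the
# pixel offset from a reference position that would go to the TCS.
import numpy as np

def centroid(stamp: np.ndarray) -> tuple[float, float]:
    ys, xs = np.indices(stamp.shape)
    total = stamp.sum()
    return (xs * stamp).sum() / total, (ys * stamp).sum() / total

def tracking_offset(stamp: np.ndarray, ref: tuple[float, float]):
    cx, cy = centroid(stamp)
    return cx - ref[0], cy - ref[1]   # pixels; scale to arcsec downstream

stamp = np.zeros((15, 15)); stamp[8, 9] = 100.0   # toy star, one bright pixel
print(tracking_offset(stamp, ref=(7.0, 7.0)))     # -> (2.0, 1.0)
```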

Internal review Oct 14 ‘08 27 Defining Subsystems (cont’d) Wavefront sensor (TCM) The Timing and Control Module (TCM) distributes commands and timing to the 4 wavefront sensor controllers. The subsystem module (this object) instantiates the states that represent the wavefront sensor system. WFS Data Acquisition The SDS is the data acquisition and distribution system for wavefront sensor data. It receives the data stream from the sensors and does preliminary processing, buffering, and temporary storage of the wavefront data. While under the control of the CCS, the SDS also provides services beyond, and asynchronous with, the camera readout. The primary client of this data is the telescope system; it is not expected to be routinely consumed by any other client. Focal Plane laser spot system This system consists of a set of laser sources and optics which project a fixed pattern onto the focal plane for in-situ calibration procedures. Control will essentially consist of just enable/disable and on/off.

Internal review Oct 14 ‘08 28 Representing: Mark Freytag Gunther Haller Ryan Herbst Chris O’Grady Amedeo Perazzo Leonid Sapozhnikov Eric Siskind Matt Weaver CCS APC Paris Workshop Introduction to the SDS (the Science DAQ System) Michael Huffer, Stanford Linear Accelerator Center August 27, 2008

Internal review Oct 14 ‘08 29 The Channel Physically: –a fiber-optic pair connecting transceivers on the camera and the SDS –transceiver choice not yet made; however, the SDS is relatively insensitive to this decision. Logically: –runs a camera-private protocol: full duplex, asynchronous; may operate from 1 to 3.2 Gb/sec (we will operate at 3.2); provides error detection & correction (data are not buffered on camera); QOS support (4 virtual channels): bulk-data transfer (science data), R/W configuration registers (minimal usage), triggering (most likely not used). A VHDL library is provided for the protocol interface on the Back-End Electronics (BEE) camera FPGA to the Cluster Element.

Internal review Oct 14 ‘08 30 The Cluster Element Is the unit of intelligence and buffering for the SDS… –services science data from 1–4 Channels –has two (2) 10-GE Ethernet MACs –contains up to 1 TByte of flash memory (currently 1/2 TByte) –contains a reconfigurable FPGA fabric (+ DSPs) –contains a PowerPC processor (450 MHz) with: 512 MBytes RLDRAM-II (currently 128 MBytes), 128 MBytes configuration memory –processor operates an open-source real-time kernel (RTEMS): POSIX-compliant interfaces, standard I/P network stack –processor has a generic DMA interface which supports up to 8 GBytes/sec of I/O –processor has I/O interfaces to: all four input channels (at full rate), both Ethernet MACs (at full rate), flash memory (access at up to 1 GByte/sec), FPGA fabric

Internal review Oct 14 ‘08 31 RCE board + RTM (Block diagram) MFD RCE Media Carriers RCE Zone 2 Zone 3 slice 0 slice 1 slice 2 slice 3 slice 7 slice 6 slice 5 slice 4 MFD Fiber-optic transceivers backplane

Internal review Oct 14 ‘08 32 RCE board + RTM [Photo: media carrier, media slice, RCE, power conversion, Zone 1 (power), Zone 2, Zone 3, and transceivers.]

Internal review Oct 14 ‘08 33 Cluster Interconnect board + RTM (Block diagram) Zone 2 Zone 3 MFD CX-4 connectors 10-GE switch L2 backplane RCE X2 1-GE X2 1-GE 10-GE switch L2 Switch Managemen t Managemen t bus

Internal review Oct 14 ‘08 34 Cluster Interconnect board Power conversion Zone 3 Zone 2 Zone 1 1 GE Ethernet Management Processor 10 GE switch X2 External Ethernet

Internal review Oct 14 ‘08 35 ATCA (horizontal) crate (front) Fans shelf manager Cluster Interconnec t board

Internal review Oct 14 ‘08 36 Generic Ideas for Continued Discussion TCS/SOAR heritage where useful. Subsystem boundaries defined by “tightly coupled”. Implementing the event-driven state machine model. Communications: –master/slave only –no peer-to-peer –peer info via “subscription”??? The network (tcp/ip) is the only connection to a subsystem. CCS proxy modules vs. resident CCS code in subsystems. Use of EA and UML/SysML to document design. A CCS trac/svn site exists. Supervision of a distributed set of processes on a distributed set of nodes (see the sketch below): –Startup/shutdown –Logging –Debugging/tracing/snooping tools –Graphical tools for monitoring –Error handling/restarts –Links to “hardware protection system”
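
One concrete piece of the supervision problem is heartbeat monitoring with restart decisions. A toy sketch; the timeout value, node names, and restart policy are placeholders.

```python
# Toy heartbeat supervisor for a distributed set of CCS processes.
import time

class Supervisor:
    def __init__(self, timeout_s: float = 5.0):
        self.timeout_s = timeout_s
        self.last_seen: dict[str, float] = {}

    def heartbeat(self, node: str) -> None:
        self.last_seen[node] = time.monotonic()

    def stale_nodes(self) -> list[str]:
        # Nodes that missed their heartbeat window: candidates for
        # restart, an alarm, or escalation to the operator.
        now = time.monotonic()
        return [n for n, t in self.last_seen.items() if now - t > self.timeout_s]

sup = Supervisor(timeout_s=0.1)
sup.heartbeat("shutter"); time.sleep(0.2)
print(sup.stale_nodes())   # ['shutter'] -> restart/alarm candidate
```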

Internal review Oct 14 ‘08 37 Defining Subsystems Substantial updates from the mechanical engineering team. Additions include utilities transport, utility trunk evolution, and off-telescope operations. Review and re-parse subsystem definitions based on “subsystems are tightly coupled”. Track definitions and functionality of subsystems in UML. Each subsystem (CCS view) is based on a generic “controller” model.