ATST Software Conceptual Design
ATST Conceptual Design Review, 26 Aug 2003

Presentation Structure
Introduction
–Approach
–Things to watch for
Requirements
Functional design overview
Technical design overview
Virtual Instrument Model

Design Architectures (diagram)
–Requirements drive both designs.
–The functional design captures behavior.
–The technical design captures implementation.

Special points
Configurations
–How observations are modeled in the system
Virtual Instrument Model
–Provides flexibility in laboratory-style operations
Device Model
–Uniform implementation and control of devices
Container/Component Model
–Flexibility in a distributed environment

Key Science Requirements
Combine multiple post-focus instruments
–Operate simultaneously
–Coordinated observing with remote sites
Match flexibility and adaptability achieved by the DST
–Support 'laboratory-style' operation (modular instruments)
–Support visitor instruments
–< 30 min switching time between active instrument sets
>40 year lifetime
Massive data rates
Track on/off solar disk (up to 2sr off)

Software Requirements (diagram)
Science requirements, reality, and common sense feed the software requirements, which in turn drive the software design.

What types are there?
Functional – what must the system do?
Performance – how well must the system run?
Interface – how does the system talk with the outside?
Operational – how is the system to be used?
Documentation – how is the system to be described?
Security – who/what can do what/when?
Safety – what can't go wrong?

Functional Design
Purpose
–Focus on behavior and structure (what and why)
–Measure against requirements
Use cases/Overall design
Information flow
Principal Systems
–Observatory Control System (OCS)
–Data Handling System (DHS)
–Telescope Control System (TCS)
–Instrument Control System (ICS)

Overall Design Approach
Want to adapt a conventional modern observatory software architecture to the special needs of the ATST
–Avoid re-invention, but…
–Concentrate on multiple instruments operating simultaneously in a laboratory environment
–Flexibility is a key requirement of the functional design
Overall functional design derived from ALMA, Gemini, and SOLIS, with consideration of other projects as well
–All share a common overall structure (with wildly different implementations)
–All highly distributed with strong communications infrastructure

Overall Design (block diagram)
–OCS: UIs, coordination, planning, services
–DHS: science data collection, quick look
–TCS: enclosure/thermal, optics, mount, GIS
–ICS: virtual instruments
–User interfaces
–Core software

Information Flow (diagram)
Experiments are expressed as configurations and observations; configurations flow down through the virtual instrument, its components, and device drivers to the hardware, while data flows back up to the engineering archives and quick look.

Operating Characteristics
Distributed architecture using a communications bus
–Components can be placed anywhere (and moved as needed)
–Devices may be constrained by their device drivers
–Components locate each other by name, not location
–Language and other environment choices are independent of behavior
–Behavior separated from control by a control surface
Communications bus
–Multiple channels
–Provides inter-language peer-to-peer communication
–Locates components and provides connection handles
–Monitors connectivity (detects communication failures)
–High-speed, robust
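
A minimal sketch of name-based lookup over such a bus (the class and method names below are illustrative assumptions, not the ATST middleware API):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface Connectable {
    String getName();
}

class NameService {
    private final Map<String, Connectable> registry = new ConcurrentHashMap<>();

    // A component registers under its hierarchical name when it comes up.
    void register(Connectable component) {
        registry.put(component.getName(), component);
    }

    // Peers obtain a connection handle by name; where the component actually
    // runs (host, process, language) is hidden behind the returned reference.
    Connectable lookup(String name) {
        Connectable component = registry.get(name);
        if (component == null) {
            throw new IllegalArgumentException("unknown component: " + name);
        }
        return component;
    }
}

Because callers hold only a name, a component can be moved to another host without changes to its peers.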

Observatory Control System (OCS)
Roles:
–Construct sequence of configurations for each observation
–Coordinate operation of TCS, ICS, and DHS
–Provide user interfaces for operations
–Provide services for applications
–Provide ATST Common Software for all systems

Data Handling System
Roles
–Accumulate science data (including header information): handle data rates, handle data volumes
–Analyze data for system performance (quick look)
–Provide archival, retrieval, and distribution services

Telescope Control System Requirements
Coordination and control of telescope components
–Interface to the Observatory Control System
–Configuration management
–Safety interlock handling
Ephemeris, pointing, and tracking calculations
–Time base control and distribution
–Pointing models
–Target trajectory distribution
Image quality
–Active and adaptive optics management
–Thermal management

Telescope Control System
Subsystems
–M1 Control
–M2 Control
–Feed Optics Control
–Adaptive Optics Control
–Mount Control
–Enclosure Control
General Systems
–Time base
–Global Interlock
–Thermal Management
(Block diagram: the TCS ties the subsystems together with the global interlock, thermal management, image quality, and acquisition, track, and guidance functions.)

Telescope Control System
High-Level Data Flows
–OCS configurations
–ICS configurations
–TCS events and archiving
Low-Level Data Flows
–Subsystem configurations
–Trajectories
–Image quality data
–Interlock events
(Block diagram: configurations and events pass between the OCS, ICS, and TCS; the TCS distributes configurations, trajectories, and image quality data to the FOCS, ECS, M1CS, MCS, AOCS, and M2CS, with global interlocks throughout.)

Telescope Control System
Virtual Telescope Model
–Tip of the hat to Pat Wallace (DRAL).
–Several points of view (instruments, AO WFS, aO WFS).
Pointing and Tracking
–Off-axis telescope should be irrelevant.
–Tracking will be at solar rate.
–Coordinate systems include ecliptic and heliocentric.
–Limited references to build pointing map (1 object/6 months).
–Open-loop tracking for coronal and non-AO work.
–Closed-loop tracking uses AO as guider.
Thermal Management
–Daily thermal profile.
–Monitor heat loads and dome flushing.

Mount Control System
Pointing and Tracking
–All ephemeris and position calculations are done by the TCS.
–The MCS follows a 20 Hz trajectory stream provided by the TCS. This stream consists of (time, position) values that the MCS must follow.
–Current position, demand position, torques, and rates are output at 10 Hz.
Thermal Management
–The MCS must keep the mount structure at ambient temperature.
–May be provided by a separate controller (TBD).
Interlocks
–Interlock conditions cause a power shutdown, brakes on, cover closed.
–Caused by: GIS, over-speed, over-torque, mechanical obstruction (locking pin, manual drive crank, liftoff failure).
–Provided by a separate controller (PLC-based) that is always operational.
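
A rough sketch of how an axis controller might follow such a (time, position) stream; the linear interpolation and the class itself are assumptions for illustration only, not the MCS servo design:

import java.util.Map;
import java.util.TreeMap;

class TrajectoryFollower {
    // Trajectory samples keyed by time in seconds, spaced at ~20 Hz.
    private final TreeMap<Double, Double> samples = new TreeMap<>();

    void addSample(double timeSec, double positionDeg) {
        samples.put(timeSec, positionDeg);
    }

    // Demand position at an arbitrary time, by linear interpolation between
    // the two bracketing trajectory samples from the TCS stream.
    double demandAt(double timeSec) {
        Map.Entry<Double, Double> lo = samples.floorEntry(timeSec);
        Map.Entry<Double, Double> hi = samples.ceilingEntry(timeSec);
        if (lo == null && hi == null) throw new IllegalStateException("no trajectory loaded");
        if (lo == null) return hi.getValue();
        if (hi == null || lo.getKey().equals(hi.getKey())) return lo.getValue();
        double f = (timeSec - lo.getKey()) / (hi.getKey() - lo.getKey());
        return lo.getValue() + f * (hi.getValue() - lo.getValue());
    }
}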

Mount Control System Servo Requirements
–Pos = 3 arcsec
–V_slew = TBD °/sec
–V_trk = TBD °/sec
–A_slew = TBD °/sec²
–A_trk = TBD °/sec²
–Jitter = TBD °/sec
(Block diagram: the 20 Hz trajectory from the TCS is summed and smoothed with a 0.01 Hz M2 bias and fed to the azimuth, elevation, and coudé axes.)

M1 Mirror Control System
–Axial Support: blending and averaging AO information, applying force map, correcting servo feedback.
–Mirror Position: detecting translation and rotation errors (feedback to actuators?).
–Thermal Management: controlling temperature, applying thermal profile estimates.
–Controller: interfacing to TCS & GIS, simulator.
(Block diagram: the M1 controller connects the TCS, GIS, and AOCS to the axial support (actuators, force sensors, force map), mirror position (translation and rotation sensors, aperture stop), and thermal management (blowers, coolers, exchangers, thermal profile) functions, plus interlocks, simulator, and interfaces.)

M2 Control System
Tip-Tilt-Focus
–Base configuration set by TCS.
–Corrections from AOCS in 10 Hz stream.
–Blending of data.
–Conversion into off-axis translation and rotation.
Thermal Management
–Secondary mirror
–Heat stop
–Lyot stop
(Block diagram: the M2CS blends the TCS base configuration with the aO & AO TTF corrections arriving at 10 Hz, converts them to XYZ translation and rotation, and offloads a 0.01 Hz tip-tilt offset to the MCS.)

Feed Optics Control System
Small Mirrors
–M3: Gregorian feed.
–M7: Coudé feed.
Other Optics
–aO WFS beamsplitter
–Polarizers
–Filters?
Thermal Management
–Mirror cooling
–Tube ventilation
–Coudé entrance

Adaptive Optics Control System (block diagram)
The AOCS reads the aO and AO wavefront sensors and drives the tip-tilt mirror (M6) and deformable mirror (M5) at 2 kHz; it offloads corrections to M2 at 10 Hz, to M1 (active optics) at 0.1 Hz, and to the mount at 0.01 Hz, with command, event, and data channels to the OCS and DHS (on demand).

Adaptive Optics/Active Optics Control System (block diagram)
The aO/AOCS closes the ~2 kHz adaptive optics loop on the WFS, driving the tip-tilt mirror (M6) and deformable mirror (M5), and runs active optics at 10 Hz; it offloads the TTF bias to M2 at ~10 Hz, the low-order figure to M1 at ~0.1 Hz, and the tip-tilt bias to the mount at ~0.01 Hz.

Enclosure Control System
–Azimuth: drives, brakes, encoders, sensors.
–Shutter: drive, brakes, encoders, sensors, sun shade.
–Auxiliary: vent gates, thermal controller, cranes.
–Controller: interface, interlock, simulator.
(Block diagram: the ECS controller connects the TCS, GIS, and MCS to the azimuth, shutter, and auxiliary functions, plus interlocks, simulator, and interfaces.)

Instrument Control System Requirements
Laboratory/Experiment Style Observing
–Flexible instrument configuration
–Use of multiple components
Instruments
–Facility instruments follow all ATST interfaces
–Visitor instruments obey a minimal set of ATST interfaces
–Multiple instruments must work together.
Telescope
–Control the beam position
–Control modulators, AO, and other image modifiers
Development
–Several development locations with numerous partners.

Instrument Control System
Management
–Controls the lifecycle of virtual instruments
–Allocates components
Interface
–Controls configurations from the OCS
–Provides user interfaces
–Presents TCS and DHS as resources.
Development
–Provides standard instrument as template
(Block diagram: the ICS builds virtual instruments, e.g. ViSP, VisTF, and NIrSP VIs, from the available components and connects them to the OCS, DHS, and TCS.)

Instrument Control System Management Lifecycle
1. Select from list of components
2. Build a VI or retrieve an existing VI
3. Register VI with ICS
4. Submit configuration to OCS
5. OCS schedules configuration
6. ICS enables your VI
7. VI takes control of components
8. Interact with your VI [Engineering Mode]

Technical Design Overview
Purpose
–Focus on implementation issues (how)
–Must allow implementation of functional design
–Identify options, make choices
Tiered hierarchy
–Isolates technology layers
–Allows technology replacement

ATST Technical Architecture (tiered layer diagram)
Tiers: applications, high-level APIs/tools, integrated APIs/tools, services, core support, and base tools.
Elements include: admin apps, app framework, APIs & libraries, scripting support, UIF libraries, component support, container support, config DB, error system, log system, time system, alarm system, archiving system, data channels, communications middleware, development tools, data handling support, astro libraries, and device drivers.

Communications
–Communications Bus
–Notification Service
–Synchronous Communications
–Asynchronous Communications

Services
–Logging
–Events
–Alarms
–Connection
–Persistent Stores

Interfaces
Logically separated into three classes:
–Lifecycle (startup/reset/shutdown of components)
–Functional (command/action/response – behavior)
–Service access (connection, log, event, alarm, stores)
Functional interfaces define accessible behavior
–Physically extend the lifecycle interface
–Narrow interfaces (few commands used)
–All devices use a common interface
Formally specified and enforced using communications middleware
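
A sketch of how the three interface classes might look in Java; these are illustrative only, not the actual ATST interface definitions:

import java.util.Map;

// Lifecycle: startup/reset/shutdown, common to every component.
interface Lifecycle {
    void startup();
    void reset();
    void shutdown();
}

// Functional: a narrow command/action/response interface that physically
// extends the lifecycle interface; all devices share this common shape.
interface DeviceCommands extends Lifecycle {
    void submit(Map<String, String> configuration);   // command; action runs asynchronously
    void set(String attribute, String value);
    String get(String attribute);
}

// Service access: connection, log, event, alarm, and store services are
// reached through a separate interface, not mixed into behavior.
interface ServiceAccess {
    void log(String message);
    void postEvent(String name, String value);
    void raiseAlarm(String description);
}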

Container/Component Model
Industry-standard approach to distributed operations (.NET, EJB, CORBA CCM, etc.).
Components implement functionality.
Containers provide services to components and manage component lifecycle.
(Diagram: a component sits inside a container, exposing a functional interface, a lifecycle interface, and a service interface.)
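
A minimal container/component sketch under assumed names (this is not the ATST container API): the container owns the services and drives the lifecycle, while the component only implements functionality.

import java.util.ArrayList;
import java.util.List;

interface Services {                                   // provided to components by the container
    void log(String message);
}

interface Component {
    void initialize(Services services);                // lifecycle, driven by the container
    void shutdown();                                   // lifecycle, driven by the container
}

class Container {
    private final List<Component> components = new ArrayList<>();
    private final Services services = message -> System.out.println("[log] " + message);

    // The container manages component lifecycle and hands out services.
    void deploy(Component component) {
        components.add(component);
        component.initialize(services);
    }

    void shutdownAll() {
        components.forEach(Component::shutdown);
    }
}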

Containers
ATST supports two types of Containers
–Porous – Components provide direct access to their functional interfaces
–Tight – Containers wrap component functional interfaces (an interface wrapper)

Components
Hierarchically named
Can (currently) be either Java or C/C++
Three lifecycle models for components
–Eternal: created on system start, run to system stop
–Long-lived: created on demand, run until told to stop
–Transient: exist only long enough to satisfy request
Exist only inside containers
–Common base interface allows manipulation by container
Critical attributes maintained in separate persistent store

Observations - 1
An observation is a program of configurations
Configurations flow through the system: status is updated as they are operated on by the system
Components and devices respond to configurations
Configurations are implemented as sets of attributes (name/value pairs)
Configurations are uniquely named and permanently archived (as are observations)

Observations - 2
Observations are constructed using an Observing Tool
Choice of OT is TBD (ALMA, Gemini, STScI, others)
External representation is XML
Configurations may be composed from other configurations, and components may decompose a configuration into a set of smaller configurations

Observations - 3
Observation managers track sets of observations as they are operated on by the system
Components tag header information to identify the component that generated it

Real-time Systems
Laboratory-style operations
–Rapid setup and reconfiguration
–Engineering observations
Inexpensive
–Existing software implementation or design
–Rapid deployment
–Code reuse
Contractual design and development
–Easily partitioned work packages
–Common infrastructure and tools
–Simulators

Device Model
The device model is used by all real-time components
–Other models are available for OCS services (log, notification, etc.).
It provides a common interface to:
–High-level objects (OCS and DHS).
–Other devices.
–Low-level hardware drivers.
–Global services (log, event, database, etc.).
It can be inherited by more complex devices.
It forces all systems to obey the command/action model.
It operates in a peer-to-peer environment.

Device Model
Devices have common properties:
–Attributes: state, health, debug, and initialized.
–Command interface: offline, online, start, stop, pause, resume, get, set.
–Service interfaces: event, database.
Devices have common operations:
–Initialize, power up, check parameters, change state, execute actions, handle errors, respond to queries, receive and generate asynchronous events.
Devices have common communications:
–Name/connection registration.
–Event listeners and posters.
–Database, log, and alarm services.
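
As a sketch, the common device attributes and command interface might be captured in a base class like the following (field and method names are illustrative assumptions, not the ATST Device class):

import java.util.Map;

abstract class Device {
    enum State { OFF, IDLE, BUSY, PAUSED, FAULT }

    // Common attributes
    protected volatile State state = State.OFF;
    protected volatile String health = "good";
    protected volatile int debug = 0;
    protected volatile boolean initialized = false;

    // Common command interface
    public void online()  { state = State.IDLE; }
    public void offline() { state = State.OFF; }
    public abstract void start(Map<String, String> configuration);  // begin an action
    public abstract void stop();
    public abstract void pause();
    public abstract void resume();
    public abstract void set(String attribute, String value);
    public abstract String get(String attribute);

    // Common service hooks (stubs standing in for event and database access)
    protected void postEvent(String name, String value) { /* publish on the event channel */ }
    protected void log(String message)                   { /* forward to the log service */ }
}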

Device Command Interface
Devices have a simple command interface:
–Offline, online, start, stop, pause, resume, set, and get.
Each command moves the device state machine to another state:
–States are: OFF, IDLE, BUSY, PAUSED, FAULT
(State diagram: online moves OFF to IDLE and offline moves IDLE back to OFF; start moves IDLE to BUSY and stop/done moves BUSY back to IDLE; pause moves BUSY to PAUSED and resume moves PAUSED back to BUSY; a fault takes the device to FAULT.)
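
A small sketch encoding these transitions; the command and state names come from the slide, while the Java class itself is illustrative only:

enum DeviceState { OFF, IDLE, BUSY, PAUSED, FAULT }

class DeviceStateMachine {
    private DeviceState state = DeviceState.OFF;

    DeviceState current() { return state; }

    // Apply a command; each command is legal only in certain states.
    void apply(String command) {
        switch (command) {
            case "online"       -> move(DeviceState.OFF, DeviceState.IDLE);
            case "offline"      -> move(DeviceState.IDLE, DeviceState.OFF);
            case "start"        -> move(DeviceState.IDLE, DeviceState.BUSY);
            case "stop", "done" -> move(DeviceState.BUSY, DeviceState.IDLE);
            case "pause"        -> move(DeviceState.BUSY, DeviceState.PAUSED);
            case "resume"       -> move(DeviceState.PAUSED, DeviceState.BUSY);
            case "fault"        -> state = DeviceState.FAULT;   // reachable from any state
            default             -> throw new IllegalArgumentException("unknown command: " + command);
        }
    }

    private void move(DeviceState from, DeviceState to) {
        if (state != from) {
            throw new IllegalStateException("illegal transition " + state + " -> " + to);
        }
        state = to;
    }
}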

Configurations
All devices use configurations to transport information.
–A group of attributes and their corresponding values
–e.g., a filter wheel might act upon: {position=red; rate=10; starttime=10:38:18}.
–Values may be any native data type, arrays, and lists.
Configurations are followed throughout the system.
–Each device action is traceable to a configuration.
–Header information is reconstructed from configuration events.
Configurations have states: Created, Queued, Running, Done.
–A configuration may be in multiple states in multiple devices.
–States do not iterate.
–The final state has an associated completion code.
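
A sketch of a configuration as a named attribute set with a simple state, reusing the filter-wheel attributes from the slide (the class and its methods are illustrative assumptions):

import java.util.LinkedHashMap;
import java.util.Map;

class Configuration {
    enum State { CREATED, QUEUED, RUNNING, DONE }

    private final String name;                                // unique, permanently archivable name
    private final Map<String, Object> attributes = new LinkedHashMap<>();
    private State state = State.CREATED;
    private int completionCode;                               // meaningful once DONE

    Configuration(String name) { this.name = name; }

    Configuration set(String attribute, Object value) {
        attributes.put(attribute, value);
        return this;
    }

    void advance(State next, int code) {
        state = next;
        completionCode = code;
    }

    @Override public String toString() {
        return name + " " + attributes + " [" + state + "/" + completionCode + "]";
    }
}

// The filter-wheel example might then be written as (hypothetical naming scheme):
//   Configuration cfg = new Configuration("exp42.obs1.cfg3")
//       .set("position", "red").set("rate", 10).set("starttime", "10:38:18");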

Inherited Devices
The Device class is an abstract class; it must be inherited by another class.
–These subclasses specify information and operations unique to a particular device. They may create configuration templates, run background tasks, handle hardware control, and generate specific information.
–For example, the MotorDevice inherits the Device class to operate servo motors and define positions, limits, power, and brake operations.
An extended class may itself be extended.
–The DiscreteMotorDevice inherits the MotorDevice class to provide defined positions and name-to-position conversion.
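
A sketch of this inheritance chain (Device, MotorDevice, DiscreteMotorDevice); the fields and methods below are illustrative assumptions, and BaseDevice stands in for the Device class sketched earlier:

import java.util.Map;

abstract class BaseDevice {
    public abstract void start(Map<String, String> configuration);
}

class MotorDevice extends BaseDevice {
    protected double position;                                  // current axis position
    protected double lowLimit = Double.NEGATIVE_INFINITY;       // software travel limits
    protected double highLimit = Double.POSITIVE_INFINITY;

    @Override public void start(Map<String, String> configuration) {
        moveTo(Double.parseDouble(configuration.get("position")));
    }

    protected void moveTo(double demand) {
        if (demand < lowLimit || demand > highLimit) {
            throw new IllegalArgumentException("demand outside limits: " + demand);
        }
        position = demand;                                       // real code would drive the servo and brake
    }
}

// Named, discrete positions layered on top of MotorDevice.
class DiscreteMotorDevice extends MotorDevice {
    private final Map<String, Double> namedPositions;

    DiscreteMotorDevice(Map<String, Double> namedPositions) {
        this.namedPositions = namedPositions;
    }

    @Override public void start(Map<String, String> configuration) {
        Double target = namedPositions.get(configuration.get("position"));  // e.g. "red"
        if (target == null) {
            throw new IllegalArgumentException("unknown named position");
        }
        moveTo(target);
    }
}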

High-Level Devices
Some devices do not operate hardware; they operate other devices or connect to high-level, non-device objects (OCS/DHS).
–The SequenceDevice runs other devices in a defined order and phase.
–The ControllerDevice executes scripts.
–The MultiAxisDevice coordinates simultaneous actions.
High-level devices are an aggregation pattern.
–They do not override or hide the low-level devices.
–They allow the low-level devices to operate independently.
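
A sketch of the aggregation idea, assuming a hypothetical Commandable interface: the sequencer drives its member devices in order without wrapping or hiding them, so each can still be commanded directly.

import java.util.List;
import java.util.Map;

interface Commandable {
    void start(Map<String, String> configuration);     // low-level devices remain directly usable
}

class SequenceDevice implements Commandable {
    private final List<Commandable> steps;

    SequenceDevice(List<Commandable> steps) { this.steps = steps; }

    // Drives the aggregated devices in a defined order; it neither overrides
    // nor hides them, so each can operate independently as well.
    @Override public void start(Map<String, String> configuration) {
        for (Commandable step : steps) {
            step.start(configuration);
        }
    }
}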

Command/Action Model
Commands cause external actions to occur. A command returns immediately; the action begins in a separate thread.
Multiple commands can be given while actions are on-going. This allows us to "stop" an action, or queue up the next "start".
Actions return asynchronous state information. A device transitions to the BUSY state, then either back to the IDLE state or to the FAULT (error) state.
(Diagram: the command process and action process share data; a configuration arrives as a command and a response is returned immediately, while actions run to completion and signal the command process through a synchronization mechanism.)
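
A sketch of the command/action split using hypothetical names (not the ATST implementation): the start command returns at once, while the action runs on its own thread and reports state changes asynchronously.

import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

class CommandActionDevice {
    enum State { IDLE, BUSY, FAULT }

    private volatile State state = State.IDLE;
    private final ExecutorService actionThread = Executors.newSingleThreadExecutor();
    private final Consumer<State> stateListener;                 // asynchronous state notifications

    CommandActionDevice(Consumer<State> stateListener) {
        this.stateListener = stateListener;
    }

    State current() { return state; }

    // The command returns immediately; the action proceeds on its own thread
    // and reports BUSY, then IDLE (success) or FAULT (failure).
    void start(Map<String, String> configuration) {
        actionThread.submit(() -> {
            setState(State.BUSY);
            try {
                performAction(configuration);                    // the long-running work
                setState(State.IDLE);
            } catch (Exception e) {
                setState(State.FAULT);
            }
        });
    }

    protected void performAction(Map<String, String> configuration) throws Exception {
        Thread.sleep(100);                                       // placeholder for real hardware motion
    }

    private void setState(State next) {
        state = next;
        stateListener.accept(next);
    }
}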

Peer-to-Peer Communications
Devices must have flexible connections.
–Depending upon the requested operation, a device may need to communicate with several different devices.
–Devices must not exclusively control another device.
Peer-to-peer communication allows a loose federation of devices.
–No single-point failures, outside of the communications system and the naming service.
–Multiple federations can exist simultaneously in the system.