A software framework for the Advanced Technology Solar Telescope
Steve Wampler
What is ATST?
The Advanced Technology Solar Telescope, Haleakala, Maui, Hawaii
–7X photon collecting power
–Unvignetted light path
–Prime focus heatstop/occultor
–Integrated 1300+ actuator AO
–Rotating Coudé lab
–70 TB/day collected, 5 TB/day delivered
Observatory SW Trends
Away from ‘custom’ toward:
–Commodity hardware (PCs), distributed systems
–Commodity OS (Linux, FreeBSD, etc.)
–Common software infrastructure
–COTS/community communication middleware
–Standard software models (tiered, separation of technical and functional architectures, etc.)
–Common high-level models and architectures
Common Software
A common framework for all observatory software:
–ALMA ACS is an excellent example
–ATSTCS draws from the general ACS model
Much more than just a set of libraries:
–Separates technical and functional architectures
–Provides the bulk of the technical architecture through the Container/Component model (a minimal lifecycle sketch follows below)
–Allows application developers to focus on functionality
–Provides consistency of use
–The system becomes easier to maintain and manage
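The sketch below uses invented names (LifeCycle, SimpleContainer, deploy), not the actual ATSTCS classes; it only illustrates the division of labor in which the container drives the technical lifecycle while component authors supply the functional code.

import java.util.LinkedHashMap;
import java.util.Map;

// Technical interface: the container, not application code, calls these methods.
interface LifeCycle {
    void init();       // called by the container after the component is loaded
    void startup();    // component becomes active
    void shutdown();   // component releases its resources
}

// Minimal container: owns the components and drives their lifecycle.
class SimpleContainer {
    private final Map<String, LifeCycle> components = new LinkedHashMap<>();

    void deploy(String name, LifeCycle component) {
        components.put(name, component);
        component.init();
        component.startup();
    }

    void shutdownAll() {
        components.values().forEach(LifeCycle::shutdown);
        components.clear();
    }
}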
Communication Middleware
Avoids the need to write communication infrastructure in-house:
–Less effort
–Less in-house expertise required
–Access to outside expertise
–Benefit from widespread use
Often provides a rich set of features
Supports the actions required to run in a distributed environment
Lots to choose from (both commercial and community)
What’s wrong with Middleware?
The 900-lb gorilla:
–Promotes dependency on a small set of vendors
–Hard to control direction
–Typically deeply integrated into common services
–Difficult to change once integrated (lots to choose from, but once you choose, you’re stuck)
–Sometimes obsolete before deployment
–Too feature-rich (kid-in-a-candy-store syndrome)
Adopting standards (e.g. CORBA) instead of specific packages can help, but:
–Standards are not particularly agile and often incomplete
–You can still get stuck as technology advances
ATST Common Software
ATST Containers and Components
(Diagram: a Container hosts Components, providing lifecycle control and service access interfaces; each Component wraps user code behind a custom interface)
ATST Components
(Class diagram: Container, Component, and Controller classes; technical interfaces IRemote, ILifeCycle, IContainer; functional interfaces IFunctional, IComponent, IController)
Narrow functional interfaces:
–IComponent has just two commands: get, set
–IController has six commands: get, set, submit, pause, resume, and cancel
Controller implements the “command/action/response” model:
–Actions execute asynchronously from command submission (no blocking)
–Supports multiple simultaneous actions
–Supports scheduled actions
These classes implement the technical interfaces; subclasses add functionality (e.g. doSubmit, doAction). A hedged interface sketch follows below.
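In this sketch, only the command names (get, set, submit, pause, resume, cancel) and the doSubmit/doAction hooks come from the slide; the string-valued attributes, the action-id scheme, and the thread pool are assumptions made to keep the example self-contained and runnable.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

interface IComponent {
    String get(String attribute);
    void set(String attribute, String value);
}

interface IController extends IComponent {
    String submit(Map<String, String> config);  // returns an action id; never blocks
    void pause(String actionId);
    void resume(String actionId);
    void cancel(String actionId);
}

abstract class Controller implements IController {
    private final ExecutorService actions = Executors.newCachedThreadPool();
    private final Map<String, String> attributes = new ConcurrentHashMap<>();

    public String get(String attribute) { return attributes.get(attribute); }
    public void set(String attribute, String value) { attributes.put(attribute, value); }

    // Command phase: validate/queue the configuration and return immediately.
    public String submit(Map<String, String> config) {
        String actionId = doSubmit(config);                  // subclass accepts the command
        actions.execute(() -> doAction(actionId, config));   // action phase runs asynchronously
        return actionId;                                     // response is reported later, on completion
    }

    public void pause(String actionId)  { /* technical bookkeeping omitted in this sketch */ }
    public void resume(String actionId) { /* technical bookkeeping omitted in this sketch */ }
    public void cancel(String actionId) { /* technical bookkeeping omitted in this sketch */ }

    // Functional architecture: subclasses add behavior only here.
    protected abstract String doSubmit(Map<String, String> config);
    protected abstract void doAction(String actionId, Map<String, String> config);
}

Because submit only starts an action and returns, several actions can be in flight at once, which is the “multiple simultaneous actions” point above.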
Many services available
Service Layers
(Diagram: service tools and component-specific data layers)
All knowledge of a service implementation is isolated by the respective Service Tool
Tool chaining keeps tools small and focused (see the generic sketch below)
Uniform access from Components
Designed for use, not implementation
A bridge between the functional and technical architectures
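The generic shape of a tool chain, with hypothetical names (ServiceTool, handle, process) rather than the real atst.cs API: each tool performs one narrow piece of work and forwards the request to the next tool, so all knowledge of a given service implementation stays inside one tool.

// One link in a service tool chain; the chain ends where next is null.
abstract class ServiceTool {
    private final ServiceTool next;

    protected ServiceTool(ServiceTool next) { this.next = next; }

    // Uniform entry point called by the toolbox.
    public final void handle(String timestamp, String appName, String message) {
        process(timestamp, appName, message);          // this tool's focused piece of work
        if (next != null) {
            next.handle(timestamp, appName, message);  // pass the request along the chain
        }
    }

    // Each concrete tool isolates one service implementation behind this method.
    protected abstract void process(String timestamp, String appName, String message);
}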
Containers/Components
Toolbox Loaders
(Diagram: a Toolbox Loader selects from a palette of Service Tools and populates a Toolbox; tools may be shared or private)
Ex: Toolbox Loader
An Example: Event Service
Container: setToolBoxLoader("atst.cs.ice.IceToolBoxLoader"); or setToolBoxLoader("atst.cs.jaco.JacoToolBoxLoader");
Component: Event.post(eventName, eventMessage);
Toolbox: eventTool.post(getTimestamp(), appName, eventName, eventMessage);
The toolbox delegates to either the ICE Event Service Tool or the JacORB Event Service Tool, depending on which loader was installed (see the sketch below)
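A self-contained sketch of this flow. The class names mirror the slide (Container, Toolbox, Event, ICE and JacORB event tools) but the bodies are invented for illustration; in particular the application name "myApp", the string test standing in for reflective loader loading, and the println bodies standing in for real middleware calls are all assumptions.

interface EventTool {
    void post(String timestamp, String appName, String eventName, String message);
}

class IceEventTool implements EventTool {         // would wrap the ICE event service
    public void post(String ts, String app, String name, String msg) {
        System.out.println("[ICE] " + name + ": " + msg);
    }
}

class JacorbEventTool implements EventTool {      // would wrap the JacORB event service
    public void post(String ts, String app, String name, String msg) {
        System.out.println("[JacORB] " + name + ": " + msg);
    }
}

class Toolbox {                                   // holds whatever tool the loader installed
    static EventTool eventTool;
    static String getTimestamp() { return java.time.Instant.now().toString(); }
}

class Container {
    // Stand-in for reflective loading of the named toolbox loader class.
    static void setToolBoxLoader(String loaderClass) {
        Toolbox.eventTool = loaderClass.contains(".ice.")
                ? new IceEventTool() : new JacorbEventTool();
    }
}

class Event {                                     // the uniform helper seen by components
    static void post(String eventName, String message) {
        Toolbox.eventTool.post(Toolbox.getTimestamp(), "myApp", eventName, message);
    }
}

class EventDemo {
    public static void main(String[] args) {
        Container.setToolBoxLoader("atst.cs.ice.IceToolBoxLoader");  // or atst.cs.jaco.JacoToolBoxLoader
        Event.post("tcs.status", "tracking");     // component code never sees the middleware choice
    }
}

Swapping the loader name is the only change needed to move from ICE to JacORB, which is the point of keeping the middleware behind a service tool.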
An Example: Log Service
Component: Log.warn("M2 temp overload");
Toolbox: logTool.log(getTimestamp(), appName, "warning", message);
Three service tools chained: a Buffered DB Log Service Tool, a Post Log Service Tool, and a Print Log Service Tool (see the sketch below)
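In the sketch, the tool names mirror the slide but the bodies, the batch size, and the sample message are illustrative assumptions, not the real atst.cs classes; each tool does its own job and hands the message to the next in the chain.

import java.util.ArrayList;
import java.util.List;

interface LogTool {
    void log(String timestamp, String appName, String level, String message);
}

// Buffers messages and flushes them to the log database in batches.
class BufferedDbLogTool implements LogTool {
    private final List<String> buffer = new ArrayList<>();
    private final LogTool next;
    BufferedDbLogTool(LogTool next) { this.next = next; }
    public void log(String ts, String app, String level, String msg) {
        buffer.add(ts + "|" + app + "|" + level + "|" + msg);
        if (buffer.size() >= 100) flush();                   // batch size is an assumption
        if (next != null) next.log(ts, app, level, msg);
    }
    private void flush() { /* batch insert into the log database would go here */ buffer.clear(); }
}

// Re-posts the log message, e.g. for live monitoring.
class PostLogTool implements LogTool {
    private final LogTool next;
    PostLogTool(LogTool next) { this.next = next; }
    public void log(String ts, String app, String level, String msg) {
        /* posting to the event system would go here */
        if (next != null) next.log(ts, app, level, msg);
    }
}

// Writes the message to the console.
class PrintLogTool implements LogTool {
    private final LogTool next;
    PrintLogTool(LogTool next) { this.next = next; }
    public void log(String ts, String app, String level, String msg) {
        System.out.println(ts + " " + app + " [" + level + "] " + msg);
        if (next != null) next.log(ts, app, level, msg);
    }
}

class LogChainDemo {
    public static void main(String[] args) {
        // DB -> post -> print; component code only ever calls Log.warn(...).
        LogTool chain = new BufferedDbLogTool(new PostLogTool(new PrintLogTool(null)));
        chain.log("2009-06-01T12:00:00Z", "m2cs", "warning", "M2 temp overload");
    }
}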
How well is ATSTCS working?
The approach seems successful (the early alpha release supported both CORBA and ICE):
–Choice of service tool has no impact on component development; CORBA or ICE can be selected at runtime
–Has already helped in unexpected ways (licensing conflicts)
ICE/CORBA service tools are easy to write:
–Both have similar architectures
–Both align well with the access helper models
Less aligned services are likely to be harder:
–Changes are well isolated at the service tool layer, however
3rd-party implementation of the TCS simulation
Developer code is “simpler”
Ex: Controller Simulator
Event system performance
Three dual-core machines (source, target, and event server), GbE network, CentOS 4
Two versions of ICE: 2.1 and 3.2 (CORBA OmniNotify/TAO is much slower)
1,000,000 events, ~120 bytes each
All Java (JDK 1.6) code (C++ and Python are currently much slower)
Two service tools tested (unbatched and batched), now combined into a single tool (per-event-stream batching); both fully stable
Log service performance
All dual-core machines (source, logdb server), CentOS 4, GbE
PostgreSQL version 8.2.4, JDK 1.6
500,000 messages, two sizes of payload
End-to-end performance (client to database)
Buffered and unbuffered service tools tested
–Multi-buffering to handle message spikes
But wait, there’s more…
Toolboxes themselves can be chained, so higher-level application layers can delegate management of specialized services to ATSTCS (e.g. the DHS header service and data distribution service)
Service tool chains can be changed dynamically (DHS Quick-look Probe); a sketch follows below
Can help with system migration as software technology advances and with integrating legacy systems:
–E.g. (carefully) chain ICE and CORBA tools together
–Reduces the effects of ‘big bang’ system upgrades when core infrastructure changes
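A small illustration of what a dynamic change can look like, using invented names rather than the real ATSTCS toolbox machinery: the current chain sits behind a single atomic reference, so a new chain, for example one with a quick-look probe attached or one bridging ICE and CORBA tools, can be installed while the system runs.

import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

class SwappableToolbox {
    // The chain is reduced to a single Consumer here; a real chain would be
    // linked service tools as in the earlier sketches.
    private final AtomicReference<Consumer<String>> chain =
            new AtomicReference<>(msg -> { /* no tool installed yet */ });

    void setChain(Consumer<String> newChain) { chain.set(newChain); }  // dynamic swap

    void post(String message) { chain.get().accept(message); }
}

class SwapDemo {
    public static void main(String[] args) {
        SwappableToolbox toolbox = new SwappableToolbox();
        Consumer<String> primary = msg -> System.out.println("[primary] " + msg);
        toolbox.setChain(primary);
        toolbox.post("frame 1");

        // Later, attach an extra tool (e.g. a quick-look probe) without restarting.
        Consumer<String> probe = msg -> System.out.println("[probe] " + msg);
        toolbox.setChain(primary.andThen(probe));
        toolbox.post("frame 2");
    }
}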