Jeff Hill
LANSCE Requirements – a Review
EPICS Paradigm Shift
Magic Pipes
Data Access – is it Easy CA?
Database CA Service
Server Upgrades
On How We Can Move Forward, IMHO
Conclusion
LANSCE, a versatile machine – Originally producing H+, H-, and polarized H-, each with different intensities, duty factors, and even energies, depending on experimental and medical isotope production needs LANSCE timing and flavoring of data – Flavor selection based on a logical combination of beam gates – Timing selection based on time-window sampling Many permutations – Too many to install records for all of them a priori – Subscription update filtering is needed
EPICS – distributed control, data acquisition, physics control; open source, vendor neutral, OS neutral, small footprint
What is a Data Acquisition System? Replacing …
What is a Data Acquisition System? Must efficiently filter and archive copious amounts of data ▪ selecting interesting occurrences ▪ saving them for later detailed processing/evaluation Must be easily reconfigurable
What is a Distributed Data Acquisition System? Must be runtime reconfigurable by clients ▪ Don’t expect to know initially, ▪ When designing/compiling runtime system ▪ What experiments/filters might be devised later on ▪ Experiments/filters configured when client subscribes ▪ Don’t expect to know initially ▪ When designing/compiling runtime system ▪ What data aggregations will be on different data branches to different clients
Current weaknesses deploying EPICS into Data Acquisition situations? Record processing provides good flexibility to create event filters, but … ▪ Frequently, it isn't possible to know all of the experiments when the IOC’s database is designed ▪ A distributed data acquisition system needs runtime reconfiguration, initiated by client side tools ▪ Limited data model ▪ No runtime aggregation, or user defined types
Current weaknesses deploying EPICS into Data Acquisition situations? No support for site-specific tagging of the data ▪ If a site needs to filter for LANSCE H- beam ▪ Filtering based on process control attributes such as the time-stamp is awkward ▪ Filtering based on site-specific parasitic PV attribute data (the LANSCE flavor) leads to better structured control room applications
Before: EPICS, a process control system by design, sometimes used for data acquisition Now: EPICS, a process control and data acquisition system by design Not an upgrade, but a leap forward in terms of the general utility of the system
[Diagram: a PV's value/signal data, alarm state, and time stamp flowing through Device Support, Record Support, DB Common, and the CA Server; device-specific and record-specific values attached along the way.]
Issues transporting data through software layers Independence Data lifecycle Concurrency Efficiency
Internal code changes in one of Device Support, Record Support, or the CA Server shouldn't require matching changes in the others Need runtime data introspection
Data Access provides runtime introspection Catalog, an abstract interface to structured data ▪ traverse reveals all of the fields and their purposes ▪ find locates a field of a particular purpose Clerk provides a simple get interface ▪ clerk.get ( id_units, unitsString ); ▪ Range errors detected during conversion (an upgrade)
Data is created during record processing, but must not be destroyed until the last per-client thread in the server is done filtering / copying it Reference counting smart pointers manage data lifecycle When the last pointer reference is destroyed the data are destroyed
Data is modified during record processing, but must not be modified at the same time that a server’s per-client thread is filtering / copying it Auto-locking smart pointers manage concurrency
Smart Pointers work through flexible abstract handle interface Application chooses locking strategy ▪ Locks can be shared between objects ▪ If the data are immutable then no locking is required Applications choose reference counting strategy ▪ Reference counter might be shared between objects ▪ Reference counter might be embedded with the data ▪ Immortal data? no reference counter required
Smart pointer reference counting Uses the new atomic operations library ▪ Much faster than a mutex Data Access Arrays transferred in moderate, fixed-size chunks
In the EPICS community we have – Application developers (well adjusted, etc): EPICS database, screens, MATLAB, Python, Tcl, etc – System programmers (geeks): device drivers, EPICS internals, etc Implementing the Data Access interface for a particular data structure / class is a system programmer job
Once interfaced with Data Access – A high-level (i.e. easy) CA interface is available – DA is not the data manipulation interface used by application-level users – Users use the public interface of the data structure / class which has been interfaced – Communities develop around the data structures / classes standardized by particular applications, industries, and instruments
[Diagram: Device Support feeding the Database; the Database CA Service connecting the Database to the CA Server.]
General strategy Database service – part of the database implementation ▪ Therefore can be (should be) intimately aware of database internals Improvements Eliminate subscription list protection mutex allocated in every record Allow communication via non-contiguous arrays ▪ Eliminate EPICS_CA_MAX_ARRAY_SIZE parameter
Eliminate EPICS_CA_MAX_ARRAY_SIZE dbGetField, dbPutField, dbPutCallback array API – void pointer, number of elements, type code – A contiguous buffer greater than or equal to the largest array must exist in the CA server In contrast, Data Access – Arrays passed as a sequence of compile-time-typed contiguous blocks with multi-dimensional bounds – Server now uses moderate, fixed-size communication buffers Somehow Data Access interfaced data must be the source/sink for a database field
Eliminate subscription list protection mutex allocated in every record Database service can protect its subscription list using DB scan lock Smart pointer handle for the db service ▪ Uses the db scan lock for synchronization
General strategy Minimal changes to existing database access code which can be easily verified ▪ No architectural changes Refactor dbGetField, dbPutField, dbPutCallback code onto lowest common denominator interface ▪ Presuming db scan lock already owned ▪ Having been taken at a higher level ▪ Field modification callback function pointer parameter ▪ Lowest common denominator interface is private
Lock in every record for the subscription list is eliminated Opportunities to eliminate locking ▪ Consolidate CA service locking and db scan locking dbGetField, dbPutField, dbPutCallback API retained exactly backwards compatible Functionally and almost line-by-line equivalent, but refactored, code in dbGetField, dbPutField, dbPutCallback
Designed to transport polymorphic data Event queue carrying polymorphic/parasitic data New API ▪ Identical server-to-service and client-side-application-to-client-lib Designed for SMP EPICS_CA_MAX_ARRAY_SIZE eliminated from the server Multicast listener Binding to specific network interfaces Inherited from PCAS
Event filtering
>camonitor "fred$F $(PV)>30 && $(PV)<40"
[six subscription updates matching the window filter follow]
>camonitor "fred$F $(PV:flavor)==30"
[five subscription updates matching the flavor filter follow]
pv {
  timeStamp
  alarm {
    acknowledge { pending }
    condition { status, severity }
  }
  limits {
    display { upper, lower }
    control { upper, lower }
    alarm {
      major { upper, lower }
      minor { upper, lower }
    }
  }
  labels
  units
  precision
  class { name }
}
In the slide's coloring, green indicates that a value is stored. In a DA tree a node does not need to be a leaf node in order to carry a value. This allows for less hierarchy traversal when doing a basic fetch. For example:
Catalog & someData = …;
Clerk clerk ( someData );
double value;
clerk.get ( pi_pv, value );
pv {
  signal {
    deviceName
    timeStamp
    waveform { value sampleRate }
    LANSCE { flavor }
  }
}
Addressing, configuration, and routing issues
One-to-many communication but, unlike IP broadcasting, designed to pass transparently through IP routers IP multicasting is mature and widely deployed Commercial stock exchanges Multimedia content delivery industries
Administratively scoped multicast groups Multicasting has good potential to simplify configuring of Channel Access in large systems with multiple subnets
Scope       IPv4 Range
link-local  224.0.0.0 to 224.0.0.255
site-local  239.255.0.0 to 239.255.255.255
org-local   239.192.0.0 to 239.195.255.255
global      224.0.1.0 to 238.255.255.255
Search requests Clients send to a special IPv4 address ▪ No client side code changes Server listens to a ▪ multicast group, or ▪ multicast group on a specified interface Beacon messages Ditto, but vice versa Can eliminate need for CA repeater
Client side EPICS_CA_ADDR_LIST ▪ If address is multicast group ▪ Search messages sent to that mc group EPICS_CA_BEACON_ADDR_LIST ▪ If not defined ▪ Monitor mc groups in EPICS_CA_ADDR_LIST for beacons ▪ If defined and not empty ▪ Specifies mc group set to be monitored for beacon messages ▪ if defined and empty - will not receive mc beacons
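A hypothetical client-side configuration following the rules above, assuming the site has assigned the site-local group 239.255.1.1 to Channel Access:

```shell
# Hypothetical site setup: one site-local multicast group carries both
# search requests and beacons, so no per-subnet broadcast address lists
# and no CA repeater are needed.
export EPICS_CA_ADDR_LIST="239.255.1.1"   # searches go to the mc group
# EPICS_CA_BEACON_ADDR_LIST left undefined:
#   beacons are monitored on the groups in EPICS_CA_ADDR_LIST
```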
Server side EPICS_CAS_INTF_ADDR_LIST isn't defined ▪ Server listens on all network interfaces ▪ For messages sent to EPICS_CAS_SERVER_PORT Unicast and broadcast messages Multicast messages, sent to any multicast group address found in EPICS_CA_ADDR_LIST
Server side EPICS_CAS_INTF_ADDR_LIST is defined ▪ If address is a multicast group then multicasts sent to that group are received on all configured interfaces ▪ Addresses of the form ▪ {multicast group address, interface address} ▪ Multicasts sent to specified mc group will be received Only on specified interface ▪ Unicast and broadcast traffic Will not be received on specified interface (unless enabled elsewhere)
EPICS_CAS_BEACON_ADDR_LIST Specify beacon destinations (mc or otherwise) ▪ When EPICS_CAS_INTF_ADDR_LIST isn't defined then ▪ Defaults to EPICS_CA_BEACON_ADDR_LIST, or if that isn't defined, EPICS_CA_ADDR_LIST
Logical names for multicast groups Recommended, but no new naming mechanism is needed ▪ Just use DNS, local host files, etc
EPICS_CA_BEACON_PORT Defaults to EPICS_CA_REPEATER_PORT ▪ EPICS_CA_REPEATER_PORT becomes deprecated Just an appropriate name change
Network admin must decide where the boundaries will lie for different levels of administratively scoped IPv4 multicasting A multicast will not be auto-forwarded outside of its scope
A conservative approach is the natural state of the control system community Don't fix it if it isn't broken It's very simple, robust, and efficient now ▪ Will new features only detract from the fundamentals responsible for success?
Hazards of conservative approach New features only added as evolution instead of architectural upgrades There can be a patchwork of new features instead of a grand design Learning the system becomes more difficult because of a need to ▪ Carefully absorb a chapter on each patchwork ▪ Carefully decide which patchwork will be best
Perhaps we can afford some new capabilities We started out on 20 MHz processors with 2 MB of memory Quad-core SMP will soon be the default With a larger user base we can beat the bugs out of the carpet faster, and amortize the expense over more projects Low cost of embedded processors ▪ even only a few signals per processor is cost effective
Basic principles of software quality New features, only in new-feature releases Patches, only in patch releases
Perhaps the path is easily navigated Provide backwards compatibility New features, only in new-feature releases Patches, only in patch releases
Full disclosure Users of new-features release pay a price ▪ Document new bugs so that they can be fixed ▪ New programming Interfaces need user feedback ▪ They should not be locked down early on Until a new release reaches sufficient maturity ▪ Not a good choice for essential production systems
Next generation CA server library substantially complete Provides many advantages Database CA Service nearing completion Multicasting Simplified configuration of large systems!