ONCS Presentation: Run Control from the User's Perspective, Common Toolbox, Interface to RHIC Control. C. Witzig, PHENIX Core Week, June 9, 1999


[Diagram: PHENIX DAQ overview. Front End Modules (FEM) feed Data Collection Modules (DCM); data pass through Partition Modules (PM) to Sub-Event Buffers (SEB) and over the ATM switch to the Assembly & Trigger Processors (ATP), which ship events to RCF. Timing and control: Master Timing Module (MTM), Granule Timing Module (GTM), Global Level 1, Local Level 1, Busy/Accept, timing fiber, serial control, ARCNET.]

Software Overview

Run Control Considerations
– Present the user with a "standard" (uniform) interface/entry point into the DAQ
– PHENIX partitioning requirement
– Must talk to many distributed processors (VME crates, workstations) by design of the PHENIX DAQ
– There must be complementary ways to send commands from within the run control for debugging

Current Situation: Rapid changes in 1008
– Hardware is arriving, users have "special wishes", quick changes, not everything works in hardware/software
– Reading out individual granules by themselves in standalone mode
– Reading out multiple granules
– Reading out via alternative paths (DCM over VME, over PM, over SEB)
– Current setups include EMCAL, DC, PC, TEC, BB, GL1, TS
– 2 TM crates, 2 DCM crates, 1 GL1 crate

Current ER Setup at BNL

Basic concepts (1)
Run control is a standalone server (a process on phoncs0). Every user starts their own copy, which is characterized by
– a unique arbitrary string (user input, e.g. your initials)
– a unique tag (an integer, handled invisibly to the user)
Two scripts:
  daq_start.sh bj PBSC.W
  daq_cleanup.sh bj
daq_start.sh
– checks that there are not too many run controls already running
– builds up the cleanup scripts
– starts the run control server, which reads the configuration files for the granule(s) and waits for user commands
daq_cleanup.sh
– executes the cleanup scripts

ASCII based configuration files

//
// configuration file for PBSC setup in ER
//
#define IP_PORT 5027

ONCS_GTM, GTM.EMC.W, TMserver10, 0x , … \
    $ONLINE_CONFIGURATION/GTM/GTM.EMClaser
ONCS_ROOBJECT, ro1b, daq1bobjmanager, … IP_PORT
ONCS_PAR, par.1b, daq1bobjmanager, …, vme, ro1b
ONCS_FEM2, dcm.pbsc.w0, ….., dsp5, ….
ONCS_DSP5, dcm.pbsc.75, ….. vme, … par.1b
FEM.PBSC, FEM.EMC.W.0, dcm.pbsc.w0, \
    i: $ARCNET_DATA/jwemal21.hex \
    i: $ARCNET_DATA/jwemal42.hex
DD_SERVER, dd_server1b, default, EVT_BIN/ndd_event_server, IP_PORT

After the startup…
Run control waits for user commands (next slide). All components of the DAQ system are brought into data-taking mode through the following sequence:
– initialisation (done once at the beginning)
– download (can be done several times)
Between initialisation and download the user normally configures the components, e.g. changes the GTM mode bit file, selects the files to download into the FEMs, etc.
If the download is successful the run control is "ready" for the start_run / end_run commands.
start_run has the following parameters:
– run number
– event_limit, volume_limit, time_limit [all optional]
Example:
  select FEM and GTM files, download, start_run for 1000 events, end_run
  select another FEM and GTM file, download, start_run…

Internally…
The run control server has the following objects:
– one partition object
– one control object for every type of DAQ component (a process_stage)
  – FEMstage controls all FEMs and reflects their state
  – DCMstage controls all DCMs and reflects their state
Every component has a proxy object that knows what to do when (e.g. at download time). [The process stage knows nothing about the internals of the component.]
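The slides do not show code for this; below is a minimal, illustrative C++ sketch of the process_stage / proxy idea only. The class and method names (ComponentProxy, ProcessStage, initialize, download, isReady) are invented for the example and are not taken from the actual ONCS sources.

#include <memory>
#include <string>
#include <vector>

// Hypothetical sketch: each DAQ component is represented by a proxy that
// knows how to carry out the run-control transitions for that component.
class ComponentProxy {
public:
  virtual ~ComponentProxy() = default;
  virtual void initialize() = 0;    // done once at the beginning
  virtual void download()   = 0;    // may be repeated after reconfiguration
  virtual bool isReady() const = 0;
};

// A process_stage controls all components of one type (FEMs, DCMs, ...)
// and reflects their collective state; it knows nothing about internals.
class ProcessStage {
public:
  explicit ProcessStage(std::string name) : name_(std::move(name)) {}
  void add(std::unique_ptr<ComponentProxy> p) { components_.push_back(std::move(p)); }
  void initialize() { for (auto& c : components_) c->initialize(); }
  void download()   { for (auto& c : components_) c->download(); }
  bool ready() const {
    for (const auto& c : components_)
      if (!c->isReady()) return false;
    return true;
  }
private:
  std::string name_;
  std::vector<std::unique_ptr<ComponentProxy>> components_;
};

// The run control server would then hold one partition object and one
// stage per component type, e.g. ProcessStage("FEMstage") and
// ProcessStage("DCMstage"), and drive them through the state transitions.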

How to send commands to the run control?
Three modes of operation:
– in "local" mode the run control accepts commands from standard input
– in "remote" mode the run control accepts commands from remote processes (same host or another host)
– from a "GUI" (development of a Java GUI has just started)
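As a generic illustration of "commands from standard input versus from a remote process" only, here is a small self-contained C++ sketch of a command loop that reads lines either from stdin or from a TCP connection. It is not the ONCS command protocol; the port number and the dispatch function are placeholders, and error handling is omitted.

#include <iostream>
#include <string>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

// Placeholder: hand one command line to the run control.
static void dispatch(const std::string& cmd) {
  std::cout << "run control received: " << cmd << std::endl;
}

int main(int argc, char** argv) {
  bool remote = (argc > 1 && std::string(argv[1]) == "remote");
  if (!remote) {
    // "local" mode: read commands from standard input
    std::string line;
    while (std::getline(std::cin, line)) dispatch(line);
    return 0;
  }
  // "remote" mode: accept command text on a TCP port (arbitrary example port)
  int srv = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(5027);
  bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
  listen(srv, 1);
  int cli = accept(srv, nullptr, nullptr);
  char buf[256];
  ssize_t n;
  while ((n = read(cli, buf, sizeof(buf) - 1)) > 0) {
    buf[n] = '\0';
    dispatch(buf);
  }
  close(cli);
  close(srv);
  return 0;
}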

What's next?
– GUI for a single run control
– Coordination of several run controls running in parallel: a common server that keeps track of the GL1 configuration and which granules belong to which partition
– Integration of the SEBs/ATPs into the run control
– Integration of subsystem monitoring programs into the real-time environment
– Databases [longer-term item, post ER]

Toolbox news: running means
The most often used tool for gain monitoring of your detector is the running mean value. You compute the running mean of some laser response, or a test pulse, which is supposed to be stable over time. Mathematically accurate running means are simple to do but true memory hogs. We introduce two classes, fullRunningMean and pseudoRunningMean, which allow you to compute many running mean values in parallel, no frills. fullRunningMean is the "memory hog" accurate running mean, while pseudoRunningMean is the faster and smaller pseudo value. For gain monitoring, where the values normally don't change much (20 or 30% at most), the pseudo value is a very good approximation of the real running mean at a fraction of the cost and time of the "real thing".

RunningMean *bbmean_p = new pseudoRunningMean(128,50); // 128 channels, 50 values deep
RunningMean *bbmean_f = new fullRunningMean(128,50);   // 128 channels, 50 values deep

"pseudo" and "full" both inherit from an abstract class "RunningMean", so you can easily compare "full" to "pseudo" and see whether pseudo is good enough.
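As a rough sketch of such a comparison, the snippet below feeds the same data to both classes and prints the per-channel difference. It uses only the interface shown on these slides (the (channels, depth) constructor, Add() and getMean()) and assumes it runs in the same environment as the example on the next slide; the slowly drifting fake data and the variable names are invented for the illustration.

// Sketch: compare "full" and "pseudo" running means on identical input.
RunningMean *mean_full   = new fullRunningMean(128, 50);   // 128 channels, 50 deep
RunningMean *mean_pseudo = new pseudoRunningMean(128, 50); // 128 channels, 50 deep

int array[128];
for (int ev = 0; ev < 500; ev++)
{
  // fake gains with a slow ~20% drift, standing in for a laser response
  for (int ch = 0; ch < 128; ch++) array[ch] = int(1000 * (1.0 + 0.2 * ev / 500.0));
  mean_full->Add(array);     // update the accurate running means
  mean_pseudo->Add(array);   // update the pseudo running means
}

for (int ch = 0; ch < 128; ch++)
{
  double diff = mean_full->getMean(ch) - mean_pseudo->getMean(ch);
  cout << "channel " << ch << "  full - pseudo = " << diff << endl;
}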

Running mean example (PBSC)

// "it" is an event iterator set up elsewhere; a few declarations for context:
Event *evt;
int array[144], nw, i, j;

RunningMean *pm = new pseudoRunningMean(144,50); // 144 channels, 50 values deep
for (i=0; i<500; i++)
{
  if ( !( evt = it->getNextEvent() ) ) break;
  Packet *p = evt->getPacket(8002);       // get pbsc packet 8002
  if (p)
  {
    p->fillIntArray(array, 144, &nw);     // get all 144 values
    delete p;
    pm->Add(array);                       // update the running means
  }
  delete evt;
}
for (j=0; j<144; j++) cout << pm->getMean(j) << endl;   // read back the 144 running means

Runs as a ROOT macro, as a shared lib, or standalone.

Algorithm:  rn = ( ro * (d-1) + x ) / d
where d = running mean depth, rn = new mean value, ro = old mean value, x = new reading.
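To make the update rule above concrete, here is a tiny self-contained sketch of the pseudo running mean update for a single channel. It is not the toolbox code; the starting value, the made-up readings, and the handling of the initial filling phase are assumptions for the illustration only.

// Minimal, illustrative pseudo running mean for one channel:
//   r_new = ( r_old * (d-1) + x ) / d
// (the real class presumably treats the initial filling phase differently)
#include <cstdio>

int main()
{
  const double d = 50.0;     // running mean depth
  double r = 1000.0;         // current (pseudo) running mean, seeded with a typical value
  const double readings[] = { 1000, 1002, 998, 1001, 999, 1003 };

  for (double x : readings)
  {
    r = ( r * (d - 1.0) + x ) / d;   // update with the new reading x
    std::printf("new reading %.0f  ->  running mean %.2f\n", x, r);
  }
  return 0;
}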

How good is it?
For "realistic" gain values with typical drifts, the deviation of "pseudo" from "full" is less than 0.5%. You should test what you get, though. (The experiment at CERN used this routinely for all PM-based detectors.)
[Plots: top, the values over time; middle, the real running mean; bottom, the deviation in %.]

Response to a "jump", extreme case
[Plots: top, the values over time, with the "jump"; middle, the response of the "full" running mean over time; bottom, the response of the pseudo running mean over time.]

rGraphs
Often you want to see a value change over time, such as
– luminosity over time
– events/second over time
– scaler values over time
– temperature over time
– the Dow Jones index over time...
rGraph gives you a simple "strip chart" of a value you want to look at over time.

Example (some random numbers)
When the "fill depth" is reached, it scrolls to the left. Looks like a real polygraph…
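The rGraph interface itself is not shown on these slides; purely to illustrate the scrolling behaviour, here is a small self-contained C++ sketch of a fixed-depth "strip chart" buffer that drops the oldest point once the fill depth is reached. The class and method names are invented for the example and are not the rGraph API.

#include <cstddef>
#include <cstdio>
#include <deque>

// Illustrative fixed-depth strip chart buffer (not the rGraph class):
// once "depth" points are stored, adding a new one drops the oldest,
// so a plot of the buffer appears to scroll to the left.
class StripChartBuffer {
public:
  explicit StripChartBuffer(std::size_t depth) : depth_(depth) {}
  void add(double value) {
    if (points_.size() == depth_) points_.pop_front();  // scroll left
    points_.push_back(value);
  }
  void print() const {
    for (double v : points_) std::printf("%6.1f ", v);
    std::printf("\n");
  }
private:
  std::size_t depth_;
  std::deque<double> points_;
};

int main() {
  StripChartBuffer chart(5);       // fill depth of 5 points
  for (int i = 0; i < 10; i++) {
    chart.add(100.0 + i);          // e.g. a scaler value sampled over time
    chart.print();
  }
  return 0;
}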