CMS Remote Monitoring
Alan L. Stone (Fermilab)
15th IEEE NPSS Real Time Conference 2007 · Fermilab, Batavia, IL 60510 · April 29 – May 4, 2007

We describe recent CMS activities performed at the Fermi National Accelerator Laboratory (FNAL) Remote Operations Center (ROC). The FNAL ROC was located on the 11th floor of Wilson Hall at Fermilab (built in 2005) and was equipped with a dozen Linux PCs with multiple LCD displays, a gigabit network connection, a file and web server, and videoconferencing capability.

Web-Based Monitoring

The online Web-Based Monitoring (WBM) system of the CMS experiment consists of a web services framework based on Jakarta/Tomcat and the ROOT data display package. Because of security concerns, many monitoring applications of the CMS experiment cannot be run outside of the experimental site. Therefore, to allow remote users access to CMS experimental status information, we implement a set of Tomcat/Java servlets running in conjunction with ROOT applications to present current and historical status information to the remote user in a web browser. The WBM services act as a portal to activity at the experimental site. In addition to HTML, JavaScript is used to mark up the results in a convenient folder scheme; no special browser options are necessary on the client side. The primary source of data used by WBM is the online Oracle database; the WBM tools provide browsing and transformation functions to convert database entries into HTML tables, graphical plots, XML, text, and ROOT-based object output. The ROOT object output includes histogram objects and n-tuple data containers suitable for download and further analysis by the user. We devised a system of meta-data entries describing the heterogeneous database which allows the user to plot arbitrary database quantities, including plots of multiple values versus time and time-correlation plots. The server consists of two dual-core CPUs and a RAID configuration of 5 TB of disk, on which up to a year of monitoring output is stored (older results will be archived to tape storage). Examples of WBM tools and services which assisted shifters in their remote monitoring duties are marked in this poster with the WBM label.

Links:
http://www.uscms.org/roc/
http://cmsmon.cern.ch
https://twiki.cern.ch/twiki/bin/view/CMS/OnlineWB
HCAL Test Beam 2006: https://twiki.cern.ch/twiki/bin/view/CMS/HcalTestbeam2006
http://www.uscms.org/roc/cms_runs.html
http://www.uscms.org/roc/cms_mtcc.html
http://tacweb.cern.ch:8080
https://twiki.cern.ch/twiki/bin/view/CMS/SiStripTrackerDQM
http://cmsmon.cern.ch/cmsdb/servlet/RunSummaryTIF
http://home.fnal.gov/~biery/snapshot/
http://cmsmon.cern.ch/cmsdb/servlet/DcsLastValue
https://lhcatfnal.fnal.gov/

Poster panels: MTCC Results (1), MTCC Results (2), Process Summary, Web-Based Monitoring, Remote Operations Center, CMS Tracker Slice Test, Tracker DQM, RunSummaryTIF, Screen Snapshot Service, Tracker DCS Monitoring, LHC@FNAL ROC.

The Magnet Test and Cosmic Challenge (MTCC) took place in two phases, in August and October of 2006. The MTCC was the first time CMS included more than one sub-detector in the readout, and the first running in the presence of the full, stable magnetic field of up to 4.1 T. Remote data quality monitoring (DQM) shifts were taken from the FNAL ROC, and shifters were tasked with running as many DQM modules as possible as well as the IGUANA event display. Communication was through video and phone conferencing, logbook entries and frequent e-mails; webcams were installed in the FNAL ROC and the CERN control room. [WBM]

[Event display: B = 3.8 T, Run 2605, Event 3981]
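As a purely illustrative sketch of the servlet-plus-database pattern that the Web-Based Monitoring panel above describes (a Tomcat servlet transforming Oracle database rows into an HTML table), the following Java example is not the actual WBM code: the JNDI pool name "jdbc/wbm" and the table and column names are assumptions.

```java
// Illustrative sketch only: a Tomcat servlet that renders database rows as an
// HTML table, in the spirit of the WBM servlets described above.
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class RunTableServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        try {
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            // Connection pool configured in Tomcat; "jdbc/wbm" is a hypothetical name.
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/wbm");
            String minRunParam = req.getParameter("minRun");   // optional lower bound
            int minRun = (minRunParam == null) ? 0 : Integer.parseInt(minRunParam);
            try (Connection conn = ds.getConnection();
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT run_number, start_time, events FROM runs "
                         + "WHERE run_number >= ? ORDER BY run_number DESC")) {
                ps.setInt(1, minRun);
                try (ResultSet rs = ps.executeQuery()) {
                    out.println("<table border='1'><tr><th>Run</th>"
                              + "<th>Start time</th><th>Events</th></tr>");
                    while (rs.next()) {
                        out.printf("<tr><td>%d</td><td>%s</td><td>%d</td></tr>%n",
                                   rs.getInt(1), rs.getString(2), rs.getLong(3));
                    }
                    out.println("</table>");
                }
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

In the real system, per the panel above, equivalent transformation functions also emit XML, plain text and ROOT objects, and a meta-data layer describing the heterogeneous database determines which quantities can be plotted against time.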
The main focus of the FNAL ROC group is independent of location: we concentrate on working closely with our CMS colleagues in the areas of data acquisition, triggers, data monitoring, data transfer, software analysis and database access, in order to commission the sub-detectors with the common goal of efficient, high-quality data taking for physics.

[Timeline 2004–2008: HCAL Test Beam; construction of the WH11 ROC; WBM project begins; MTCC I & II; DQM development; commissioning of detector and trigger systems; Tracker Slice Test; location move to WH1 (LHC@FNAL); LHC beam and physics.]

Screen Snapshot Service

Snapshot flow:
- Snapshot Producer (private network): captures screen images periodically and sends them to the snapshot server; implemented as a Java application.
- Snapshot Server: receives images from the producers, converts them to PNG and serves them to the consumers; runs under Tomcat.
- Snapshot Consumer (public network): periodically fetches snapshots from the server and displays them in a web browser.

Example of the SSS running on a PC in the Tracker Analysis Centre at CERN, showing the Slow Controls System (25 Apr 2007, 12:11 AM CERN time):
- TIB:TOP: the top level of the control system for powering the TIB+ side. The three buttons correspond to the power for the TIB, the TID and the main LV supply; the first two are in error because the third is OFF.
- PLC values: an array of temperatures from the different probes of the TIB system. The name of the probe (current value) is shown in the white (colored) box. Green means the temperature is inside the acceptable window (the chiller is set to 15 C); grey means the probe is not functional.
- PLC trending: a plot of the above values. The wayward trace shows one probe which currently needs investigation (it has been disabled).

Tracker DQM

The top plot shows a summary of the strip noise for six layers of the TOB; the bottom plot presents the same information as a distribution. The panel on the right allows selection of a specific layer in order to look at the strip noise in that layer in more detail for effective troubleshooting. Similar summary plots are filled for each tracker subsystem. A collection of plots on the left characterizes tracking and cluster performance: shown are the signal-to-noise ratio and width of the clusters associated with tracks (top row), and the number of reconstructed hits per track and number of tracks per event (bottom row). The panel on the right shows a list of all available histograms for monitoring tracking performance.

CMS Tracker Slice Test

The slice was assembled at the Tracker Integration Facility at CERN in the summer and fall of 2006. It instrumented ~20% of the CMS Tracker system for readout from January to May 2007, in warm and then cold (-10 C) stages: Outer Barrel (TOB±), Inner Barrel (TIB±) and End Cap (TEC±). A cosmic trigger (~1.1 Hz) with adjustable geometry was used, and over 1 million events have been collected so far.

HCAL Test Beam 2006

A slice test of the CMS calorimeter: Hadronic Barrel (HB) 2 wedges and 8 segments (40 deg); Endcap (HE) 4 segments (20 deg); Outer (HO) rings 0, 1, 2; plus one ECAL tile, read out with real electronics. The test used the CERN SPS H2 beam line at Prevessin. Maximum intensities for 1E12 incident protons at 400 GeV/c: 9E7 pi+ (3E7 pi-) at 200 GeV/c and 1E6 e± at 150 GeV/c. HCAL test beam data taking ran from June – Sept 2006 at the H2 facility. Data were transferred to the FNAL ROC, typically within 5 minutes of the end of a run, and then reconstructed; summary results were made available for online monitoring by anyone with a web browser. Particle identification used TOF and Cherenkov counters for very low energies (<9 GeV), muon veto and beam halo counters, and trigger counters for multiple interactions. Example summary plots include the "banana plots" – ECAL versus HCAL and total energy (muon subtracted) for 50 GeV pions and electrons.
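Referring back to the Screen Snapshot Service flow above: the poster states that the Snapshot Producer is a Java application that periodically captures the screen and sends the images to the snapshot server. The sketch below shows one minimal way such a producer could look; the upload URL and the 30-second capture period are assumptions, not the actual SSS implementation.

```java
// Illustrative sketch of a screen-snapshot producer (not the actual Screen
// Snapshot Service code).  The upload URL and capture period are assumed.
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.imageio.ImageIO;

public class SnapshotProducer {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        URL server = new URL("http://snapshot.example.org/upload");  // hypothetical
        while (true) {
            // Capture the full screen as an image.
            BufferedImage image = robot.createScreenCapture(screen);
            // POST the image as PNG to the snapshot server, which (per the
            // flow above) re-serves it to browser-based consumers.
            HttpURLConnection conn = (HttpURLConnection) server.openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "image/png");
            try (OutputStream out = conn.getOutputStream()) {
                ImageIO.write(image, "png", out);
            }
            conn.getResponseCode();   // read the status so the request completes
            conn.disconnect();
            Thread.sleep(30_000);     // assumed capture period
        }
    }
}
```

On the server side, per the flow above, a Tomcat application receives the uploads, converts them to PNG where needed, and serves them to consumers on the public network.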
The HCAL Data Quality Monitor runs in live time at CERN and updates every ~1000 events, processing only HCAL detector information, e.g. the number of digis and occupancy maps. The output is HTML and PNG files, web-browser friendly and navigable by following links, so feedback can be prompt and a problem (a loose cable, electronics that need re-initialization, etc.) is addressed in minutes. It also serves as a commissioning tool: for example, in monitoring the pedestals one expects a Gaussian mean and RMS of ~1.

Tracker DCS Monitoring [WBM]

Detector control system (DCS) data are imported from the Oracle database. Comfort displays visualize the DCS status of high and low voltage, current and temperature on a geometrical detector cartoon. Pointing the mouse cursor at a single channel pops up a window with the channel values; clicking on a channel gives more details and histogram plots.

[Event display: Run 7636, Event 2648, collected on April 13th. The cosmic muon was detected in the TIB, TOB and TEC. The TIB (black) is in the center, followed by the TOB (lighter gray) and then the TEC (darker gray). In the IGUANA event displays the track is drawn as a red line, the rechits are represented by green dots and the clusters of rechits are shown as blue dots.]

IGUANA can run remotely on a live stream of data (about 1 event/min) through a WBM Event Proxy. [WBM]

FNAL ROC support for SiTracker activities included data transfer, bookkeeping, the Elog, the event display, RunSummary and DCS monitoring. [WBM]

RunSummaryTIF [WBM]

The RunSummaryTIF servlet is a query form which allows the user to search for runs using various optional criteria such as date and time, run mode (Physics, Pedestal, Timing, Calibration), partition (trigger configuration), and more. An example of a date query is shown, with a multi-run result and the single-run result for Run 7647, which had 26 file fragments and 9736 events. From single-run results, links are provided to the Online DQM, the Online Elog and the HV status. A method of returning a query in simple text format is also provided, so users can query the database from a script or batch job using a wget command with a few options.

DCS last-value monitoring [WBM]: information for the last hour of a single-channel device; histograms can be rescaled by value or by time.

LHC@FNAL ROC

The LHC@FNAL Remote Operations Center is located on the 1st floor of Wilson Hall at Fermilab. Its primary purpose is to establish a ROC that supports the commissioning and operations of both the LHC and CMS by the spring of 2007. One of the main goals is to give CMS and accelerator scientists and engineers remote access to the displays and information available in the control rooms at CERN; the consoles and layout of LHC@FNAL were therefore chosen to match the specifications at CERN. Each console includes two console PCs (at eye level) and one fixed-display PC (upper level), with the latter connected to a projector. The network provides open access (gigabit switch) and protected access (dedicated router). Videoconferencing is available in a dedicated adjacent conference room, with point-to-point capability from the console equipment. LHC@FNAL officially opened on 12 Feb 2007. The CMS Tracker Slice Test group were the first users of the ROC: they commissioned the brand-new infrastructure, took shifts and participated in live-time remote monitoring. The Tier-1 computing facility team has developed detailed monitoring displays and alarms to ensure that the mass storage and processing facilities at Fermilab are operational, and the ROC is staffed daily by a Tier-1 shifter. CMS trigger and detector commissioning begins in late May 2007, in preparation for global data taking.
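As a companion to the RunSummaryTIF text-format query described above, the sketch below fetches the plain-text result from a batch job, mirroring the documented wget usage. The query parameter names ("RUN" and "TEXT") are assumptions for illustration only and are not taken from the servlet's actual interface.

```java
// Illustrative sketch of querying the RunSummaryTIF servlet's plain-text output
// from a batch job, equivalent in spirit to the wget usage mentioned above.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class RunSummaryQuery {
    public static void main(String[] args) throws Exception {
        // Parameter names below are hypothetical, chosen only for the sketch.
        URL url = new URL("http://cmsmon.cern.ch/cmsdb/servlet/RunSummaryTIF"
                          + "?RUN=7647&TEXT=1");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // one run record per line (assumed format)
            }
        }
    }
}
```

A shifter's script could parse the returned lines to pick out run numbers and event counts before deciding which runs to examine further.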
A cosmic muon was detected by all four detectors participating in the run. The event display shows how the particle traversed the detector, with the reconstructed 4D segments in the muon drift tubes (magenta), the reconstructed hits in the HCAL (blue), the uncalibrated reconstructed hits in the ECAL (light green), and the locally reconstructed track in the Tracker (green). A muon track was reconstructed in the drift tubes and extrapolated back into the detector, taking the magnetic field into account. [I. Osborne, http://cdsweb.cern.ch/record/1011022]

The Tier-0 and Tier-1 facility teams at CERN and FNAL handled all of the data transfer and mass storage. However, the files were not in a user-friendly format (ROOT), and no database was in place to inform users of the "when, what and where" of the files. The FNAL ROC group devised a solution for MTCC data bookkeeping and created an automation scheme, written in Python, for job submission, file conversion and data quality monitoring: a 15-minute cron job checks for new data files at FNAL; processing runs on a Condor-based work-cluster batch system; the Condor DAGMan facility synchronizes the processing file by file; an Elog entry is made whenever new merged files are completed; and the results are available through WBM.

Process Summary [WBM]

The MTCC Process Summary page contains information on the status of the MTCC data, ordered by the most recent run number (linked to RunSummary). The Process Summary color coding guides shifters on the state of transfer (CASTOR to dCache), conversion (streamer to ROOT) and/or DQM processing (Trigger, Tracker, HCAL or CSC, with links to WBM). For reference, the number of events, the total number of files per run, the stop time and the magnet current are also included.

DQM Browser [WBM]: the output of the online High Level Trigger (HLT) DQM is made available through WBM tools to the CMS and FNAL shifters in live time for prompt monitoring and feedback to LTC experts.

Local Trigger & Control (LTC) muon bit pattern for Run 2623 (a small decoding sketch is given at the end of this poster):
bit 0: Drift Tube (DT)
bit 1: Cathode Strip Chamber (CSC)
bit 2: RBC1 (RPC trigger for wheel +1)
bit 3: RBC2 (RPC trigger for wheel +2)
bit 4: RPC-TB (RPC trigger from the trigger board)
Only muon triggers were used in MTCC-I; calorimeter triggers were added in MTCC-II.

[Plot: magnet current as a function of time for Run 2241.] [WBM]

The CSC muon chambers provide signals from anode wires (cathode strips) along constant η (φ). Each chamber has six layers of wires and strips. The 2D hits (recHits) are monitored in the r-φ plane. The chambers are arranged in rings around the beam pipe; there are four separate layers of rings, referred to as stations, with Station 1 (closest to the interaction point) through Station 3 (furthest from the IP) included in MTCC-I.

[Tracker DQM output for Run 2372.] [WBM]

The Tracker participated only in MTCC-I. Reprocessing of the MTCC-I Tracker data was crucial for algorithm and alignment studies: no additional tracker data of any kind would be available for several months, and no data with magnetic field for over a year. The FNAL ROC provided a common sample of ROOT files for tracker data analyses. The reprocessed data sample consisted of ~100 runs in total: B = 0 T (11M events), B = 3.8 T (14M events) and B = 4.0 T (2M events).
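The LTC muon bit pattern listed above lends itself to a small worked example: the sketch below decodes a 5-bit trigger pattern into the trigger names given on the poster. The bit assignments are taken from the panel; the class and method names are illustrative only.

```java
// Small sketch that decodes the LTC muon trigger bit pattern listed above
// (bits 0-4) into human-readable trigger names.
import java.util.ArrayList;
import java.util.List;

public class LtcMuonBits {
    private static final String[] NAMES = {
        "DT",                         // bit 0: Drift Tube trigger
        "CSC",                        // bit 1: Cathode Strip Chamber trigger
        "RBC1 (RPC wheel +1)",        // bit 2
        "RBC2 (RPC wheel +2)",        // bit 3
        "RPC-TB (RPC trigger board)"  // bit 4
    };

    // Return the names of all triggers whose bit is set in the pattern.
    public static List<String> decode(int pattern) {
        List<String> fired = new ArrayList<>();
        for (int bit = 0; bit < NAMES.length; bit++) {
            if ((pattern & (1 << bit)) != 0) {
                fired.add(NAMES[bit]);
            }
        }
        return fired;
    }

    public static void main(String[] args) {
        // Example: a pattern with the DT and CSC bits set.
        System.out.println(decode(0b00011));   // prints [DT, CSC]
    }
}
```

For Run 2623 only muon triggers could appear in the pattern, since calorimeter triggers were not added until MTCC-II.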