
Umesh Joshi Fermilab Phase 1 Pixel Upgrade Workshop, Grindelwald August 28 - 31, 2012 CMS Pixel & HCAL Databases (An Overview)





2 Pixel and HCAL Databases: Short Background

Reminder: the CMS Pixel and HCAL databases are different physical instances of the same generic database design (details later).

Our DB experience & ongoing activities:
- FPix construction involved multiple institutions at different locations in the U.S. (tracking the flow of parts)
- Pixel Online DB (ongoing): configuring the Pixel detector
- HCAL monitoring (ongoing): pedestals, laser, LED, radiation damage, etc.

Current HCAL upgrades:
- HO upgrade component testing: SiPMs (silicon photomultipliers), mounting boards, control boards, bias boards, etc.
- HF upgrade component testing: multi-anode PMTs, new base boards, etc.

We will be working on FPix upgrades.

Pixel Upgrade Workshop, Grindelwald 8/29/2012, Umesh Joshi

3 Construction DB: What Do We Do?

Detector components:
- Store all detector components, control electronics, configuration electronics, etc.
- Track every component: wafers, ROCs, pixels, modules, etc.
- Store all related data

"Build the detector in the DB" (a concept we have embraced):
- Use components stored in the DB to build devices
  o FPix: plaquettes, panels, blades, half-disks, half-cylinders, detector
- Build the readout and control chains
- Map detector components to the readout and control chains
- Detector configuration and monitoring then becomes straightforward

Store all configuration data:
- Configuration data
- Configuration keys and aliases

Store all monitoring data:
- Track individual channels
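The "build the detector in the DB" idea can be sketched with a toy parts table in which each component points at the device it is mounted on. This is an illustration only, assuming an invented single-table layout (it is not the actual CORE_CONSTRUCT schema); all table, column, and part names are hypothetical.

```python
# Illustrative sketch: a toy component hierarchy, not the real CORE_CONSTRUCT schema.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One table holds every component; parent_id expresses "is built into".
cur.execute("""
    CREATE TABLE parts (
        part_id   INTEGER PRIMARY KEY,
        kind      TEXT NOT NULL,      -- e.g. ROC, plaquette, panel, blade
        label     TEXT NOT NULL,
        parent_id INTEGER REFERENCES parts(part_id)
    )
""")

# "Build the detector in the DB": an FPix-style chain
# ROC -> plaquette -> panel -> blade (labels are invented).
cur.execute("INSERT INTO parts VALUES (1, 'blade',     'BLD-01', NULL)")
cur.execute("INSERT INTO parts VALUES (2, 'panel',     'PNL-01', 1)")
cur.execute("INSERT INTO parts VALUES (3, 'plaquette', 'PLQ-01', 2)")
cur.execute("INSERT INTO parts VALUES (4, 'ROC',       'ROC-17', 3)")

# Walk upward from a single ROC to the device that contains it.
cur.execute("""
    WITH RECURSIVE chain(part_id, kind, label, parent_id) AS (
        SELECT part_id, kind, label, parent_id FROM parts WHERE label = 'ROC-17'
        UNION ALL
        SELECT p.part_id, p.kind, p.label, p.parent_id
        FROM parts p JOIN chain c ON p.part_id = c.parent_id
    )
    SELECT kind, label FROM chain
""")
for kind, label in cur.fetchall():
    print(kind, label)
```

Once every component row carries such a link, tracking a single channel back through its module, plaquette, and blade is a single recursive query rather than a bookkeeping exercise.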

4 DB Schemas: A Brief Description

The Pixel and HCAL databases are different entities, but share the same design. Each database consists of two schema groups:

Global (can be used for any detector):
- CORE_CONSTRUCT
- CORE_ATTRIBUTE
- CORE_COND
- CORE_MANAGEMNT
- CORE_IOV_MGMNT

Detector specific:
- PIXEL_COND
- PIXEL_CONSTRUCT

Each schema contains multiple tables:
- CORE_CONSTRUCT & CORE_ATTRIBUTE: used together to store detector components and their attributes, e.g. ROC position on wafer, ROC position on module, etc.
- PIXEL_COND: stores detector test, configuration, and monitoring data
- CORE_COND: interfaces detector components (CORE_CONSTRUCT) with their data (PIXEL_COND)
- CORE_MANAGEMNT: tracks components across different institutions
- CORE_IOV_MGMNT (deployed only for Pixels): manipulation and tracking of Pixel configuration keys
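The role of CORE_COND as the interface between components and their data can be illustrated with three toy tables. The table names echo the schema names above, but every column and value here is invented for the sketch; the real Pixel/HCAL layout differs.

```python
# Toy illustration of the CORE_COND interface pattern: a link table
# between components (CORE_CONSTRUCT-style) and condition data
# (PIXEL_COND-style).  All columns and values are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One row per detector component.
cur.execute("CREATE TABLE core_construct (part_id INTEGER PRIMARY KEY, label TEXT)")
# One row per measurement or configuration value.
cur.execute("CREATE TABLE pixel_cond (data_id INTEGER PRIMARY KEY, kind TEXT, value REAL)")
# The interface: links a component to its condition data.
cur.execute("CREATE TABLE core_cond (part_id INTEGER, data_id INTEGER)")

cur.execute("INSERT INTO core_construct VALUES (1, 'ROC-17')")
cur.execute("INSERT INTO pixel_cond VALUES (10, 'pedestal', 2.31)")
cur.execute("INSERT INTO pixel_cond VALUES (11, 'noise',    0.12)")
cur.executemany("INSERT INTO core_cond VALUES (?, ?)", [(1, 10), (1, 11)])

# Fetch all condition data for one component via the interface table.
rows = cur.execute("""
    SELECT c.label, d.kind, d.value
    FROM core_construct c
    JOIN core_cond  l ON l.part_id = c.part_id
    JOIN pixel_cond d ON d.data_id = l.data_id
    ORDER BY d.data_id
""").fetchall()
print(rows)
```

Because the link lives in its own table, the global component schema never needs to know which detector-specific data tables exist, which is what lets the same core design serve both Pixel and HCAL.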

5 How Do We Function?

Our most critical & time-consuming task is interfacing with detector experts:
- Very close interaction is needed between the experts building & testing the detector and the DB group.
- This interaction is what drives the design of the DB tables, the XML templates for loading data, and the WBM pages.

The development process involves four instances of a database, e.g. for the Pixel DB:
- Template DB instance: currently at Fermilab (used for building the DB Loader)
- Development DB: at CERN IT (INT2R)
- Integration DB: at P5 (CMSINTR)
- Production DB: at P5 (OMDS)

Once agreed upon, the tables are deployed in the Template DB and the Development DB. The DB Loader is built using the Template DB on a dedicated machine, currently cmshcal05.cern.ch.

6 How Do We Function? (cont.)

- The generated DB Loader is deployed on a development machine, currently pcuscms34.cern.ch, and loading to the Development DB is enabled.
- Data from the various tests are written out in the XML formats provided, zipped, and copied to a spool area, where the DB Loader picks them up and loads them into the DB.
- A cron job is set up to periodically wake up and look for zipped XML files; when a file is found, the loader reads the data and loads it into the DB.
- For each user table deployed (in the PIXEL_COND schema), an XML template has to be generated; there is a one-to-one mapping between an XML template and a user DB table.
- We will provide all XML templates to the experts building and testing detector components.
- We prefer that the experts testing the devices take on the responsibility of generating the XML files and copying them (zipped) to the spool area.
- Work on the deployment of the WBM pages (as agreed upon) has also been initiated.
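The producer side of this workflow (write the XML, zip it, drop it in the spool area for the cron-driven loader) can be sketched as below. The XML tag names, file names, and spool path are all invented for illustration; real files must follow the XML templates we provide.

```python
# Sketch of the producer side: write test results to XML, zip, and
# move into the spool area.  Tag names, file names, and the spool
# path are hypothetical; real data must follow the provided templates.
import os
import tempfile
import zipfile
import xml.etree.ElementTree as ET

def write_results_xml(path, run, measurements):
    """Write (name, value) pairs for one run into an XML file."""
    root = ET.Element("ROOT")
    ds = ET.SubElement(root, "DATA_SET")
    ET.SubElement(ds, "RUN_NUMBER").text = str(run)
    for name, value in measurements:
        data = ET.SubElement(ds, "DATA")
        ET.SubElement(data, "NAME").text = name
        ET.SubElement(data, "VALUE").text = str(value)
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

def spool(xml_path, spool_dir):
    """Zip the XML file and move it into the spool area."""
    # Zip first, then rename into place in one step, so the periodic
    # loader never picks up a half-written file.
    zip_path = xml_path + ".zip"
    with zipfile.ZipFile(zip_path, "w") as zf:
        zf.write(xml_path, arcname=os.path.basename(xml_path))
    final = os.path.join(spool_dir, os.path.basename(zip_path))
    os.replace(zip_path, final)
    return final

spool_dir = tempfile.mkdtemp()   # stand-in for the real spool area
xml_path = os.path.join(tempfile.mkdtemp(), "pedestals_run42.xml")
write_results_xml(xml_path, 42, [("pedestal_mean", 2.31), ("pedestal_rms", 0.12)])
print(spool(xml_path, spool_dir))
```

The loader side is then just a periodic scan of the spool directory for `*.zip` files, which is what the cron job above provides.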

7 How Do We Function? (cont.)

- Once the data loads successfully in the Development DB, the tables are deployed in the Integration and Production databases at P5, and DB Loaders are installed on designated machines.
- If the data loads successfully in the Integration DB, one is assured that it will load in the Production DB, and one can proceed.
- The process of loading data into production can be automated (this is what we currently do for HCAL monitoring data at P5).
- Data loaded in the Development DB can be viewed using the development WBM pages, e.g.
  http://cmshcal05.cern.ch:8080/cms-wbm-hcal/servlet/ServiceHub/hcal.HomeService
  http://cmshcal05.cern.ch:8080/cms-wbm-hcal/servlet/ServiceHub/pixel.HomeService
- Data loaded in production can be viewed using the production WBM pages, e.g.
  https://cmswbm.web.cern.ch/cmswbm/cmsdb/servlet/ServiceHub/hcal.HomeService
  https://cmswbm.web.cern.ch/cmswbm/cmsdb/servlet/ServiceHub/pixel.HomeService


11 DB Group Members

We currently have a group working on the HCAL and Pixel databases. Our focus so far has been on HCAL; we are starting to think about Pixels. The group members are:

- Zhen Xie (Princeton University): based at CERN; currently working on the HCAL Offline DB, Lumi DB, and Trigger DB; main DB contact for Pixels.
- Valdas Rapsevicius (US CMS): based in Vilnius; currently working on the Run Registry (designed & developed) and all WBM applications for HCAL; main contact for HCAL & Pixel WBM applications.
- Dmitry Vishnevskiy (US CMS): based at CERN; currently working on HCAL monitoring, detector diagnostics, and the DB; main DB contact for HCAL.
- Umesh Joshi (Fermilab): based at Fermilab; contact person for HCAL & Pixels.

