An Overview of the Pixel and HCAL Databases


1 An Overview of the Pixel and HCAL Databases
Reminder
- The Pixel and HCAL databases are different instances of the same generic database design.
- They are identical at inception but carry different names: Pixel DB and HCAL DB.
- Their lives diverge as they grow, and they become very different entities.
Briefly talk about past and present projects:
- FPix construction experience
- HCAL monitoring experience
- Current HCAL upgrades: a melding of the above two experiences
Give a brief walk through the construction and condition schemas (central to construction).
Most important: the user interfaces and the implementation procedure we follow.

Pixel Software Meeting, Apr 16, 2012, Umesh Joshi

2 FPix Construction DB Experience: Issues
- Complex detector with a large number of components: wafers, HDIs, VHDIs, plaquettes, panels, blades, disks, control electronics, readout electronics, etc.
- Multiple institutions were involved in the construction process: Purdue University (Indiana), Johns Hopkins (Maryland), Kansas State University (Kansas), and Fermilab (Illinois).
- Within a single institution (Fermilab), there were multiple data producers.
- Streamline parts flow: data produced in one location is needed for testing and selection at the next stop.
- Components had to be closely tracked, e.g. from wafer testing to final detector placement, and tracked on their journeys through different institutions.
- All data from testing of components, from wafers to the fully assembled detector, had to be properly stored.

3 FPix Construction DB Experience: Goals
- Store all data in a central location.
- Be able to track every component over its entire journey.
- Make it possible for any collaborator anywhere in the world to access the data.
- Fermilab was involved in constructing two major detectors, HCAL and FPix. HCAL was already built but needed a DB for monitoring and troubleshooting, so two databases were needed.
- Design a database that met these requirements:
  - Generic schema design (for Pixels and HCAL)
  - Deploy the Oracle database at Fermilab

4 FPix Construction DB Experience: Goals
- Provide a simple, common method for users to load data into the DB; users need not be well versed in DB specifics:
  - Users produce data in specified XML formats (defined by DB members).
  - They zip the files and copy them to a spool area.
  - A DB Loader cron job picks them up, parses them, and loads the data into the DB (a minimal sketch of the producer side follows this list).
- Use existing tools, or provide tools, to retrieve, analyze, and view data from the DB: ROOT/OCCI, Excel, DB browsers.
- Currently use WBM (from the HCAL experience). This is now standard.
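To make the producer side of this workflow concrete, here is a minimal sketch in Python: it writes one test result as XML, zips it, and copies the archive to a spool directory. The element names, file names, and spool path are illustrative assumptions, not the actual formats specified by the DB group.

```python
#!/usr/bin/env python3
"""Producer-side sketch of the XML/zip/spool upload flow described above.

Illustrative only: the XML tag names, zip naming convention, and spool path
are hypothetical placeholders; the real templates were defined by the DB group.
"""
import shutil
import zipfile
from pathlib import Path
from xml.etree import ElementTree as ET

SPOOL_DIR = Path("/tmp/db_spool")   # hypothetical spool area watched by the DB Loader

def write_test_xml(out_path: Path, part_serial: str, measurements: dict) -> None:
    """Write one test result as a small XML document (illustrative format)."""
    root = ET.Element("ROOT")
    dataset = ET.SubElement(root, "DATA_SET")
    ET.SubElement(dataset, "SERIAL_NUMBER").text = part_serial
    data = ET.SubElement(dataset, "DATA")
    for name, value in measurements.items():
        ET.SubElement(data, name).text = str(value)
    ET.ElementTree(root).write(out_path, encoding="utf-8", xml_declaration=True)

def ship_to_spool(xml_files: list, archive_name: str) -> Path:
    """Zip the XML files and copy the archive into the spool area."""
    SPOOL_DIR.mkdir(parents=True, exist_ok=True)
    archive = Path(archive_name)
    with zipfile.ZipFile(archive, "w") as zf:
        for f in xml_files:
            zf.write(f, arcname=Path(f).name)
    return Path(shutil.copy(archive, SPOOL_DIR))

if __name__ == "__main__":
    xml = Path("roc_test_example.xml")
    write_test_xml(xml, part_serial="WAFER_0001_ROC_05",
                   measurements={"THRESHOLD": 3200, "NOISE": 145})
    print("spooled:", ship_to_spool([xml], "roc_test_example.zip"))
```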

5 FPix Construction DB Experience: Result
- Successful in using the DB to help build the FPix detector.
- Users from all institutions were readily able to load and retrieve data from the DB.
- All data from testing of components (wafers of all types, plaquettes, panels, blades, half disks, etc.) was loaded into the DB. Data was retrieved and analyzed for component selection when needed.
- Using parent-child relationships, the entire detector hierarchy was built in the DB. A pixel in a ROC could readily be tracked through the entire detector hierarchical structure (see the query sketch below).
- The same was done with the detector readout and control chains: each detector element was mapped to its readout and control links. This was a very straightforward result of the DB design.
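As an illustration of the parent-child idea, the sketch below builds a tiny parts table and walks the ancestry of one component with a recursive query. The table and column names are invented; the real system is an Oracle schema (where, for example, CONNECT BY PRIOR performs the same traversal), but SQLite from the standard library is used here so the snippet is self-contained and runnable.

```python
#!/usr/bin/env python3
"""Toy illustration of tracking a component through a parent-child hierarchy.

The real databases are Oracle schemas; SQLite is used here only to keep the
example self-contained. Table and column names are invented for illustration.
"""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parts (
    part_id   INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    parent_id INTEGER REFERENCES parts(part_id)  -- NULL for the top-level assembly
);
INSERT INTO parts VALUES
    (1, 'FPix half disk D1', NULL),
    (2, 'Panel P3',          1),
    (3, 'Plaquette 2x4',     2),
    (4, 'ROC 05',            3);
""")

# Walk from a given ROC up to the top of the detector hierarchy.
rows = conn.execute("""
WITH RECURSIVE lineage(part_id, name, parent_id, depth) AS (
    SELECT part_id, name, parent_id, 0 FROM parts WHERE name = ?
    UNION ALL
    SELECT p.part_id, p.name, p.parent_id, l.depth + 1
    FROM parts p JOIN lineage l ON p.part_id = l.parent_id
)
SELECT depth, name FROM lineage ORDER BY depth
""", ("ROC 05",)).fetchall()

for depth, name in rows:
    print("  " * depth + name)
```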

6 FPix Construction DB: Parts Flow
[Parts-flow diagram: sensor wafers, ROC wafers, BB modules, VHDIs, HDIs, TBMs, plaquettes, and Be panels flow between the participating sites (PSI, Purdue, JHU, KSU, Fermilab; PSI: modules, Fermilab: panels), with the data source and the DB data indicated at each step.]

7 HCAL DB Experience
- The detector was constructed long before the DB was available. Current status: most components are now stored in the HCAL DB.
- The HCAL DB is heavily used for monitoring pedestals, laser runs, LED runs, radiation damage, etc. All these processes have been automated: data loaded into the DB is dynamically updated on the WBM page.
- The HCAL DB is also used for detector configuration.
- Automated procedures are used to load data into the DB, analyze it, and publish it on the HCAL WBM page (a sketch of such a spool-driven loader follows this list).
- New monitoring tools continue to be developed and deployed as needed.
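For context, here is a minimal sketch of what the loader side of such an automated chain can look like: a script (run, for example, from cron) scans the spool area, unpacks each zip, parses the XML payloads, and hands the records to the database. The directory layout, tag names, and the load step are hypothetical placeholders, not the actual HCAL loader.

```python
#!/usr/bin/env python3
"""Loader-side sketch: scan a spool area, unpack archives, parse XML payloads.

Illustrative only. The spool path, XML layout, and the load step are
placeholders for the real (Oracle-based) DB Loader run from cron.
"""
import zipfile
from pathlib import Path
from xml.etree import ElementTree as ET

SPOOL_DIR = Path("/tmp/db_spool")    # hypothetical spool area (producers copy zips here)
DONE_DIR = SPOOL_DIR / "processed"   # archives are moved here after a successful load

def parse_payload(xml_bytes: bytes) -> dict:
    """Extract the serial number and measurements from one XML document."""
    root = ET.fromstring(xml_bytes)
    record = {"serial": root.findtext(".//SERIAL_NUMBER")}
    data = root.find(".//DATA")
    if data is not None:
        record.update({child.tag: child.text for child in data})
    return record

def load_record(record: dict) -> None:
    """Placeholder for the actual database insert (e.g. via an Oracle client)."""
    print("would load:", record)

def process_spool() -> None:
    DONE_DIR.mkdir(parents=True, exist_ok=True)
    for archive in sorted(SPOOL_DIR.glob("*.zip")):
        with zipfile.ZipFile(archive) as zf:
            for name in zf.namelist():
                if name.endswith(".xml"):
                    load_record(parse_payload(zf.read(name)))
        archive.rename(DONE_DIR / archive.name)  # avoid reprocessing on the next cron run

if __name__ == "__main__":
    process_spool()
```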

8 HCAL Upgrades
- The HCAL DB at P5 is currently being used for the HO upgrade (replacing HPDs with SiPMs) and the HF upgrade (replacing all PMTs with new multi-anode PMTs).
- The DB procedure adopted is the same as that used in FPix construction.
- The components are tested at different institutions:
  - HO: Fermilab, Mumbai, CERN
  - HF: Univ. of Iowa, CERN
- All recorded data is stored in the DB and published on the HCAL WBM page.

9 Database Schemas
The database consists of six different schemas:
- Core Construction schema: stores all detector components.
- Core Attribute schema: stores all attributes associated with a component type, e.g. ROC position on the wafer, ROC position on a module, etc.
- Core Management schema: stores management info, e.g. institution, location, etc.
- Core Condition schema: tracks all data stored in the DB.
- Extension schema: stores all data produced by users (the only schema where new tables are added).
- Core IOV Management schema: stores configuration information (not discussed here).
All core data tables have associated history tables. A quick walk through the construction, condition, and extension schemas follows; a toy sketch of how these pieces relate is shown below.
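To give a feel for how these schemas fit together, here is a toy relational sketch: a parts table (construction) referencing part types (attribute) and institutions (management), a data-set header (condition) recording which part was measured, and one user extension table holding the actual measurements. All names are invented for illustration, and SQLite is used only to keep the example self-contained; the real schemas are Oracle and considerably richer (history tables, IOV management, and so on).

```python
#!/usr/bin/env python3
"""Toy relational sketch of how construction, attribute, management,
condition, and extension tables can relate. All names are invented."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- attribute-like: what kinds of parts exist
CREATE TABLE kinds_of_parts (kind_id INTEGER PRIMARY KEY, kind_name TEXT);

-- management-like: institutions and locations
CREATE TABLE institutions (inst_id INTEGER PRIMARY KEY, inst_name TEXT);

-- construction-like: the parts themselves, with a parent link for the hierarchy
CREATE TABLE parts (
    part_id   INTEGER PRIMARY KEY,
    serial    TEXT UNIQUE,
    kind_id   INTEGER REFERENCES kinds_of_parts(kind_id),
    inst_id   INTEGER REFERENCES institutions(inst_id),
    parent_id INTEGER REFERENCES parts(part_id)
);

-- condition-like: one header row per uploaded data set, tracking what was measured on which part
CREATE TABLE cond_data_sets (
    data_set_id       INTEGER PRIMARY KEY,
    part_id           INTEGER REFERENCES parts(part_id),
    kind_of_condition TEXT,
    inserted_at       TEXT DEFAULT CURRENT_TIMESTAMP
);

-- extension-like: user-defined payload table, keyed to the condition header
CREATE TABLE roc_wafer_test (
    data_set_id INTEGER REFERENCES cond_data_sets(data_set_id),
    threshold   REAL,
    noise       REAL
);
""")
print("tables:", [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")])
```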

10 Construction Schema: Quick Explanation
[Schema diagram: the construction schema and its interfaces to the condition, management, and attribute schemas.]

11 Attribute Schema
[Schema diagram.]

12 Management Schema
[Schema diagram.]

13 Combined Construction Schema
[Schema diagram.]

14 Condition Schema: Quick Explanation
[Schema diagram: the condition schema and its link to the user (extension) tables.]

15 Combined Condition Schema
[Schema diagram.]

16 FPix & DB: How did we work?
Best illustrated with an example: loading data from ROC wafer testing.
- Designated DB member(s) got together with the expert (Christian Gingu) and obtained information about the ROC wafers:
  - Device-specific info: wafer serial number, ROC numbering system, manufacturers (attributes for component registration).
  - Specifics of the tests to be conducted (for table design).
- The DB member(s) then did the following in the development DB:
  - Registered all the wafers provided (serial numbers) and all associated ROCs.
  - Designed and deployed the tables needed to store the test data (with the approval of the Pixel expert).
  - Built a new DB loader.
  - Generated XML templates for each of the data sets (a hypothetical example follows this list).
  - Used these XML files to load "junk" data into the development DB.
  - Provided working templates to the Pixel expert.
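For illustration, the sketch below generates a blank template of the sort described, with placeholder fields for a ROC wafer test. The element and field names are hypothetical; the actual templates were defined jointly with the Pixel expert.

```python
#!/usr/bin/env python3
"""Generate a blank XML template for a (hypothetical) ROC wafer test data set.

The element and field names below are illustrative placeholders, not the
actual CMS templates, which were defined together with the Pixel expert.
"""
from xml.dom import minidom
from xml.etree import ElementTree as ET

def make_template(test_fields: list) -> str:
    """Return a pretty-printed XML skeleton that the producer fills in per ROC."""
    root = ET.Element("ROOT")
    header = ET.SubElement(root, "HEADER")
    ET.SubElement(header, "EXTENSION_TABLE_NAME").text = "ROC_WAFER_TEST"  # placeholder name
    ET.SubElement(header, "RUN_NUMBER").text = ""
    data_set = ET.SubElement(root, "DATA_SET")
    ET.SubElement(data_set, "SERIAL_NUMBER").text = ""   # wafer/ROC serial goes here
    data = ET.SubElement(data_set, "DATA")
    for field in test_fields:
        ET.SubElement(data, field).text = ""
    return minidom.parseString(ET.tostring(root)).toprettyxml(indent="  ")

if __name__ == "__main__":
    print(make_template(["THRESHOLD", "NOISE", "GAIN", "PEDESTAL"]))
```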

17 FPix & DB: How did we work?
The Pixel expert (Christian Gingu) then did the following:
- Set up his test programs to output data in the XML formats provided.
- Zipped the files and copied them to a designated spool area, where the DB loader picked them up and loaded the data into the DB.
- In the event of errors, the DB group was informed for troubleshooting; the cause was determined and the process repeated until successful.
- Upon success, the Pixel expert tried loading into the integration DB.
Next step: work with the Pixel experts to design a development WBM page to display the detector components and all test data (HCAL experience).
Final step: load data into the production DB. From this point on, all data produced is loaded into the production DB (automated procedure), and the data can be readily viewed on the production WBM page (HCAL experience).

18 What We Learned
- People working on the DB need to know the detector and the tests quite well. Very close interaction with the Pixel experts is essential. My take: DB experts should become part of the detector group.
- After some time, the hope is that the Pixel experts become more knowledgeable about the DB.
- In the procedure currently implemented, the initial overhead of setting up the DB infrastructure can be a bit time consuming and frustrating. However, all follow-up procedures become routine and streamlined.
- Our experience: the benefits of an efficient working DB far outweigh the initial overhead needed to set up the framework (HCAL and FPix experience).

