Simulations, Software, Computing
Volker Friese
CBM Collaboration Meeting, GSI, 17 October 2008
Simulations
Key observables well in hand; many studies ongoing
– to be continued with ever more realistic detector descriptions
Sufficient information delivered for the start of detector engineering
Trigger considerations (charmonium, open charm) under way
Look into running at SIS100: promising
First steps to study so far uncovered topics:
– centrality determination
– event plane resolution
– flow
– TOF with start detector
Software status
Software development is still rapid
Focus shifted from detector description to reconstruction / analysis
Needed / ongoing:
– consolidation / cleanup
– control
– quality assessment
– documentation
Tools
Event generators:
– Strategy of adding a signal to a background event (UrQMD) is not valid for low-multiplicity events (e.g. p+C). Look for realistic generators for such events.
– For PSD studies: a proper fragmentation model is needed. SHIELD is OK, but does not run with TGeant3.
MC engines:
– TGeant3: our baseline. Geant3 is no longer developed.
– TGeant4: works (some problems), but the physics list is still to be determined.
– TFluka: not operational.
– Native FLUKA used for radiation level studies.
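As an illustration of how the MC engine enters the simulation chain, here is a minimal sketch of a CbmRoot/FairRoot-style transport macro. The class names follow the later FairRoot convention (FairRunSim); the exact 2008 CBMROOT class names and the file names used here (sim.root, media.geo) are assumptions, not the official macro.

// Minimal sketch of a transport macro selecting the MC engine.
// FairRunSim names follow later FairRoot releases; file names are placeholders.
void run_sim(Int_t nEvents = 10)
{
  FairRunSim* run = new FairRunSim();
  run->SetName("TGeant3");          // baseline engine; "TGeant4" also possible
  run->SetOutputFile("sim.root");   // placeholder output file
  run->SetMaterials("media.geo");   // placeholder media definition file

  // ... register detector modules, primary generator (e.g. UrQMD input),
  // magnetic field, etc. ...

  run->Init();
  run->Run(nEvents);
}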
Detector Description

System | MC Geometry                  | Digitisation
MVD    | Monolithic                   | Digitiser, charge sharing, clustering
STS    | Segmented, passive materials | Digitiser, charge sharing, clustering
RICH   | Passive materials            | HitProducer
MUCH   | Segmented                    | Digitiser, charge sharing, clustering
TRD    | Segmented                    | HitProducer
TOF    | Segmented, passive materials | HitProducer (advanced)
ECAL   | Segmented                    | Shower model
PSD    | Segmented                    | Digitiser
Software status
Geometrical description fairly well advanced; supports etc. mostly missing
Detector response models ("digitisers") implemented for almost all subsystems
Parameters taken from literature or educated guesses
Have to be tuned to detector tests / prototype measurements
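For illustration, a subsystem digitiser in this scheme is typically a framework task that reads MC points and produces digis. The skeleton below is only a sketch in the FairTask style; the class and branch names (MyDigitizer, MyDetectorPoint, MyDigi) are invented, and the response model itself is omitted.

// Illustrative skeleton of a detector-response ("digitiser") task.
// Class and branch names are invented placeholders, not the actual CBM code.
#include "FairTask.h"
#include "FairRootManager.h"
#include "TClonesArray.h"

class MyDigitizer : public FairTask
{
 public:
  MyDigitizer() : FairTask("MyDigitizer"), fPoints(0), fDigis(0) {}

  virtual InitStatus Init()
  {
    FairRootManager* ioman = FairRootManager::Instance();
    fPoints = (TClonesArray*) ioman->GetObject("MyDetectorPoint");  // input MC points
    fDigis  = new TClonesArray("MyDigi");
    ioman->Register("MyDigi", "MyDetector", fDigis, kTRUE);         // output digis
    return kSUCCESS;
  }

  virtual void Exec(Option_t*)
  {
    fDigis->Clear();
    // Loop over MC points; apply charge sharing, threshold and noise with
    // parameters from literature or prototype tests; create one digi per
    // fired channel.
  }

 private:
  TClonesArray* fPoints;  // input: MC points
  TClonesArray* fDigis;   // output: digis
};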
Against all odds...
Test beam time, September 2008
First data taking with an untriggered, free-streaming DAQ
Worked in principle (many open questions remain)
First glimpse of such a data stream
Towards modelling the data stream
Processing chains (schematic):
– Simulation today, eventwise throughout: Event Generator → MC Transport → Digitisation → Reconstruction → Analysis
– Experiment: free-streaming data → Event builder → Reconstruction (eventwise) → Analysis
– Simulated data stream: Event Generator → MC Transport (eventwise) → Digitisation (free-streaming) → Event builder → Reconstruction (eventwise)
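As a purely illustrative example of the event-builder step (not the CBM implementation), a time-sorted free-streaming sequence of digis can be regrouped into events by cutting wherever the time gap between consecutive digis exceeds a threshold; the data structure and gap parameter below are assumptions.

// Sketch only: time-gap event building on a time-sorted digi stream.
#include <vector>

struct Digi { double time; int channel; };  // invented minimal digi

std::vector< std::vector<Digi> >
BuildEvents(const std::vector<Digi>& stream, double maxGap)
{
  std::vector< std::vector<Digi> > events;
  for (size_t i = 0; i < stream.size(); ++i) {
    // Open a new event at the start of the stream or after a large time gap
    if (events.empty() || stream[i].time - stream[i - 1].time > maxGap)
      events.push_back(std::vector<Digi>());
    events.back().push_back(stream[i]);
  }
  return events;
}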
Online and Offline
Online reconstruction (L1 / Hough) will not be implemented on "normal" architectures
Implementation on FPGA / multi-core requires dedicated programming languages
Up to now, models of the algorithms are implemented in CBMROOT; this will probably not remain so (?)
Integration / connection of framework and online software to be rethought
Computing
CBM computing model to be worked out
Some facts:
– 1 TB/s from detector
– archival rate 1 GB/s → 5 PB per CBM year
Online processing will (most probably) be on site
Reconstruction can in principle be distributed
Analysis should be distributed
Can full reconstruction be done online? If yes, will raw data be stored?
Connections to FAIR computing concept?
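As a rough cross-check of the quoted numbers (assuming an effective running time of about 5 × 10^6 s per CBM year, i.e. roughly two months of beam, which is an assumption here): 1 GB/s × 5 × 10^6 s = 5 × 10^15 B ≈ 5 PB, consistent with the archival volume stated above.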
FANCy
Proposal submitted September 2008
CBM is used as a showcase
CBM Grid
Aims:
– facilitate simulations by using resources other than those at GSI
– enable larger statistics: 10^5 events → 10^7 events
– gain experience with distributed computing for real data processing
Status:
– central services installed at GSI, tests ongoing
Perspectives:
– 2008: small test grid (3–4 sites), test data challenge
– 2009: production mode
CBM Grid Structure (work started 2008)
K. Schwarz, F. Uhlig
Running site: GSI
– Computing Element (CE)
– Storage Element (SE)
– PackMan (for installing experiment software (CBMRoot) on the Grid)
– FTD (File Transfer Daemon) for inter-site transfer
– transfer protocol: xrootd
First jobs have run successfully!
First job output has been stored successfully on CBM Grid Storage Elements
K. Schwarz, F. Uhlig
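For illustration, output stored on a Grid Storage Element can be read from ROOT directly via the xrootd protocol; the host name and file path below are invented placeholders, not actual CBM Grid addresses.

// Sketch only: opening a file on a Storage Element via xrootd from ROOT.
#include "TFile.h"

TFile* f = TFile::Open("root://se.example.gsi.de//cbm/sim/urqmd.auau.25gev.root");
if (f && !f->IsZombie()) {
  f->ls();     // quick check of the file content
  f->Close();
}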
CBMGrid central services K. Schwarz, F. Uhlig