Nuons && Threads -> Suggestions. SFT meeting, 15 December 2014. René Brun.




From Algol to Nuons

- 1973: Thesis in Nuclear Physics (SC33/CERN, Diogene/Saturne/Saclay)
- …: ISR/R232, p-p elastic scattering with C.Rubbia (Reconstruction)
- …: SPS/NA4, deep inelastic muon scattering with C.Rubbia (Simul + Recons)
- …: simulation of UA1 with C.Rubbia
- …: simulation of OPAL with R.Heuer
- …: simulation of GEM & SDC for the defunct SSC
- …: simulation of ATLAS and CMS (Letters of Intent) with F.Gianotti, D.Froidevaux, V.Karimaki
- …: busy with ROOT
- …: interested by theoretical predictions for TOTEM (p-p elastic) and results
- …: foundations for the Nuons model
- 2011…: computing particle masses to better than 1/…
- …: testing p-p elastic with TOTEM/UA4/D0/ISR
- 2012…: testing p-p interactions at the LHC (900 GeV, 2.76 TeV, 7 TeV)
- 2013…: testing the nuons model with Jets at the LHC
- 2014…: predictions for 13 TeV + paper draft

Nuons

(Figure: nuon pictures of the proton and the neutron.)

Nuons and C++

I am implementing my "physics model" (findall.C, totem.C, collide.C) to:
- Model elementary particles using "nuons"
- Compute particle masses with high accuracy
- Test the model at many energies for p-p elastic scattering
- Test the model at LHC energies: particle production and Jets

Example of event motivating my project

(Figure: with the standard proton model, the predicted cross section is wrong by more than a factor of 1000 for t > 2 GeV^2.)

Collisions

(Figures: PP elastic and PP inelastic collisions.)

Some programming details

The 3 C++ programs findall, totem and collide (about … LOC in total) all run in batch, multi-threaded, on several OpenLab machines with 2x6-core Westmere, 2x12-core Ivy Bridge or 2x14-core E5-2697v3 CPUs, now upgraded to 2x18 cores. My programs run from a few minutes to one day.
- nohup root.exe -b -q "collide.C+(7000)" >x1.log &   (e.g. processid)
While the program(s) are running, I can inspect the results (histograms and/or Trees), say once per minute, from my laptop, stop them and launch again with a new set of parameters.
- root > .x colshow.C(-12756)
- This CINT script takes the file collide_12756.root from OpenLab/AFS and stores it on my laptop, where the histograms are visualized.

More programming details

Findall is a bit "lattice QCD like": …% of the time is spent in TMinuit, computing the stable positions of a set of N nuons generated at random in a cube of size 1 fermi.
Totem and Collide are quite similar to Pythia or Herwig: they simulate proton-proton collisions, generating output particles and Jets.
The scripts run on my laptop and show plenty of graphs comparing with the LHC experiments' results.

(Diagram: batch jobs on the OpenLab machines write histograms and a Tree to AFS; an interactive script on my laptop fetches them via scp and displays them.)

More programming details (2)

Findall saves its results in a Tree (one particle per entry). It takes about 0.1 s to compute a pion, 10 minutes for a proton and 20 minutes for an Omega.
Totem generates histograms only (about 20, 1- and 2-D).
Collide generates about 100 histograms (1- and 2-D) and a Tree with a size ranging from a few Mbytes/minute to several Gbytes/minute, depending on the desired granularity of the collision information. About one billion collisions are generated in one day. Most histograms are filled millions of times per second.

Experience -> Suggestions

All these applications are multi-threaded, a HUGE gain in REAL time for what I am doing. There are many, many applications in HEP that look very similar:
- All detector simulations
- All event generators
- Most physics analyses
To make the most efficient use of the hardware, I had to make simple changes in ROOT, or implement solutions that should be implemented in a more general way in ROOT.

Main Topics

- Random numbers and distributions: trivial
- Histograms
- Trees
- I/O in general
- Thread scalability considerations

Current ROOT is a blocker for performant multi-threaded applications.

Random Numbers

No changes are required in the TRandomXX classes. I am using only the nice and efficient TRandom3 (Mersenne Twister). I create one TRandom3 object per thread, initialized with TRandom3(pid*thnumb). I had to modify or circumvent all places referencing gRandom, in full backward compatibility and in totally trivial ways:
- TF1::GetRandom() -> TF1::GetRandom(double r=-1)
- Similar changes should be applied to TH1::GetRandom and TH1::FillRandom
- TGenPhaseSpace: add a SetRandom function and a member fRandom
- Similar changes should also be applied to: TF2::GetRandom, TF3::GetRandom, TUnuran, TKDTree; TMVA DataSet and RuleEnsemble; TGeoBBox, TGeoCompositeShape, TGeoChecker; TRobustEstimator, TAttParticle, TVirtualMC, RooStudyPackage; TApplicationRemote, TProof
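The one-generator-per-thread pattern above can be sketched without ROOT using the standard library's Mersenne Twister (std::mt19937, the same generator family as TRandom3). The names make_thread_rng and first_draws are illustrative, not ROOT API; the seeding scheme is just one example of giving each (process, thread) pair a distinct seed.

```cpp
#include <random>
#include <thread>
#include <vector>

// One generator per thread, seeded from the process id and the thread
// number -- the same idea as one TRandom3(pid*thnumb) per thread, so no
// thread ever touches a shared gRandom-like global.
std::mt19937 make_thread_rng(unsigned pid, unsigned thnumb) {
    // Illustrative seeding: any scheme giving distinct per-thread seeds works.
    return std::mt19937(pid * 1000u + thnumb);
}

// Each thread draws from its private generator, no locking needed.
std::vector<double> first_draws(unsigned pid, int nthreads) {
    std::vector<double> first(nthreads);
    std::vector<std::thread> pool;
    for (int t = 0; t < nthreads; ++t)
        pool.emplace_back([&first, pid, t] {
            std::mt19937 rng = make_thread_rng(pid, (unsigned)t);
            std::uniform_real_distribution<double> uni(0.0, 1.0);
            first[t] = uni(rng);  // writes to a distinct slot: race-free
        });
    for (auto& th : pool) th.join();
    return first;
}
```

A side benefit of seeding from (pid, thnumb) is reproducibility: re-running the same job with the same process id replays the same per-thread random streams.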

Histograms & Threads

Currently one has to set TH1::AddDirectory(0) to bypass gDirectory. However, this forces users to do the histogram book-keeping themselves, and it makes the histogram merging phase a bit complex (see the next slides for a solution).
Histograms may be created in the main thread and filled with thread-locking at each fill. This is fine only if the number of fills is negligible.
The only realistic solution is to make a copy of all histograms per thread. However, in several applications this can represent a substantial increase in memory:
- In my case, I have at most 100 histograms (total 400 Kbytes per thread)
- Alice monitoring has … histograms, total size 1.5 Gbytes in memory!
- Most analysis applications have a few hundred, up to a few thousand, histograms
Some tiny work is required to take advantage of the architecture already in place to:
- Do lazy instantiation of the bin structures
- Better exploit the TH1::SetBuffer mechanism, in particular in TH1::Merge, and make vectorization possible
I could not survive without my I/O check-pointing (around one per minute) for histograms and Trees. It allows me to inspect the current status of my jobs at any time, and to interrupt them and change my parameters when I see that the results are not the ones expected. It also makes running multi-threaded applications much safer.
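The "one private copy per thread, merged once at the end" scheme can be sketched without ROOT using a minimal fixed-bin counter as a stand-in for TH1D. ThreadHist and fill_and_merge are illustrative names, not ROOT API; only the structure of the scheme is the point.

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Minimal fixed-range 1-D histogram standing in for TH1D.
struct ThreadHist {
    std::vector<long> bins;
    double lo, hi;
    ThreadHist(int n, double lo_, double hi_) : bins(n, 0), lo(lo_), hi(hi_) {}
    void Fill(double x) {
        if (x < lo || x >= hi) return;
        bins[(std::size_t)((x - lo) / (hi - lo) * bins.size())]++;
    }
    // Plays the role of TH1::Merge: add another thread's copy into this one.
    void Merge(const ThreadHist& other) {
        for (std::size_t i = 0; i < bins.size(); ++i) bins[i] += other.bins[i];
    }
};

// Each thread fills its private copy lock-free; one merge at the end.
long fill_and_merge(int nthreads, int fills_per_thread) {
    std::vector<ThreadHist> per_thread(nthreads, ThreadHist(10, 0.0, 1.0));
    std::vector<std::thread> pool;
    for (int t = 0; t < nthreads; ++t)
        pool.emplace_back([&per_thread, t, fills_per_thread] {
            for (int i = 0; i < fills_per_thread; ++i)
                per_thread[t].Fill((i % 10) / 10.0);  // fake event data
        });
    for (auto& th : pool) th.join();
    ThreadHist total(10, 0.0, 1.0);
    for (const auto& h : per_thread) total.Merge(h);
    long sum = 0;
    for (long b : total.bins) sum += b;
    return sum;  // total number of fills across all threads
}
```

The memory trade-off discussed above is visible here: the per-thread copies cost nthreads times the histogram memory, in exchange for fill calls that need no lock at all.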

Histograms: poor man's solution

What I have been doing for a long time; efficiency < 8/12.

(Diagram: the main thread owns TH1 *hrun and *hwatch. Each of the 12 worker threads creates its own 97 histograms, loops on events and, every N events, saves its histograms to its own file. All thread files are then merged every NN events, or at the end of the job.)

Histograms (2): much better

My current version.

(Diagram: the main thread owns TH1 *hrun and *hwatch. Each of the 12 worker threads creates its own 97 histograms and loops on events; every N events, the histograms from all threads are merged and saved to file.)

Histograms Management (1) (my current solution)

// Main thread
TH1::AddDirectory(0);
TList htr[nthreads];
TH1D *hrun = new TH1D(…);

// In thread thnumb
TThread::Lock();
TList &hlist = htr[thnumb];
TH1D *hncol = new TH1D("hncol","number of collisions",66,0,66);
hlist.Add(hncol);
TH1D *hpoiss = new TH1D("hpoiss","Jets particle multiplicity",50,0,50);
hlist.Add(hpoiss);
…
hncol->Fill(…);
…

// In any thread, or at the end of the main thread
TFile *fhist = TFile::Open(TString::Format("collide_%d.root",processID),"recreate");
hrun->SetBinContent(26,mainwatch->GetRealTime());
hrun->Write();
TList hlistall;
int nh = htr[0].GetSize();
for (int ih=0;ih<nh;ih++) {
   TH1 *hcur = (TH1*)htr[0].At(ih)->Clone();
   hlistall.Clear();
   for (int t=1;t<ncpus;t++) {
      hlistall.Add(htr[t].At(ih));
   }
   hcur->Merge(&hlistall);
   hcur->Write();
   delete hcur;
}
fhist->SaveSelf();
delete fhist;

Histograms Management (2) (what I would like to see in ROOT)

// Main thread
TH1::InitializeThreads(nthreads);
TH1D *hrun = new TH1D(…);

// In thread thnumb
TH1::SetThreadDirectory(thnumb);
TH1D *hncol = new TH1D("hncol","number of collisions",66,0,66);
TH1D *hpoiss = new TH1D("hpoiss","Jets particle multiplicity",50,0,50);
…
hncol->Fill(…);
…

// In any thread, or at the end of the main thread
TFile *fhist = TFile::Open(TString::Format("collide_%d.root",processID),"recreate");
hrun->SetBinContent(26,mainwatch->GetRealTime());
hrun->Write();
TH1::MergeThreads()->Write();
fhist->SaveSelf();
delete fhist;

Histograms (3): muuuch better

What I would like to see: a non-blocking asynchronous I/O thread.

(Diagram: as before, each of the 12 worker threads creates its own 97 histograms and loops on events; every N events the thread histograms are merged, but the saving to file is delegated to a non-blocking asynchronous I/O thread.)
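The non-blocking asynchronous I/O thread proposed above can be sketched with a mutex-protected queue: worker threads hand over merged snapshots and return immediately to their event loops, while one dedicated thread drains the queue and does the (slow) writing. AsyncWriter, Enqueue and Finish are illustrative names, not ROOT API, and "writing" is reduced to counting snapshots.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Worker threads enqueue snapshots; one dedicated I/O thread drains the
// queue, so the event loops never block on the write itself.
class AsyncWriter {
public:
    int written = 0;  // stands in for "snapshots saved to file"
    AsyncWriter() : io_([this] { run(); }) {}
    void Enqueue(std::vector<long> snap) {  // called from any worker thread
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(snap)); }
        cv_.notify_one();  // control returns immediately to the caller
    }
    void Finish() {  // must be called before destruction: drains and joins
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        io_.join();
    }
private:
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
            cv_.wait(lk, [this] { return !q_.empty() || done_; });
            while (!q_.empty()) {  // "write" each pending snapshot
                q_.pop();
                ++written;
            }
            if (done_) return;
        }
    }
    std::queue<std::vector<long>> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::thread io_;  // declared last so all other members exist when it starts
};
```

The key design point matching the slide: Enqueue only copies a pointer-sized handle under the lock and notifies, so the cost seen by the event loop is tiny regardless of how slow the actual zipping and disk write are.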

Trees & Threads

Solution 1: one TTree per thread, hence one file per thread, then possibly merge the files at the end of the job.
- Currently this requires locking and/or fixing the non-thread-safe parts of TTree I/O
- Not very user friendly, as it requires more book-keeping
Solution 2: use the TTree buffer-merge facility.
- This is much more efficient, but requires more memory
- This solution is not yet fully operational for threads
Solution 3: create only one TTree in the main thread (or any thread).
- For each fill: Lock, swap branch addresses, Fill, UnLock
- This solution is nice for memory, but adds more sequentiality
- This is my current solution, waiting for a better one, e.g. Solution 4
Solution 4: same as Solution 3, but with
- optimized branch-address booking and swapping
- delegation of the pure I/O part to a separate asynchronous thread doing the zipping and disk writes
Solution 5: same as Solution 4, with in addition
- the possibility to call TBranch::Fill per thread (this will be essential for GeantV)
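Solution 3 above (one shared tree; each thread locks, points the branch addresses at its own buffers, fills, unlocks) can be sketched without ROOT. MiniTree is an illustrative stand-in for a TTree with a single integer branch, and tree_mutex plays the role of TThread::Lock()/UnLock(); the sequentiality cost of the scheme is exactly the time spent inside the lock.

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Stand-in for a single shared TTree: the branch address points at
// whichever thread's buffer is currently attached.
struct MiniTree {
    const int* nch = nullptr;   // "branch address", as in SetBranchAddress
    std::vector<int> entries;   // filled entries, as on disk
    void SetBranchAddress(const int* p) { nch = p; }
    void Fill() { entries.push_back(*nch); }
};

std::mutex tree_mutex;          // plays the role of TThread::Lock()

void fill_from_thread(MiniTree& tree, int thnumb, int nevents) {
    int nch = 0;                // per-thread event buffer
    for (int ev = 0; ev < nevents; ++ev) {
        nch = thnumb * 1000 + ev;                // fake event data
        std::lock_guard<std::mutex> lk(tree_mutex);
        tree.SetBranchAddress(&nch);             // swap addresses to this thread
        tree.Fill();                             // serialized, as in Solution 3
    }
}

std::size_t run_solution3(int nthreads, int nevents) {
    MiniTree tree;              // the one shared tree
    std::vector<std::thread> pool;
    for (int t = 0; t < nthreads; ++t)
        pool.emplace_back(fill_from_thread, std::ref(tree), t, nevents);
    for (auto& th : pool) th.join();
    return tree.entries.size();
}
```

Solutions 4 and 5 attack precisely the serialized region here: the swap becomes a cheap pre-booked lookup, and the expensive part of Fill moves behind an asynchronous I/O thread so the lock is held only for a buffer copy.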

Trees & Threads (my current solution)

// Main thread, initialisation
TTree *T = 0;
if (!T && fillTree) {
   TFile::Open(TString::Format("/data/brun/collide_%d_events.root",processID),"recreate");
   T = new TTree("T","selected collide events");
   T->Branch("i1",&i1,"i1/I");
   T->Branch("i2",&i2,"i2/I");
   T->Branch("nch",&nch,"nch/I");
   T->Branch("nchCMS",&nchCMS,"nchCMS/I");
   T->Branch("njets",&njets,"njets/I");
   T->Branch("njetsCMS",&njetsCMS,"njetsCMS/I");
   T->Branch("phi1",&phi1,"phi1/D");
   …
   T->Branch("ptype",ptype,"ptype[nchCMS]/I");
   T->Branch("pjet",pjet,"pjet[nchCMS]/I");
   T->Branch("ppx",ppx,"ppx[nchCMS]/D");
   T->Branch("ppy",ppy,"ppy[nchCMS]/D");
   T->Branch("ppz",ppz,"ppz[nchCMS]/D");
   T->Branch("ppt",ppt,"ppt[nchCMS]/D");
   T->Branch("peta",peta,"peta[nchCMS]/D");
   T->AutoSave("SaveSelf");
}

// Filling the Tree in thread thnumb
if (fillTree && bigjet) {
   TThread::Lock();
   T->SetBranchAddress("i1",&i1);
   T->SetBranchAddress("i2",&i2);
   T->SetBranchAddress("nch",&nch);
   T->SetBranchAddress("nchCMS",&nchCMS);
   T->SetBranchAddress("njets",&njets);
   T->SetBranchAddress("njetsCMS",&njetsCMS);
   T->SetBranchAddress("phi1",&phi1);
   …
   T->SetBranchAddress("ptype",ptype);
   T->SetBranchAddress("pjet",pjet);
   T->SetBranchAddress("ppx",ppx);
   T->SetBranchAddress("ppy",ppy);
   T->SetBranchAddress("ppz",ppz);
   T->SetBranchAddress("ppt",ppt);
   T->SetBranchAddress("peta",peta);
   T->Fill();
   if (event%1000==0) T->AutoSave("SaveSelf");  // every N events, autosave
   TThread::UnLock();
}

Trees & Threads (what would be faster and simpler)

// Main thread, initialisation
TTree *T = 0;
if (!T && fillTree) {
   TFile::Open(TString::Format("/data/brun/collide_%d_events.root",processID),"recreate");
   T = new TTree("T","selected collide events");
   T->Branch("i1",&i1,"i1/I");
   T->Branch("i2",&i2,"i2/I");
   T->Branch("nch",&nch,"nch/I");
   T->Branch("nchCMS",&nchCMS,"nchCMS/I");
   T->Branch("njets",&njets,"njets/I");
   T->Branch("njetsCMS",&njetsCMS,"njetsCMS/I");
   T->Branch("phi1",&phi1,"phi1/D");
   …
   T->Branch("ptype",ptype,"ptype[nchCMS]/I");
   T->Branch("pjet",pjet,"pjet[nchCMS]/I");
   T->Branch("ppx",ppx,"ppx[nchCMS]/D");
   T->Branch("ppy",ppy,"ppy[nchCMS]/D");
   T->Branch("ppz",ppz,"ppz[nchCMS]/D");
   T->Branch("ppt",ppt,"ppt[nchCMS]/D");
   T->Branch("peta",peta,"peta[nchCMS]/D");
   T->AutoSave("SaveSelf");
   T->SaveThreadBranches(thnumb);
}

// Filling the Tree in thread thnumb
if (fillTree && bigjet) {
   TThread::Lock();
   T->SetThreadBranches(thnumb);
   T->Fill();
   if (event%1000==0) T->AutoSave("SaveSelf");  // every N events, autosave
   TThread::UnLock();
}

Trees & Threads (3) (what would be much faster and even simpler)

// Main thread, initialisation
TTree *T = 0;
if (!T && fillTree) {
   TFile::Open(TString::Format("/data/brun/collide_%d_events.root",processID),"recreate");
   T = new TTree("T","selected collide events");
   T->Branch("i1",&i1,"i1/I");
   T->Branch("i2",&i2,"i2/I");
   T->Branch("nch",&nch,"nch/I");
   T->Branch("nchCMS",&nchCMS,"nchCMS/I");
   T->Branch("njets",&njets,"njets/I");
   T->Branch("njetsCMS",&njetsCMS,"njetsCMS/I");
   T->Branch("phi1",&phi1,"phi1/D");
   …
   T->Branch("ptype",ptype,"ptype[nchCMS]/I");
   T->Branch("pjet",pjet,"pjet[nchCMS]/I");
   T->Branch("ppx",ppx,"ppx[nchCMS]/D");
   T->Branch("ppy",ppy,"ppy[nchCMS]/D");
   T->Branch("ppz",ppz,"ppz[nchCMS]/D");
   T->Branch("ppt",ppt,"ppt[nchCMS]/D");
   T->Branch("peta",peta,"peta[nchCMS]/D");
   T->AutoSave("SaveSelf");
   T->SaveThreadBranches(thnumb);
}

// Filling the Tree in thread thnumb
if (fillTree && bigjet) {
   TThread::Lock();
   T->SetThreadBranchesFill(thnumb, kAutoSave %( n%1000==0));
   TThread::UnLock();
}

Where SetThreadBranchesFill quickly copies the branch data to a circular buffer, returns control immediately to the calling thread, and passes the data asynchronously to another thread that fills the TreeCache and does the disk I/O.