Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg
HLT Data Challenge - PC²
– Setup / Results –
– Clusterfinder Benchmarks –

Presentation transcript:

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 1
HLT Data Challenge - PC²
– Setup / Results –
– Clusterfinder Benchmarks –

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 2
PC² – Paderborn Center for Parallel Computing
Architecture of the ARMINIUS cluster:
– 200 nodes with Dual Intel Xeon 64-bit, 3.2 GHz
– 800 GByte main memory (4 GByte each)
– InfiniBand network
– Gigabit Ethernet network
– RedHat Linux 4

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 3
General Test Configuration
Hardware Configuration:
– 200 nodes with Dual 3.2 GHz Intel Xeon CPUs
– Gigabit Ethernet
Framework Configuration:
– HLT Data Framework with TCP Dump Subscriber processes (TDS)
– HLT Online Display connecting to TDS
Software Configuration:
– RHEL 4 update 1
– RHEL kernel version
– 2.6 bigphys area patch
– PSI2 driver for 2.6

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 4
Full TPC (36 slices) on 188 nodes (I)
Hardware Configuration:
– 188 nodes with Dual 3.2 GHz Intel Xeon CPUs
Framework Configuration:
– Compiled in debug mode, no optimizations
– Setup per slice (6 incoming DDLs):
  – 3 nodes for cluster finding, each node with 2 filepublisher processes and 2 cluster finding processes
  – 2 nodes for tracking, each node with 1 tracking process
– 8 Global Merger processes merging the tracks of the 72 tracking nodes
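A minimal sketch of the arithmetic behind the 188-node figure, assuming the 8 Global Merger processes run on dedicated nodes (the slide does not state explicitly where they are placed):

```python
# Node-count check for the full-TPC setup described above.
# Assumption: the 8 Global Merger processes run on 8 dedicated nodes.

SLICES = 36                  # full TPC
CF_NODES_PER_SLICE = 3       # cluster-finder nodes per slice
TR_NODES_PER_SLICE = 2       # tracking nodes per slice
GM_NODES = 8                 # Global Merger nodes (assumed dedicated)

cf_nodes = SLICES * CF_NODES_PER_SLICE      # 108
tr_nodes = SLICES * TR_NODES_PER_SLICE      # 72, matches the "72 tracking nodes"
total = cf_nodes + tr_nodes + GM_NODES      # 188

print(f"cluster finding: {cf_nodes}, tracking: {tr_nodes}, "
      f"global merging: {GM_NODES}, total: {total} nodes")
```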

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 5
Full TPC (36 slices) on 188 nodes (II) – Framework Setup
[Diagram: HLT Data Framework setup for 1 slice – simulated TPC data from the 6 DDLs is published and cluster-found per patch (CF) on the three cluster-finder nodes, passed to the two tracker nodes (TR), and the resulting tracks are merged by the Global Merger (GM) nodes, which feed the Online Display.]

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 6
Full TPC (36 slices) on 188 nodes (III)
Empty Events:
– Real data format, empty events, no hits/tracks
– Rate approx. 2.9 kHz after tracking
– Limited by the filepublisher processes

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 7
Full TPC (36 slices) on 188 nodes (IV)
Simulated Events:
– Simulated pp data (14 TeV, 0.5 T)
– Rate approx. 220 Hz after tracking
– Limited by the tracking processes
– Solution: use more nodes
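A rough back-of-the-envelope reading of the two measured rates, under assumptions that are stated in the comments and not given explicitly on the slides:

```python
# Rough per-process time budgets implied by the two measured rates:
# empty events at 2.9 kHz (limited by the filepublishers) and simulated
# pp events at 220 Hz (limited by the tracking).
# Assumptions: every filepublisher sees every event (one publisher per DDL),
# while the events of a slice are shared evenly between its two tracking nodes.

EMPTY_EVENT_RATE_HZ = 2900.0
PP_EVENT_RATE_HZ = 220.0
TRACKING_NODES_PER_SLICE = 2

publisher_budget_ms = 1000.0 / EMPTY_EVENT_RATE_HZ                            # ~0.34 ms/event
tracking_budget_ms = 1000.0 / (PP_EVENT_RATE_HZ / TRACKING_NODES_PER_SLICE)   # ~9.1 ms/event

print(f"filepublisher budget:     ~{publisher_budget_ms:.2f} ms per event")
print(f"tracking budget per node: ~{tracking_budget_ms:.1f} ms per event")
```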

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 8
Conclusion of Full TPC Test
– The main bottleneck is the processing of the data itself
– The system is not limited by the HLT data transport framework
– The test was limited by the number of available nodes

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 9 „Test Setup“

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 10
Clusterfinder Benchmarks (CFB)
pp events: 14 TeV, 0.5 T
Number of events: 1200
Iterations: 100
TestBench: SimpleComponentWrapper
TestNodes:
– HD ClusterNodes e304, e307 (PIII, 733 MHz)
– HD ClusterNodes e106, e107 (PIII, 800 MHz)
– HD GatewayNode alfa (PIII, 1.0 GHz)
– HD ClusterNode eh001 (Opteron, 1.6 GHz)
– CERN ClusterNode eh000 (Opteron, 1.8 GHz)
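A minimal sketch of how such a per-patch timing benchmark can be structured; `load_patch_events` and `run_clusterfinder` are hypothetical stand-ins for the simulated raw-data input and for the cluster-finder component driven by the SimpleComponentWrapper:

```python
import time

N_EVENTS = 1200      # events per patch, as on the slide
ITERATIONS = 100     # repetitions to average out fluctuations

def load_patch_events(patch, n_events):
    # Hypothetical stand-in: would load simulated raw data for this readout patch.
    return [bytes(64) for _ in range(n_events)]

def run_clusterfinder(raw):
    # Hypothetical stand-in: would run the HLT cluster finder on one raw data block.
    return sum(raw)

def benchmark_patch(patch):
    events = load_patch_events(patch, N_EVENTS)
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        for raw in events:
            run_clusterfinder(raw)
    elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / (ITERATIONS * N_EVENTS)   # ms per event

for patch in range(6):                                  # one TPC slice has 6 patches
    print(f"patch {patch}: {benchmark_patch(patch):.3f} ms per event")
```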

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 11 CFB – Signal Distribution per patch

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 12 CFB – Cluster Distribution per patch

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 13 CFB – PadRow / Pad Distribution

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 14 CFB – Timing Results (I)

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 15
CFB – Timing Results (II)

CPU             | Patch 0 [ms] | Patch 1 [ms] | Patch 2 [ms] | Patch 3 [ms] | Patch 4 [ms] | Patch 5 [ms] | Average [ms]
Opteron 1.6 GHz |         2.93 |         3.92 |         2.73 |         2.96 |         2.93 |         2.90 |         3.06
Opteron 1.8 GHz |         3.96 |         5.32 |         3.66 |         3.98 |         3.94 |         3.99 |         4.13
PIII 1.0 GHz    |         4.95 |         6.65 |         4.51 |         4.90 |         4.87 |         4.81 |         5.11
PIII 800 MHz    |         6.04 |         8.10 |         5.64 |         6.12 |         6.06 |         6.01 |         6.33
PIII 733 MHz    |         6.57 |         8.82 |         6.14 |         6.67 |         6.61 |         6.54 |         6.90
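The averages translate directly into per-process event rates, which is what drives the per-patch adjustment mentioned on the next slide. A small sketch using the Opteron 1.8 GHz row; the 1 kHz target rate is only an illustrative value, not a number from the slides:

```python
import math

# Per-patch cluster-finder times on the Opteron 1.8 GHz node (from the table
# above) and the number of cluster-finder instances needed per patch for an
# illustrative target rate, assuming one instance processes one patch at a time.

patch_ms = {0: 3.96, 1: 5.32, 2: 3.66, 3: 3.98, 4: 3.94, 5: 3.99}
TARGET_RATE_HZ = 1000.0   # illustrative target, not from the slides

for patch, ms in patch_ms.items():
    rate_per_process = 1000.0 / ms                         # events/s per process
    needed = math.ceil(TARGET_RATE_HZ / rate_per_process)
    print(f"patch {patch}: {rate_per_process:6.0f} Hz per process, "
          f"{needed} instances for {TARGET_RATE_HZ:.0f} Hz")

# Patch 1 is the slowest (~188 Hz per process) and therefore needs the most
# cluster-finder instances – the point made in the conclusion.
```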

Jochen Thäder – Kirchhoff Institute of Physics - University of Heidelberg 16
CFB – Conclusion / Outlook
– Learned about the different needs of each patch
– The number of processing components has to be adjusted to the particular patch