Monte Carlo Data Production and Analysis at Bologna (LHCb Bologna)

Presentation transcript:

Computing Resources

CPU Resources:
- 150 CPUs for the period July-September: 20 x (866 MHz) + 30 x (1 GHz) + 100 x (1.4 GHz).
- After the summer: to be negotiated with CNAF, but >= 80 CPUs.

Disk Storage:
- 2 TB of disk space served by two Network Attached Storage (NAS) disk servers.
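
For orientation, the nominal aggregate capacity of this CPU mix follows from simple arithmetic. The sketch below (not from the slides) just sums the figures quoted above, using clock rate as a crude proxy for real processing power:

# Minimal sketch: nominal aggregate capacity of the July-September farm.
# Per-node figures are the ones quoted above; clock rate is only a rough proxy.
nodes = [
    (20, 866),    # 20 CPUs at 866 MHz
    (30, 1000),   # 30 CPUs at 1 GHz
    (100, 1400),  # 100 CPUs at 1.4 GHz
]
total_cpus = sum(n for n, _ in nodes)
total_mhz = sum(n * mhz for n, mhz in nodes)
print(f"{total_cpus} CPUs, aggregate clock ~{total_mhz / 1000:.1f} GHz")
# -> 150 CPUs, aggregate clock ~187.3 GHz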

Data Analysis at Bologna: Technical Issues

- The NAS disk servers used for production cannot be used efficiently for data analysis: access time would dominate the processing time. Reading 1 TB of data would take about 10^5 s at the NAS sustainable throughput of about 10 MB/s.
- We plan to use a parallel file system implementation which stripes the data files across several I/O nodes, allowing us to reach an aggregate throughput of the order of MB/s over Fast Ethernet.
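
The numbers above are back-of-the-envelope estimates; the sketch below reproduces them. The 10 MB/s per-server throughput comes from the slide, while the number of striped I/O nodes (n_io) is an assumed, purely illustrative value:

# Minimal sketch: I/O time estimates behind the figures above.
# NAS throughput (10 MB/s) is from the slide; n_io is an assumed value.
DATA_SIZE_MB = 1_000_000        # 1 TB expressed in MB
NAS_THROUGHPUT_MB_S = 10        # sustainable throughput of one NAS server

read_time_s = DATA_SIZE_MB / NAS_THROUGHPUT_MB_S
print(f"1 TB from a single NAS server: {read_time_s:.0e} s")  # ~1e+05 s, i.e. more than a day

n_io = 10                       # hypothetical number of striped I/O nodes
aggregate_mb_s = n_io * NAS_THROUGHPUT_MB_S
print(f"{n_io} striped I/O nodes: {aggregate_mb_s} MB/s aggregate, "
      f"{DATA_SIZE_MB / aggregate_mb_s:.0e} s to read 1 TB")

The aggregate throughput scales with the number of I/O nodes, which is what makes striping attractive over Fast Ethernet despite the modest per-link bandwidth.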

Data Analysis Concerns

- In order to allow full interactive sessions, we would need to reproduce the CERN environment.
- Another possibility is to build the executables at CERN and then move only the binaries, to run massive batch analysis (similarly to what happens with MC production).
- We have to establish the optimal sharing of the resources between production and analysis tasks.