Data Clustering Research in CMS
Koen Holtman, CERN/CMS / Eindhoven University of Technology
CHEP 2000, Feb 7-11, 2000

Introduction
3-year Ph.D. project on 'prototyping of CMS storage management'
– Focus on disk/tape-based physics analysis, objects >= 1 KB
– Focus on scalability and (re)clustering
Clustering: placement of object data on physical storage media (disk, tape)
Reclustering: rearranging the clustering
– Clustering and reclustering are not specific to Objectivity or object databases
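To make the two notions concrete, here is a minimal sketch; the chunk size, event set, and 'hot set' are invented for illustration:

```python
# Toy sketch (all names hypothetical): clustering lays objects out on
# fixed-size physical chunks in some order; reclustering rebuilds the
# layout using a new order, e.g. one derived from observed access patterns.

CHUNK_SIZE = 4  # objects per chunk; tiny value chosen for illustration

def cluster(object_ids, key):
    """Lay objects out on storage in key order, CHUNK_SIZE per chunk."""
    ordered = sorted(object_ids, key=key)
    return [ordered[i:i + CHUNK_SIZE] for i in range(0, len(ordered), CHUNK_SIZE)]

events = range(16)
initial = cluster(events, key=lambda e: e)          # clustered by event number
hot = {3, 7, 11, 15}                                # objects a new analysis co-reads
reclustered = cluster(events, key=lambda e: e not in hot)  # hot objects grouped first
```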

I/O Risks for LHC (1)
Risk of insufficient scalability
– I/O scalability issues have been studied (240 clients, 172 MB/s)
– Structure data into chunks (runs), use chunk-level subjobs
– 'Private' DBs
– Need a read-ahead optimization: 'bursty sequential reading'
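As a hedged illustration of such a read-ahead, the sketch below serves small object reads from large sequential bursts; the burst size and function names are assumptions, not the project's actual code:

```python
BURST = 8 * 1024 * 1024  # read-ahead burst size in bytes (assumed value)

def bursty_reads(f, offsets, size):
    """Serve fixed-size object reads from large sequential read bursts."""
    buf_start, buf = None, b""
    for off in sorted(offsets):                  # subjob visits objects in file order
        if buf_start is None or off + size > buf_start + len(buf):
            f.seek(off)                          # refill: one big sequential read
            buf_start, buf = off, f.read(BURST)
        yield buf[off - buf_start: off - buf_start + size]

# usage: with open(path, "rb") as f: data = list(bursty_reads(f, offsets, 1024))
```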

I/O Risks for LHC (2)
Risk of insufficient I/O performance
– MB/s needs for interactive physics analysis???
– MB/s in 2005: GB/s sequential I/O on the CERN CMS disk farm
– Random I/O can be a large factor slower!
– Clustering is important (well understood)
– Subdetector clustering
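The sequential-versus-random gap can be probed with a toy benchmark like the one below; note that the OS page cache can hide the effect on a freshly written file, and the large factor only appears on cold, disk-resident data. All names and sizes are illustrative:

```python
import os, random, tempfile, time

def timed_reads(path, offsets, size=4096):
    """Time reading `size` bytes at each offset, in the given order."""
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(size)
    return time.perf_counter() - t0

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(64 * 1024 * 1024))        # 64 MB scratch file
    path = f.name

offsets = list(range(0, 64 * 1024 * 1024, 4096))
seq = timed_reads(path, offsets)                 # ascending = sequential
rnd = timed_reads(path, random.sample(offsets, len(offsets)))  # shuffled = random
print(f"sequential: {seq:.2f} s   random: {rnd:.2f} s")
os.remove(path)
```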

HEP Problem
Main HEP problem: increasing selectivity over time degrades performance to that of random reading (well understood by now)
Solution: recluster 'by hand' (DSTs)
– Is 'by hand' good enough?? Issues: consistency, number of users, space, effort, on-demand reconstruction
Research on automatic reclustering
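A toy model shows why modest selectivity already forces near-random reading: even a small randomly selected fraction of objects touches most chunks of the original clustering, so nearly the whole sample must be read. The object count and chunk size below are assumptions:

```python
import random

N_OBJECTS = 1_000_000   # objects in the sample (assumed)
CHUNK = 100             # objects per chunk (assumed)

def fraction_of_chunks_touched(selectivity):
    """Fraction of chunks a random selection of objects forces us to read."""
    selected = random.sample(range(N_OBJECTS), int(selectivity * N_OBJECTS))
    return len({i // CHUNK for i in selected}) / (N_OBJECTS // CHUNK)

for s in (0.001, 0.01, 0.05):
    # e.g. selecting 1% of objects already touches ~63% of all chunks
    print(f"selectivity {s:.3f}: {fraction_of_chunks_touched(s):.0%} of chunks read")
```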

Disk Reclustering
Developed on-the-fly + batch reclustering
– Dynamically recluster data based on observed access patterns
– Implemented as an 'object store' class
Keeps I/O efficiency on disk good enough, automatically
Supports on-demand reconstruction
Scaling...
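A hedged sketch of what such an object store could look like: it records which objects are read together and, in a batch step, migrates them into a common chunk so that later passes become sequential. The interface and policy below are illustrative, not the implemented class:

```python
from collections import defaultdict

class ReclusteringStore:
    """Toy object store that reclusters co-read objects into shared chunks."""

    def __init__(self, chunk_size=100):
        self.chunk_size = chunk_size
        self.chunks = defaultdict(list)   # chunk id -> object ids in that chunk
        self.location = {}                # object id -> chunk id
        self.trace = []                   # recently observed reads

    def read(self, obj_id):
        self.trace.append(obj_id)         # observe the access pattern
        if len(self.trace) >= self.chunk_size:
            self._recluster(self.trace)   # batch step: co-read objects
            self.trace = []               # become neighbours on disk

    def _recluster(self, obj_ids):
        new_chunk = max(self.chunks, default=-1) + 1
        for obj in obj_ids:
            old = self.location.get(obj)
            if old is not None:
                self.chunks[old].remove(obj)
            self.chunks[new_chunk].append(obj)
            self.location[obj] = new_chunk
```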

Tape (Re)clustering (1)
Clustering on tape: the HENP Grand Challenge (GC) approach
Cache filtering and chunk reclustering in a multi-user analysis system with disk and tape
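Cache filtering can be sketched as follows: when a chunk is staged from tape, only the objects the analysis actually selects are written into the disk cache, so the cache holds a dense, filtered copy of the data. The function and predicate below are hypothetical:

```python
def stage_with_filtering(tape_chunk, predicate, disk_cache):
    """Copy only the selected objects from a staged tape chunk into disk cache."""
    filtered = [obj for obj in tape_chunk if predicate(obj)]
    disk_cache.extend(filtered)          # cache stores the dense subset only
    return filtered

# Illustrative usage with fake event objects and an invented selection cut:
disk_cache = []
chunk = [{"id": i, "pt": i * 0.5} for i in range(10)]
hits = stage_with_filtering(chunk, lambda o: o["pt"] > 3.0, disk_cache)
```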

Tape (Re)clustering (2)
Cache filtering yields a factor 1-50 performance gain depending on workload parameters
– Compensates to some extent for low clustering efficiency on tape
Chunk reclustering does not seem attractive: performance gains only for very small disk farm sizes
So risks remain large
Extension path...
[Plot: performance with vs. without cache filtering]

Conclusions
Existing practice:
– Chunks/runs, subjobs, sequential access, subdetector-based clustering
In this project:
– Validated existing practice; detailed investigation of disk performance, scalability, read-ahead, disk reclustering, and a disk+tape system with cache filtering
Remaining risks:
– Don't know how much I/O is needed; clustering efficiency on tape; WAN issues
– To investigate systems with large caching effects, access patterns are needed
Design for a large parameter space through simulation

Access Patterns
We never know enough about access patterns
– Known: object sizes, increasing selectivity, full reconstruction
– We don't know much about: user-level physics analysis
In systems with large caching effects, these parameters have a large effect on performance
Performance of a tape+disk-based analysis system for various workload parameters:
– Strategy: design over a large parameter space (simulation)
– Strategy: investigate parameters and their importance
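The 'design over a large parameter space' strategy amounts to sweeping the poorly known workload parameters in a simulator and recording a performance estimate at each point. The cost model below is a placeholder, not the project's simulator; all weights are invented:

```python
import itertools

def simulated_cost(selectivity, cache_fraction, reuse):
    """Toy cost model: cached reads are cheap, tape misses dominate."""
    cached = min(selectivity, cache_fraction)   # fraction served from disk cache
    missed = selectivity - cached               # fraction recalled from tape
    return reuse * (cached * 1.0 + missed * 50.0)   # arbitrary relative weights

grid = itertools.product(
    [0.001, 0.01, 0.1],   # selectivity of the analysis
    [0.01, 0.1, 0.5],     # disk cache size / total data volume
    [1, 5, 20],           # passes over the selected sample
)
for sel, cache, reuse in grid:
    print(f"sel={sel:<6} cache={cache:<4} reuse={reuse:<3} "
          f"cost={simulated_cost(sel, cache, reuse):8.2f}")
```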