EXPReS FABRIC WP 2.2 Correlator Engine Meeting, 25-09-2006, Poznan, Poland. JIVE, Ruud Oerlemans.



Slide 2: WP2.2 Correlator Engine

Develop a Correlator Engine that can run on standard workstations and is deployable on clusters and grid nodes:
1. Correlator algorithm design (m5)
2. Correlator computational core, single node (m14)
3. Scaled-up version for clusters (m23)
4. Distributed version, middleware (m33)
5. Interactive visualization (m4)
6. Output definition (m15)
7. Output merge (m24)

Slide 3: Current broadband software correlator (SFXC)

Data flow, per station (1..N):
- Raw data, BW = 16 MHz, Mk4 format on Mk5 disk, copied from Mk5 to Linux disk
- Channel extraction, producing extracted data
- Delay corrections, using pre-calculated delay tables, producing delay-corrected data
- Correlation, producing the SFXC data product

EVN Mk4 hardware equivalents of these stages: DIM, TRM, CRM; DCM, DMM, FR; SU; Correlator Chip.
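The correlation stage is an FX design: each station's delay-corrected stream is Fourier transformed in short segments, the spectra of a station pair are cross-multiplied, and the products are accumulated. A minimal NumPy sketch, for illustration only; the real SFXC code also applies fringe rotation and fractional-delay corrections per segment:

```python
import numpy as np

def fx_correlate(x, y, nchan):
    """FX cross-correlation of two station streams: FFT short
    segments of 2*nchan samples, multiply one spectrum by the
    conjugate of the other, accumulate over segments."""
    seg_len = 2 * nchan
    nseg = len(x) // seg_len
    acc = np.zeros(nchan + 1, dtype=complex)
    for i in range(nseg):
        seg = slice(i * seg_len, (i + 1) * seg_len)
        acc += np.fft.rfft(x[seg]) * np.conj(np.fft.rfft(y[seg]))
    return acc / nseg

# autocorrelation of a stream with itself: real, non-negative spectrum
rng = np.random.default_rng(0)
s = rng.standard_normal(4096)
spec = fx_correlate(s, s, nchan=256)
```

Correlating a stream with itself gives the station's autocorrelation spectrum, a quick sanity check because it must be real and non-negative.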

Slide 4: High-level design of the distributed correlation process

Components from the design diagram:
- SCHED produces the observation schedule as a VEX file
- CALC computes delay tables from the VEX file and EOP (Earth orientation parameters)
- A workflow manager (WFM) drives the grid nodes; exchanged artifacts include VEX files, delay tables and a CCF
- Each telescope site runs a Field System and a Mark5 system
- Actors: Principal Investigator, central operator, telescope operators
- Correlated output goes to the JIVE archive

Slide 5: Grid considerations/aspects

- Why use grid processing power?
  - It is available; no hardware investment is required
  - It is upgraded regularly
- The degree of distribution is a trade-off between:
  - processing power at the grid nodes
  - data transport capacity to the grid nodes
- Data logistics and coordination get more complicated as distribution increases
- Splitting work between telescope and grid nodes:
  - station-related processing at the telescope site, correlation elsewhere; or
  - all processing at the grid nodes

Slide 6: Data distribution over grid sites (1): baseline slicing

Pros:
- Small nodes suffice
- Simple implementation at each node
Cons:
- Multiplication of large data rates, especially when the number of baselines is large
- Complex data logistics
- Scales poorly
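The data-rate multiplication is easy to quantify: N stations form N(N-1)/2 baselines, and under baseline slicing each station's full stream is needed at every site that correlates one of its baselines. A small illustration (the one-copy-per-partner model is a simplifying assumption):

```python
def n_baselines(n_stations):
    # every unordered pair of stations forms one baseline
    return n_stations * (n_stations - 1) // 2

def stream_copies(n_stations):
    # under pure baseline slicing (one baseline per site), a station's
    # raw stream must be shipped to the n-1 sites handling its baselines
    return n_stations - 1

# e.g. 16 stations -> 120 baselines, 15 copies of every raw stream
```

The quadratic growth in baselines is why the slide flags this scheme as scaling poorly.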

Slide 7: Data distribution over grid sites (2)

All data to one site
- Pros: simple data logistics; central processing; live processing easy; slicing done at the grid site; dealing with only one site
- Cons: a powerful central processing site is required

All data to different sites
- Pros: smaller nodes; live processing possible; data slicing at the nodes
- Cons: multiplication of large data rates; all sites must be available simultaneously when processing live

Time slicing
- Pros: smaller nodes; smaller data rates; simple implementation; easily scalable; no data multiplication
- Cons: complex data logistics after correlation; live correlation is complex

Channel slicing
- Pros: smaller nodes; live processing per channel; simple implementation; easily scalable
- Cons: channel extraction at the telescope increases the data rate

Slide 8: Correlator architecture for file input (offline processing)

- Each time slice (1..3) is handled by its own core (Core1..Core3); every core reads all stations (SA, SB, SC, SD) and produces a correlator product (CP1..CP3)
- A core processes the data of one channel
- Easily scalable, because one application contains all the functionality
- Can exploit multiple processors using MPI
- Code reuse through OO design and C++
- This software architecture works for data distributions 1, 2 and 3
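The slice-per-core layout can be sketched as follows, with Python threads standing in for the MPI processes of the real system and the per-baseline "correlation" reduced to a lag-zero product for brevity:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations
import numpy as np

STATIONS = ("SA", "SB", "SC", "SD")

def correlate_slice(slice_data):
    """One core: correlate every station pair within one time slice.
    The lag-zero dot product stands in for the full FX correlation."""
    return {
        (a, b): float(np.dot(slice_data[a], slice_data[b]))
        for a, b in combinations(sorted(slice_data), 2)
    }

def correlate_file(data, n_slices):
    """Split each station's series into n_slices time slices and hand
    slice k to core k, as on the slide (Core1..CoreN -> CP1..CPN)."""
    slices = [
        {s: np.array_split(series, n_slices)[k] for s, series in data.items()}
        for k in range(n_slices)
    ]
    with ThreadPoolExecutor(max_workers=n_slices) as pool:
        return list(pool.map(correlate_slice, slices))

rng = np.random.default_rng(1)
data = {s: rng.standard_normal(300) for s in STATIONS}
products = correlate_file(data, n_slices=3)
```

Because each core runs the same self-contained function over its own slice, adding cores (or MPI ranks) scales the architecture without changing the per-core code, which is the scalability point the slide makes.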

Slide 9: Correlator architecture for data streams (real-time processing)

- Station streams SA..SD arrive continuously instead of as files on disk
- Each core (Core1..CoreN) reads from a memory buffer holding short time slices of the incoming streams
- Cores emit correlator products (CP) as successive buffers fill
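Such a memory buffer of short time slices can be modelled as a bounded ring buffer. The drop-oldest policy below is an illustrative choice for a real-time stream that must never block the sender, not necessarily what the JIVE software does:

```python
from collections import deque

class SliceBuffer:
    """Bounded buffer of short time slices for one station stream.
    When full, the oldest slice is dropped and counted, since a
    real-time correlator cannot stall the incoming data stream."""

    def __init__(self, max_slices):
        self.slices = deque(maxlen=max_slices)
        self.dropped = 0

    def push(self, time_slice):
        if len(self.slices) == self.slices.maxlen:
            self.dropped += 1  # deque silently evicts the oldest entry
        self.slices.append(time_slice)

    def pop(self):
        """Hand the oldest buffered slice to a correlator core."""
        return self.slices.popleft() if self.slices else None

buf = SliceBuffer(max_slices=2)
for t in ("t0", "t1", "t2"):
    buf.push(t)
```

Sizing the buffer is the real-time trade-off: it must cover the jitter between stream arrival and core throughput, or slices are lost.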

Slide 10: Other issues

- Swinburne University (Adam Deller): exchange of expertise on their software correlator last summer
- New EXPReS employee: Yurii Pidopryhora; astronomy background; will work on data analysis and testing
- New SCARIe employee: Nico Kruithof; computer science background; SCARIe is an NWO-funded project aimed at a software correlator on the Dutch Grid

Slide 11: WP 2.2 status

Work package                       Month  Status
1. Correlator algorithm design       5    almost finished
2. Correlator computational core    14    active
3. Scaled-up version for clusters   23    active
4. Distributed version              33    pending
5. Interactive visualization         4    pending
6. Output definition                15    designing
7. Output merge                     24    designing