Distributed Correlation in FABRIC. Kiwi Team, PSNC.

Experiment creation phase

Correlation phase

E-VLBI System – communication

2. Submitting the observation workflow

3. Workflow execution. Divided into the following phases:
1. Chunking data from the radio telescopes
2. Submitting chunked data sets for correlation
3. Archiving the correlated data sets

(1) Wf execution: chunking

(1) Wf execution: preparing environment

(1) Wf execution: sending chunks

(2) Wf execution: correlation
- First data chunks are available
- The step where the real correlation takes place
- Computation on the Grid resources

(2) Wf execution: correlation

(2) Wf execution: scheduling
Scheduling correlation jobs with the Correlation Node.
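One simple way to distribute correlation jobs across nodes is round-robin assignment. The sketch below assumes hypothetical node names and integer chunk IDs; the real Correlation Node scheduling in FABRIC is not specified here.

```python
from collections import defaultdict
from itertools import cycle

# Hypothetical round-robin scheduler: assign data chunks to correlation
# nodes in turn. Node names and chunk IDs are illustrative.

def schedule_chunks(chunk_ids, node_names):
    """Map each chunk ID to a node, cycling through the nodes in order."""
    assignment = defaultdict(list)
    for chunk_id, node in zip(chunk_ids, cycle(node_names)):
        assignment[node].append(chunk_id)
    return dict(assignment)

plan = schedule_chunks(range(5), ["node-a", "node-b"])
# plan == {"node-a": [0, 2, 4], "node-b": [1, 3]}
```

Round-robin keeps the example short; a real scheduler would also account for node load and data locality.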

(2) Wf execution: correlation

(3) Wf execution: archive
- First data sets have been correlated
- Store correlated data with the Correlated Data Service (CDS)
- Clean up the Grid resources
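The archive step can be pictured as "copy the correlated output files to the archive, then remove the scratch area". In this sketch the archive directory stands in for the Correlated Data Service, and the `*.cor` file extension is an assumption made for illustration.

```python
import pathlib
import shutil

# Hypothetical archive-then-cleanup step: copy correlated output files to
# an archive directory (a stand-in for the CDS) and remove the Grid
# scratch area afterwards. Paths and the .cor extension are illustrative.

def archive_and_cleanup(scratch_dir, archive_dir):
    scratch = pathlib.Path(scratch_dir)
    archive = pathlib.Path(archive_dir)
    archive.mkdir(parents=True, exist_ok=True)
    stored = 0
    for f in sorted(scratch.glob("*.cor")):   # correlated data sets
        shutil.copy2(f, archive / f.name)     # store with the "CDS"
        stored += 1
    shutil.rmtree(scratch)                    # clean up Grid resources
    return stored
```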

(3) Wf execution: archive

FABRIC results
- 25 minutes of data
- Successful correlation with 2 clusters (PSNC)
- One File Server (JIVE)
- Stations: Westerbork, Medicina, Toruń, Cambridge

Correlation testbed

Correlation time vs chunk size