LOFAR project Astroparticle Physics workshop 26 April 2004.

LOFAR concept
Combine advances in enabling IT:
- inexpensive environmental sensors: 10,000s of sensors
- wide-area optical broadband networks: custom + GigaPort/Géant
- high-performance computing: IBM BlueGene/L
to make a 'shared aperture multi-telescope' (the system spec driver), but also to sense and interpret the environment in innovative ways.

LOFAR Sensors
Sensor type: applications
- HF-antenna: astrophysics, astro-particle physics
- VHF-antenna: cosmology, early Universe; solar effects on Earth, space weather
- Geophones: ground subsidence, gas/oil extraction
- Weather: micro-climate prediction, precision agriculture, wind energy
- Water: precision agriculture, habitat management, public safety
- Infra-sound: atmospheric turbulence; meteors, explosions, sonic booms

LOFAR Phase 1
- Radio telescope
- Seismic imager
- Precision weather for agriculture, wind energy
[diagram: sensor fields linked to the central processor via fibre data transport]
Integrate the LOFAR network into the regional fibre network, sharing costs with schools, health centres etc.

Radio Telescope Specifications
- Frequency range: 20 – 80 MHz, 120 – 240 MHz
- Angular resolution: few – 10 arcsec
- Sensitivity: 100x previous instruments at these frequencies
- Shared aperture multi-telescope:
  - up to 8 independent telescopes plus geophone, weather etc. arrays
  - operated from remote Science Operations Centers, similar to LHC 'tier-1' centers
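The quoted resolution follows from simple diffraction arithmetic. The sketch below (Python) is a minimal illustration only, assuming a diffraction-limited resolution of lambda/B and the 150 km maximum baseline quoted on the Bsik-configuration slide later in this deck.

    # Back-of-envelope check of the quoted angular resolution, assuming a
    # diffraction-limited array (theta ~ lambda / B_max) and the 150 km
    # maximum baseline of the Bsik-funded configuration.

    C = 299_792_458.0           # speed of light, m/s
    ARCSEC_PER_RAD = 206_264.8  # arcseconds per radian

    def resolution_arcsec(freq_hz: float, baseline_m: float) -> float:
        """Diffraction-limited angular resolution lambda / B, in arcseconds."""
        wavelength_m = C / freq_hz
        return (wavelength_m / baseline_m) * ARCSEC_PER_RAD

    for f_mhz in (20, 80, 120, 240):
        theta = resolution_arcsec(f_mhz * 1e6, 150e3)
        print(f"{f_mhz:3d} MHz over 150 km: ~{theta:.1f} arcsec")
    # -> ~21, ~5, ~3, ~2 arcsec: consistent with "few - 10 arcsec" in the
    #    upper band; the 400+ km extension discussed later sharpens this further.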

One day in the life of LOFAR, the radio telescope
[figure: observations over one day; axis label: telescope nr.]

Challenges
Data rate:
- ~ 15 Tbit/s total data generated (increasing later)
- ~ 330 Gbit/s input data rate to the central processor
- ~ 1 Gbit/s to distributed Science Operations Centres
Computational resources:
- ~ 34 TFLOP/s in a custom co-processor (IBM BG/L)
- ~ 500 TBytes on-line temporary storage
Calibration:
- adaptive multi-patch all-sky phase correction
- 10 sec duty cycle
[data-flow diagram labels: input rate > 300 Gbps; transpose ~300 Gbps; within correlator: 20 Tbps, 15 T-ops; processing stages of 3 T-ops, 5 T-flops, 2 T-ops; store: 25 Gbps; storage: > 500 TB; products ~1 Gbps]
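The data-rate figures above can be tied together with simple arithmetic. The sketch below is only a rough consistency check: the station count comes from the 45-station Bsik configuration quoted later, and the derived per-station rate is an estimate made here, not a quoted specification.

    # Rough consistency check of the data-flow figures on this slide.
    # The 45-station count is from the Bsik-configuration slide; the
    # per-station rate is derived here, not a quoted spec.

    raw_total      = 15e12   # bit/s generated at the antennas
    into_processor = 330e9   # bit/s into the central processor
    to_storage     = 25e9    # bit/s to on-line storage (diagram label)
    products       = 1e9     # bit/s shipped to Science Operations Centres
    n_stations     = 45      # Bsik-funded configuration

    print(f"station-level reduction:       ~{raw_total / into_processor:.0f}x")                 # ~45x
    print(f"implied per-station rate:      ~{into_processor / n_stations / 1e9:.1f} Gbit/s")    # ~7.3
    print(f"central-processing reduction:  ~{into_processor / to_storage:.0f}x")                # ~13x
    print(f"overall, antennas to products: ~{raw_total / products:.0f}x")                       # ~15000x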

IBM BlueGene/L (IBM)
- 1st research machine on the road to multi-peta-FLOP/s
- 3 BG/L machines under construction: LLNL, LOFAR, IBM Research
- numbers 1-10 of the Top-500 supercomputers in one machine (LLNL)
- SoC technology, standard components for reliability
- dual PowerPC 440 processors per node, 700 MHz clock
- scalability to many times more nodes
- low power, air cooled: ~ 20 W per node

IBM BlueGene/L (LOFAR)
- BG/L is our 1st non-custom central processor
  - total CPU power is 'interesting' (34 TFLOP/s) and scalable
  - component failure rate: one every 3 months, DRAM dominated
- BG/L is an embedded co-processor in a Linux cluster
  - stripped-down Linux kernel on-chip
  - general-purpose capability allows complex on-line, real-time modelling
- efficient for complex arithmetic and streaming applications
  - 330 Gb/s input data rate initially; 768 Gb/s max
- low power: 150 kW for LOFAR (6k nodes)
- scalable beyond LOFAR to SKA requirements
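As a sanity check, the 34 TFLOP/s and 150 kW figures can be reconciled with the per-node numbers on the previous slide. In the sketch below, the 4 flops/cycle/core figure (dual FPU with fused multiply-add) and the attribution of the remaining power to I/O nodes, interconnect and cooling are assumptions added here, not values from the slides.

    # Consistency check of the BG/L figures quoted on this and the previous
    # slide. flops_per_cycle (dual FPU, fused multiply-add) and the power
    # overhead interpretation are assumptions made here.

    nodes           = 6_000    # "6k nodes"
    cores_per_node  = 2        # dual PowerPC 440 processors per node
    clock_hz        = 700e6    # 700 MHz clock
    flops_per_cycle = 4        # assumed: double FPU doing fused multiply-adds
    watts_per_node  = 20       # ~20 W per node (previous slide)

    peak_tflops = nodes * cores_per_node * clock_hz * flops_per_cycle / 1e12
    compute_kw  = nodes * watts_per_node / 1e3

    print(f"peak compute:       ~{peak_tflops:.0f} TFLOP/s")  # ~34 TFLOP/s, as quoted
    print(f"compute-node power: ~{compute_kw:.0f} kW")        # ~120 kW; the quoted 150 kW
                                                              # presumably also covers I/O
                                                              # nodes, network and cooling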

Tier-0 computing: LHC and LOFAR in 2006
Columns: CPU (SPECint95) | No. of processors | Disk storage (TB) | Tape storage (PB) | LAN throughput (Gb/s)
LHC, per exp't x 4 exp'ts (Tier-0): 2,... | ... | ... | ... | (?)
LOFAR (EOC): 3,... | ... | ~ 500 | ?? | > 330

LOFAR with Bsik financing: central core plus 45 stations, 150 km max baseline.

Mid-LOFAR would extend into Lower Saxony, Schleswig-Holstein and North Rhine-Westphalia.
Max-LOFAR would have stations from Cambridge (UK) to Potsdam (DE), and from Nançay (FR) to Växjö (SE).

[connectivity map: 1-10 Gbps links to China, USA, South Africa, Russia; 30 Gbps - 2 Tbps]
Post-2005: JIVE + LOFAR data processing centre.
LOFAR, the Sensor Network, is under consideration as an FP7 'Technology Platform'.

LOFAR project timeline
- PDR in June/Oct 2003: M€ 14 expended
- Dutch funding end 2003: M€ 52 for 'infrastructure'
  - funding must be matched by 'partners': 18-member consortium, additional partners possible
  - formal goal is economic positioning w.r.t. 'adaptive sensor networks' (RF, seismic, infra-sound, wind-energy sensors)
- prototyping of a full station is in progress
  - 100 low-frequency antennas in the field, now making all-sky videos
  - end 2004: expect a 2-beam web-based system on-line (to gain experience)
  - issues: calibration, RFI, adaptive re-allocation of resources
- BlueGene/L delivery in 1Q-2005
- FDR: start mid-2004, complete mid-2005
- procurement: start mid-2004, end mid-2006
- initial operational status: end-2006 (solar minimum)
- full operational status: mid-2008

Remaining tasks for which partners are being sought
- Array configuration / size: new stations! Where?
  - extension of the array to 400+ km is highly desirable
  - cost is ~ € 500k per station
  - fibre connections through Géant and national academic networks
- Definition and designation of operations centers
  - Science Operations Centers are remote, on-line: basic data taking and archiving of observations; financing mostly local, plus a contribution to common services
  - Engineering Operations Center in Dwingeloo: monitor the system, perform maintenance; integrated operations team (with WSRT, possibly JIVE)
- Operational modelling and user interface
  - use of (quasi-real-time) GRID technologies foreseen
  - work packages not funded / manned yet

User involvement
- Test User Group: Heino Falcke, leader
  - Lars Bähren, Michiel Brentjens, Stefan Wijnholds, etc.
- 'open', 'remote' access to the developing system
  - step-wise functionality improvements
- 1st user workshop: Dwingeloo, May 24-25, 2004
- ASTRON is ready to host a (limited) number of young researchers to test and help develop the system
- Formal operations from 2007
  - scheduling will be an 'interesting' problem

LOFAR Research Consortium
Universities: Univ. of Amsterdam, TU Delft, TU Eindhoven, Univ. of Groningen, Leiden Univ., Nijmegen Univ., Uppsala Univ.
Research institutes: ASTRON (management org.), CWI, IMAG, KNMI, TNO-NITG, LOPES Consortium, MPIfR-Bonn
Commercial: Ordina Technical Automation bv, Dutch Space bv, Twente Institute for Wireless and Mobile Communications bv, Science[&]Technology bv