Presentation transcript:

LSST VAO Meeting March 24, 2011 Tucson, AZ

Data Management System Sites and Centers

- Headquarters Site: Headquarters Facility (Observatory Management, Science Operations, Education and Public Outreach)
- Archive Site: Archive Center (Nightly Reprocessing, Data Release Production, Long-term Storage copy 2) and Data Access Center (Data Access and User Services; TFLOPS, PB Disk, PB Tape)
- Base Site: Base Facility (Alert Production, Long-term Storage copy 1) and Data Access Center (Data Access and User Services; TFLOPS, PB Disk, PB Tape)
- Summit Site: Summit Facility (Telescope and Camera Data Acquisition, Crosstalk Correction)
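For reference, the slide's site-to-function mapping can be written down as a simple data structure. This is a minimal sketch assembled only from the names above; the TFLOPS and petabyte sizing figures are omitted because the transcript does not preserve them.

```python
# Illustrative summary of the DMS sites-and-centers layout described above.
# Names are taken from the slide; sizing figures (TFLOPS, PB disk/tape) are
# omitted because they are not specified in this transcript.
DMS_SITES = {
    "Headquarters Site": {
        "Headquarters Facility": ["Observatory Management", "Science Operations",
                                  "Education and Public Outreach"],
    },
    "Archive Site": {
        "Archive Center": ["Nightly Reprocessing", "Data Release Production",
                           "Long-term Storage (copy 2)"],
        "Data Access Center": ["Data Access and User Services"],
    },
    "Base Site": {
        "Base Facility": ["Alert Production", "Long-term Storage (copy 1)"],
        "Data Access Center": ["Data Access and User Services"],
    },
    "Summit Site": {
        "Summit Facility": ["Telescope and Camera Data Acquisition",
                            "Crosstalk Correction"],
    },
}
```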

DMS long-haul network design

(Diagram: sites at Pachon, La Serena, Santiago, Sao Paolo, Panama, Miami, Los Angeles, Tucson, and NCSA, plus a Chile DAC; summit-to-base traffic split into Data at 80 Gbps and TCS/OCS Control at 20 Gbps; 10/100 Gbps paths via REUNA, Lauren/CLARA, AmLight, NLR/I2, and TeraGrid, with Telefonica & Telmex as carriers.)

- Mountain summit to Base is the only new fiber, at 100 Gbps capacity
- Inter-site long-haul links (Chile-US, Archive, HQ, DAC, end users) run on existing fibers: 1 Gbps in 2011, 10 Gbps in 2015, 100 Gbps in 2025
- Site end equipment is available today
- Reference: Network Design Document-7354
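To see why the 1/10/100 Gbps upgrade path matters, a back-of-the-envelope calculation helps. The sketch below assumes a nightly data volume of 15 TB, which is an illustrative figure not given in this transcript; the link rates are the slide's 2011/2015/2025 plan.

```python
# Back-of-the-envelope transfer times over the long-haul links named above.
# The 15 TB/night data volume is an assumption for illustration only; the
# link rates (1/10/100 Gbps) come from the slide's 2011/2015/2025 plan.
NIGHTLY_TB = 15                      # assumed nightly data volume, TB
BITS = NIGHTLY_TB * 8e12             # terabytes -> bits

for year, gbps in [(2011, 1), (2015, 10), (2025, 100)]:
    seconds = BITS / (gbps * 1e9)    # ideal rate, ignoring protocol overhead
    print(f"{year}: {gbps:>3} Gbps link -> {seconds / 3600:.1f} h per night")
```

Under that assumption, the 1 Gbps link would need over 33 hours to move one night's data, which is consistent with the plan to grow the links as data volumes grow.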

DMS cyber-security plan

Threats:
- Denial-of-service attacks
- Break-in attempts
- Internal attacks
- Code injection
- Personal-computer malware, viruses, etc.

Best practices:
- Monitor infrastructure
- Partition resources
- Grant limited privileges
- Simulate threats

Reference: Cyber-security Plan-9733

LSST DM R&D Plan To Date

LSST DM R&D Plan To Be Done

DMS Data Challenge Infrastructure (SPIE 2010, June 30, 2010, San Diego, CA)

DMS Middleware: Pipeline processing

- Middleware runs in full production mode, exactly as the operational system will
- Has executed in parallel on hundreds of processing cores
- Performance can be increased by adding more hardware; input/output remains the most constraining factor
- Cross-project workshops conducted with DES, JWST, ODI, and the University of Wisconsin Condor group to leverage lessons learned
- Provenance and fault-tolerance demonstrations due by PDR
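As an illustration of the parallel-pipeline pattern described above, the sketch below fans a placeholder processing stage out over a worker pool. Everything here (process_exposure, the fake workload) is hypothetical, not the actual LSST middleware; and, as the slide notes, real stages tend to be limited by input/output rather than CPU, so adding cores alone eventually stops helping.

```python
# Minimal sketch of running one pipeline stage in parallel across many cores.
# process_exposure is a hypothetical stand-in for a real stage; in production
# each worker would read an image, apply the stage, and write a result, so
# throughput is usually bounded by I/O rather than by CPU.
from multiprocessing import Pool

def process_exposure(exposure_id):
    # Trivial CPU work as a placeholder for real image processing.
    return exposure_id, sum(range(100_000))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Fan the stage out over the pool; results arrive as workers finish.
        for exp_id, _result in pool.imap_unordered(process_exposure, range(8)):
            print(f"exposure {exp_id:04d} done")
```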

DMS Middleware: Database Architecture

- Current relational database technology cannot handle LSST data volumes
- Solution: make many databases look like one to users and applications
- Scale by adding database instances/servers, without software changes
- Full prototype now implemented (Supercomputing 2010)
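The "many databases look like one" idea can be illustrated with a toy scatter-gather query. The sketch below uses sqlite3 in-memory databases as stand-in shards; the table and column names are hypothetical, and the actual LSST prototype was a purpose-built distributed layer (the work that became Qserv), not anything this simple.

```python
# Toy scatter-gather over several database instances, illustrating the idea
# of making many databases look like one. Table/column names are hypothetical.
import sqlite3

# Create three in-memory "shards", each holding part of an object catalog.
shards = [sqlite3.connect(":memory:") for _ in range(3)]
for i, db in enumerate(shards):
    db.execute("CREATE TABLE objects (id INTEGER, mag REAL)")
    db.executemany("INSERT INTO objects VALUES (?, ?)",
                   [(i * 100 + j, 20.0 + 0.1 * j) for j in range(5)])

# Scatter the same query to every shard, then gather and merge the results;
# callers see one logical catalog, and capacity grows by adding shards.
query = "SELECT id, mag FROM objects WHERE mag < 20.3"
rows = [row for db in shards for row in db.execute(query)]
print(sorted(rows))
```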

Data Challenge Applications Scope (NSF Review, December 15-17, 2009, Tucson, AZ)

(Figure: applications scope of the successive data challenges DC1, DC2, DC3a, DC3b, and DC4.)

DMS Applications Status

- Spent on software development and integration since the start of R&D: $8M (out of $10M total R&D spent) and 90 FTE-years (funded and in-kind, average 40% on project)
- By comparison: Terapix (unknown; 7 people, started in 1997), 2MASS (165 FTE-years), SDSS (145 FTE-years low estimate, 215 FTE-years high estimate)
- The above estimates are from the respective project managers/technical leads