Introduction
APS Engineering Support Division – Beamline Controls and Data Acquisition
A U.S. Department of Energy laboratory managed by UChicago Argonne, LLC


Introduction
APS Engineering Support Division
– Beamline Controls and Data Acquisition
  Pete Jemian (Group Leader)
  Kenneth Evans, Jr. (Scientific Software Section Leader)
  Brian Tieman (Software Engineer)
– Information Technology
  Ken Sidorowicz (Group Leader)
  Roger Sersted (Engineer)
X-Ray Sciences Division
  Gabrielle Long (Division Director)
– Chemistry, Environmental and Polymer Science
  Peter Chupas (Beamline Scientist)
– Materials Characterization
  Ulrich Lienert (Beamline Scientist)
  Jon Tischler (ORNL Resident Scientist)
– Time Resolved Research
  Michael Sprung (Beamline Scientist)
  Alec Sandy (Beamline Scientist)
– X-Ray Microscopy and Research
  Francesco DeCarlo (Beamline Scientist)
  Wah Keat Lee (Beamline Scientist)

Current Operational Workflow

Preferred Operational Workflow

Local HPC Resources -- What
Tomo
– 32-processor cluster
– 12 TB disk
– Sector 2; dedicated to tomography
Blacklab
– 16-processor cluster
– 2 TB disk
– Development
Orthros
– 58-processor cluster
– 30 TB disk
– On-demand data reduction

Local HPC Resources -- Why
On-Demand Processing (see the dispatcher sketch after this slide)
– Data reduction is an immediate part of the workflow
– May waste CPU cycles (bad for larger clusters)
  Alignment (tens of minutes)
  Between samples (<5 minutes)
  Between shifts (??)
High Throughput
– Dozens of samples per day
  Automated sample changers
  Sometimes unattended
– Multiple beamlines
Semi-Long-Term Storage
– 3 to 6 months
Low Processor Demand per Application
Low Latency -- True Real-Time Processing
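
The on-demand requirement above is essentially a small dispatcher that notices a finished scan and immediately pushes it into a reduction job. A minimal sketch, assuming a hypothetical RAW_ROOT staging directory, a "scan_complete" marker file, and a PBS-style qsub submission of a reduce_scan.sh script (none of these names come from the slides):

```python
# Minimal sketch of an on-demand reduction dispatcher (illustrative only).
# RAW_ROOT, the "scan_complete" marker, and reduce_scan.sh are assumptions.
import subprocess
import time
from pathlib import Path

RAW_ROOT = Path("/data/raw")   # assumed staging area where new scans appear
SUBMITTED = set()              # scans already dispatched in this session

def submit_reduction(scan_dir: Path) -> None:
    """Hand a completed scan to the cluster; the batch command is a placeholder."""
    subprocess.run(["qsub", "-v", f"SCAN={scan_dir}", "reduce_scan.sh"], check=True)

def poll_once() -> None:
    """Dispatch every finished scan that has not been submitted yet."""
    for scan_dir in RAW_ROOT.iterdir():
        if scan_dir.is_dir() and (scan_dir / "scan_complete").exists() \
                and scan_dir not in SUBMITTED:
            submit_reduction(scan_dir)
            SUBMITTED.add(scan_dir)

if __name__ == "__main__":
    while True:        # short polling interval keeps the reduction latency low
        poll_once()
        time.sleep(5)
```

Keeping the dispatcher this simple is deliberate: the cluster only burns cycles when a scan actually finishes, which is exactly the idle-time trade-off the slide calls out for larger clusters.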

Remote HPC Resources -- Why
Modeling applications are generally more demanding
– Trickier algorithms: more advanced math
– Scale to many more processors
– Need to be run many times (see the parameter-sweep sketch after this slide)
More need for remote collaboration
– Interpret results
– Compare with theory and with other results
Beyond APS resources
– It is all about efficient use of money
  Manpower
  Hardware
  Space
  Etc.
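
The "run many times" point is a classic parameter sweep: the same model evaluated over a grid of inputs, with each run independent of the others. A minimal sketch under assumed parameter names (strain, grain size) and a placeholder run_model() standing in for a real modeling code; on a remote cluster each grid point would typically become its own batch job:

```python
# Minimal sketch of the "run the model many times" pattern: fan a parameter grid
# out over local worker processes. run_model() and the parameter values are
# illustrative stand-ins, not an actual APS modeling code.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_model(params):
    """Stand-in for an expensive forward model; returns a mock figure of merit."""
    strain, grain_size = params
    return {"strain": strain, "grain_size": grain_size,
            "merit": strain * grain_size}   # placeholder arithmetic only

if __name__ == "__main__":
    grid = list(product([0.001, 0.002, 0.005],   # assumed strain values
                        [10, 50, 100]))          # assumed grain sizes (um)
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_model, grid))
    best = max(results, key=lambda r: r["merit"])
    print(f"evaluated {len(results)} parameter sets; best merit {best['merit']:.4f}")
```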

A Survey of Use Cases
Tomography
– TomoMPI
– Paraview/MCS solution
– DEJ_TextureAnalysis
– Local Tomography
– Laminography
3D X-Ray Diffraction Microscopy
– xdmmpi
– Near-Field Peak Finder
– ImageD11/Fable
– Grainspotter/Fable
– Box Scans
X-Ray Photon Correlation Spectroscopy
– xpcsmpi
X-Ray Micro-Diffraction
– Reconstruct
– Euler
– Rindex

Common Features of Our Use Cases
Need for HPC resources
– On-demand clusters
– Large-scale clusters
Most have a need for large data volumes (see the HDF5 sketch after this slide)
– Archival
– Transport
Most still need algorithm development
– Parallelism
– Optimization
– Robustness/Portability
Many are used by several unrelated scientific disciplines
– Open access
– Intuitive interfaces
– Tailored interfaces
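
Archival and transport both get easier when reduced results land in a single portable, self-describing file. A minimal sketch using HDF5 via h5py; the dataset path, chunking, metadata fields, and compression level are illustrative assumptions rather than an APS convention:

```python
# Minimal sketch: store one reduced dataset plus metadata in a portable HDF5 file.
# Group names, attributes, and array sizes are assumptions for illustration only.
import numpy as np
import h5py

reduced = np.random.random((180, 512, 512)).astype("float32")  # stand-in for reduced frames

with h5py.File("scan_0001_reduced.h5", "w") as f:
    dset = f.create_dataset(
        "entry/data/reduced",
        data=reduced,
        chunks=(1, 512, 512),   # one chunk per frame so single frames stream well
        compression="gzip",     # lossless compression to cut archive and transfer volume
        compression_opts=4,
    )
    dset.attrs["units"] = "arbitrary"
    f["entry"].attrs["sample"] = "sample_0001"   # hypothetical metadata field
```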

Where MCS Can Help...
HPC resources
– Help target appropriate systems
  How to find them
  How to develop for them
  How to generate proposals for them
– Help understand the management of HPC resources
  How to use HPC for on-demand computation
  How to tune for performance
  What to buy

Where MCS Can Help...
Dealing with the Data
– Archival solutions
  Central repository
  Nearline/offline storage
– Fast/reliable data transfer (see the checksum-verified copy sketch after this slide)
  HPC resources
  End users
  – Ethernet
  – Sneakernet
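
Whether the bytes move over Ethernet or by sneakernet, a transfer is only trustworthy if what arrives matches what left. A minimal sketch of a checksum-verified copy with purely illustrative source and archive paths; a production pipeline would layer the same check over whatever bulk-transfer tool is actually used:

```python
# Minimal sketch: copy a data file to an archive location and verify it by
# checksum before trusting the transfer. The paths are illustrative assumptions.
import hashlib
import shutil
from pathlib import Path

def sha256sum(path: Path, block_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB blocks so multi-GB detector files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            digest.update(block)
    return digest.hexdigest()

def verified_copy(src: Path, dst: Path) -> None:
    """Copy src to dst and raise if the bytes that landed do not match."""
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    if sha256sum(src) != sha256sum(dst):
        raise IOError(f"checksum mismatch after copying {src} -> {dst}")

if __name__ == "__main__":
    verified_copy(Path("/data/raw/scan_0001.h5"),   # assumed beamline path
                  Path("/archive/scan_0001.h5"))    # assumed repository path
```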

Where MCS Can Help...
Algorithm Development
– Help enable scientists to parallelize code (see the MPI sketch after this slide)
  More training sessions
  Assistance with initial parallelization
– Help with code optimization
  Maybe suitable codes already exist
  Maybe new routines need development
– Robustness/Portability
  Which libraries we should be using
  Which languages we should be using
  Which operating systems we should target
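
For most of the reduction codes above, the first parallelization step is the same: deal independent frames out across MPI ranks and gather the partial results. A minimal sketch with mpi4py; the frame count and the reduce_frame() body are illustrative stand-ins for a real per-frame reduction:

```python
# Minimal sketch of a first-pass MPI parallelization of a per-frame loop:
# frames are assigned round-robin by rank and partial results gathered on rank 0.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N_FRAMES = 1000   # assumed number of detector frames in one scan

def reduce_frame(i: int) -> float:
    """Stand-in for the real per-frame reduction (integration, correlation, ...)."""
    frame = np.random.default_rng(i).random((512, 512))
    return float(frame.sum())

# Each rank processes every size-th frame: embarrassingly parallel, no communication.
local_results = {i: reduce_frame(i) for i in range(rank, N_FRAMES, size)}

# Rank 0 collects the per-rank dictionaries and merges them back into frame order.
gathered = comm.gather(local_results, root=0)
if rank == 0:
    merged = {k: v for part in gathered for k, v in part.items()}
    print(f"reduced {len(merged)} frames on {size} ranks")
```

Run with, for example, `mpiexec -n 8 python reduce_frames.py`; load balancing, parallel I/O, and fault tolerance come later, but this is usually the pattern an initial training session would start from.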

Ways MCS Can Help...
User access
– Service-Oriented Architecture (see the web-service sketch after this slide)
  Hide complexity
  Intuitive interfaces
  Remote access
Collaboratory Experience
– Help users set up HPC software on their systems
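
One concrete reading of "hide complexity behind intuitive, remote interfaces" is a thin web service in front of the cluster: users post a scan identifier and poll for status, and never see the batch system. A minimal sketch using Flask, with a hypothetical /reconstructions endpoint and a placeholder submit_job() helper; nothing here reflects an actual APS or MCS service:

```python
# Minimal sketch of a service-oriented front end for job submission.
# The route names, payload fields, and submit_job() helper are assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)
JOBS = {}   # in-memory job table; a real service would persist this

def submit_job(scan_id: str) -> str:
    """Placeholder for handing the scan to the batch system; returns a job id."""
    job_id = f"job-{len(JOBS) + 1}"
    JOBS[job_id] = {"scan": scan_id, "status": "queued"}
    return job_id

@app.route("/reconstructions", methods=["POST"])
def start_reconstruction():
    scan_id = request.get_json(force=True)["scan_id"]
    return jsonify({"job_id": submit_job(scan_id)}), 202

@app.route("/reconstructions/<job_id>", methods=["GET"])
def job_status(job_id):
    job = JOBS.get(job_id)
    return (jsonify(job), 200) if job else (jsonify({"error": "unknown job"}), 404)

if __name__ == "__main__":
    app.run(port=8080)   # users interact over HTTP; HPC details stay behind the service
```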

Ways APS Can Help...
Look for joint MCS/APS LDRDs
Explore the possibility of APS providing operational funding for MCS
– New hardware
– R&D effort
Station an APS FTE in MCS
– Information exchange
– Provide MCS effort on projects of direct benefit to APS
Conduit to end users
– Collect new use cases
– Explore potential new funding opportunities
– Scheduling
  Meetings
  Training