NGS computation services: APIs and Parallel Jobs

Presentation transcript:

NGS computation services: APIs and Parallel Jobs
Richard Hopkins
NGS Induction – Rutherford Appleton Laboratory, 2nd/3rd November 2005

NGS Induction, RAL 2nd/3rd Nov 2005 – Computation Services – Richard Hopkins

Slide 2: Policy for re-use
This presentation can be re-used, in part or in whole, provided its sources are acknowledged. However, if you re-use a substantial part of it, please inform us: we need to gather statistics on re-use (number of events and number of people trained). Thank you!

Slide 3: Overview
- The C and Java APIs to the low-level tools
- Using multiple processors

Slide 4: Job submission so far
[Diagram: users work at an interface to the grid and drive GLOBUS etc. through command-line interfaces.]

Slide 5: Application-specific tools
[Diagram: as before, with application-specific and/or higher generic tools (e.g. BRIDGES) layered between the users and the command-line interfaces to GLOBUS etc.]

Slide 6: APIs
[Diagram: the same stack, with APIs (Java, C, …) alongside the command-line interfaces as a further route from users and higher tools down to GLOBUS etc.]

Slide 7: Available APIs
- C (see the Globus API reference pages)
- Commodity Grid (CoG) kits: Java, Python, Matlab (very limited functionality on Windows – no GSI)

Slide 8: Non-communicating processes
[Diagram: from the UI, across the Internet, globus-job-submit sends jobs to the head processors of clusters, which run them on worker processors.]
Processes run without any communication between them.

Slide 9: Communicating processes
[Diagram: as above, with globus-job-submit launching the processes onto one cluster.]
Processes send messages to each other – they must run on the same cluster.

Slide 10: Communicating processes
[Diagram: the same picture, with MPI carrying the messages between the worker processors.]
Processes send messages to each other – they must run on the same cluster.
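On the NGS the messages between communicating processes would be carried by MPI from C or Fortran, but the pattern itself can be sketched in plain Python with the standard multiprocessing module (an illustration only, not NGS code; the function names are ours):

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # The worker receives a list, doubles each element, and replies --
    # analogous to an MPI rank doing a receive, some computation, then a send.
    data = conn.recv()
    conn.send([x * 2 for x in data])
    conn.close()

def exchange(data):
    """Master side: send work to one worker process and collect the reply."""
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send(data)       # analogous to MPI_Send
    result = parent_conn.recv()  # analogous to MPI_Recv
    p.join()
    return result

if __name__ == "__main__":
    print(exchange([1, 2, 3]))  # -> [2, 4, 6]
```

As on the slide, both sides must agree on who sends and who receives first; reversing the order on one side only is exactly the kind of deadlock MPI programmers must design around.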

Slide 11: Modes of parallelism
- Communicating processes: on the NGS, you run one globus-job-submit command – but you need to code and build your program so it is parallelised (MPI).
- Non-communicating processes: on the NGS, multiple executables are run from a script on the UI.
The NGS nodes open these routes to you – but you have to do a bit of work! (The Grid is not magic!)
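The second route – a script on the UI fanning independent jobs out to several clusters – can be sketched as follows. The host names and executable path are hypothetical, and the sketch only builds and prints the globus-job-submit command lines (a dry run), so it runs anywhere:

```python
# Sketch of the "non-communicating processes" route: one independent
# globus-job-submit call per cluster head node.  Hosts are hypothetical.
HOSTS = [
    "grid-compute.example.ac.uk/jobmanager-pbs",
    "grid-data.example.ac.uk/jobmanager-pbs",
]

def build_submit_commands(hosts, executable, args=()):
    """Return one globus-job-submit command line per head node."""
    tail = " ".join(args)
    return [
        f"globus-job-submit {host} {executable}" + (f" {tail}" if tail else "")
        for host in hosts
    ]

if __name__ == "__main__":
    for cmd in build_submit_commands(HOSTS, "/bin/hostname"):
        # A real UI script would execute each command (e.g. with
        # subprocess.run(cmd.split())); here we just show them.
        print(cmd)
```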

Slide 12: MPI notes
How could the task be split into sub-tasks?
- By functions that could run in parallel? (unusual)
- By sending different subsets of the data to different processes? (more usual) – note the overheads of scatter and gather
Need to design and code carefully; be alert to:
- sequential parts of your program (if half your runtime is sequential, speedup will never be more than 2)
- how load can be balanced (64 processes with 65 tasks will achieve no speedup over 33 processes)
- deadlock!
MPI functions are usually invoked from C or Fortran programs, but also from Java. Several example patterns are given in the practical. Many MPI tutorials are on the Web!
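Both warnings on this slide can be checked with a little arithmetic (a sketch; the helper names are ours): Amdahl's law bounds the speedup by the sequential fraction, and with 65 equal tasks, 64 workers need the same two rounds of work as 33 workers do.

```python
import math

def amdahl_speedup(seq_fraction, procs):
    """Amdahl's law: speedup when a fixed fraction of the runtime is sequential."""
    return 1.0 / (seq_fraction + (1.0 - seq_fraction) / procs)

def rounds(tasks, procs):
    """Rounds needed when each process handles one equal-sized task per round."""
    return math.ceil(tasks / procs)

if __name__ == "__main__":
    # Half the runtime sequential: speedup approaches 2 but never exceeds it.
    print(round(amdahl_speedup(0.5, 1_000_000), 3))  # -> 2.0
    # 65 tasks: 64 processes and 33 processes both need 2 rounds.
    print(rounds(65, 64), rounds(65, 33))            # -> 2 2
```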

Slide 13: Practical
1. C API example
2. Java API usage
3. Concurrent processing
4. MPI example
Follow link from