SCI-BUS project Pre-kick-off meeting
University of Westminster, Centre for Parallel Computing
Tamas Kiss, Stephen Winter, Gabor Terstyanszky

University of Westminster in SCI-BUS
Leaders of SA3: Application and User Support Service
Major contribution to task JRA2.7: Blender rendering community gateway (together with Laurea University)

Centre for Parallel Computing
School of Electronics and Computer Science
Research in Cluster, Grid and Cloud computing
5 academic staff, 5 researchers, 6 PhD students
Well funded by European and UK research grants (nearly £1 million funding last year)
Main research focus:
- User-friendly interfaces/science gateways to Grids and Clouds
- Making Desktop and Service Grids interoperable (EDGeS, EDGI and DEGISCO projects)
- Grid workflow systems and their interoperability (SHIWA project)
- Application support: porting applications to clusters, Grids and Clouds (Westminster Grid Application Support Service)

The University of Westminster Local DG
Over 1500 Windows PCs from 6 different campuses
Lifecycle of a node (sketched in code below):
1. PCs primarily used by students/staff
2. If unused, switch to Desktop Grid mode
3. No more work from DG server -> shutdown (green solution)
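A minimal sketch of that lifecycle as a state machine; the state names and decision logic are illustrative assumptions, not the actual Westminster DG client:

```python
# Illustrative sketch of the node lifecycle above; state names and the
# decision logic are assumptions, not the actual Westminster DG client.
from enum import Enum, auto

class NodeState(Enum):
    IN_USE = auto()     # PC used by students/staff
    DG_MODE = auto()    # idle, donated to the Desktop Grid
    SHUTDOWN = auto()   # no more work from the DG server (green solution)

def next_state(user_active: bool, server_has_work: bool) -> NodeState:
    if user_active:                # students/staff always take priority
        return NodeState.IN_USE
    if server_has_work:            # otherwise compute for the DG
        return NodeState.DG_MODE
    return NodeState.SHUTDOWN      # nothing left to do: power off

print(next_state(user_active=False, server_has_work=True))  # NodeState.DG_MODE
```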

Service Grid Resources
96-node dedicated computing cluster (part of the UK National Grid Service)

The UK National Grid Service (NGS)
A stable, highly-available, production-quality Grid service for the UK research community
Core members: Manchester, CCLRC RAL, Oxford, Leeds, HPCx
Partner sites: Cardiff, Lancaster, Westminster, Queen's University of Belfast, University of Glasgow
+17 affiliate sites
Resources: data clusters, compute clusters, supercomputer

NGS P-GRADE GEMLCA Portal
User-friendly access to NGS resources
Portal website: portal.cpc.wmin.ac.uk
Operated by the University of Westminster as NGS Partner Site

W-GRASS: Westminster GRid Application Support Service
Supporting users and application developers to run their applications on the Grid:
- developing the Grid application
- deploying it on the Grid
(Diagram: W-GRASS fills the gap between application requirements and the user's own domain expertise, knowledge and effort.)

Rendering Portal Service for the Blender User Community

What is Rendering?
Rendering:
- the process of generating an image from a model by means of computer programs
- the process of calculating effects in a video editing file to produce final video output
Model: a description of three-dimensional objects in a strictly defined language or data structure, including geometry, viewpoint, texture, lighting and shading information, etc.

The rendering problem
Rendering images/animations on stand-alone PCs is very time consuming: even relatively short rendering tasks can run for hours or days.
How to speed up the rendering process? Parallelise the task and run it on computer farms instead of stand-alone PCs: frames can be rendered independently, as the sketch below illustrates.
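Because frames are independent, frame-level parallelism is straightforward. A minimal sketch, assuming a hypothetical scene.blend file and Blender's command-line interface; on a real farm each call would be a separate grid job rather than a local worker thread:

```python
# Sketch: render every frame as an independent task via Blender's CLI.
# scene.blend, the frame range and the worker count are all illustrative.
import subprocess
from concurrent.futures import ThreadPoolExecutor

BLEND_FILE = "scene.blend"   # hypothetical input file
FIRST, LAST = 1, 250         # illustrative frame range

def render_frame(frame: int) -> int:
    # "blender -b <file> -o <pattern> -f <n>" renders one frame in background mode
    cmd = ["blender", "-b", BLEND_FILE, "-o", "//frame_####", "-f", str(frame)]
    return subprocess.call(cmd)

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=4) as pool:   # stands in for farm nodes
        codes = list(pool.map(render_frame, range(FIRST, LAST + 1)))
    print(f"{codes.count(0)}/{len(codes)} frames rendered successfully")
```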

What is Blender?
- 3D graphics application that can be used for rendering (amongst other tasks)
- Open source: GNU GPL licence
- Large community behind it
- Actively developed under the supervision of the Blender Foundation
- Linux, Mac OS and Windows versions

Rendering portal implementation
Based on P-GRADE portal 2.5 (latest release at the time)
A totally stripped-down version with a very simple user interface:
- No user certificate management: free access
- No settings or customisations
- Compute resources are completely hidden from the user
- No workflow editor: specific workflows are created automatically (see the sketch below)
Functionality:
- Create a new rendering task: upload input file
- Submit rendering workflow
- Download results
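A sketch of the idea behind automatic workflow creation: turn a task name, input file and frame range into a list of independent jobs. The RenderJob structure and chunking parameter are hypothetical, not the actual P-GRADE workflow format:

```python
# Hypothetical job description; the real portal generates P-GRADE workflows.
from dataclasses import dataclass

@dataclass
class RenderJob:
    name: str
    blend_file: str
    first_frame: int
    last_frame: int

def build_workflow(task: str, blend_file: str,
                   first: int, last: int, chunk: int = 10) -> list:
    """Split the frame range into chunks and create one job per chunk."""
    jobs = []
    for start in range(first, last + 1, chunk):
        end = min(start + chunk - 1, last)
        jobs.append(RenderJob(f"{task}_{start}_{end}", blend_file, start, end))
    return jobs

# Example: a 250-frame animation becomes 25 independent 10-frame jobs
for job in build_workflow("demo", "scene.blend", 1, 250):
    print(job.name, job.first_frame, job.last_frame)
```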

Rendering portal functionalities
Creating a new rendering job:
- give it a unique name and define the frames to be rendered
- select the Blender input file
- the workflow is created by pressing a single button

Rendering portal functionalities
Executing the workflow and downloading the results

Some statistics
December 2008 – September:
- registered users
- 9000 workflows
- 464,000 frames rendered
- 28,000 CPU hours (~1167 CPU days)
- 95 GB of input files, 180 GB of results
Job submission on the portal is currently suspended to allow a major upgrade.

Issues addressed: dealing with the “tail” problem in Desktop Grids
The problem: a whole workflow can be delayed by a few late jobs (due to job suspensions/interruptions); we term these late jobs the “tail”.
The solution: reduce the workflow makespan by replicating late jobs on the dedicated cluster, as sketched below.
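A minimal sketch of this tail-mitigation heuristic: once most jobs have finished, replicate the remaining stragglers on the cluster and accept whichever copy finishes first. The threshold and the submit callback are illustrative assumptions:

```python
# Sketch of tail mitigation; the threshold and callback are illustrative.
def mitigate_tail(job_states: dict, submit_to_cluster, done_fraction: float = 0.9):
    """job_states maps job id -> 'done' or 'running'."""
    finished = sum(1 for s in job_states.values() if s == "done")
    if finished / len(job_states) < done_fraction:
        return []                      # still in the bulk phase: do nothing
    stragglers = [j for j, s in job_states.items() if s == "running"]
    for job_id in stragglers:
        submit_to_cluster(job_id)      # replica races the desktop-grid copy
    return stragglers

# Example: 9 of 10 jobs are done, so the last one is replicated on the cluster
states = {f"frame_{i}": "done" for i in range(9)}
states["frame_9"] = "running"
mitigate_tail(states, lambda j: print("replicating", j))
```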

Issues addressed: security and memory demands
The problem:
- Blender can execute potentially malicious Python scripts which can be embedded within input (.blend) files.
- Users typically requested huge memory resources on cluster nodes.

Virtualizing the DG
The solution: sandboxing. Run Blender on the Desktop Grid inside a system Virtual Machine (VM). Assign the guest OS a fixed amount of physical memory from the host and as much swap as desired (using virtual disks). Implemented as a combination of BOINC and VirtualBox; a configuration sketch follows.
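A sketch of such a VM configuration using VirtualBox's VBoxManage tool: cap the guest's physical memory and attach a virtual disk to serve as swap inside the guest. The VM name, sizes and wiring are illustrative assumptions; real BOINC/VirtualBox deployments use the BOINC VirtualBox wrapper rather than raw calls like these:

```python
# Sketch: configure a sandbox VM with a memory cap and a swap disk.
# Names and sizes are illustrative; production BOINC setups use the
# BOINC VirtualBox wrapper application instead of direct VBoxManage calls.
import subprocess

def vbox(*args: str) -> None:
    subprocess.run(["VBoxManage", *args], check=True)

vbox("createvm", "--name", "blender-sandbox", "--register")
vbox("modifyvm", "blender-sandbox", "--memory", "512")      # 512 MB from the host
vbox("createmedium", "disk", "--filename", "swap.vdi",
     "--size", "8192")                                      # 8 GB virtual swap disk
vbox("storagectl", "blender-sandbox", "--name", "SATA", "--add", "sata")
vbox("storageattach", "blender-sandbox", "--storagectl", "SATA",
     "--port", "1", "--device", "0", "--type", "hdd", "--medium", "swap.vdi")
```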