BOSCO Architecture
Derek Weitzel, University of Nebraska – Lincoln



May 1, 2013

Goals
- We want an easy-to-use method for users to do computational research
- It should be easy to install, use, and maintain
- It should be simple for the user

Methods
- Use what's already at clusters: their identity management and their access methods
- Present a consistent interface to users
- If demand increases, expand organically, cluster to cluster

User Scenario 1
What they have:
- A computer
- Access to one cluster
- Processing for their research
What they want:
- Simple job submission / management
- Their processing to be completed… now!

Bosco Working
[Three diagram slides illustrating Bosco's job flow in Scenario 1]

Technology
- Uses HTCondor job submission for user jobs
- Uses SSH to connect to clusters
- Uses gLite's BLAHP to interface with the cluster scheduler
- Auto-detects the remote cluster OS and installs the appropriate BOSCO version
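As a sketch of the SSH/BLAHP path above: once a cluster has been registered (e.g. `bosco_cluster --add jdoe@cluster.example.edu pbs`, where the user and host name are hypothetical), a job can be described with an ordinary HTCondor grid-universe submit file:

```
# Illustrative submit description file; the executable name and
# cluster address are assumptions, not taken from the slides.
universe      = grid
grid_resource = batch pbs jdoe@cluster.example.edu
executable    = analyze.sh
arguments     = input.dat
output        = job.out
error         = job.err
log           = job.log
transfer_executable = true
queue
```

Submitting this with `condor_submit` hands the job to HTCondor, which uses SSH and the BLAHP to translate it into the equivalent `qsub` on the remote PBS scheduler.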

User Benefits from BOSCO
1. Throttled submission to the remote cluster (limit automatically detected)
2. Job data transferred back to the local computer after job completion
3. No need to care about the remote OS version (automatically detected; matching BOSCO installed)

User Scenario 2
What they have:
- A computer
- Access to one (or more) clusters
- Processing for their research
What they want:
- Simple job submission / management
- Their processing to be completed… now!

Bosco Working
[Two diagram slides illustrating glidein-based job flow in Scenario 2]

Technology
Everything as before, plus:
- Submits glideins to remote clusters
- Glideins are dynamic HTCondor worker nodes
- Provides a consistent interface for user jobs
- Full output transferred back
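With glideins in place, user jobs need no grid-specific syntax; a plain vanilla-universe submit file is enough, and HTCondor matches the jobs to whichever glidein slots report in. A minimal sketch (file names are hypothetical):

```
# Illustrative vanilla-universe submit file; file names are assumptions.
universe   = vanilla
executable = analyze.sh
arguments  = $(Process)
output     = out.$(Process)
error      = err.$(Process)
log        = jobs.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue 100
```

The same file works whether the glideins came from one cluster or several, which is what gives users the consistent interface described above.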

User Benefits from BOSCO 2
1. Throttled submission to the remote cluster (limit automatically detected)
2. Job data transferred back to the local computer after job completion
3. No need to care about the remote OS version
4. Transparent multi-cluster load balancing
5. Consistent interface to worker nodes
6. Ability to flock to remote HTCondor clusters

Job Throttling
- Detects and throttles submitted jobs
- Detects the number of jobs that can be submitted to a PBS cluster
- Uses HTCondor to throttle the number of jobs submitted to that cluster
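HTCondor's grid manager exposes a configuration knob for this kind of per-cluster limit; a minimal sketch of the idea (the macro is real HTCondor configuration, but the value here is only an example, not the detected limit):

```
# condor_config snippet: cap the number of jobs the gridmanager
# will keep submitted to any one remote resource (example value)
GRIDMANAGER_MAX_SUBMITTED_JOBS_PER_RESOURCE = 100
```

Jobs beyond the cap simply wait in the local HTCondor queue, so the remote PBS scheduler never sees more than it can accept.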

Mobile
- You can suspend your laptop and Bosco will survive
- When Bosco starts back up, it resumes checking the status of the jobs
- Jobs can be queued while offline and submitted when the network is available again

Job Data Transferred
- Job data is transferred to and from the submit host
- The submit host could be a laptop: output data is transferred back to the researcher's home!
- Important if further analysis is needed on the data

Multi-Cluster
- Glideins are submitted to multiple clusters at once
- Jobs are load balanced between the clusters
- Clusters can be spread across institutional boundaries
  Example: clusters at Nebraska and Wisconsin

Multi-OS Support
- Remote OS detected at install time
- Matching BOSCO version installed from the 'cloud'
- All OSes can communicate with each other through the GAHP protocol

Requirements
Requirements on clusters are limited:
- Running PBS, LSF, HTCondor, or SGE
- Shared home file system
- Outbound internet connectivity
Requirements on the submit host:
- For Scenario 1: none
- For Scenario 2: a public IP address and one open port (11000)

Compatibility
- Tested by the Pegasus team to be compatible
- Can use DAGMan workflow management
- If it can run on HTCondor, it can run on BOSCO
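A DAGMan workflow over Bosco-managed resources looks the same as any other HTCondor DAG; a minimal two-node sketch (file and job names are hypothetical):

```
# workflow.dag -- submit with: condor_submit_dag workflow.dag
JOB    prepare  prepare.sub
JOB    analyze  analyze.sub
PARENT prepare CHILD analyze
```

Each `.sub` file is an ordinary submit description, so the DAG's nodes can target Bosco clusters without any workflow-level changes.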

Benefits
Joe the Biologist benefits:
- Simple access to campus clusters from a laptop
- Cluster configuration is transparent
Power-user benefits:
- Built-in pilot factory for load balancing across multiple clusters

Future
- We've found users don't want to see HTCondor; they want to see MATLAB, R, Galaxy, …
- Bosco is now integrating directly with software projects
- Starting with R, specifically the GridR package

Where to learn more?
- Visit bosco.opensciencegrid.org
- Bosco is integrated into HTCondor releases: if you have HTCondor, you have Bosco!