
WebSphere XD Compute Grid: Infrastructure Topology Considerations
Snehal S. Antani, WebSphere XD Technical Lead, SOA Technology Practice, IBM Software
© 2007 IBM Corporation

Overview
• Key Components of WebSphere XD Compute Grid
 – Job Scheduler [formerly named the Long Running Scheduler (LRS)]
 – Parallel Job Manager (PJM)
 – Grid Endpoints [formerly named the Long Running Execution Environment (LREE)]
• Topology Considerations and Architectures

XD Compute Grid Components
• Job Scheduler (JS)
 – The job entry point to XD Compute Grid
 – Job life-cycle management (Submit, Stop, Cancel, etc.) and monitoring
 – Dispatches workload to either the PJM or a Grid Endpoint (GEE)
 – Hosts the Job Management Console (JMC)
• Parallel Job Manager (PJM)
 – Breaks large batch jobs into smaller partitions for parallel execution
 – Provides job life-cycle management (Submit, Stop, Cancel, Restart) for the single logical job and each of its partitions
 – Is *not* a required component in Compute Grid
• Grid Endpoints (GEE)
 – Execute the actual business logic of the batch job
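To make this division of labor concrete, here is a minimal sketch of the dispatch rule in plain Java. Every name in it (GridJob, JobScheduler, runOnEndpoint) is hypothetical and merely stands in for Compute Grid internals: a job that declares multiple partitions is routed through the PJM, while an ordinary job goes straight to a grid endpoint.

```java
/** Hypothetical model of a submitted batch job (not the real Compute Grid API). */
class GridJob {
    final String name;
    final int partitions; // > 1 means the job is highly parallel

    GridJob(String name, int partitions) {
        this.name = name;
        this.partitions = partitions;
    }
}

/** Hypothetical Job Scheduler: the single entry point for job submission. */
class JobScheduler {
    /** Parallel jobs go through the PJM; everything else goes straight to a GEE. */
    void submit(GridJob job) {
        if (job.partitions > 1) {
            System.out.println("JS -> PJM: splitting " + job.name
                    + " into " + job.partitions + " partitions");
            for (int i = 0; i < job.partitions; i++) {
                runOnEndpoint(job.name + "#" + i);
            }
        } else {
            System.out.println("JS -> GEE: dispatching " + job.name);
            runOnEndpoint(job.name);
        }
    }

    /** Stand-in for sending a unit of work to a Grid Endpoint (GEE). */
    private void runOnEndpoint(String unitOfWork) {
        System.out.println("  GEE executing " + unitOfWork);
    }
}

public class DispatchSketch {
    public static void main(String[] args) {
        JobScheduler js = new JobScheduler();
        js.submit(new GridJob("nightly-statements", 4)); // routed via the PJM
        js.submit(new GridJob("small-cleanup", 1));      // straight to a GEE
    }
}
```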

XD Compute Grid Components
[Diagram: users submit jobs through a load balancer via EJB, Web Service, JMS, the command line, or the Job Console; the Job Scheduler (JS) and the Parallel Job Manager (PJM) each run in a WebSphere Application Server (WAS) instance and dispatch work to GEEs, which also run in WAS.]
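Of the submission paths shown in the diagram, the JMS path can be sketched with the standard javax.jms API. This is an assumption-laden illustration, not the documented Compute Grid submission protocol: the JNDI names (jms/SchedulerCF, jms/JobSubmissionQueue) and the choice to carry the job's xJCL definition as message text are hypothetical.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class JmsSubmitSketch {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();

        // JNDI names are hypothetical; use whatever your cell actually defines.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/SchedulerCF");
        Queue queue = (Queue) ctx.lookup("jms/JobSubmissionQueue");

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);

            // Assumed payload: the job's xJCL document sent as message text.
            TextMessage msg = session.createTextMessage(readXjcl("myJob.xml"));
            producer.send(msg);
        } finally {
            conn.close();
        }
    }

    /** Helper: load the xJCL job definition from disk. */
    private static String readXjcl(String path) throws java.io.IOException {
        return new String(java.nio.file.Files.readAllBytes(
                java.nio.file.Paths.get(path)));
    }
}
```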

Topology Questions…
• First, is the Parallel Job Manager (PJM) needed? That is, will you run highly parallel jobs?
• What are the high-availability requirements for the JS, PJM, and GEE?
 – Five 9's? Continuous availability?
• What are the scalability requirements for the JS, PJM, and GEE?
 – Are workloads predictable and system resources static?
 – Can workloads fluctuate, with system resources needed on demand?
• What are the performance requirements for the batch jobs themselves?
 – Must they complete within some constrained time window?
• What will the workload be on the system?
 – How many concurrent jobs? How many highly parallel jobs? What is the job submission rate?

Topology Considerations…
• If the Job Scheduler (JS) does not have system resources available when under load, managing jobs, monitoring jobs, and using the JMC will be impacted.
• If the PJM does not have system resources available when under load, managing highly parallel jobs and monitoring the job partitions will be impacted.
• If the GEE does not have system resources available when under load, the execution time of the business logic will be impacted.
• The most available and scalable production environment will have:
 – Redundant Job Schedulers, clustered across two datacenters
 – Redundant PJMs, clustered across two datacenters
 – n GEEs, where n is f(workload goals), clustered across two datacenters (see the sizing sketch below)
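As one concrete reading of "n is f(workload goals)", here is a minimal Java sketch under stated assumptions: each GEE handles a fixed number of concurrent jobs, the result is rounded up, and a floor of two GEEs per datacenter preserves redundancy. The capacity numbers are illustrative, not Compute Grid guidance.

```java
public class GeeSizingSketch {

    /**
     * Hypothetical f(workload goals): how many GEEs to run.
     *
     * @param peakConcurrentJobs expected peak number of jobs running at once
     * @param jobsPerGee         concurrent jobs one GEE can handle (assumed)
     * @param datacenters        number of datacenters the cluster spans
     */
    static int geesNeeded(int peakConcurrentJobs, int jobsPerGee, int datacenters) {
        // Round up: 25 jobs at 8 jobs/GEE needs 4 GEEs, not 3.
        int forThroughput = (peakConcurrentJobs + jobsPerGee - 1) / jobsPerGee;
        // Redundancy floor: at least 2 GEEs in each datacenter.
        int forRedundancy = 2 * datacenters;
        return Math.max(forThroughput, forRedundancy);
    }

    public static void main(String[] args) {
        // Illustrative numbers only.
        System.out.println(geesNeeded(25, 8, 2));  // redundancy and throughput tie: 4
        System.out.println(geesNeeded(100, 8, 2)); // throughput dominates: 13
    }
}
```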

Cost Considerations…
• The GEE will most likely require the most CPU resources. The total number of CPUs needed depends on:
 – the workload goals
 – the maximum number of concurrent jobs in the system
• The PJM will require fewer CPUs than the GEE. The total number of CPUs needed depends on:
 – the rate at which highly parallel jobs are submitted
 – the maximum number of concurrent parallel partitions running in the system
• The Job Scheduler will require fewer CPU resources than the GEE, and perhaps the PJM too. The total number of CPUs needed depends on:
 – the rate at which jobs will be submitted
 – the maximum number of concurrent jobs in the system
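The slide's two inputs, submission rate and concurrency, combine naturally via Little's law: average jobs in flight ≈ submission rate × average job duration. The sketch below turns that into a rough per-tier CPU estimate; the per-job CPU cost and the target utilization are assumptions for illustration, not measured Compute Grid figures.

```java
public class CpuSizingSketch {

    /**
     * Rough CPU estimate for one tier (JS, PJM, or GEE).
     *
     * @param arrivalRatePerSec jobs (or partitions) submitted per second
     * @param avgDurationSec    average time one unit of work runs
     * @param cpuPerUnit        CPUs consumed by one running unit (assumed)
     * @param targetUtilization keep CPUs below this fraction busy, e.g. 0.6
     */
    static double cpusNeeded(double arrivalRatePerSec, double avgDurationSec,
                             double cpuPerUnit, double targetUtilization) {
        // Little's law: average units in flight = arrival rate * duration.
        double concurrentUnits = arrivalRatePerSec * avgDurationSec;
        return (concurrentUnits * cpuPerUnit) / targetUtilization;
    }

    public static void main(String[] args) {
        // Illustrative numbers only: 0.5 jobs/s, 120 s jobs, 60% utilization target.
        double geeCpus = cpusNeeded(0.5, 120, 0.5, 0.6);  // business logic is CPU-heavy
        double jsCpus = cpusNeeded(0.5, 120, 0.02, 0.6);  // scheduling overhead is light
        System.out.printf("GEE tier: %.1f CPUs, JS tier: %.1f CPUs%n", geeCpus, jsCpus);
    }
}
```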

Example Production Topology: Highly Available/Scalable Compute Grid
[Diagram: two frames behind a load balancer, sharing a common database (DB); each frame contains separate LPARs, each with its own CPUs, hosting a JS JVM, a PJM JVM, and a GEE JVM.]

Example Production Topology: Co-locate the Job Scheduler and PJM
[Diagram: two frames behind a load balancer, sharing a common DB; each frame has a GEE JVM in its own LPAR, plus one LPAR whose single JVM hosts both the JS and the PJM.]
Pro: Faster interaction between the JS and PJM due to co-location and ejb-local-home optimizations.
Con: Possibility of starving the JS or PJM due to workload fluctuations.
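The "ejb-local-home optimization" refers to EJB 2.x local interfaces: when caller and bean share a JVM, the local home passes arguments by reference and skips the RMI/IIOP marshalling that remote homes require. Below is a minimal sketch of the two lookup styles; the JNDI names and the PjmHome/PjmLocalHome interfaces are hypothetical stand-ins, not Compute Grid internals.

```java
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

public class LocalHomeSketch {

    // Hypothetical EJB 2.x home interfaces standing in for the PJM's internals.
    public interface PjmLocalHome extends javax.ejb.EJBLocalHome { /* create() ... */ }
    public interface PjmHome extends javax.ejb.EJBHome { /* create() ... */ }

    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();

        // Co-located (same JVM): local home, pass-by-reference, no marshalling.
        PjmLocalHome local = (PjmLocalHome) ctx.lookup("java:comp/env/ejb/PjmLocal");

        // Separate JVMs: remote home, RMI/IIOP narrow() plus argument serialization.
        Object ref = ctx.lookup("java:comp/env/ejb/PjmRemote");
        PjmHome remote = (PjmHome) PortableRemoteObject.narrow(ref, PjmHome.class);

        System.out.println("local home: " + local + ", remote home: " + remote);
    }
}
```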

Example Production Topology: Co-locate the Job Scheduler, PJM, and GEE
[Diagram: two frames behind a load balancer, sharing a common DB; in each frame a single LPAR's JVM hosts the JS, the PJM, and a GEE together.]
Con: Possibility of starving the JS, PJM, and GEE due to workload fluctuations.
Con: Not scalable.

WebSphere XD Compute Grid: Infrastructure Topology Considerations
© 2007 IBM Corporation