Performance Prediction of Client-Server Systems by High-Level Abstraction Models
Copyright © Siemens AG 2006. All rights reserved. Corporate Technology


Performance Prediction of Client-Server Systems by High-Level Abstraction Models
SEC(R) 2008
Presented by Alexander Pastsyak and Yana Rebrova
Copyright © Siemens AG. All rights reserved. Corporate Technology

Outline
- Motivation
- Formalisms to describe system performance
  - Layered Queuing Networks (LQN)
  - Performance Evaluation Process Algebra (PEPA)
  - Queuing Petri Nets (QPN)
- Architecture of the Test System
- Experiments
- Results
- Conclusion

Motivation
Real systems are composed from many components: web server, application server, database, queues. Software performance models abstract such components as services with queues (e.g. a two-place buffer Buf2 with in/out transitions, or a database service behind a queue).

What do we want to do?
- What kind of predictions can be obtained from the models for response time and throughput characteristics?
- What is the method to calibrate the models?
- How can the different system architecture entities be mapped to model elements, i.e. how are the models built?

Used formalisms and tools (selection criteria: actuality and availability)
Formalisms:
- Stochastic Process Algebras (SPA)
- Stochastic Petri Nets (SPN)
- Layered Queuing Networks (LQN)
Tools:
- QPN Modeling Environment (QPME)
- PEPA Workbench
- LQNSolver

Formalisms: Layered Queuing Networks
A queuing network models a client sending requests to a server with a request-processing queue. A layered queuing network extends this: Server1 processes Request1 from the client and itself issues Request2 to Server2, so servers can in turn act as clients of lower layers.
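A single queueing station of the kind shown above can be analyzed in closed form. The following sketch is not part of the original slides; it uses the standard M/M/1 formulas to illustrate how response time follows from arrival and service rates:

```python
def mm1_metrics(lam: float, mu: float) -> dict:
    """Closed-form metrics for an M/M/1 queue.

    lam: arrival rate (requests/sec); mu: service rate (requests/sec).
    The queue is stable only when lam < mu.
    """
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu                # server utilization
    n = rho / (1.0 - rho)         # mean number of requests in the system
    r = 1.0 / (mu - lam)          # mean response time (Little's law: n = lam * r)
    return {"utilization": rho, "mean_in_system": n, "response_time": r}

# Example: 2 req/s arriving at a server that can process 5 req/s.
m = mm1_metrics(2.0, 5.0)   # utilization 0.4, response time 1/3 s
```

Note how the response time grows without bound as the arrival rate approaches the service rate, which is exactly the saturation behavior the later experiments probe.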

LQN Model Building Blocks
- Processors: used by activities within a performance model to consume time.
- Tasks: represent software resources in layered queuing networks.
- Entries: service requests and are used to differentiate the services provided by a task.
- Activities: the lowest level of specification in the performance model.
- Requests: service requests from one task to another.
- Precedence: used to connect activities within a task to form an activity graph.

LQN Model Example
[Diagram: an LQN with users {4} (think time Z = 3) on workstations {4}, an infinite-server network (net_1, net_2: 0.001), server_1 (10) and server_2 (22), disks disk_1 and disk_2 (0.01 each), and a printer (100); bracketed numbers are service demands.]

Formalisms: Layered Queuing Networks
Advantages:
- LQN is the most powerful of the considered formalisms for describing client-server systems.
- Packages for both direct solution and simulation are available.
Disadvantages:
- Models are not suitable for formal verification.
- Lack of a good model editor with a graphical user interface.

Formalisms: Queuing Petri Nets
A Queuing Petri Net combines a generalized colored Petri net (tokens, places, immediate and timed transitions, e.g. T_Enter and T_Service) with queuing places, each consisting of a queue and a depository.

QPN Model Building Blocks
- Tokens: the "value" or "state" of a place.
- Places:
  - Ordinary places: tokens fired onto such a place are immediately available to the corresponding output transitions.
  - Queuing places: tokens are inserted into the queue; after completing its service, a token is immediately moved to the depository, where it becomes available to the output transitions of the place.
- Transitions (change the number of tokens in places):
  - Immediate transitions: fire without any delay, in zero time.
  - Timed transitions: an enabled timed transition fires after a certain delay.
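The token-moving semantics above can be sketched for ordinary (untimed) places. This is an illustrative minimal token game, not QPME code; the queueing/depository mechanics and timed firing are deliberately omitted, and the place names are hypothetical:

```python
# Marking: tokens per place; a transition lists input/output token counts per place.
def enabled(marking, transition):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Fire an enabled transition: consume input tokens, produce output tokens."""
    if not enabled(marking, transition):
        raise ValueError("transition is not enabled")
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Illustrative net: a client token enters service, then returns on completion.
t_enter = {"in": {"clients": 1}, "out": {"service": 1}}
t_done  = {"in": {"service": 1}, "out": {"clients": 1}}
m0 = {"clients": 2, "service": 0}
m1 = fire(m0, t_enter)   # one client now in service
```

A queuing place would additionally delay the token inside the queue before exposing it to `t_done`; that scheduling is what the QPME simulator adds on top of this basic firing rule.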

QPN Model Example
[Diagram: clients (queuing place) issue requests through an immediate transition to Web Server 1 (queuing place), which takes tokens from an ordinary place modeling the DB connections pool before requests proceed to the DB Server (queuing place).]

Formalisms: Queuing Petri Nets
Advantages:
- Easy to model distributed client-server systems.
- The QPME tool offers a convenient graphical user interface for model editing.
- Allows handling several request types in the same model.
Disadvantages:
- Analysis suffers from the state-space explosion problem, which limits the size of the models that can be analyzed.
- The QPME tool is under development, and some problems occur in the analysis package.
- Difficult to model synchronous requests.
- Only the simulation technique is currently available in the QPME tool.
- Models are not suitable for formal verification.

Formalisms: Performance Evaluation Process Algebra
PEPA is a stochastic process algebra in which every activity duration is exponentially distributed. An activity is called shared if several components synchronize over it; the rate of a shared activity is defined by the cooperation with the other component (e.g. processes P1 and P2 cooperating over activity (a, r)).

PEPA Model Building Blocks
Components carry out activities. Each activity is characterized by an action type α and a duration which is exponentially distributed with rate r; this is written as the pair (α, r). Combinators allow building expressions that define the behavior of components via activities:

Combinator   | Syntax
Prefix       | (α, r).P
Choice       | P1 + P2
Cooperation  | P1 <L> P2  (synchronize over the action types in set L)
Hiding       | P / L
Constant     | A = P  (definition)
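Because every activity is exponentially timed, a PEPA model induces a continuous-time Markov chain whose steady state yields the performance measures. As a minimal illustration (not the PEPA Workbench algorithm), the sketch below solves the steady state of a two-state client `Client = (think, 0.5).(request, 2.0).Client`; the rates are hypothetical:

```python
def ctmc_steady_state(Q):
    """Steady-state distribution pi of a CTMC with generator matrix Q:
    solves pi @ Q = 0 with sum(pi) = 1 by Gaussian elimination."""
    n = len(Q)
    # Transpose Q so each row is one balance equation; replace the last
    # (redundant) equation with the normalization constraint sum(pi) = 1.
    A = [[Q[j][i] for j in range(n)] for i in range(n)]
    b = [0.0] * n
    A[n - 1] = [1.0] * n
    b[n - 1] = 1.0
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n                             # back substitution
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

# States: 0 = thinking, 1 = requesting.
r_think, r_req = 0.5, 2.0
Q = [[-r_think, r_think],
     [r_req,  -r_req]]
pi = ctmc_steady_state(Q)        # [0.8, 0.2]
throughput = pi[1] * r_req       # completed requests per second: 0.4
```

The state-space explosion mentioned on the next slide refers to the size of exactly this chain: it grows multiplicatively with the number of cooperating components.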

PEPA Model Example
[Diagram: three components, Client, Web Server and Database, connected by cooperation; the internal structure of a component is a cycle of activities (a1, r1) through (a5, r5), and components synchronize over shared activities such as (a, r).]

Formalisms: Performance Evaluation Process Algebra
Advantages:
- Easy to model several synchronous components.
- Powerful tools for model analysis.
- Models are suitable for formal verification.
- Packages for both direct solution and simulation are available.
Disadvantages:
- Analysis suffers from the state-space explosion problem, which limits the size of the models that can be analyzed.
- PEPA Workbench supports only textual models; it is not easy to produce a graphical representation of a model.
- Some components (like a load balancer) are not easy to model.

Architecture of the Test System

Title             | Hardware              | Software
Web Load Balancer | 2.16 GHz Core2Duo PC  | Apache Web Server v2.2 with modules mod_proxy and mod_proxy_balancer enabled
Web Server 1      | 3.5 GHz Celeron PC    | Apache Web Server v2.2 with module mod_fast_cgi enabled; PHP v5.2.6 compiled with fast_cgi support; CMS Joomla v1.5.3
Web Server 2      | 2.2 GHz Core2Duo PC   |
DB Server         | 2.2 GHz Core2Duo PC   | MySQL v5.1

Modeling of the test system - LQN
[Diagram: LQN model with tasks Clients (users [40], on a workstation with users), Load Balancer (request processing [256]), WebServer1 and WebServer2 (request processing [8] each) and Database (request processing [inf]), each deployed on its own processor (Load Balancer, Web Server 1, Web Server 2, DB Server).]
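Analytic solvers such as LQNS compute throughput and response time from models like the one above. As a simplified illustration (flattening the layers into a closed network of single-server stations, with hypothetical service demands rather than the measured ones), exact Mean Value Analysis for N clients with think time Z looks like this:

```python
def mva(demands, n_users, think_time):
    """Exact MVA for a closed network of single-server FIFO stations.

    demands: service demand D_k (seconds) at each station per request.
    Returns (throughput X, response time R) at population n_users.
    """
    q = [0.0] * len(demands)         # mean queue length per station
    x, r = 0.0, 0.0
    for n in range(1, n_users + 1):
        # An arriving request sees the queues of a network with n-1 users.
        r_k = [d * (1.0 + qk) for d, qk in zip(demands, q)]
        r = sum(r_k)                  # total response time
        x = n / (think_time + r)      # throughput (response-time law)
        q = [x * rk for rk in r_k]    # Little's law applied per station
    return x, r

# Hypothetical demands: load balancer, web server, database (seconds/request).
X, R = mva([0.005, 0.42, 0.18], n_users=10, think_time=3.0)
```

Running it for populations 1 through 40 produces exactly the kind of throughput and response-time curves compared against measurements in the results slides.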

Modeling of the test system - QPN
[Diagram: QPN with places for Clients, Load Balancer, WebServer1, WebServer2 and Database, plus an ordinary place modeling the web servers' thread pool; the thread-pool size is a calibration parameter (= ?).]

Modeling of the test system - PEPA
[Diagram: PEPA components Clients, Load Balancer, WebServer1, WebServer2 and Database connected by cooperation; component rates are calibration parameters (= ?).]

Experiments
Throughput and response time are measured for three different configurations, with the number of virtual users varying from 1 to 40:
- Configuration 1 (LB): Load Balancer + WebServer1, WebServer2 + Database
- Configuration 2 (PC1): WebServer1 + Database
- Configuration 3 (PC2): WebServer2 + Database

Models Calibration
We need to define the parameters in the models: the service rates for Web Server 1, Web Server 2 and the DB server.
- Run an experiment with one web server and a special workload of 1 user; get the time spent by the CPU on request processing.
- Derive the same time for the other web server by normalizing to CPU frequency.

WebServer 2 (2.2 GHz Core2Duo), 1 user: RT = 0.6 sec, CPU usage = 35%.
T_processor(WS2) = RT × UsageCPU × Ncores = 0.6 × 0.35 × 2 = 0.42 sec
WebServer 1 (3.5 GHz Celeron): T_processor(WS1) = 0.42 × 2.2 / 3.5 ≈ 0.26 sec
Rate_WS1 = 1 / 0.26 ≈ 3.78 sec⁻¹, Rate_WS2 = 1 / 0.42 ≈ 2.38 sec⁻¹, Rate_DB ≈ 5.5 sec⁻¹
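The calibration arithmetic above can be sketched in a few lines. The frequency-only normalization is the slide's own simplifying assumption; it ignores microarchitecture differences between the Celeron and Core2Duo hosts:

```python
def cpu_time_per_request(response_time, cpu_utilization, n_cores):
    """CPU seconds consumed per request, from a 1-user measurement."""
    return response_time * cpu_utilization * n_cores

def scale_by_frequency(cpu_time, freq_measured_ghz, freq_target_ghz):
    """Estimate CPU time on another host by clock-frequency normalization."""
    return cpu_time * freq_measured_ghz / freq_target_ghz

# Measured on WebServer 2 (2.2 GHz Core2Duo) with a single user:
t_ws2 = cpu_time_per_request(0.6, 0.35, 2)    # 0.42 s per request
t_ws1 = scale_by_frequency(t_ws2, 2.2, 3.5)   # ~0.26 s on the 3.5 GHz host
rate_ws2 = 1.0 / t_ws2                        # ~2.38 requests/s
rate_ws1 = 1.0 / t_ws1                        # ~3.79 requests/s
```

These rates are then plugged into the LQN, QPN and PEPA models as the web servers' service rates.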

Results: Throughput

Results: Response Time

Results: Model errors

Response time, model vs. experiment:
     | QPME   | LQNS   | PEPA
PC1  | 5.8%   | 5.3%   | 6.4%
PC2  | 14.2%  | 15.3%  | 17.3%
LB   | 32.8%  | 27.2%  | 27.9%

Throughput, model vs. experiment:
     | QPME   | LQNS   | PEPA
PC1  | 5.2%   | 6.4%   | 4.1%
PC2  | 10.4%  | 10.3%  | 10.5%
LB   | 8.6%   | 8.6%   | 8.5%

Conclusions
- All of the applied techniques are able to predict system behavior without detailed knowledge of the internal system structure.
- The difference between model predictions and experimental results lies within an acceptable range: less than 10% for throughput, and less than 30% for response time.
- Such results make it possible to use model predictions during early performance analysis of infrastructure for distributed business applications.
- Investigating the errors caused by the hidden structure of system components, and methods to estimate them, is a subject for further work.

Q&A
Thank you for your attention!