1 PerfCenter and AutoPerf: Tools and Techniques for Modeling and Measurement of the Performance of Distributed Applications
Varsha Apte, Faculty Member, IIT Bombay
2 Example: WebMail Application (ready to be deployed)
Several interacting components: Web Server, IMAP Server, Authentication Server, SMTP Server, Ad Server
User requests arrive at the Web Server over a WAN
3 Several Usage Scenarios
Example: Login scenario (message sequence across Browser, Web, Authentication, IMAP and SMTP servers): User/Password → Send_to_auth → Verify_passwd → GenerateHtml. With probability 0.8 the Web server also calls list_message on the IMAP server before generating the HTML response; with probability 0.2 it responds directly.
Performance goals during deployment:
User-perceived measures: response time, request drops (minimize drops)
System measures: throughput, resource utilizations (maximize throughput)
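A scenario with probabilistic branches, like the 0.2/0.8 split in the Login example above, has an expected resource demand equal to the probability-weighted sum over its branches. A minimal sketch of that computation follows; the per-branch CPU demands are hypothetical, only the branch probabilities come from the slide.

```java
// Expected resource demand of a branching usage scenario.
// Branch probabilities (0.2 / 0.8) are from the Login scenario;
// the per-branch CPU demands are hypothetical, for illustration.
class ScenarioDemand {
    // expected demand = sum over branches of p_i * d_i
    static double expectedDemand(double[] probs, double[] demandsMs) {
        double e = 0.0;
        for (int i = 0; i < probs.length; i++) e += probs[i] * demandsMs[i];
        return e;
    }
    public static void main(String[] args) {
        double[] p = {0.2, 0.8};   // branch probabilities from the scenario
        double[] d = {40.0, 10.0}; // hypothetical CPU demands (ms) per branch
        System.out.println("Expected CPU demand: " + expectedDemand(p, d) + " ms");
    }
}
```

A modeling tool uses this expected demand as the per-request service demand when the scenario mix is known.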
4 Deploying the application in a Data Center
Determining host and network architecture:
What should be the configuration of the Web server? (number of threads, buffer size, …)
On which machines should the IMAP server be deployed? The Web server?
How will the network affect performance? (LAN vs. WAN)
How many machines? What machine configuration? (how many CPUs, what speed, how many disks?)
5 PerfCenter: Modeling Tool
Architect specifies the model: input specifications cover machines and devices, software components, network parameters, deployments, and scenarios.
A parser generates the underlying queuing network model, which PerfCenter solves with either a simulation tool or an analytical tool (ref. MASCOTS 07), followed by output analysis.
Inbuilt functions and constructs aid the datacenter architect in analyzing and modifying the model.
PerfCenter code can be downloaded from http://www.cse.iitb.ac.in/perfnet/perfcenter
6 Capacity analysis for WebMail
[Graph: response time performance as the number of users increases]
Maximum throughput achieved is 30 requests/sec
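The saturation visible in a capacity graph like this follows from operational bounds: throughput cannot exceed 1/Dmax, where Dmax is the largest per-request service demand at any resource. A sketch of both bounds follows; the demand values are hypothetical (a bottleneck demand of 1/30 s would reproduce the slide's 30 req/sec ceiling).

```java
// Asymptotic bounds for a closed system, from operational analysis.
// Demand values below are hypothetical, for illustration only.
class BottleneckBound {
    // X_max = 1 / max_i D_i : throughput ceiling as load grows
    static double maxThroughput(double[] demandsSec) {
        double max = 0.0;
        for (double d : demandsSec) max = Math.max(max, d);
        return 1.0 / max;
    }
    // N* = (sum_i D_i + Z) / Dmax : user count beyond which response time climbs
    static double saturationUsers(double[] demandsSec, double thinkSec) {
        double sum = 0.0, max = 0.0;
        for (double d : demandsSec) { sum += d; max = Math.max(max, d); }
        return (sum + thinkSec) / max;
    }
    public static void main(String[] args) {
        double[] demands = {0.05, 0.02, 0.01};  // hypothetical demands (s)
        System.out.println("Max throughput: " + maxThroughput(demands) + " req/s");
        System.out.println("Saturation at ~" + saturationUsers(demands, 1.0) + " users");
    }
}
```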
7 AutoPerf: a capacity measurement and profiling tool
Focusing on the needs of a performance modeling tool
8 Input requirements for modeling tools
Usage scenarios
Deployment details
Resource consumption details, e.g. "the login transaction takes 20 ms of CPU on the Web server"
These usually require measured data
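The resource consumption figures above are service demands, which profiling can derive from standard measurements via the Utilization Law: D = U/X (resource utilization divided by system throughput). A minimal sketch follows; the measured utilization and throughput values are hypothetical, chosen so the result matches the 20 ms login example.

```java
// Service demand from measured utilization and throughput (Utilization Law).
// The measurement values below are hypothetical, for illustration.
class ServiceDemand {
    // D = U / X
    static double demandSec(double utilization, double throughputPerSec) {
        return utilization / throughputPerSec;
    }
    public static void main(String[] args) {
        double d = demandSec(0.40, 20.0);  // CPU 40% busy at 20 req/s
        System.out.println("Demand: " + d * 1000 + " ms per request");
        System.out.println("This CPU saturates at " + 1.0 / d + " req/s");
    }
}
```

A 20 ms demand also immediately bounds that CPU's capacity at 50 req/sec, which is the kind of extrapolation a modeling tool performs.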
9 Performance measurement of multi-tier systems
Two goals:
Capacity analysis: maximum number of users supported, transaction rate supported, etc.
Fine-grained profiling for use in performance models
10 Measurement for capacity analysis
Test environment: clients running load generators send requests to servers running the system under test.
Example load generators: httperf, Flood, Silk Performer, LoadRunner
11 Measurement for capacity analysis (contd.): answers provided
12 Measurement for modeling
Clients running load generators send requests; a per-tier resource consumption profile is measured (e.g. 10 ms, 20 ms, 40 ms, and 45 ms across the Web server, App Server 1, App Server 2, and the LAN).
Given such data, models can "extrapolate" and predict performance at volume usage (e.g. PerfCenter).
13 Introducing: AutoPerf
1. Generate load on the servers
2. Collect client statistics
3. Collect server statistics
Then correlate and display the results
14 AutoPerf
Inputs: deployment information of servers, Web transaction workload description
Output: fine-grained server-side resource profiles
15 Future enhancements
PerfCenter/AutoPerf:
Various features to make the tools more user-friendly
Capability to model/measure performance of virtualized data centers
Many other minor features
Skills that need to be learned/liked:
Java programming (both tools are in Java)
Discipline required to maintain/improve large software
Working with quantitative data
16 What is fun about this project?
Working on something that will (should) get used
New focus on energy and virtualization, both exciting fields
Many, many algorithmic challenges
Running simulation/measurement in efficient ways
17 Work to be done by RA
Code maintenance
Feature enhancement
Write paper(s) for publication, go to conferences, present them
Create web pages and user groups, answer questions
Help popularize the tool: demos, etc.
Pick a challenging problem within this domain as an M.Tech. project, write paper(s), go to conferences!
18 Thank you / Questions
This research was sponsored by MHRD, Intel Corp., Tata Consultancy Services, and an IBM Faculty Award 2007-2009.
PerfCenter code can be downloaded from http://www.cse.iitb.ac.in/perfnet/perfcenter
19 Simulator: Queue Class
All resources (devices, soft servers, and network links) are abstracted as queues.
Discrete-event simulator implemented in Java; supports both open and closed arrivals.
Request handling (flowchart): on request arrival, get the soft server, device, or network link instance; if an instance is free, service the request; otherwise, if the buffer is full, drop the request, else enqueue it.
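The arrival/service/drop logic in the flowchart above can be sketched as a tiny discrete-event loop over a time-ordered event queue. This is a minimal illustration, not PerfCenter's actual Queue class: a single server with a finite buffer, deterministic inter-arrival and service times, and drops when the buffer is full.

```java
import java.util.PriorityQueue;

// Minimal discrete-event sketch of one queue with a finite buffer:
// arrivals that find the server busy and the buffer full are dropped.
// Deterministic timings are an assumption, for illustration only.
class QueueSim {
    static int[] run(double interArrival, double service, int buffer, int nArrivals) {
        // events: {time, type}; type 0 = arrival, 1 = departure
        PriorityQueue<double[]> events =
                new PriorityQueue<>((a, b) -> Double.compare(a[0], b[0]));
        events.add(new double[]{0.0, 0});
        int queued = 0, arrivals = 0, served = 0, dropped = 0;
        boolean busy = false;
        while (!events.isEmpty()) {
            double[] ev = events.poll();
            double t = ev[0];
            if (ev[1] == 0) {                        // arrival
                arrivals++;
                if (arrivals < nArrivals) events.add(new double[]{t + interArrival, 0});
                if (!busy) { busy = true; events.add(new double[]{t + service, 1}); }
                else if (queued < buffer) queued++;  // enqueue
                else dropped++;                      // buffer full: drop
            } else {                                 // departure
                served++;
                if (queued > 0) { queued--; events.add(new double[]{t + service, 1}); }
                else busy = false;
            }
        }
        return new int[]{served, dropped};
    }
    public static void main(String[] args) {
        int[] r = run(1.0, 1.7, 2, 10);  // overloaded server: expect drops
        System.out.println("served=" + r[0] + " dropped=" + r[1]);
    }
}
```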
20 Simulator: Synchronous calls
When Server1 makes a synchronous call to Server2 (and so on down the chain to Server4), the calling thread remains blocked (thread busy/waiting) until the callee returns.
PerfCenter maintains a per-request call stack (User → Server1-t → Server2-t → Server3-t) to model this.
21 Simulator Parameters
PerfCenter simulates both open and closed systems.

Load parameters (open system):
loadparms
  arate 10
end

Load parameters (closed system):
loadparms
  noofusers 10
  thinktime exp(3)
end

Model parameters:
modelparms
  method simulation
  type closed
  noofrequest 10000
  confint false
  replicationno 1
end

The independent replication method is used for output analysis.
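The independent-replications output analysis mentioned above treats each replication's result as one sample and reports a mean with a confidence interval. A minimal sketch follows; the replication values are hypothetical, and it uses the normal-approximation quantile 1.96 rather than the t quantile a careful implementation would use for few replications.

```java
// Mean and 95% confidence half-width over independent replications.
// Uses the normal approximation (1.96); a t quantile would be more
// appropriate for small replication counts. Data is hypothetical.
class ReplicationCI {
    static double mean(double[] x) {
        double s = 0;
        for (double v : x) s += v;
        return s / x.length;
    }
    static double halfWidth95(double[] x) {
        double m = mean(x), ss = 0;
        for (double v : x) ss += (v - m) * (v - m);
        double sampleVar = ss / (x.length - 1);      // unbiased variance
        return 1.96 * Math.sqrt(sampleVar / x.length);
    }
    public static void main(String[] args) {
        double[] reps = {10, 12, 11, 13, 9};  // hypothetical per-replication means
        System.out.println(mean(reps) + " +/- " + halfWidth95(reps));
    }
}
```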
22 Deployments dply3 and dply4

        H1 CPU%  H2 CPU%  H3 CPU%  H4 CPU%  IMAP Host Disk%  H2 Disk%
dply3   77.6     17.7     23.9     NA       44.2             18.7
dply4   53.8     18.4     27.7     NA       47.1             19.5
23 Deployment summary

        H1 CPU%  H2 CPU%  H3 CPU%  H4 CPU%  IMAP Host Disk%  H2 Disk%
dply1   98.1     8.2      NA       NA       41.2             8.8
dply2   67.5     15.9     48.6     75.0     80.6             17.0
dply3   77.6     17.7     23.9     NA       44.2             18.7
dply4   53.8     18.4     27.7     NA       47.1             19.5
24 Simulator: Dynamic loading of scheduling policy
Policy classes are loaded from /Queue/SchedulingStrategy/: FCFS.class, LCFS.class, RR.class

host host[2]
  cpu count 1
  cpu schedp fcfs
  cpu buffer 9999
end

host host[2]
  cpu count 1
  cpu schedp rr
  cpu buffer 9999
end
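Dynamic loading of a policy class by name is standard Java reflection: resolve the class named in the spec file and instantiate it through a common interface. The sketch below illustrates the mechanism with made-up class names; it is not PerfCenter's actual SchedulingStrategy hierarchy.

```java
// Sketch of loading a scheduling policy by name at run time, as the
// simulator does with per-policy .class files. The interface and the
// policy classes here are illustrative, not PerfCenter's actual ones.
interface SchedulingPolicy { String name(); }
class FCFS implements SchedulingPolicy { public String name() { return "fcfs"; } }
class RR implements SchedulingPolicy { public String name() { return "rr"; } }

class PolicyLoader {
    static SchedulingPolicy load(String className) throws Exception {
        // resolve the class by its name and instantiate it reflectively
        return (SchedulingPolicy) Class.forName(className)
                .getDeclaredConstructor().newInstance();
    }
    public static void main(String[] args) throws Exception {
        System.out.println(load("FCFS").name());
    }
}
```

New policies can then be added by dropping in a new class file and naming it in the `cpu schedp` line, with no change to the simulator core.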
25 Using PerfCenter for "what-if" analysis
Scaling up the email application to support requests arriving at a rate of 2000 req/sec
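Before running the what-if simulation, a back-of-envelope bound shows the scale involved: the number of CPUs a tier needs is roughly ceil(arrival rate x per-request CPU demand / target utilization). The demand and utilization cap below are hypothetical, for illustration.

```java
// Rough CPU sizing for a target arrival rate (Utilization Law rearranged).
// Demand and utilization cap are hypothetical assumptions.
class Sizing {
    static int cpusNeeded(double ratePerSec, double demandSec, double maxUtil) {
        return (int) Math.ceil(ratePerSec * demandSec / maxUtil);
    }
    public static void main(String[] args) {
        // 2000 req/s, 20 ms CPU demand per request, keep CPUs under 70% busy
        System.out.println(cpusNeeded(2000, 0.020, 0.7) + " CPUs");
    }
}
```

A bound like this explains why the scaling steps that follow add dozens of CPUs and CPU/disk speedup factors across the hosts.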
26 Scaling steps

Step 1:
set H1:cpu:count 32
set H2:cpu:count 12
set H3:cpu:count 12
set H4:cpu:count 32
cpuspeedupfactor1 = 2
cpuspeedupfactor3 = 4
cpuspeedupfactor4 = 2
diskspeedupfactor2 = 20
diskspeedupfactor3 = 80
deploy web H4

Step 2:
host H5
  cpu count 32
  cpu buffer 99999
  cpu schedP fcfs
  cpu speedup 2
end
deploy web H5
set H2:cpu:count 32
set H3:cpu:count 18
27 Summary

Step 1:
                  H1    H2    H3    H4    H5
CPU count         32    12    12    32    -
CPU utilization   88    100   75.8  87.7  -
CPU speedup       2     1     4     2     -
Disk speedup      -     20    80    -     -
Disk utilization  -     51.6  46.5  -     -

Step 2:
                  H1    H2    H3    H4    H5
CPU count         32    32    18    32    32
CPU utilization   64.5  55.9  57.0  63.1  58.7
CPU speedup       2     1     4     2     2
Disk speedup      -     20    80    -     -
Disk utilization  -     58    52.6  -     -
28 Identifying Network Link Capacity
Link utilization:
              256 Kbps   1 Mbps
LAN1 → LAN2   20.1%      5.1%
LAN2 → LAN1   18.7%      4.8%
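Link utilization in a table like this is simply the offered traffic in bits per second divided by the link capacity. A minimal sketch follows; the message size and rate are hypothetical, chosen to roughly reproduce the ~20% (256 Kbps) versus ~5% (1 Mbps) pattern above.

```java
// Network link utilization = offered bits/sec / link capacity.
// Message size and rate below are hypothetical assumptions.
class LinkUtil {
    static double utilization(double bitsPerMsg, double msgsPerSec, double capacityBps) {
        return bitsPerMsg * msgsPerSec / capacityBps;
    }
    public static void main(String[] args) {
        System.out.println("256 Kbps: " + utilization(6400, 8, 256_000));
        System.out.println("1 Mbps:   " + utilization(6400, 8, 1_000_000));
    }
}
```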
29 Limitations of standard tools
Do not perform automated capacity analysis:
  Need the range of load levels to be specified
  Need the duration of load generation to be specified
  Need the steps in which to vary the load to be specified
  Report only the throughput at a given load level, not the maximum achievable throughput and the saturation load level
Should take a richer workload description as input (a Customer Behavior Model Graph, CBMG) rather than just the percentage of virtual users requesting each transaction type
Do not perform automated fine-grained server-side resource profiling
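The automated capacity analysis these tools lack can be sketched as a simple ramping loop: keep increasing the offered load until measured throughput stops improving, and report that load as the saturation point. The throughput model below is synthetic (linear growth capped at a ceiling), standing in for real measurements; it is an illustration of the control loop, not AutoPerf's actual algorithm.

```java
import java.util.function.DoubleUnaryOperator;

// Ramp offered load until throughput stops improving, then report
// the saturation load. The throughput function is a synthetic stand-in
// for real load-generator measurements.
class AutoCapacity {
    static double findSaturation(DoubleUnaryOperator throughputAt,
                                 double start, double step, double eps) {
        double load = start, x = throughputAt.applyAsDouble(load);
        while (true) {
            double next = throughputAt.applyAsDouble(load + step);
            if (next - x < eps) return load;  // no meaningful gain: saturated
            load += step;
            x = next;
        }
    }
    public static void main(String[] args) {
        // synthetic system: 2 req/s per user, capped at 30 req/s
        DoubleUnaryOperator model = n -> Math.min(2.0 * n, 30.0);
        System.out.println("Saturation load: " + findSaturation(model, 1, 1, 0.01));
    }
}
```

A real implementation would average repeated measurements at each load level and could refine the step size near the knee.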