1 Part VII Component-level Performance Models for the Web © 1998 Menascé & Almeida. All Rights Reserved.

2 Part VII: Learning Objectives
Characterize component-level models.
Compute demands on system resources.
Compute waiting times, response times, and throughputs:
–open models (e.g., web servers)
–closed models (e.g., intranets)

3 Component-level Models
The internal components of a server (e.g., processors, disks), as well as network links, are modeled explicitly. Changes in server architecture and component upgrades (e.g., a faster CPU or faster network connections) can be evaluated with component-level models.

4 Component-level models
[Diagram: a Web server with its incoming and outgoing links]

5 Component-level models
[Diagram: the server opened up into its CPU, disk 1, and disk 2, plus the incoming and outgoing links]

6 Component-level models
[Diagram: same server, CPU, disks, and links as on the previous slide]

7 Component-level Models
Each component is represented by a resource (e.g., CPU, disk, communication link) and a queue of requests waiting for the resource.
[Diagram: a queue feeding a resource]

8 Component-level Models: Parameters
Service demand of a request at a resource: the total service time of the request at the device.
D_cpu = total CPU time of a request

9 Computing Service Demands
–The average size of a file retrieved per request is 20 KBytes.
–The average disk service time per KByte accessed is 10 msec.
–40% of the files are on disk 1 and 60% on disk 2.
–The speed of the link connecting the server to the Internet is 1.5 Mbps (a T1 link).
–The CPU processing time per request is 2 msec + 0.05 msec per KByte accessed.
–The average size of an HTTP request is 200 bytes.

10 Disk Service Demand
The average size of a file retrieved per request is 20 KBytes. The average disk service time per KByte accessed is 10 msec. 40% of the files are on disk 1 and 60% on disk 2.
D_disk1 = 0.4 * 10 msec/KByte * 20 KBytes = 80 msec = 0.080 sec
D_disk2 = 0.6 * 10 msec/KByte * 20 KBytes = 120 msec = 0.120 sec

11 CPU Service Demand
The average size of a file retrieved per request is 20 KBytes. The CPU processing time per request is 2 msec + 0.05 msec per KByte accessed.
D_cpu = 2 msec + 0.05 msec/KByte * 20 KBytes = 3 msec = 0.003 sec

12 Incoming Link Service Demand
The speed of the link connecting the server to the Internet is 1.5 Mbps (a T1 link). The average size of an HTTP request is 200 bytes.
D_IncLink = 200 * 8 bits / 1,500,000 bps = 0.00107 sec

13 Outgoing Link Service Demand
The average size of a file retrieved per request is 20 KBytes. The speed of the link connecting the server to the Internet is 1.5 Mbps (a T1 link).
D_OutLink = 20 * 1024 * 8 bits / 1,500,000 bps = 0.109 sec

14 Computing Service Demands (summary)
D_IncLink = 0.00107 sec, D_OutLink = 0.109 sec, D_cpu = 0.003 sec, D_disk1 = 0.08 sec, D_disk2 = 0.12 sec
Service demands do not include any queuing time! They are pure service time.
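
The arithmetic above can be packaged in a few lines of code. A minimal Python sketch, assuming the workload parameters from slide 9 (the variable names are illustrative, not from the original slides):

```python
# Workload parameters from slide 9 (variable names are illustrative).
FILE_SIZE_KB = 20             # average file size retrieved per request (KBytes)
DISK_MS_PER_KB = 10           # average disk service time per KByte (msec)
P_DISK1, P_DISK2 = 0.4, 0.6   # fraction of files on disk 1 and disk 2
LINK_BPS = 1_500_000          # T1 link speed (bits/sec)
CPU_MS_FIXED = 2              # fixed CPU time per request (msec)
CPU_MS_PER_KB = 0.05          # CPU time per KByte accessed (msec)
HTTP_REQ_BYTES = 200          # average HTTP request size (bytes)

# Service demands in seconds: pure service time, no queuing.
D = {
    "cpu":      (CPU_MS_FIXED + CPU_MS_PER_KB * FILE_SIZE_KB) / 1000,
    "disk1":    P_DISK1 * DISK_MS_PER_KB * FILE_SIZE_KB / 1000,
    "disk2":    P_DISK2 * DISK_MS_PER_KB * FILE_SIZE_KB / 1000,
    "in_link":  HTTP_REQ_BYTES * 8 / LINK_BPS,
    "out_link": FILE_SIZE_KB * 1024 * 8 / LINK_BPS,
}

for resource, demand in D.items():
    print(f"D_{resource} = {demand:.5f} sec")
```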

15 Practice Drill: Using Models for Decision Making
Your company decided to add multimedia content to the Web-based training application. The average size of a Web document will increase from 10 KBytes to 80 KBytes. Assuming the same disk will be used, what changes need to be made to the model to predict the performance in this new scenario?

16 Practice Drill: Using Models for Decision Making
The Web-based human resources application is being rewritten: CGI scripts will be replaced by Java applets. Early prototyping showed that the Java-based solution uses 80% less CPU than its CGI-based counterpart. How should the model be changed to reflect the new software architecture?

17 Computing Waiting Times
Waiting times depend on the load (the arrival rate of requests) and on the service demands.
[Diagram: the server model with the service demands from slide 14]

18 Computing Residence Times
An arriving request J finds, on average, n requests at the resource. Each of the n requests found by J needs D sec of total service, so J has to wait n * D seconds before being served. Adding J’s own total service time D gives J’s residence time R at the resource:
R = D + n * D
From Little’s Law, n = λ * R, so
R = D + λ * R * D, which gives
R = D / (1 - λ * D)

19 Computing Residence Times
R_i = D_i / (1 - U_i)
where D_i is the service demand at resource i and U_i = λ * D_i is the utilization of resource i.
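
This formula translates directly into a helper function. A minimal sketch (the function name and the example arrival rate of 6 req/sec are assumptions for illustration):

```python
def residence_time(demand: float, arrival_rate: float) -> float:
    """Residence time at one resource: R = D / (1 - U), where U = arrival_rate * D."""
    utilization = arrival_rate * demand
    if utilization >= 1.0:
        raise ValueError("resource saturated (U >= 1): the open model has no steady state")
    return demand / (1.0 - utilization)

# Example: disk 2 (D = 0.12 sec) at an assumed load of 6 requests/sec.
print(residence_time(0.12, 6.0))   # about 0.43 sec, versus 0.12 sec of pure service
```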

20 Residence Time at Incoming Link
R_IncLink = D_IncLink / (1 - λ * D_IncLink) = 0.00107 / (1 - 0.00107 λ) sec, for an arrival rate of λ req/sec.

21 Residence Time at Outgoing Link
R_OutLink = 0.109 / (1 - 0.109 λ) sec

22 Residence Time at the CPU
R_cpu = 0.003 / (1 - 0.003 λ) sec

23 Residence Time at Disk 1
R_disk1 = 0.08 / (1 - 0.08 λ) sec

24 Residence Time at Disk 2
R_disk2 = 0.12 / (1 - 0.12 λ) sec

25 Summary of Results: Average Response Time

26 Response vs. Arrival Rate
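
The curve on this slide can be reproduced from the service demands of slide 14: in the open model the average response time is the sum of the per-resource residence times, and it grows sharply as λ approaches the saturation point of the bottleneck resource (disk 2, 1/0.12 ≈ 8.3 req/sec). A sketch:

```python
# Service demands in seconds, from slide 14.
D = {"in_link": 0.00107, "out_link": 0.109, "cpu": 0.003,
     "disk1": 0.08, "disk2": 0.12}

def response_time(arrival_rate: float) -> float:
    """Open single-class model: R0 = sum of D_i / (1 - lambda * D_i) over all resources."""
    return sum(d / (1 - arrival_rate * d) for d in D.values())

for lam in (1, 2, 4, 6, 7, 8):
    print(f"lambda = {lam} req/sec -> R = {response_time(lam):.3f} sec")
```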

27 Practice Drill: Using Models for Decision Making
Your company’s Web site is becoming popular. The load is expected to double over the next two months from its current level of 6 HTTP requests/sec. Management is considering installing an additional T1 link. Before making the investment, the company wants to know what the performance gain would be.

28 Practice Drill: Using Models for Decision Making
Which model parameters need to be changed?

29 Open vs. Closed QN Models
The models presented so far are open QN models because there is no limit on the number of requests in the system. When the number of requests in the system is limited, we need closed QN models:
–e.g., servers with a limited degree of multiprogramming (MPL)
–client/server (C/S) systems with a known number of clients

30 Open Model Equations
U_i = λ * D_i (utilization of resource i)
R_i = D_i / (1 - U_i) (residence time at resource i)
R = Σ_i R_i (average response time)

31 Multiple Classes of Requests
Different HTTP requests may have different file sizes, different arrival frequencies, and different resource service demands.

32 Equations for Open Multiple Class QN Models
U_i = Σ_r λ_r * D_{i,r} (total utilization of resource i across all classes)
R_{i,r} = D_{i,r} / (1 - U_i) (residence time of class r at resource i)
R_r = Σ_i R_{i,r} (average response time of class r)

33 Multiclass Example
A Web server has one CPU and two disks. It receives two types of HTTP requests: for small text files and for large images. The average arrival rates are 5 requests/sec for text and 2 requests/sec for images. What are the response times for each type of request?
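
A sketch of how the multiclass equations of slide 32 would be applied to this example. The arrival rates are taken from the slide; the per-class service demands below are placeholders, since the original demand table was not transcribed:

```python
# Arrival rates (req/sec) come from the slide; per-class service demands (sec)
# are illustrative placeholders, not the values from the original slides.
arrival = {"text": 5.0, "image": 2.0}
demand = {                      # demand[request_class][resource]
    "text":  {"cpu": 0.002, "disk1": 0.010, "disk2": 0.010},
    "image": {"cpu": 0.004, "disk1": 0.060, "disk2": 0.060},
}
resources = ["cpu", "disk1", "disk2"]

# Total utilization of each resource: U_i = sum over classes of lambda_r * D_{i,r}.
U = {i: sum(arrival[r] * demand[r][i] for r in arrival) for i in resources}

# Response time of each class: R_r = sum over resources of D_{i,r} / (1 - U_i).
for r in arrival:
    R = sum(demand[r][i] / (1 - U[i]) for i in resources)
    print(f"{r}: R = {R:.4f} sec")
```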

37 A Complete Web Server Example
[Diagram: the Web server sits on a 10 Mbps Ethernet LAN connected through a router (50 µsec/packet) and a T1 link to the ISP and the Internet; the load is 6 HTTP req/sec]

38 A Complete Web Server Example (cont’d)
[QN model: incoming link, outgoing link, router, LAN, and the Web server’s CPU and disk]

39 A Complete Web Server Example (cont’d): Workload
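
The workload table itself was not transcribed. The sketch below only illustrates how the router and the LAN would enter the service demand calculation; the packet size, packet count, and file size are assumptions, not the slide’s values:

```python
import math

# Assumed values for illustration only; the actual workload table was not transcribed.
FILE_SIZE_BYTES = 20 * 1024      # document size, reusing the earlier 20 KByte figure
HTTP_REQ_BYTES = 200             # request size, reusing the earlier figure
PACKET_BYTES = 1500              # assumed packet size on the Ethernet
ROUTER_SEC_PER_PACKET = 50e-6    # 50 microsec per packet (slide 37)
LAN_BPS = 10_000_000             # 10 Mbps Ethernet (slide 37)

# One packet for the request plus enough packets to carry the reply.
packets_per_request = 1 + math.ceil(FILE_SIZE_BYTES / PACKET_BYTES)

# Router demand: per-packet processing time times packets handled per request.
D_router = ROUTER_SEC_PER_PACKET * packets_per_request

# LAN demand: time to carry the request and the reply over the 10 Mbps Ethernet.
D_lan = (HTTP_REQ_BYTES + FILE_SIZE_BYTES) * 8 / LAN_BPS

print(f"D_router = {D_router * 1000:.2f} msec, D_lan = {D_lan * 1000:.2f} msec")
```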

40 A Web Server Example (cont’d)

41 A Web Server Example (cont’d)

42 A Web Server Example (cont’d)

43 A Web Server Example (cont’d)

44 A Web Server Example (cont’d)

45 A Web Server Example (cont’d)
[Annotation: major contribution to service demand]

46 Closed QN Model
A Web server has one CPU and one disk. Assume that n requests are in execution concurrently. Each request takes 3 msec of CPU time and 10 msec of disk time. What are the throughput and response time of the Web server?

47 Closed QN Model
[Diagram: n requests circulating between the Web server’s CPU and disk]

48 Closed QN Model: Residence Time Equations
R_cpu(n) = D_cpu + D_cpu * n_cpu(n - 1)
where D_cpu is my service time, n_cpu(n - 1) is the avg. number of requests found in the CPU upon my arrival, and D_cpu * n_cpu(n - 1) is my total waiting time at the CPU.

49 Closed QN Model: Residence Time Equations
R_cpu(n) = D_cpu * [1 + n_cpu(n - 1)]
R_disk(n) = D_disk * [1 + n_disk(n - 1)]

50 Closed QN Model: Throughput Equation
Using Little’s Law: X(n) = n / R(n), where R(n) = R_cpu(n) + R_disk(n) is the total response time and X(n) is the throughput.

51 Closed QN Model: Throughput Equation

52 Closed QN Model: Queue Length Equations
Applying Little’s Law and the Forced Flow Law to the CPU and disk:
n_cpu(n) = X(n) * R_cpu(n)
n_disk(n) = X(n) * R_disk(n)

53 Putting It All Together

54 Putting It All Together

55 Putting It All Together

56 Solving the Model

57 Solving the Model

58 Solution to the Closed QN model

59 Solution to the Closed QN model

60 Closed Models Equations
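
Taken together, the residence time, throughput, and queue length equations form the exact Mean Value Analysis (MVA) recursion. A minimal sketch for the one-CPU, one-disk example of slide 46 (3 msec of CPU and 10 msec of disk time per request):

```python
def mva(demands, n_max):
    """Exact MVA for a single-class closed QN with no think time."""
    queue = {i: 0.0 for i in demands}                  # n_i(0) = 0
    for n in range(1, n_max + 1):
        # Residence times: R_i(n) = D_i * (1 + n_i(n - 1)).
        resid = {i: d * (1 + queue[i]) for i, d in demands.items()}
        R = sum(resid.values())                        # total response time R(n)
        X = n / R                                      # throughput X(n) = n / R(n)
        queue = {i: X * resid[i] for i in resid}       # n_i(n) = X(n) * R_i(n)
        print(f"n = {n}: X = {X:6.2f} req/sec, R = {R * 1000:6.2f} msec")
    return X, R

# Slide 46: D_cpu = 3 msec, D_disk = 10 msec.
mva({"cpu": 0.003, "disk": 0.010}, n_max=5)
```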

61 An Intranet Example with Proxy Cache Server
[Diagram: clients on a LAN (10 Mbps Ethernet) access a proxy cache server; the LAN connects through a router (50 µsec/packet) and the Internet to external Web servers]

62 An Intranet Example (cont’d): Cache Hit
[QN model with clients, LAN, router, the proxy cache server (CPU, disk), incoming and outgoing links, ISP/Internet, and the remote Web server; on a cache hit the request is served by the proxy without going out to the remote server]

63 An Intranet Example (cont’d): Cache Miss
[Same QN model; on a cache miss the proxy forwards the request over the outgoing link to the remote Web server and the document comes back over the incoming link]
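
One common way to fold the hit and miss paths into a single open model is to weight each path’s service demands by the cache hit ratio. The sketch below assumes that approach; the hit ratio, arrival rate, and per-path demands are placeholders, not the parameters used on the original slides:

```python
# All numeric values below are placeholders for illustration.
HIT_RATIO = 0.6          # assumed fraction of requests served from the proxy cache
lam = 5.0                # assumed arrival rate of client requests (req/sec)

# Assumed per-request service demands (sec) on each path.
demand_hit  = {"lan": 0.004, "proxy_cpu": 0.002, "proxy_disk": 0.015}
demand_miss = {"lan": 0.004, "proxy_cpu": 0.003, "proxy_disk": 0.015,
               "router": 0.001, "out_link": 0.002, "in_link": 0.110}

# Effective demand per resource, weighted by the hit ratio.
resources = set(demand_hit) | set(demand_miss)
D = {i: HIT_RATIO * demand_hit.get(i, 0.0) + (1 - HIT_RATIO) * demand_miss.get(i, 0.0)
     for i in resources}

# Open-model residence times (slide 19) summed into the average response time.
R = sum(d / (1 - lam * d) for d in D.values())
print(f"average response time = {R:.3f} sec")
```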

65 [Results table: the largest service demand limits the maximum throughput in HTTP req/sec]

67 Intranet Example: Increasing the Bandwidth of the Link to the ISP
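
Evaluating a faster ISP link amounts to scaling the link service demands in inverse proportion to the bandwidth and re-solving the open model. A sketch; the T1 link demands reuse the earlier 20 KByte/200 byte figures, and the remaining demands and the arrival rate are placeholders:

```python
lam = 5.0                                   # assumed request rate (req/sec)

# Link demands reuse the T1 figures computed earlier (20 KByte documents in,
# 200 byte requests out); the other demands are placeholders.
base = {"in_link": 0.109, "out_link": 0.00107,
        "lan": 0.004, "proxy_cpu": 0.003, "proxy_disk": 0.015}

def response_time(demands, arrival_rate):
    return sum(d / (1 - arrival_rate * d) for d in demands.values())

print(f"T1 link:           R = {response_time(base, lam):.3f} sec")

# A faster ISP link scales the link demands inversely with the bandwidth.
for factor in (2, 4, 10):
    upgraded = dict(base)
    upgraded["in_link"] /= factor
    upgraded["out_link"] /= factor
    print(f"{factor} x T1 bandwidth: R = {response_time(upgraded, lam):.3f} sec")
```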

68 Part VII: Summary
Component-level models are used to represent the various components of a networked system. Parameters for component-level models include the service demands on system resources, i.e., the total time a request spends receiving service from each resource.

69 Part VII: Summary (cont’d)
Waiting times, response times, and throughputs can be computed using:
–open models (e.g., web servers)
–closed models (e.g., intranets)

