
Application Software and Platform Services

[Diagram: the application software (application agent) uses platform services for graphics, data interchange, data management, user interface, software engineering, and communication through an Application Program Interface (API) on top of the platform (OS + hardware). Two application processes, each with its own API and platform, interact via inter-process communication.]

Distributed Applications = Network Applications: Client/Server

[Diagram: the application software is split into a client part and a server part. The client (user) agent and the server agent each run on top of an Application Program Interface (API), communication software and hardware, and a platform (OS + hardware), and exchange messages across a communication network.]

Client agent examples: Internet Explorer, Opera; MS Outlook, Netscape Messenger, Eudora.
Server agent examples: Internet Information Server, Apache, SQL query engines, ...
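As a generic illustration of the client-agent / server-agent split (a minimal sketch, not a protocol from the slides; the message contents are made up), the following Python code runs a toy server part in a thread and lets a toy client part send it one request over the network:

```python
import socket
import threading

# Server part: bind and listen first so the client cannot race ahead of the listener.
srv = socket.create_server(("127.0.0.1", 0))   # pick any free local port
port = srv.getsockname()[1]

def server_agent():
    """Server agent: accept one request and send back a reply."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024)                 # the client's request message
        conn.sendall(b"reply to " + request)      # the server's response message

t = threading.Thread(target=server_agent)
t.start()

# Client part: issue a request on behalf of the user and display the reply.
with socket.create_connection(("127.0.0.1", port)) as sock:
    sock.sendall(b"GET /catalog")                 # hypothetical request
    print(sock.recv(1024).decode())               # "reply to GET /catalog"

t.join()
srv.close()
```

This request/response exchange over the communication network is the pattern the later delay diagrams decompose into service and waiting times.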

Layered Application Model (1)

The application model layers the application software into Presentation, User Interface, Business (Application Logic), and Data (Database Access).

- Presentation: the client agent focuses on presenting information to the user and receiving input from the user.
- User Interface: the user's access to the application logic via the client agent. It can be dynamic and configured by the user, and it is built on top of the user interface controls.
- Dynamic user interface: customizing the look (example: www.cstore.com) or customizing the content (examples: my.yahoo.com, www.exite.com).

Layered Application Model (2)

- Presentation and User Interface: the user interface controls and related logic, and the application interface controls and related logic.
- Business (Application Logic): business processes, core business services, and business rules, i.e., units of processing or algorithms that represent concepts of importance to the organization using the database.
- Data (Database Access): the logic to connect to databases and to access and manipulate the data held within them.

Layered Application: 3-Tier Client/Server Model

[Diagram: the Presentation and User Interface layers are run by the client agent on a client workstation (rich client) or a mobile client workstation (thin client). The Business (Application Logic) layer is run by the application server agent on the application server. The Data (Data Access and Storage) layer is run by the database server agent on the data server hosting the database. The tiers communicate over a communication network.]

Logical Tiers vs. Physical Tiers

The logical tiers of the application model map onto physical tiers:
- Presentation and User Interface run on the client workstation.
- Business (Application Logic) runs on the application server.
- Data (Database Access) maps onto the database.

Example: Car Rental

[Diagram: the car reservation center hosts reservation agent workstations, an application server, and a data server with the database, connected through a hub and routers. Car rental and pickup locations have attendant workstations and local application servers, and reach the reservation center over a WAN via routers and a base station.]

Increasing the Load

The load consists of four transaction types: local reservation (at a car rental location), road assistance request, car pickup, and phone reservation.

Table 1.2. Response Time for Various Load Values (sec)

Transaction | Current Load Intensity | CLI +5% | CLI +10% | CLI +15%
Local Res.  |          1.28          |   1.67  |   2.45   |   5.06
Road Ass.   |          0.64          |   0.87  |   1.37   |   3.20
Car Pickup  |          0.76          |   0.94  |   1.23   |
Phone Res.  |          0.85          |   1.16  |   1.82   |   4.24

Caching Example (1)

Assumptions:
- average object size L = 100,000 bits
- average request rate from the institution's browsers to the origin servers: a = 15 requests/sec
- delay from the Internet router to any origin server and back to the router = 2 sec
- institutional LAN: R = 10 Mbps; access link to the public Internet: R = 1.5 Mbps

Traffic intensity on the LAN = 15 × 100,000 / 10 Mbps = 15%
Traffic intensity on the access link = 15 × 100,000 / 1.5 Mbps = 100%

Consequences:
- utilization of the LAN = 15%
- utilization of the access link = 100%
- total delay = Internet delay + access delay + LAN delay = 2 secs + minutes + 20 msecs

Link Delay

[Plot: average queuing delay (secs) versus traffic intensity La/R. The delay grows sharply as the intensity approaches 100%: roughly 20 msecs at 15% intensity, roughly 160 msecs at 60%, and effectively unbounded at 100%.]

Caching Example (2)

Possible solution: increase the bandwidth of the access link to, say, 10 Mbps.

Traffic intensity on the access link = 15 × 100,000 / 10 Mbps = 15%
Traffic intensity on the LAN = 15 × 100,000 / 10 Mbps = 15%

Consequences:
- utilization of the LAN = 15%
- utilization of the access link = 15%
- total delay = Internet delay + access delay + LAN delay = 2 secs + 20 msecs + 20 msecs
- often a costly upgrade

Caching Example (3)

Possible solution: install a cache on the institutional network (1.5 Mbps access link, 10 Mbps LAN) and suppose the hit rate is 0.4.

Consequences:
- 40% of requests are satisfied almost immediately (by the cache)
- 60% of requests are satisfied by the origin servers
- utilization of the access link is reduced to 60%, resulting in moderate delays (say 160 msec)

Traffic intensity on the access link = 0.6 × 15 × 100,000 / 1.5 Mbps = 60%
Traffic intensity on the LAN = 15 × 100,000 / 10 Mbps = 15%

Total delay = Internet delay + access delay + LAN delay
            = 0.6 × (2 secs + 160 msecs + 20 msecs) + 0.4 × 20 msecs = 1.316 secs
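For concreteness, a small Python sketch (not part of the original slides; the 20 msec LAN delay and the 160 msec access-link delay at 60% utilization are the values assumed above) that reproduces the traffic intensities of the three scenarios and the 1.316 sec total delay of the cached configuration:

```python
# Caching examples (1)-(3): traffic intensities and total delays.
L = 100_000            # average object size, bits
a = 15                 # request rate, requests/sec
internet_delay = 2.0   # sec, Internet router -> origin server -> router
lan_delay = 0.020      # sec, LAN delay assumed at 15% utilization

def intensity(rate, size_bits, bandwidth_bps):
    """Traffic intensity = arrival rate x object size / link bandwidth."""
    return rate * size_bits / bandwidth_bps

# (1) 1.5 Mbps access link: intensity = 100%, access delay grows to minutes.
i1 = intensity(a, L, 1.5e6)                          # 1.00

# (2) upgrade the access link to 10 Mbps: intensity = 15%, delay ~ 20 msec.
i2 = intensity(a, L, 10e6)                           # 0.15
total2 = internet_delay + 0.020 + lan_delay          # ~ 2.04 sec

# (3) keep the 1.5 Mbps link and add a cache with hit rate 0.4.
hit = 0.4
i3 = intensity((1 - hit) * a, L, 1.5e6)              # 0.60
access_delay3 = 0.160                                # assumed delay at 60% utilization
total3 = (1 - hit) * (internet_delay + access_delay3 + lan_delay) + hit * lan_delay
print(i1, i2, i3, round(total3, 3))                  # 1.0 0.15 0.6 1.316
```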

Exercise

In the figure below, users on two LANs request files from the origin servers; both LANs share one cache server. Compute the average time for LAN1 users to retrieve an L1-bit file. (2 points)

L1 = 100,000 bits, a1 = 15 req/sec, h1 = 50% (hit rate)
L2 = 50,000 bits, a2 = 20 req/sec, h2 = 30% (hit rate)
Internet delay = 2 secs

Note: the delay formula for a LAN or a link is d = I·L/[R(1 − I)], where L is the file length (bits), I the traffic intensity, and R the bandwidth (bps).

[Diagram: LAN1 (10 Mbps) reaches the public Internet and the origin servers over a 1.5 Mbps access link 1, and LAN2 (100 Mbps) over a 2 Mbps access link 2; a cache server serves both LANs.]
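As a minimal helper for the delay formula quoted in the note, d = I·L/[R(1 − I)], the sketch below implements it and plugs in only LAN1's local parameters; it does not attempt the full end-to-end answer, which depends on how cache hits and misses are split across the access link:

```python
def link_delay(L_bits, I, R_bps):
    """Delay formula from the exercise: d = I * L / (R * (1 - I))."""
    if I >= 1.0:
        raise ValueError("traffic intensity must be below 1 for a finite delay")
    return I * L_bits / (R_bps * (1.0 - I))

# LAN1 only: 15 req/sec of 100,000-bit files on a 10 Mbps LAN.
I_lan1 = 15 * 100_000 / 10e6                 # traffic intensity = 0.15
d_lan1 = link_delay(100_000, I_lan1, 10e6)
print(round(d_lan1 * 1000, 2), "msec")       # ~1.76 msec per file on LAN1
```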

Communication-Processing Delay

Figure 3.1. Communication-processing delay diagram for a 2-tier C/S system.

[Diagram: timeline of the rth request (m1 bytes) and rth response (m2 bytes) exchanged between the client (C) and the server (S) over a LAN of bandwidth B bps. The response time Rr is made up of service times (S) and waiting times (W) at the client CPU, the LAN, the server CPU, and the server I/O subsystem, e.g. S1Ccpu, S2Ccpu, S1Scpu, W1Scpu, S1Sio, W1Sio, and the LAN terms.]

The Times

- Service time during the jth visit to resource i: Sji
- Waiting time during the jth visit to resource i: Wji
- Residence time at resource i: R'i

Rr = R'Ccpu + R'Scpu + R'Sio + R'LAN   (3.2.5)

where:
- Rr = response time of a request r
- R'Ccpu = residence time at the client CPU
- R'Scpu = residence time at the server CPU
- R'Sio = residence time at the server I/O
- R'LAN = residence time at the LAN

Residence Time

- Service demand (sum of all service times of a request at resource i): Di = Σj Sji   (3.2.2)
- Queuing time (sum of all waiting times of a request at resource i): Qi = Σj Wji   (3.2.3)
- Residence time: R'i = Di + Qi   (3.2.4)
- Response time: Rr = Σi R'i   (3.2.5)

For the 2-tier system of Figure 3.1:
- Service demand at the server's CPU: DScpu = S1Scpu + S2Scpu
- Queuing time at the server's CPU: QScpu = W1Scpu + W2Scpu
- Service demand at the LAN: DLAN = 8(m1 + m2)/B, where m1 = request length [bytes] and m2 = reply length [bytes]
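The relations (3.2.2)–(3.2.5) compose mechanically. The sketch below uses invented per-visit service and waiting times (the numbers are illustrative only) to show how service demands, queuing times, and residence times add up to the response time Rr:

```python
# Hypothetical per-visit service (S_ji) and waiting (W_ji) times, in seconds,
# for one request, keyed by resource: client CPU, server CPU, server I/O, LAN.
service = {"Ccpu": [0.005, 0.003], "Scpu": [0.010, 0.004],
           "Sio":  [0.013] * 10,   "LAN":  [0.005, 0.005]}
waiting = {"Ccpu": [0.000, 0.000], "Scpu": [0.002, 0.001],
           "Sio":  [0.004] * 10,   "LAN":  [0.001, 0.001]}

D  = {i: sum(s) for i, s in service.items()}     # (3.2.2) D_i = sum_j S_ji
Q  = {i: sum(w) for i, w in waiting.items()}     # (3.2.3) Q_i = sum_j W_ji
Rp = {i: D[i] + Q[i] for i in D}                 # (3.2.4) R'_i = D_i + Q_i
Rr = sum(Rp.values())                            # (3.2.5) R_r = sum_i R'_i
print(round(Rr, 3), "sec")                       # 0.207 sec for these numbers
```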

Example 3.1

- S1Ccpu = 5 ms, S1Scpu = 10 ms
- Server's disk: AvgSeek = 9 ms, AvgLatency = 4.17 ms, TransferRate = 20 MB/s
- DataReads = 10 × 2048 bytes
- Sd = 9 + 4.17 + 2048/20M = 13.3 ms (disk average service time)
- Dd = 10 × Sd = 133 ms (service demand at the server's disk)
- m1 = 1,518 bytes, m2 = 7 × 1,518 bytes
- DLAN = 8(m1 + m2)/10 Mbps = 9.7 ms

Rt > DCcpu + DScpu + Dd + DLAN = 158 ms

This is the minimum value for the response time of transaction t; all waiting times are ignored (see the delay diagram of Figure 3.1).
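The lower bound in Example 3.1 can be rechecked in a few lines; this sketch simply redoes the slide's arithmetic, taking DCcpu = 5 ms and DScpu = 10 ms from the single CPU visits listed:

```python
# Example 3.1: minimum response time (all waiting times ignored).
avg_seek, avg_latency = 0.009, 0.00417                 # sec
transfer_rate = 20e6                                   # bytes/sec
block = 2048                                           # bytes per read
S_d = avg_seek + avg_latency + block / transfer_rate   # disk service time ~ 13.3 ms
D_d = 10 * S_d                                         # 10 reads -> ~ 133 ms

m1, m2 = 1518, 7 * 1518                                # request / reply lengths, bytes
D_lan = 8 * (m1 + m2) / 10e6                           # ~ 9.7 ms on a 10 Mbps LAN

D_Ccpu, D_Scpu = 0.005, 0.010                          # client / server CPU demands
R_min = D_Ccpu + D_Scpu + D_d + D_lan
print(f"{R_min * 1000:.1f} ms")                        # 157.4 ms (~158 ms with the
                                                       # slide's rounding of S_d)
```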

Three-Tier WAN

Figure 3.3. Three-tier C/S system.

[Diagram: clients on LAN1 (bandwidth B1 bps) communicate across a WAN with the application server (business tier) and the database server (data access tier) on LAN2 (bandwidth B2 bps).]

3-Tier Delay Diagram

Figure 3.4. Communication-processing delay diagram for a 3-tier C/S system.

[Diagram: the rth request (m1 bytes) and response (m2 bytes) traverse the client, LAN1, the WAN, LAN2, the application server, and the database server; the response time Rr accumulates CPU service and queuing (Scpu, Dcpu, Qcpu), network waiting and queuing (Wnet, Qnet), and disk demand and queuing (Ddisk, Qdisk) along the way.]

Validating Workload Models

Figure 5.8. Workload model validation.

[Flowchart: the actual workload and the synthetic workload are each submitted to the C/S system, and the measured response times, throughputs, utilizations, etc. are compared. If the results are acceptable, the workload model is accepted; otherwise the model is calibrated and the comparison is repeated.]

A Methodology for Capacity Planning in a C/S Environment

Figure 5.2. A methodology for capacity planning.

[Flowchart: the methodology begins with understanding the environment. Workload characterization, workload model validation & calibration, and workload forecasting produce the workload model. Performance model development, performance model validation & calibration, and performance prediction produce the performance model, while development of a cost model yields cost prediction. Cost/performance analysis then produces the configuration plan, the personnel plan, and the investment plan.]

Figure 5.5. Space for workload characterization (no. of I/Os, CPU time).

[Scatter plot: workload points in the plane of number of I/Os (0–45) versus CPU time (20–200 msec), grouped into Cluster 1, Cluster 2, and Cluster 3, with the average over all points marked.]
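The slides do not say how the clusters in Figure 5.5 were obtained; one common way to derive such clusters from measured (number of I/Os, CPU time) points is k-means, sketched below on made-up data (the points and the resulting centroids are purely illustrative):

```python
import random

# Invented workload points in (no. of I/Os, CPU time in msec) space.
random.seed(1)
points = ([(random.gauss(5, 2),  random.gauss(30, 8))   for _ in range(20)] +
          [(random.gauss(20, 3), random.gauss(90, 15))  for _ in range(20)] +
          [(random.gauss(40, 4), random.gauss(170, 20)) for _ in range(20)])

def kmeans(pts, k=3, iters=20):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    centroids = random.sample(pts, k)
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in pts:
            j = min(range(len(centroids)),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2 +
                                  (p[1] - centroids[c][1]) ** 2)
            clusters[j].append(p)
        centroids = [(sum(x for x, _ in cl) / len(cl),
                      sum(y for _, y in cl) / len(cl)) for cl in clusters if cl]
    return centroids

# Each centroid plays the role of a workload class (cluster) in Figure 5.5.
print(kmeans(points))
```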

Chapter 3 Outline

- Introduction
- Communication-Processing Delay Diagrams
- Service Times and Service Demands
  - Service Times at Single Disks and Disk Arrays
  - Service Times in Networks
  - Service Times in Routers
- Queues and Contention
- Some Basic Performance Results
  - Utilization Law
  - Forced Flow Law
  - Service Demand Law
  - Little's Law
  - Summary of Basic Results
- Performance Metrics in C/S Systems
- Concluding Remarks
- References
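The basic performance results named in the outline above (Utilization Law, Forced Flow Law, Service Demand Law, Little's Law) each fit in one line; the sketch below checks them on hypothetical measurements (all numbers invented):

```python
# Hypothetical measurements over a T = 60 sec interval.
T      = 60.0     # measurement interval, sec
C0     = 1200     # requests completed by the system
C_disk = 3600     # completions (visits) at the disk
B_disk = 28.8     # disk busy time, sec
R      = 0.5      # measured average response time, sec

X0     = C0 / T                 # system throughput = 20 requests/sec
X_disk = C_disk / T             # disk throughput = 60 visits/sec
U_disk = B_disk / T             # measured disk utilization = 0.48
S_disk = B_disk / C_disk        # average service time per visit = 8 ms
V_disk = X_disk / X0            # Forced Flow Law: V_i = X_i / X0 = 3 visits/request
D_disk = U_disk / X0            # Service Demand Law: D_i = U_i / X0 = 24 ms/request
N      = X0 * R                 # Little's Law: N = X0 * R = 10 requests in system

assert abs(U_disk - X_disk * S_disk) < 1e-9   # Utilization Law: U_i = X_i * S_i
assert abs(D_disk - V_disk * S_disk) < 1e-9   # D_i = V_i * S_i
print(U_disk, V_disk, round(D_disk, 3), N)
```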

Figure 5.3. Example C/S system for the capacity planning methodology.

[Diagram: a 100 Mbps FDDI ring (LAN5) interconnects the site's LANs: a 16 Mbps token ring (LAN4) with an SQL file server and 100 NT clients; a 10 Mbps Ethernet (LAN1) with 120 NT clients; LAN3 with the ftp proxy, telnet proxy, Web proxy, and e-mail servers; and LAN2 with 50 Unix workstations and a RISC-type file server with 512 GB of RAID-5 storage. The site connects to the Internet.]

Figures 5.10/5.11. High-level and detailed QN models of LAN3.

[Diagrams: the high-level queuing network model of LAN3 represents the 100 Windows clients, LAN3, the FDDI ring, the Web server, and the Internet as single queues; the detailed model expands the Web server into its CPUs and disks.]