
1 CS6320 – Performance: more details. L. Grewe

2 System Architecture
[Diagram] Client → Web Server (Tier 1) → Application Server (Tier 2) → Database Server / DBMS (Tier 3)

3 Performance Desires and Approaches
Improving performance and reliability to provide:
– Higher throughput
– Lower latency (i.e., response time)
– Higher availability
Some approaches:
– Scaling/replication (how performance, redundancy, and reliability relate to scalability)
– Load balancing
– Web caching
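To make "throughput" and "latency" concrete, here is a minimal Python sketch (not from the slides) that times a request loop and reports both metrics; handle_request is a hypothetical stand-in for real work:

```python
import time
import statistics

def handle_request():
    time.sleep(0.01)  # simulate 10 ms of server work (placeholder)

def benchmark(n_requests=100):
    latencies = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        handle_request()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    print(f"throughput: {n_requests / elapsed:.1f} req/s")
    print(f"mean latency: {statistics.mean(latencies) * 1000:.1f} ms")
    print(f"p95 latency:  {sorted(latencies)[int(0.95 * n_requests)] * 1000:.1f} ms")

benchmark()
```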

4 Where to Apply Scalability
– To the network
– To individual servers
Make sure the network has capacity before scaling by adding servers.

5 An Example… but First, a Hardware Review
– Firewall: restricts traffic based on rules and can "protect" the internal network from intruders.
– Router: directs traffic to a destination based on the "best" path; can communicate between subnets.
– Switch: provides a fast connection between multiple servers on the same subnet.
– Load Balancer: takes incoming requests for one "virtual" server and redirects them to multiple "real" servers.

6 Switch: Connecting More than 2 Machines

7 Case Study: Retail eBusiness
This is the initial design. PROBLEM: the site is growing and has too many users – performance is inadequate.

8 Solution – Scaling
Scaling through replication of systems.

9 Initial Redesign
Scaling mostly the web servers. Problem: there is still only one entrance through the firewall for clients – a bottleneck.

10 The Redesign Again
The last design still had a bottleneck: all traffic coming in on one path. Here we split it into two "connected" paths, one primary and one redundant.

11 Performance, Redundancy, and Scalability
We scale for performance, but what about redundancy – what happens when the site goes down?

12 How to Get Rid of Single Points of Failure (SPOF)
Problem: in the last design, if services to the single geographical network go down, the site is down. Answer: replicate in different geographical locations.

13 Scaling Servers: Out or Up
Scale Out (horizontal) – we saw this in the previous design:
– Multiple servers; add more servers to scale
– Most commonly done with web servers
Scale Up (vertical):
– Fewer, larger servers with more internal resources
– Add more processors, memory, and disk space
– Most commonly done with database servers

14 Some Approaches to Scalability
Approaches:
– Farming
– Cloning
– RACS
– Partitioning
– RAPS
Load balancing
Web caching

15 Farming
Farm – the collection of all the servers, applications, and data at a particular site.
– Farms have many specialized services (e.g., directory, security, HTTP, mail, database, etc.)
This is about hardware scaling.

16 Simple Web Farm

17 Cloning
A service can be cloned on many replica nodes, each having the same software and data. Cloning offers both scalability and availability:
– If one node is overloaded, a load-balancing system can be used to allocate the work among the duplicates.
– If one node fails, the others can continue to offer service.
This is about service/software replication.
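A minimal sketch of the load-balancing idea above, assuming a simple round-robin policy over named clones (the clone names are illustrative; a real balancer would also do health checks and failover):

```python
import itertools

class RoundRobinBalancer:
    """Cycle incoming requests across identical clones of a service."""
    def __init__(self, clones):
        self._cycle = itertools.cycle(clones)

    def pick(self):
        return next(self._cycle)

balancer = RoundRobinBalancer(["web1", "web2", "web3"])   # hypothetical clone names
print([balancer.pick() for _ in range(4)])   # ['web1', 'web2', 'web3', 'web1']
```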

18 Two Clone Design Styles
Shared-nothing is simpler to implement and scales I/O bandwidth as the site grows. Shared-disk is more economical for large or update-intensive databases.

19 Reliable Array of Cloned Services (RACS)
RACS (Reliable Array of Cloned Services) – a collection of clones for a particular service.
– Shared-nothing RACS: each clone duplicates the storage locally; updates must be applied to every clone's storage.
– Shared-disk RACS (cluster): all the clones share a common storage manager; the storage server should be fault-tolerant; subtle algorithms are needed to manage updates (cache invalidation, lock managers, etc.).
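A toy illustration of the shared-nothing semantics, where every update must reach each clone's local store; this is a sketch of the idea, not how a real cluster manager is implemented:

```python
class Clone:
    def __init__(self, name):
        self.name = name
        self.store = {}                        # clone-local storage (shared nothing)

class SharedNothingRACS:
    def __init__(self, clones):
        self.clones = clones

    def write(self, key, value):
        for clone in self.clones:              # an update must reach every clone
            clone.store[key] = value

    def read(self, key, pick=0):
        return self.clones[pick].store[key]    # any clone can serve the read

racs = SharedNothingRACS([Clone("c1"), Clone("c2")])
racs.write("page:/index.html", "<html>...</html>")
print(racs.read("page:/index.html", pick=1))   # served from the second clone
```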

20 Clones and RACS can be used for read-mostly applications with low consistency requirements, e.g., web servers, file servers, security servers.
Requirements of cloned services:
– automatic replication of software and data to new clones
– automatic request routing to load-balance the work
– routing around failures
– recognizing repaired and new nodes

21 Some Definitions – Partitions and Packs
Data objects (mailboxes, database records, business objects, …) are partitioned among storage and server nodes. For availability, the storage elements may be served by a pack of servers.

22 Partitioning grows a service by:
– duplicating the hardware and software
– dividing the data among the nodes (by object), e.g., mail servers by mailboxes
It should be transparent to the application: requests to a partitioned service are routed to the partition with the relevant data. Partitioning by itself does not improve availability, since each datum is stored in only one place; in practice, partitions are implemented as a pack of two or more nodes that provide access to the storage.
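One common way to route a request to the partition that owns its data is to hash the object key; here is a sketch with hypothetical node names (real systems often use consistent hashing so nodes can be added without remapping everything):

```python
import hashlib

NODES = ["mail1", "mail2", "mail3"]            # hypothetical partition servers

def node_for(mailbox: str) -> str:
    """Route every request for a mailbox to the one node that owns it."""
    h = int(hashlib.md5(mailbox.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

print(node_for("alice"))   # always the same node, so alice's data lives in one place
```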

23 Taxonomy of Scalability Designs

24 Reliable Array of Partitioned Services (RAPS)
RAPS (Reliable Array of Partitioned Services) – nodes that support a packed, partitioned service; comes in shared-nothing and shared-disk variants. Update-intensive and large database applications are better served by routing requests to servers dedicated to serving a partition of the data (RAPS).

25 Some Approaches to Scalability
Approaches:
– Farming
– Cloning
– RACS
– Partitioning
– RAPS
Load balancing
Web caching

26 Load Balancing / Sharing

27 Load Management
Load balancers can operate at different OSI layers:
– Round-robin DNS
– Layer-4 (transport layer, e.g., TCP) switches
– Layer-7 (application layer) switches

28 The 7 OSI (Open Systems Interconnection) Layers (a model of a network)

29 Load Balancing Strategies
– Flat architecture: DNS rotation, switch-based, MagicRouter
– Hierarchical architecture
– Locality-Aware Request Distribution

30 DNS Rotation – Round-Robin Cluster

31 Flat Architecture – DNS Rotation
DNS rotates the IP addresses of a Web site, treating all nodes equally.
Pros:
– A simple clustering strategy
Cons:
– Client-side IP caching: load imbalance, connections to a down node
– A hot-standby machine (failover) is expensive and inefficient
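A sketch of what round-robin DNS rotation does, assuming a toy authoritative server; note how a client caching the first answer would defeat the rotation, which is exactly the load-imbalance con listed above:

```python
from collections import deque

class RoundRobinDNS:
    """Toy authoritative server that rotates A records on every query."""
    def __init__(self, site, addresses):
        self.records = {site: deque(addresses)}

    def resolve(self, site):
        addrs = self.records[site]
        answer = list(addrs)      # answer in current order; clients use the first IP
        addrs.rotate(-1)          # the next query sees a different node first
        return answer

dns = RoundRobinDNS("www.example.com", ["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(dns.resolve("www.example.com")[0])   # 10.0.0.1
print(dns.resolve("www.example.com")[0])   # 10.0.0.2, unless a client cached the old answer
```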

32 Load Balance Idea 2: Switch-Based Cluster

33 Flat Architecture – Switch-Based
Switching products (Cisco, Foundry Networks, F5 Labs):
– Cluster servers behind one IP
– Distribute the workload (load balancing), e.g., round-robin
– Failure detection
Problem: not sufficient for dynamic content.

34 Problems with DNS or Switch Load Balancing
– Not sufficient for dynamic content
– Adding/removing nodes can be involved: manual configuration required; limited load balancing in the switch
– Simple algorithms do not consider current loads

35 Load Sharing Strategies
– Flat architecture: DNS rotation, switch-based, MagicRouter
– Hierarchical architecture
– Locality-Aware Request Distribution

36 Hierarchical Architecture
Master/slave architecture with two levels:
– Level I (master): static and dynamic content
– Level II (slaves): only dynamic content
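A minimal sketch of the master/slave dispatch rule, assuming static content is identified by file extension (the slave names are illustrative):

```python
import itertools

SLAVES = itertools.cycle(["slave1", "slave2"])   # hypothetical slave nodes

def master_handle(path: str) -> str:
    """Master answers static files itself; dynamic work goes to a slave."""
    if path.endswith((".html", ".css", ".png")):
        return f"master served static {path}"
    return f"{next(SLAVES)} ran dynamic {path}"   # e.g., a CGI script

print(master_handle("/index.html"))      # master served static /index.html
print(master_handle("/cgi-bin/search"))  # slave1 ran dynamic /cgi-bin/search
```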

37 Hierarchical Architecture
[Diagram: master/slave (M/S) architecture]

38 Hierarchical Architecture

39 Hierarchical Architecture Benefits
– Better failover support: the master restarts a job if a slave fails.
– Separates dynamic and static content: resource-intensive jobs (CGI scripts) are run by slaves, so the master can return static results quickly.

40 Locality-Aware Request Distribution
Content-based distribution:
– Improved hit rates
– Increased effective secondary storage
– Specialized back-end servers
Architecture:
– The front end distributes requests
– The back ends process requests
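A sketch of the locality-aware (content-based) idea: the front end remembers which back end served a URL and keeps sending that URL there, so the back-end cache stays warm. The assignment rule here is a simplification of the real LARD algorithm, which also accounts for back-end load:

```python
BACKENDS = ["be1", "be2", "be3"]   # hypothetical back-end servers
assignment = {}                    # URL -> back end that has it cached

def route(url: str) -> str:
    if url not in assignment:      # first request for this URL: spread assignments
        assignment[url] = BACKENDS[len(assignment) % len(BACKENDS)]
    return assignment[url]         # repeats hit the same back end's warm cache

print(route("/products/42"))   # be1
print(route("/products/42"))   # be1 again, so its cache stays hot
```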

41 Load Sharing Strategies
– Flat architecture: DNS rotation, switch-based, MagicRouter
– Hierarchical architecture
– Locality-Aware Request Distribution

42 Locality-Aware Request Distribution
[Diagram: the naïve strategy]

43 Some Approaches to Scalability
Approaches:
– Farming
– Cloning
– RACS
– Partitioning
– RAPS
Load balancing
Web caching

44 Web Caching

45 Web Proxy
An intermediary between clients and Web servers; used to implement firewalls and, with proxy caching, to improve performance.
[Diagram: Client (browser) ↔ Proxy with caching ↔ Web server]

46 Web Architecture
Client (browser), proxy, Web server.
[Diagram: Client (browser) → Firewall → Proxy → Web server]

47 Web Caching Not Only at Proxy Servers
Caching popular objects is one way to improve Web performance. Web caching can occur at clients, proxies, and servers.

48 Advantages of Web Caching
– Reduces bandwidth consumption (decreases network traffic)
– Reduces access latency in the case of a cache hit
– Reduces the workload of the Web server
– Enhances the robustness of the Web service
– Usage history collected by a proxy cache can be used to determine usage patterns and to drive cache replacement and prefetching policies

49 Disadvantages of Web Caching
– Stale data can be served due to the lack of proper updating
– Latency may increase in the case of a cache miss
– A single proxy cache is always a bottleneck, and a single proxy is a single point of failure
– Client-side and proxy caches reduce the hits on the original server

50 Web Caching Issues
– Cache replacement
– Prefetching
– Cache coherency
– Dynamic data caching

51 Cache Replacement
Web objects have varying characteristics – different sizes, access costs, and access patterns – so traditional replacement policies such as LRU (Least Recently Used), LFU (Least Frequently Used), and FIFO (First In, First Out) do not work well. Replacement policies designed for Web objects fall into two classes:
– key-based
– cost-based

52 Caching – Two Replacement Schemes
Key-based replacement policies:
– Size: evicts the largest objects first
– LRU-MIN: evicts the least recently used object among those with the largest log(size)
– Lowest Latency First: evicts the object with the lowest download latency
Cost-based replacement policies:
– A cost function of factors such as last access time, cache entry time, transfer time cost, and so on
– Least Normalized Cost Replacement: based on access frequency, transfer time cost, and size
– Server-assisted scheme: based on fetching cost, size, next request time, and cache prices during request intervals
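A sketch of the key-based "Size" policy above: when space runs out, evict the largest cached object first. This is illustrative only; a production cache would combine size with recency or cost information:

```python
class SizeCache:
    """Toy web cache using the 'Size' policy: evict largest objects first."""
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.objects = {}                                  # url -> (size, data)

    def put(self, url, data):
        size = len(data)
        self.used -= self.objects.pop(url, (0, None))[0]   # replace if already present
        while self.objects and self.used + size > self.capacity:
            victim = max(self.objects, key=lambda u: self.objects[u][0])
            self.used -= self.objects.pop(victim)[0]       # evict the largest object
        if size <= self.capacity:                          # skip objects too big to fit
            self.objects[url] = (size, data)
            self.used += size

cache = SizeCache(capacity_bytes=1000)
cache.put("/big.html", b"x" * 800)
cache.put("/small.css", b"y" * 300)   # forces eviction of /big.html
```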

53 Caching – Prefetching
The benefit from caching alone is limited: the maximum cache hit rate is no more than roughly 40–50%. To increase the hit rate, anticipate future document requests and prefetch documents into caches. Documents to prefetch are those:
– considered popular at servers
– predicted to be accessed by the user soon, based on the access pattern
Prefetching can reduce client latency at the expense of increased network traffic.

54 Cache Coherence
Caches may provide users with stale documents. HTTP provides several mechanisms for cache coherence:
– GET: retrieves a document given its URL
– Conditional GET: a GET combined with the If-Modified-Since header
– Pragma: no-cache: this header indicates that the object must be reloaded from the server
– Last-Modified: returned with every GET response, indicating the last modification time of the document
Two possible semantics:
– Strong cache consistency
– Weak cache consistency

55 Strong Cache Consistency
Client validation (polling-every-time):
– The client sends an If-Modified-Since header with each access to the resource.
– The server responds with a Not Modified message if the resource has not changed.
Server invalidation:
– Whenever a resource changes, the server sends invalidations to all clients that may have cached the resource.
– The server must keep track of which clients to notify.
– The server may send invalidations to clients that are no longer caching the resource.
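A client-validation sketch using Python's standard library: poll with If-Modified-Since on every access and treat a 304 Not Modified reply as "cached copy still valid" (the URL and timestamp in the usage comment are hypothetical):

```python
import urllib.request
import urllib.error

def validate(url, last_modified):
    """Return fresh bytes, or None if the cached copy is still valid."""
    req = urllib.request.Request(url, headers={"If-Modified-Since": last_modified})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read()            # 200 OK: resource changed, refresh the cache
    except urllib.error.HTTPError as err:
        if err.code == 304:               # 304 Not Modified: cache is still fresh
            return None
        raise

# Example call (hypothetical URL and timestamp):
# body = validate("http://example.com/page.html", "Sat, 29 Oct 1994 19:43:31 GMT")
```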

56 Weak Cache Consistency
Adaptive TTL (time-to-live):
– Adjusts a document's TTL based on its lifetime (age): if a file has not been modified for a long time, it tends to stay unchanged.
– This approach can be shown to keep the probability of serving stale documents within reasonable bounds (< 5%). Most proxy servers use this mechanism, but it offers no strong guarantee on document staleness.
Piggyback invalidation:
– Piggyback Cache Validation (PCV): whenever a client communicates with a server, it piggybacks a list of cached, but potentially stale, resources from that server for validation.
– Piggyback Server Invalidation (PSI): a server piggybacks, on a reply to a client, the list of resources that have changed since the client's last access.
– If access intervals are small, PSI works well; if the gaps are long, PCV works better.
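A sketch of adaptive TTL: trust a document for a fraction of its current age. The 0.2 factor and the one-day cap are illustrative choices, not standardized values:

```python
import time

def adaptive_ttl(last_modified_ts, factor=0.2, max_ttl=86400):
    """TTL grows with document age: long-unchanged files are trusted longer."""
    age = time.time() - last_modified_ts       # seconds since last modification
    return min(factor * age, max_ttl)          # seconds before revalidation

# A file untouched for 10 days would get a ~2-day TTL, capped here at 1 day.
print(adaptive_ttl(time.time() - 10 * 86400))  # -> 86400 (the cap)
```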

57 Dynamic Data Caching
Non-cacheable data: authenticated data, dynamically generated server data, etc. Two questions arise: how to make more data cacheable, and how to reduce the latency of accessing non-cacheable data.
Active Cache:
– Allows servers to supply cache applets attached to documents.
– The cache applets are invoked on cache hits to finish the necessary processing without contacting the server.
– Saves bandwidth at the expense of CPU cost; due to the significant CPU overhead, user access latencies can be much larger than without caching dynamic objects.

58 Dynamic Data Caching
Web server accelerator:
– Resides in front of one or more Web servers.
– Provides an API that allows applications to explicitly add, delete, and update cached data; both static and dynamic data can be cached.
– Example: the official Web site for the Olympic Winter Games – whenever new content became available, updated Web pages reflecting the changes were made available quickly. Data Update Propagation (DUP, IBM Watson) was used to improve performance.

59 Dynamic Data Caching – Data Update Propagation (DUP)
– Maintains data-dependence information between cached objects and the underlying data that affect their values.
– Upon any change to the underlying data, determines which cached objects are affected by the change; such objects are then either invalidated or updated.
– With DUP, the 1998 Olympic Winter Games official Web site achieved close to a 100% cache hit rate; without DUP, the 1996 Olympic Games official Web site achieved 80%.
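A toy version of the DUP idea: keep a map from underlying data records to the cached pages built from them, and invalidate exactly the affected pages on an update (the record and page names are made up for illustration):

```python
deps = {   # underlying record -> cached pages that depend on it
    "scores:event42": ["/results/event42.html", "/medals.html"],
}
cache = {
    "/results/event42.html": "<html>old results</html>",
    "/medals.html": "<html>old medal table</html>",
    "/home.html": "<html>unaffected</html>",
}

def on_update(record):
    for page in deps.get(record, []):
        cache.pop(page, None)        # invalidate (or regenerate) only affected pages

on_update("scores:event42")
print(list(cache))                   # only /home.html survives
```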

60 Towards Large-Scale Systems… and the Need for Clustering
Large-scale systems (think Yahoo!, YouTube, eBay, Amazon, Google)…

61 One Large-Scale Need – High Availability
High availability is a major driving requirement behind large-scale system design. Basically, it means the system is available (and responding) a high percentage of the time.
– Uptime is typically measured in "nines"; traditional infrastructure systems such as the phone system aim for four or five nines ("four nines" means 0.9999 uptime, or roughly 60 seconds of downtime per week).

62 High Availability – How to Measure
– Mean time between failures (MTBF) and mean time to repair (MTTR)
– uptime = (MTBF − MTTR) / MTBF
– yield = queries completed / queries offered
– harvest = data available / complete data
– DQ principle: data per query × queries per second → constant (the total data delivered). This is a system-level physical bottleneck set by total I/O bandwidth (disk or network).
– The optimization goal is to minimize the utilization of the bottleneck resource.
– Fault tolerance is a trade-off between D and Q; graceful degradation is a goal.
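Plugging illustrative numbers into these formulas (all values below are assumed, purely to show the arithmetic):

```python
mtbf = 30 * 24.0                 # hours between failures (assumed)
mttr = 2.0                       # hours to repair (assumed)
uptime = (mtbf - mttr) / mtbf    # ~0.9972
yield_ = 9_990 / 10_000          # queries completed / queries offered
harvest = 0.95                   # data available / complete data
dq = harvest * 1_000             # data per query x queries/sec, capped by I/O bandwidth
print(f"uptime={uptime:.4f} yield={yield_:.4f} harvest={harvest:.2f} DQ={dq:.0f}")
```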

63 Using High-Availability Metrics to Compare Replication vs. Partitioning
– Replication of data on 2 nodes – after 1 failure: 100% harvest (D), 50% yield (Q)
– Partitioning of data across 2 nodes – after 1 failure: 50% harvest (D), 100% yield (Q)

64 Cluster Example
A small-to-mid-sized cluster example. Large deployments like Amazon's have thousands of nodes.

65 Some Tips
– Get the basics right. Start with a professional data center and layer-7 switches, and use symmetry to simplify analysis and management.
– Decide on your availability metrics. Everyone should agree on the goals and how to measure them daily. Remember that harvest and yield are more useful than just uptime.
– Focus on MTTR at least as much as MTBF. Repair time is easier to affect for an evolving system and has just as much impact.
– Understand load redirection during faults. Data replication is insufficient for preserving uptime under faults; you also need excess DQ.
– Graceful degradation is a critical part of a high-availability strategy. Intelligent admission control and dynamic database reduction are the key tools for implementing the strategy.
– Use DQ analysis on all upgrades. Evaluate all proposed upgrades ahead of time, and do capacity planning.
– Automate upgrades as much as possible. Develop a mostly automatic upgrade method, such as rolling upgrades. Using a staging area will reduce downtime, but be sure to have a fast, simple way to revert to the old version.

