
1 Evaluation of Data and Request Distribution Policies in Clustered Servers
Adnan Khaleel and A. L. Narasimha Reddy
Texas A&M University
adnan,reddy@ee.tamu.edu

2 Introduction
- Internet use has skyrocketed: 74 MB/month in '92, several gigabytes/hour today
- The trend can be expected to grow in the coming years
- Increasing load has placed burdens on hardware and software beyond their original designs

3 Introduction (cont'd)
- Clustered servers are viable solutions

4 Issues in Clustered Servers
- Need to present a single server image
  - DNS aliasing, magic routers, etc.
- Multiplicity of back-end servers:
  - How should data be organized on the back-ends?
  - How should incoming requests be distributed amongst the back-end servers?

5 Issues in Clustered Servers (cont'd)
- Data Organization: Disk Mirroring
  - Identical data maintained on all back-end servers
  - Every machine is able to service requests without having to access files on other machines
  - Several redundant machines present, so good system reliability
  - Disadvantages:
    - Inefficient use of disk space
    - Data cached on several nodes simultaneously

6 Issues in Clustered Servers (cont'd)
- Data Organization (cont'd): Disk Striping
  - Borrowed from network file servers
  - The entire data space is divided over all the back-end servers (see the sketch below)
  - A portion of a file may reside on several machines
  - Improved reliability through parity protection
  - For large file accesses, automatic load distribution
  - Better access times
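As a concrete illustration of striping, here is a minimal sketch of a round-robin block-to-server mapping; the 64 KB stripe unit is an assumed value, not a parameter given in the slides.

```python
STRIPE_UNIT = 64 * 1024  # bytes per stripe unit (assumed value, not from the slides)

def stripe_location(file_offset: int, num_servers: int):
    """Map a byte offset of a striped file to (back-end index, local block number)."""
    block = file_offset // STRIPE_UNIT
    server = block % num_servers        # consecutive blocks rotate across the back-ends
    local_block = block // num_servers  # where that block lives on its back-end
    return server, local_block

# With 4 back-ends, consecutive 64 KB blocks land on servers 0, 1, 2, 3, 0, ...
print(stripe_location(0, 4), stripe_location(64 * 1024, 4), stripe_location(256 * 1024, 4))
```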

7 Issues in Clustered Servers (cont'd)
- Locality
  - Taking advantage of files already cached in a back-end server's memory
  - For a clustered server system: requests accessing the same data should be sent to the same set of servers

8 Issues in Clustered Servers (cont'd)
- Distribution vs. Locality?
  - Load-balanced system: distribute requests evenly among the back-end servers
  - Improved hit rate and response time: maximize locality
- Current studies focus on only one aspect and ignore the other

9 Request Distribution Schemes (cont'd)
- Round Robin Request Distribution

10 Request Distribution Schemes (cont'd)
- Round Robin Request Distribution (cont'd)
  - Requests are distributed in a sequential manner (see the sketch below)
  - Results in ideal distribution
  - Does not take server loading into account
    - Weighted Round Robin
    - Two-Tier Round Robin
  - Cache hits are purely coincidental
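To make the dispatch rule concrete, here is a minimal sketch of plain and weighted round-robin selection at the front-end; the server names and the integer weights are hypothetical, and the weighted variant shown is just one common way to bias the rotation toward faster back-ends.

```python
import itertools

class RoundRobinDispatcher:
    """Cycle through the back-ends regardless of load or cache content."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

class WeightedRoundRobinDispatcher:
    """Round robin in which higher-capacity back-ends appear proportionally more often."""
    def __init__(self, server_weights):  # e.g. {"be0": 2, "be1": 1}: be0 gets twice the requests
        expanded = [s for s, w in server_weights.items() for _ in range(w)]
        self._cycle = itertools.cycle(expanded)

    def pick(self):
        return next(self._cycle)

rr = RoundRobinDispatcher(["be0", "be1", "be2", "be3"])
print([rr.pick() for _ in range(6)])  # ['be0', 'be1', 'be2', 'be3', 'be0', 'be1']
```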

11 Request Distribution Schemes (cont'd)
- Round Robin Request Distribution (cont'd)
  - Every back-end server has to cache the entire content of the server
    - Unnecessary duplication of files in the caches
    - Inefficient use of cache space
  - Back-ends may see different queuing times due to uneven hit rates

12 Request Distribution Schemes (cont'd)
- File-Based Request Distribution

13 Request Distribution Schemes (cont'd)
- File-Based Request Distribution (cont'd)
  - Locality-based distribution
  - Partition the file-space and assign a partition to each back-end server (see the sketch below)
  - Advantages:
    - Does not suffer from duplicated data in the caches
    - Based on access patterns, can yield high hit rates
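The slides do not say how the file-space partitioning is computed, so the following sketch simply hashes the requested path to a back-end; this is one illustrative way to keep all requests for a given file on the same server.

```python
import hashlib

def file_partition(path: str, num_servers: int) -> int:
    """Send every request for the same file to the same back-end by hashing its path."""
    digest = hashlib.md5(path.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_servers

# All requests for /images/logo.gif hit one back-end, so its cache stays warm for that file.
print(file_partition("/images/logo.gif", 4))
print(file_partition("/index.html", 4))
```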

14 Request Distribution Schemes (cont'd)
- File-Based Request Distribution (cont'd)
  - Disadvantages:
    - How to determine the file-space partitioning?
      - Difficult to partition so that requests load the back-ends evenly
      - Dependent on client access patterns; no single partitioning scheme can satisfy all cases
    - Some files will always be requested more than others
    - Locality is the primary concern; distribution is ignored
      - The hope is that the partitioning achieves the distribution

15 Request Distribution Schemes (cont'd)
- Client-Based Request Distribution

16 Request Distribution Schemes (cont'd)
- Client-Based Request Distribution (cont'd)
  - Also locality based
  - Partition the client-space and assign a partition to each back-end server
  - Advantages and disadvantages similar to file-based:
    - Difficult to find an ideal partitioning scheme
    - Ignores distribution

17 Request Distribution Schemes (cont'd)
- Client-Based Request Distribution (cont'd)
  - Slightly modified from the DNS scheme used in the Internet
    - Allows flexibility in the client-to-server mapping
  - A TTL is set during the first resolution
  - On expiration, the client is expected to re-resolve the name
  - Possibly, different TTLs could be used for different workload characteristics
  - However, clients ignore the TTL
  - Hence, a STATIC scheme (see the sketch below)
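To make the client-based mapping concrete, here is a minimal sketch that hashes each client address into a partition and caches the partition-to-server mapping with a TTL, mimicking the DNS-style behaviour described above; the partition count, the TTL value, and the modulo assignment rule are all assumptions for illustration.

```python
import time

class ClientBasedMapper:
    """Hash clients into partitions; each partition's server mapping expires after a TTL."""
    def __init__(self, servers, num_partitions=64, ttl=300.0):
        self.servers = servers
        self.num_partitions = num_partitions  # assumed partition count
        self.ttl = ttl                        # seconds, analogous to a DNS TTL
        self.mapping = {}                     # partition -> (server, expiry time)

    def lookup(self, client_ip: str) -> str:
        partition = hash(client_ip) % self.num_partitions
        now = time.time()
        entry = self.mapping.get(partition)
        if entry is None or now >= entry[1]:
            # Static scheme: re-resolve with a fixed rule (partition modulo server count).
            server = self.servers[partition % len(self.servers)]
            self.mapping[partition] = (server, now + self.ttl)
            return server
        return entry[0]

m = ClientBasedMapper(["be0", "be1", "be2", "be3"])
print(m.lookup("192.168.1.17"), m.lookup("192.168.1.17"))  # same client maps to the same back-end
```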

18 Request Distribution Schemes (cont'd)
- Locality-Aware Request Distribution (LARD) [5]
  - Broadly based on the file-based scheme
  - Addresses the issue of load balancing
  - Each file is assigned a dynamic set of servers instead of just one server

19 Request Distribution Schemes (cont'd)
- LARD (cont'd): Technique (see the sketch below)
  - On the first request for a file, assign the least loaded back-end
  - On subsequent requests for the same file:
    - Determine the max/min loaded servers in the file's assigned set
    - If (max loaded server > high threshold) OR (a server exists in the cluster with load < low threshold), then add the current least loaded server to the set and assign it to service the request
    - Else, assign the min loaded server in the set to service the request
  - If any server in the set has been inactive for longer than time T, remove it from the set
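A minimal sketch of the assignment rule above; the threshold values, the inactivity timeout, and the use of outstanding requests as the load metric are assumptions for illustration, not values from the paper.

```python
import time

HIGH_THRESHOLD = 16   # assumed: outstanding requests above which a back-end is "too busy"
LOW_THRESHOLD = 2     # assumed: a back-end this idle justifies adding a new server to the set
INACTIVE_T = 30.0     # assumed: seconds before an unused server leaves a file's set

class LardFrontEnd:
    """Sketch of the assignment rule summarized on the slide above."""
    def __init__(self, servers):
        self.load = {s: 0 for s in servers}  # outstanding requests per back-end
        self.server_sets = {}                # file -> {server: last time it served the file}

    def _least_loaded(self, candidates):
        return min(candidates, key=lambda s: self.load[s])

    def assign(self, filename):
        now = time.time()
        srv_set = self.server_sets.setdefault(filename, {})
        # Drop servers that have not served this file for longer than T.
        for s in [s for s, t in srv_set.items() if now - t > INACTIVE_T]:
            del srv_set[s]
        if not srv_set:
            target = self._least_loaded(self.load)      # first request: least loaded overall
        else:
            max_srv = max(srv_set, key=lambda s: self.load[s])
            min_srv = self._least_loaded(srv_set)
            cluster_has_idle = self.load[self._least_loaded(self.load)] < LOW_THRESHOLD
            if self.load[max_srv] > HIGH_THRESHOLD or cluster_has_idle:
                target = self._least_loaded(self.load)  # grow the set with a new server
            else:
                target = min_srv                        # stay within the current set
        srv_set[target] = now
        self.load[target] += 1                          # caller decrements when the request completes
        return target
```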

20 Request Distribution Schemes (cont'd)
- LARD (cont'd)
  - File-space partitioning is done on the fly
  - Disadvantages:
    - A large amount of processing needs to be performed by the front-end
    - A large amount of memory is needed to maintain information on each individual file
    - Possible bottleneck as the system is scaled

21 Request Distribution Schemes (cont'd)
- Dynamic Client-Based Request Distribution
  - Based on the premise that file reuse among clients is high
  - The static scheme is completely ignorant of server loads
  - We propose a modification to the static client-based distribution that makes it actively adjust the distribution based on back-end loads

22 Request Distribution Schemes (cont'd)
- Dynamic Client-Based (cont'd)
  - Uses a time-to-live (TTL) for the server mappings within the cluster; the TTL is continuously variable (see the sketch below)
  - In heavily loaded systems:
    - An RR-type distribution is preferable, as queue times predominate
    - TTL values should be small
  - In lightly loaded systems:
    - TTL values should be large in order to maximize the benefits of locality
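The slides do not give the exact TTL update rule, so the following is only an illustrative sketch under assumed bounds, in which the TTL shrinks while the averaged cluster load is rising and grows while it is falling.

```python
MIN_TTL = 1.0    # seconds; behaves almost like round robin (assumed lower bound)
MAX_TTL = 300.0  # seconds; maximizes locality when the cluster is idle (assumed upper bound)

def adjust_ttl(current_ttl: float, avg_load: float, prev_avg_load: float) -> float:
    """Shrink the mapping TTL when load is trending up, grow it when load is trending down."""
    if avg_load > prev_avg_load:    # rising load: favor distribution
        current_ttl /= 2.0
    elif avg_load < prev_avg_load:  # falling load: favor locality
        current_ttl *= 2.0
    return max(MIN_TTL, min(MAX_TTL, current_ttl))
```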

23 Request Distribution Schemes (cont'd)
- Dynamic Client-Based (cont'd)
  - On TTL expiration, assign the client partition to the least loaded back-end server in the cluster
    - If more than one server has the same low load, choose randomly from that set (see the sketch below)
  - Allows a server using an IPRP [4]-type protocol to redirect a client to another server if it aids load balancing
    - Unlike DNS, clients cannot bypass this mechanism
    - Hence: Dynamic
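A minimal sketch of the least-loaded selection with a random tie-break, as described in the first bullet; the load values are hypothetical.

```python
import random

def pick_least_loaded(load: dict) -> str:
    """Return the least loaded back-end, breaking ties randomly among the minimum-load servers."""
    lowest = min(load.values())
    return random.choice([server for server, l in load.items() if l == lowest])

print(pick_least_loaded({"be0": 3, "be1": 1, "be2": 1, "be3": 5}))  # prints be1 or be2
```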

24 Request Distribution Schemes (cont'd)
- Dynamic Client-Based (cont'd)
  - The trend in server load is essential to determine whether the TTL is to be increased or decreased
  - Requests need to be averaged to smooth out transient activity
  - Moving-window averaging scheme: only requests that arrive within the window period actively contribute towards the load calculation (see the sketch below)
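A minimal sketch of a moving-window load estimate, assuming the load metric is the request arrival rate and the window length is an arbitrary illustrative value.

```python
from collections import deque

class MovingWindowLoad:
    """Per-server request rate over a sliding time window; arrivals outside the window age out."""
    def __init__(self, window_seconds: float = 10.0):  # window length is an assumed value
        self.window = window_seconds
        self.arrivals = deque()                        # timestamps of recent requests

    def record(self, now: float):
        self.arrivals.append(now)

    def load(self, now: float) -> float:
        while self.arrivals and now - self.arrivals[0] > self.window:
            self.arrivals.popleft()                    # discard requests that fell out of the window
        return len(self.arrivals) / self.window        # requests per second over the window
```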

25 Simulation Model
- Trace-driven simulation model
- Based on CSIM [8]
- Modelled an IBM OS/2 system for the various hardware parameters
- Several parameters could be modified:
  - Number of servers, memory size, CPU capacity in MIPS (50), disk access times, network communication time per packet, data organization (disk mirror or stripe)

26 Simulation Model (cont'd)
- In both disk mirroring and disk striping, data is cached at the request-servicing nodes
- In disk striping, data is also cached at the disk-end nodes

27 Simulation Model (cont'd)
- Traces
  - Representative of two arenas where clustered servers are currently used:
    - World Wide Web (WWW) servers
    - Network File System (NFS) servers

28 Simulation Model (cont'd)
- WEB Trace
  - ClarkNet WWW server, an ISP for the Metro Baltimore-Washington DC area
  - Collected over a period of two weeks
  - The original trace had 3 million records
  - Non-HTTP records such as CGI and ftp were weeded out
  - The resulting trace had 1.4 million records
  - Over 90,000 clients
  - Over 24,000 files, with a total occupancy of slightly under 100 MBytes

29 Simulation Model (cont'd)
- WEB Trace (cont'd)
  - Records had timestamps with 1-second resolution
    - This did not accurately represent the real manner of request arrivals
    - Requests that arrived in the same second were augmented with a randomly generated microsecond extension (see the sketch below)
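A minimal sketch of that augmentation step; the uniform sub-second distribution is an assumption, since the slides only say the extension is randomly generated.

```python
import random

def spread_within_second(whole_second_timestamps):
    """Add a random sub-second offset to each whole-second timestamp so arrivals are no longer simultaneous."""
    return sorted(t + random.uniform(0.0, 0.999999) for t in whole_second_timestamps)

# Three requests logged in the same second now arrive at distinct times within that second.
print(spread_within_second([869731200, 869731200, 869731200]))
```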

30 Simulation Model (cont'd)
- NFS Trace
  - Obtained from an Auspex [9] file server at UC Berkeley
  - Consists of post-client-cache misses
  - Collected over a period of one week
  - Had 231 clients and over 68,000 files, with a total occupancy of 1,292 MBytes

31 Simulation Model (cont'd)
- NFS Trace (cont'd)
  - The original trace had a large amount of backup data at night and over weekends; only daytime records were used in the simulation
  - Records had timestamps with microsecond resolution
- The cache was allowed to WARM UP prior to any measurements being made

32 Results - Effects of Memory Size
- NFS trace, disk stripe
  - Increased memory = increased cache space
[Figure: Response time for 4 back-end servers]

33 Results - Effects of Memory Size
- NFS trace, disk stripe
  - FB is better at extracting locality
  - RR hits are purely probabilistic
[Figure: Cache-hit ratio for 4 back-end servers]

34 Results - Effects of Memory Size
- WEB trace, disk stripe
  - The WEB trace has a smaller working set
  - An increase in memory has less of an effect
[Figure: Response time for 4 back-end servers]

35 Results - Effects of Memory Size
- WEB trace, disk stripe
  - Extremely high hit rates, even at 32 MBytes
  - FB is able to extract maximum locality
  - The distribution scheme has less of an effect on response time
  - Load distribution was acceptable for all schemes (RR best, FB worst)
[Figure: Cache hit rates for a 4 back-end system]

36 Results - Effects of Memory Size
- WEB trace, disk mirror
  - Very similar to disk striping
  - With smaller memory, hit rates are slightly lower as there is no disk-end caching
[Figure: Disk stripe vs. disk mirror; panels: Disk Stripe, Disk Mirror]

37 Results - Scalability Performance
- NFS trace, disk stripe
  - RR shows the least benefit
    - Due to probabilistic cache hits
[Figure: Number of servers vs. response time (128 MB memory)]

38 Results - Scalability Performance
- NFS trace, disk stripe: ROUND ROBIN
  - Drop in hit rates with more servers
  - Less "probabilistic" locality
[Figure: Cache hit rate vs. memory size and number of back-end servers]

39 Results - Scalability Performance
- NFS trace, disk mirror
  - RR performance worsens with more servers
  - All other schemes perform similarly to disk striping
[Figure: Number of servers vs. response time (128 MB)]

40 Results - Scalability Performance
- NFS trace, disk mirror
  - For RR: lower hit rates with more servers, lower response time
  - For RR, disk-end caching offers better hit rates under disk striping than under disk mirroring
[Figure: Cache hit rates for RR under disk striping vs. mirroring (128 MB); panels: Disk Stripe, Disk Mirror]

41 Results - Effects of Memory Size
- NFS trace, disk mirror
  - Similar effect of additional memory
  - Stagnation of hit rates in FB; DM does better than DS due to the caching of data at the disk end
  - RR exhibits better hit rates with DS than with DM: a greater variety of files in the cache
[Figure: Cache hit rates with disk mirror and disk striping]

42 Results - Disk Stripe vs. Disk Mirror
- The implicit distribution of load in disk striping produces low disk queues
[Figure: Queueing time under disk stripe and disk mirror, NFS trace with a 4 back-end system; panels: Disk Stripe, Disk Mirror]

43 Conclusion & Future Work
- RR: ideal distribution, but poor response rates due to the probabilistic nature of its cache hit rates
- File-based was the best at extracting locality, but its complete disregard of server loads led to poor load distribution
- LARD is similar to FB but gives better load distribution
- For the WEB trace, cache hit rates were so high that distribution did not play a role in determining response time

44 Conclusion & Future Work
- Dynamic CB addressed the static CB's ignorance of server load: better distribution on the NFS trace, better hit rates on the WEB trace
- Disk striping distributed requests over several servers, relieving disk queues but increasing server queues
- Future work:
  - Evaluating a flexible caching approach with round-robin distribution that can exploit the file-based caching methodology (in progress)
  - Throughput comparisons of the various policies
  - Impact of faster processors
  - Impact of dynamically generated web page content

