
1 Topics: ACID vs BASE, "Starfish" Availability, TACC Model, TranSend Measurements, SNS Architecture

2 Extensible Cluster-Based Network Services. Armando Fox, Steven Gribble, Yatin Chawathe, Eric Brewer, Paul Gauthier (University of California, Berkeley / Inktomi Corporation). Presenter: Ashish Gupta, Advanced Operating Systems.

3 Motivation. Proliferation of network-based services. Two critical issues must be addressed by Internet services: system scalability (incremental and linear scalability) and availability and fault tolerance (24x7 operation). Clusters of workstations meet these requirements.

4 Commodity PCs as the unit of scaling: good cost/performance and incremental scalability. "Embarrassingly parallel" workloads map well onto workstations. The redundancy of clusters masks transient failures.

5 Contribution of this work. Isolate the common requirements of cluster-based Internet apps into a reusable substrate: the Scalable Network Services (SNS) framework. Goal: complete separation of *ility concerns from application logic. Legacy code encapsulation. Insulate programmers from nasty engineering.

6 Contribution of this work. An architecture for SNS that exploits the strengths of cluster computing. Separation of the content of network services from their implementation. Encapsulation of low-level functions in a lower layer. An example of a new service. A programming model to go with the architecture.

7 The SNS architecture. (Diagram: front ends, caches, user profile database, workers, a manager providing load balancing and fault tolerance, and an administration interface, all connected over an interconnect.) Workers and front-ends: all control decisions for satisfying user requests are localized in the front-ends, i.e. which servers to invoke, accessing the profile database, notifying the end user, etc. Workers are simple and stateless. The behaviour of the service is defined entirely at the front-end. Analogy: processes in a Unix pipeline: ls -l | grep .pl | wc
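
To make the pipeline analogy concrete, here is a minimal sketch (not from the paper; the worker functions are hypothetical) of a front-end that holds all control logic and chains stateless workers the way a shell chains processes:

    # Hypothetical sketch: a front-end driving stateless workers like a Unix pipeline.
    # Each "worker" is a pure function of its input; all sequencing lives in the front-end.
    import os

    def list_worker(path):             # analogue of `ls -l`
        return os.listdir(path)

    def filter_worker(names, suffix):  # analogue of `grep`
        return [n for n in names if n.endswith(suffix)]

    def count_worker(items):           # analogue of `wc`
        return len(items)

    def front_end(request_path, suffix=".pl"):
        # The front-end decides which workers to invoke and in what order;
        # the workers keep no state between requests.
        names = list_worker(request_path)
        matches = filter_worker(names, suffix)
        return count_worker(matches)

    print(front_end("."))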

8 The SNS architecture (same diagram). Front-ends: the user interface to SNS. They queue requests for service and can maintain state for many simultaneous outstanding requests.

9 The SNS architecture (same diagram). User profile database: allows mass customization of request processing.

10 The SNS architecture (same diagram). Workers: caches and service-specific modules. Multiple instantiation is possible. They themselves just perform a specific task and are not responsible for load balancing or fault tolerance.

11 The SNS architecture (same diagram). Administrative interface: tracking and visualization of the system's behaviour; administrative actions.

12 The SNS architecture (same diagram). Manager: collects load information from the workers, balances load across workers, and spawns additional workers on increased load or faults.

13 The SNS architecture (same diagram). Workers and front-ends: all control decisions for satisfying user requests are localized in the front-ends, i.e. which servers to invoke, accessing the profile database, notifying the end user, etc. Workers are simple and stateless; the behaviour of the service is defined entirely at the front-end. Analogy: processes in a Unix pipeline: ls -l | grep .pl | wc

14 Separating content from implementation: a layered software model. Layers (bottom to top): SNS (scalable network service support), TACC (Transformation, Aggregation, Caching, Customization), Service (service-specific code). SNS provides scalability, load balancing, fault tolerance, and high availability.

15 The SNS Layer. Scalability: replicate well-encapsulated components; handle prolonged bursts with the notion of an overflow pool. Load balancing: centralized, which is simple to implement and predictable.

16 The SNS Layer. Soft state for fault tolerance and availability: process peers watch each other, and because there is no hard state, "recovery" == "restart". Load balancing, hot updates, and migration are "easy": shoot down a worker and it will recover; upgrade == install new software and shoot down the old; mostly graceful degradation.
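
A minimal supervisor sketch, assuming a hypothetical worker program, of what "recovery == restart" can look like in practice: a peer polls the worker process and simply re-spawns it when it exits, since there is no hard state to rebuild.

    # Hypothetical supervisor sketch: because workers hold only soft state,
    # recovery is just restarting the process.
    import subprocess, time

    WORKER_CMD = ["python", "distiller_worker.py"]   # hypothetical worker program

    def supervise():
        worker = subprocess.Popen(WORKER_CMD)
        while True:
            time.sleep(1)
            if worker.poll() is not None:              # worker exited or was "shot down"
                print("worker died, restarting")
                worker = subprocess.Popen(WORKER_CMD)  # restart == recovery

    if __name__ == "__main__":
        supervise()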

17 "Starfish" Availability: LB death. The FE detects the failure via a broken pipe/timeout and restarts the LB. (Architecture diagram with the LB/FT manager node failed.)

18 "Starfish" Availability: LB death. The FE detects the failure via a broken pipe/timeout and restarts the LB. The new LB announces itself (multicast), is contacted by the workers, and gradually rebuilds its load tables. If a partition heals, the extra LBs commit suicide. FEs operate using cached LB info during the failure.
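
A minimal sketch, with a made-up multicast address, port, and message format, of how a restarted LB might announce itself so that workers can re-register and the load tables can be rebuilt:

    # Hypothetical sketch: a restarted LB announces itself on a multicast group;
    # workers listening on this group would then contact it and report their load.
    import socket

    MCAST_GROUP = "239.1.1.1"   # assumed multicast address
    MCAST_PORT = 5007           # assumed port

    def announce_new_lb(lb_host, lb_port):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        message = f"NEW_LB {lb_host}:{lb_port}".encode()
        sock.sendto(message, (MCAST_GROUP, MCAST_PORT))
        sock.close()

    announce_new_lb("10.0.0.5", 4000)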


20 The TACC Model: a model for structuring services. Question: how do we build the services in the higher layers? Transformation: an operation on a single data object that changes its content. Aggregation: collecting data from several sources and collating it. Caching: storing/re-computing is easier than moving data across the Internet; can also store post-transformation (or post-aggregation) content. Customization: per user, for content generation; per device, for data delivery and content "packaging".

21 The TACC Model: a model for structuring services. A programming model based on composable building blocks. Many existing services fit well within the TACC model.
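
A toy sketch of what "composable building blocks" might look like in code: each TACC stage is a small function and a service is their composition. The stage names, cache, and profile fields are illustrative, not the paper's API.

    # Toy TACC composition: Transformation, Aggregation, Caching, Customization
    # as plain functions chained by the service author.

    cache = {}  # toy cache keyed by (urls, user preference)

    def transform(text, max_words):                  # T: change one object's content
        return " ".join(text.split()[:max_words])

    def aggregate(documents):                        # A: collate data from several sources
        return "\n".join(documents)

    def customize(profile):                          # C: per-user parameters
        return profile.get("max_words", 50)

    def service(urls, fetch, profile):               # composition of the blocks
        key = (tuple(urls), profile.get("max_words"))
        if key in cache:                             # C: caching
            return cache[key]
        docs = [transform(fetch(u), customize(profile)) for u in urls]
        result = aggregate(docs)
        cache[key] = result
        return result

    # Usage with a fake fetcher:
    print(service(["a", "b"], lambda u: f"contents of {u} " * 20, {"max_words": 5}))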

22 A Meta-Search Engine in TACC. Uses existing services to create a new service. Took 2.5 hours to write using the TACC framework. (Diagram: a metasearch web UI aggregating results from existing search engines on the Internet.)

23 An Example Service: TranSend

24 Datatype-Specific Distillation. Lossy compression that preserves semantic content. Tailor content for each client. Reduce end-to-end latency when the link is slow. Meaningful presentation for a range of clients. (Example figure: the same document excerpt, "1.2 The Remote Queue Model. We introduce Remote Queues (RQ), ...", shown at 65x and 6.8x compression.)
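
A minimal sketch of a datatype-specific distiller for images, assuming the Pillow library; the file names, size cap, and JPEG quality are illustrative:

    # Hypothetical image distiller: lossy, datatype-specific compression.
    # Requires Pillow (pip install Pillow); paths and parameters are illustrative.
    from PIL import Image

    def distill_image(src_path, dst_path, max_side=320, quality=40):
        img = Image.open(src_path)
        img.thumbnail((max_side, max_side))          # reduce resolution, keep aspect ratio
        img.convert("RGB").save(dst_path, "JPEG", quality=quality)  # recompress lossily

    distill_image("original.jpg", "distilled.jpg")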

25 TranSend SNS Components. Workers = distillers here. A simple restart mechanism provides fault tolerance. Each distiller took 5-6 hours to write. SNS fault tolerance removes worries about occasional bugs/crashes.

26 Measurements. Request generation: a high-performance HTTP request playback engine. Burstiness: handled by the overflow pool.
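
A toy sketch of an HTTP request playback harness using only the Python standard library; the URL trace and concurrency level are illustrative, and a real playback engine would also reproduce request timing and burstiness:

    # Toy HTTP playback engine: replays a list of traced URLs and measures latency.
    # The trace contents and URLs are assumptions for illustration.
    import time, urllib.request
    from concurrent.futures import ThreadPoolExecutor

    def play_one(url):
        start = time.time()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()
            return time.time() - start
        except Exception:
            return None   # count as a failed request

    def playback(urls, concurrency=16):
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            latencies = [l for l in pool.map(play_one, urls) if l is not None]
        return sum(latencies) / len(latencies) if latencies else float("nan")

    print(playback(["http://example.com/"] * 10))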

27 Load Balancing. Metric: queue length at the distillers. When load reaches a threshold, the manager spawns a new distiller.
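
A minimal sketch of the spawn-on-threshold policy described above; the queue-length threshold and the spawn hook are hypothetical:

    # Hypothetical manager policy: watch distiller queue lengths and spawn
    # a new distiller when the load metric crosses a threshold.
    QUEUE_THRESHOLD = 8   # assumed threshold, not from the paper

    def balance(queue_lengths, spawn_distiller):
        # Pick the least-loaded distiller for the next request,
        # and grow the pool if even it is over the threshold.
        target = min(range(len(queue_lengths)), key=lambda i: queue_lengths[i])
        if queue_lengths[target] >= QUEUE_THRESHOLD:
            spawn_distiller()          # manager-initiated growth on overload
        return target

    # Example: all distillers are backed up, so a new one gets spawned.
    print(balance([9, 12, 10], lambda: print("spawning new distiller")))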

28 Scalability. Strategy: begin with a minimal instance, increase offered load until saturation, then add more resources to eliminate the saturation. Observations: nearly perfect linear growth; 1 distiller ~ 23 requests/sec; front end ~ 70 requests/sec. Ultimate bottleneck: the shared components of the system (the manager and the SAN). The SAN could be the bottleneck for communication-intensive workloads (example of 10 Mb/s Ethernet); a topic for future research.
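
As a back-of-envelope check on these figures (treating them as rough per-node rates): one front end at about 70 requests/sec keeps roughly 70 / 23 ≈ 3 distillers busy, so a balanced minimal configuration pairs each front end with about three distillers, and capacity grows roughly linearly as such groups are added until the shared manager or SAN saturates.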

29 Conclusion. A layered architecture for cluster-based scalable network services. Service authors are shielded from the software complexity of automatic scaling, high availability, and failure management. New services are built as compositions of stateless workers. A useful paradigm for deploying new Internet services.

30 ACID vs BASE semantics. An approximate answer delivered quickly is more useful than the exact answer delivered slowly.

ACID                                         | BASE
Strong consistency: data precise or not OK   | Weak consistency: stale data OK
Availability??? (not always)                 | Availability first
Focus on "commit"                            | Best effort
Guarantees accurate answers                  | Approximate answers OK
Difficult evolution                          | Easy evolution
Conservative (pessimistic)                   | Aggressive (optimistic)
                                             | Simpler, faster (?)

31 ACID vs BASE semantics. A search engine as a database: one big table; unknown but large growth; must be truly highly available. An approximate answer delivered quickly is more useful than the exact answer delivered slowly.

32 Database research is about ACID: Atomicity, Consistency, Isolation, Durability. But a DBMS would be too slow here: choose availability over consistency, with graceful degradation (it is OK to temporarily lose small random subsets of data due to faults). Replace ACID's guarantees with availability, graceful degradation, and performance: BASE = Basically Available, Soft State, Eventual Consistency.

33 Why BASE? Idea: focus on looser semantics rather than ACID semantics. ACID => data unavailable rather than available but inconsistent. BASE => data available, but it could be stale, inconsistent, or approximate. Real systems use BOTH semantics. Claim: BASE can lead to simpler systems and better performance. Performance: caching and avoidance of communication and some locks (e.g. ACID requires strict locking and communication with replicas for every write and for any reads without locks). Simpler: soft state leads to easy recovery and interchangeable components. BASE fits clusters well because of partial failure.

34 More BASE... Reduces the complexity of service implementation by trading consistency for simplicity, fault tolerance, and availability. Opportunities for better performance optimizations in the SNS framework: ACID requires durable and consistent state across partial failures; this is relaxed in the BASE model. Example: HotBot.

35 Thank You

36 Backup Slides

37 Question 1: Why are cluster-based network services well suited to Internet services?

38 Answer: The requirements are highly parallel (many independent simultaneous users), and the grain size typically corresponds to at most a few CPU seconds on a commodity PC.

39 Question 2: Why does the cluster-based network service use BASE semantics?

40 Answer: BASE semantics allow us to handle partial failure in clusters with less complexity and cost.

41 Question 3: When overflow machines are being recruited unusually often, what should be done?

42 Answer: It is time to add new machines.

43 Question 4: Does a front-end crash lose any information? If so, what kind of information is lost?

44 Answer: In-flight user requests will be lost; the user must handle the timeout and resend the request.


46 Clustering and Internet Workloads. Internet vs. "traditional" workloads, e.g. database workloads (TPC benchmarks) or traditional scientific codes (matrix multiply, simulated annealing and related simulations, etc.). Some characteristic differences: read-mostly; quality of service (best-effort vs. guarantees); task granularity. "Embarrassingly parallel"... why? HTTP is stateless with short-lived requests, and the web's architecture has already forced app designers to work around this (not obvious in 1990).

47 Meeting the Cluster Challenges. Software and programming models; partial failure and application semantics; system administration. Two case studies to contrast programming models: GLUnix goal: support "all" traditional Unix apps, providing a single system image. SNS/TACC goal: a simple programming model for Internet services (caching, transformation, etc.), with good robustness and easy administration.

