Slide 1
Designing Scalable Applications for the Cloud
Raghavan Subramanian, Associate Vice President, Head of Cloud-computing CoE, Infosys
© 2009 Infosys Technologies Limited
Slide 2: Agenda
- Overview of scalability
- Scale-up and scale-out
- Cloud design considerations
- Key principles of scale-out design
- A few techniques for transforming a scale-up design into a scale-out design
- Scale-out apps on Windows Azure
Slide 3: Overview of Scalability
Scalability is a property of a system, network, or process that indicates its ability either to handle a growing amount of work gracefully or to be enlarged. For example, it can refer to a system's capability to increase total throughput under increased load when resources (typically hardware) are added.
Slide 4: Scale-up / Vertical Scalability
1. Scale-up (vertical scalability) means adding more memory and CPUs to a single box.
2. It pays off in scenarios such as:
   - A large shared memory space
   - Many dependent threads
   - A tightly coupled internal interconnect
   - Small-scale applications
Slide 5: Scale-out / Horizontal Scalability
1. Scale-out (horizontal scalability) means adding more boxes of similar memory and CPU capacity.
2. It pays off in scenarios such as:
   - Work that can be broken up into smaller tasks
   - Activities that can run independently and as atomic units
   - Processing large volumes of data
Slide 6: Law of Diminishing Returns
In economics, diminishing returns (also called diminishing marginal returns) refers to how the marginal production of a factor of production progressively decreases as that factor is increased. In a production system with fixed and variable inputs (say, factory size and labor), each additional unit of the variable input (e.g., man-hours) yields smaller and smaller increases in output, also reducing each worker's mean productivity. Consequently, producing one more unit of output costs increasingly more, because ever-larger amounts of the variable input are being used to little effect. This concept is also known as the law of diminishing marginal returns or the law of increasing relative cost.
Slide 7: Concept of Linear Scalability
Linear scalability relative to load or demand means that, with fixed resources, performance decreases at a constant rate as load or demand increases.
Linear scalability relative to server resources means that, with constant load or demand, performance improves at a constant rate as resources are added.
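To make the second definition concrete, here is a minimal Python sketch. The throughput numbers are hypothetical, not from the slides: under linear scale-out, doubling the servers should roughly double the throughput for a fixed workload, and the ratio of observed to expected throughput gives a scaling efficiency.

```python
# Minimal illustration of the linear-scalability definition above (hypothetical numbers).
# With a fixed workload, performance should improve at a constant rate as servers are added.

baseline_servers = 2
baseline_throughput = 1_000.0  # requests/second measured with 2 servers (assumed)

# Observed throughput at larger cluster sizes (assumed measurements).
observed = {4: 1_950.0, 8: 3_700.0, 16: 6_800.0}

for servers, throughput in observed.items():
    expected = baseline_throughput * servers / baseline_servers  # perfectly linear scale-out
    efficiency = throughput / expected
    print(f"{servers:>2} servers: observed {throughput:>7.0f} req/s, "
          f"linear target {expected:>7.0f} req/s, efficiency {efficiency:.0%}")
```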
Slide 8: Scale-up vs. Scale-out: Which One Scores More?
- Administration effort: scale-up is easy to administer (one machine); scale-out is more complex.
- Hardware failure: with scale-up, a single failure could take out a large chunk of the data; with scale-out, a single failure does not impact all the data.
- "Infinite" scalability: scale-up hits the limit of a single machine's power; scale-out offers linear, practically unbounded scale.
- Fitment: scale-up suits small-scale scenarios/applications; scale-out suits large-scale applications or multiple applications with conflicting resource requirements.
- Hardware cost: with scale-up, buying a bigger machine is expensive ($(1 x 64-way processor) >>> $(64 x 1-way processors)); scale-out works out cheaper.
- Software license cost: with scale-up, seat-based licensing is cheaper, but CPU-based licenses tend to be expensive; with scale-out, licensing costs can tend to be higher, though scale-out encourages the use of free software.
Slide 9: Cloud Encourages Scale-out Design by Solving Some of the Scale-out Issues
- On-demand provisioning
- Infrastructure management outsourced to the cloud vendor
- Elastic scale: go up or down almost instantaneously
- Higher-level abstractions with fault tolerance and resilience built in
- Pay-per-use: optimized capital expenditure (hardware and software)
However, if apps don't scale within the data center, they certainly won't scale when simply moved to the cloud.
Slide 10: Key Principles of Scale-out Design
- Asynchronous, event-driven design
- Parallelization: divide and conquer, MapReduce / master-worker (a minimal sketch follows this slide)
- Idempotent operations
- De-normalized, partitioned data (sharding)
- Shared-nothing architecture
- Go stateless
- Fault tolerance via redundancy and replication (design for failure)
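As a rough illustration of the divide-and-conquer / MapReduce / master-worker principles above, the sketch below splits an in-memory word-count job into independent chunks, maps them on a pool of worker processes, and reduces the partial results. The corpus, chunk size, and word-count task are illustrative assumptions; a real system would distribute the work across machines rather than local processes.

```python
# A minimal MapReduce-style sketch of the divide-and-conquer / master-worker principle.
from collections import Counter
from multiprocessing import Pool

def map_chunk(lines):
    """Map step: count words in one independent chunk (no shared state)."""
    counts = Counter()
    for line in lines:
        counts.update(line.lower().split())
    return counts

def reduce_counts(partials):
    """Reduce step: merge partial results coming back from the workers."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    corpus = ["scale out by adding boxes", "scale up by adding memory",
              "shared nothing scale out design"] * 4
    # Master: split the work into chunks and farm them out to worker processes.
    chunks = [corpus[i:i + 3] for i in range(0, len(corpus), 3)]
    with Pool(processes=4) as pool:
        partials = pool.map(map_chunk, chunks)
    print(reduce_counts(partials).most_common(3))
```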
Slide 11: Typical Scale-out Design Pattern
[Diagram: a controller publishes tasks A through D to parallel, event-driven workers; status/watch/merge/publish steps track progress; failing work is retried and routed to a dead-letter queue with a compensate step for fault tolerance; idempotency and sharding are called out as supporting properties. An in-process sketch follows.]
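The slide's pattern can be approximated in a single process. The sketch below is a hedged stand-in, assuming an in-memory queue instead of a distributed broker: a controller publishes tasks, parallel workers consume them, a simple ledger makes processing idempotent under redelivery, and messages that keep failing are retried and finally parked on a dead-letter queue for compensation. The task names and retry limit are illustrative.

```python
# Controller/worker pattern with retries, idempotency, and a dead-letter queue (in-process sketch).
import queue
import threading

task_q: "queue.Queue[dict]" = queue.Queue()
dead_letter_q: "queue.Queue[dict]" = queue.Queue()
processed_ids = set()           # idempotency ledger: task ids already applied
ledger_lock = threading.Lock()
MAX_ATTEMPTS = 3

def handle(task: dict) -> None:
    if task["id"] == "task-C":          # simulate a poison message
        raise RuntimeError("downstream call failed")
    print(f"done: {task['id']}")

def worker() -> None:
    while True:
        task = task_q.get()
        if task is None:                # sentinel: shut this worker down
            task_q.task_done()
            return
        with ledger_lock:
            duplicate = task["id"] in processed_ids
        if duplicate:
            task_q.task_done()          # idempotent: ignore redelivery
            continue
        try:
            handle(task)
            with ledger_lock:
                processed_ids.add(task["id"])
        except Exception:
            task["attempts"] = task.get("attempts", 0) + 1
            if task["attempts"] < MAX_ATTEMPTS:
                task_q.put(task)        # retry
            else:
                dead_letter_q.put(task) # park for compensation/alerting
        finally:
            task_q.task_done()

if __name__ == "__main__":
    workers = [threading.Thread(target=worker) for _ in range(4)]
    for w in workers:
        w.start()
    for name in ["task-A", "task-B", "task-C", "task-D"]:   # controller publishes tasks
        task_q.put({"id": name})
    task_q.join()
    for _ in workers:                   # stop the workers
        task_q.put(None)
    for w in workers:
        w.join()
    print("dead-lettered:", [t["id"] for t in list(dead_letter_q.queue)])
```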
Slide 12: Techniques to Transform a Scale-up Design into a Scale-out Design
- Analyze the existing code flow: prepare diagrams (flowcharts, component diagrams).
- Identify logical units of work and extract them into separate components that can work independently (think contract-first design):
  - Each thread can be treated as a separate task.
  - Extract common functionality into a shared service.
- Make component interaction message- or document-driven:
  - Consolidate public variables into messages/entities.
  - Implement the DTO pattern.
- Re-arrange the code flow (activity and state diagrams): parallelize and asynchronize.
- Choose NoSQL over an RDBMS:
  - De-normalize data to avoid firing multiple join-like queries.
  - Partition data into smaller data sets that can be sharded (a small routing sketch follows this slide).
- Implement message queuing to build loosely coupled components.
- Ask yourself, "What if this fails?", and put failover mechanisms in place: dead-letter queues, retry handlers, alerts/notifications.
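For the partitioning/sharding step above, one common approach is stable hash-based routing of each record's key to a shard. The sketch below uses in-memory dictionaries as stand-ins for independent data stores; the shard count, the customer-id key scheme, and the de-normalized customer record are assumptions made for illustration.

```python
# Hash-based sharding sketch: each key always routes to the same partition,
# so every shard can be stored, queried, and scaled independently.
import hashlib
from typing import Optional

NUM_SHARDS = 4
shards = [dict() for _ in range(NUM_SHARDS)]  # stand-ins for independent data stores

def shard_for(key: str) -> int:
    """Stable routing: the same key always lands on the same shard."""
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def put(customer_id: str, record: dict) -> None:
    shards[shard_for(customer_id)][customer_id] = record

def get(customer_id: str) -> Optional[dict]:
    return shards[shard_for(customer_id)].get(customer_id)

if __name__ == "__main__":
    # De-normalized record: order totals live with the customer to avoid
    # firing join-like queries across partitions.
    put("cust-1042", {"name": "Acme Corp", "open_orders": 3, "lifetime_value": 18_500})
    print(get("cust-1042"))
    print("shard sizes:", [len(s) for s in shards])
```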
Slide 13: Scale-out Apps on Windows Azure
[Diagram: load-balanced Web Roles hand jobs to Worker Roles 1 to 3 via Windows Azure Storage (Queues, Tables, Blobs).]
Best practices:
- Break jobs into smaller chunks to avoid inefficient workload distribution.
- Spawn multiple threads per role to speed up job execution.
- Rationalize tasks within fewer roles to get more redundancy at the same cost.
- For large datasets, store the dataset file in Blobs and push messages containing only reference information to Queues (a hedged sketch of this follows the slide).
- For applications that operate in phases, store transient messages in Queues and job/message statuses in Tables.
- Use affinity groups to keep compute roles and storage close together.
- For debugging, instrument code to trace execution paths across the application.
Windows Azure components:
- Compute: Web and Worker Roles
- Storage: Blobs, Tables, and Queues
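The Blob-plus-Queue best practice above is essentially a claim-check pattern: the large dataset lives in blob storage and only a small reference plus chunk boundaries travel on the queue. The sketch below uses hypothetical in-memory BlobStore and JobQueue classes rather than the Windows Azure SDK, purely to show the shape of the messages.

```python
# Claim-check sketch: big data goes to blob storage, small references go on the queue.
# BlobStore and JobQueue are hypothetical in-memory stand-ins, not Azure APIs.
import json
import uuid

class BlobStore:
    """Stand-in for blob storage: blob id -> large payload."""
    def __init__(self):
        self._blobs = {}
    def upload(self, data: bytes) -> str:
        blob_id = f"datasets/{uuid.uuid4()}"
        self._blobs[blob_id] = data
        return blob_id
    def download(self, blob_id: str) -> bytes:
        return self._blobs[blob_id]

class JobQueue:
    """Stand-in for a storage queue: holds small JSON messages only."""
    def __init__(self):
        self._messages = []
    def push(self, message: dict) -> None:
        self._messages.append(json.dumps(message))
    def pop(self) -> dict:
        return json.loads(self._messages.pop(0))

if __name__ == "__main__":
    blobs, jobs = BlobStore(), JobQueue()

    # Web-role side: store the big dataset once, enqueue small chunked work items.
    dataset = ("row\n" * 100_000).encode("utf-8")
    blob_id = blobs.upload(dataset)
    chunk_size = 25_000
    for start in range(0, 100_000, chunk_size):
        jobs.push({"blob": blob_id, "start": start, "end": start + chunk_size})

    # Worker-role side: fetch the reference, pull only the rows this chunk covers.
    job = jobs.pop()
    rows = blobs.download(job["blob"]).decode("utf-8").splitlines()
    print(f"processing rows {job['start']}..{job['end']} of {len(rows)}")
```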
Slide 14: Thank You
Authored by:
1. Bhausaheb Jadhav
2. Raghavan Subramanian
3. Sidharth Subhash Ghag
4. Sonal Arora