Anil Nori, Distinguished Engineer, Microsoft Corporation
Unified Cache View
- Clients can be spread across machines or processes.
- Clients access the cache as if it were a single large cache.
- The cache layer distributes data across the cache nodes in a cluster.
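The unified view rests on every client mapping a key to the same owning node. A minimal sketch of that idea, using hash-based partitioning (the node names and the modulo scheme are illustrative, not Velocity's actual algorithm):

```python
import hashlib

def owner_node(key: str, nodes: list) -> str:
    """Deterministically map a key to one node in the cluster."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

nodes = ["cache1", "cache2", "cache3"]
# Every client computes the same owner, so the cluster behaves like one cache.
assert owner_node("Cust1", nodes) == owner_node("Cust1", nodes)
```

Because the mapping is a pure function of the key and the node list, no client needs to coordinate with any other to find where a key lives.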
Web scenarios: distributed/global object cache; low-latency access; high scale; availability; cache for reference and activity data; scale IIS/ASP.NET applications.
Enterprise/HPC scenarios: LINQ-enabled cache; co-locate computation and data; integrate with HPC Server; persistence; heterogeneous client support.
Software + Services scenarios: application cache for the cloud; storage integration with SSDS and Windows Azure; more data services (BI, streaming, reporting); REST and SOA access.
Velocity service architecture: applications in the application/web tier use the Velocity client to reach an application cache tier of servers (Server 1, 2, 3), each running the Velocity service (data manager, object manager) on a common availability substrate with a clustering substrate for cluster management. A configuration store (a database, file share, etc.) holds the global cache policies and the current partitioning information; one of the Velocity service hosts runs the configuration manager.
Routed Put (no high availability): each cache node is primary for a set of keys (Cache1 for K1,V1; Cache2 for K2,V2; Cache3 for K3,V3).
1. Using its routing table, the Velocity client routes the Put(K2, V2) to Cache2, the primary node.
2. Cache2 queues the Put operation, puts locally, and returns control to the client.
3. The replication agent then propagates the operation to Cache1 and Cache3.
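The key property of this flow is that the put returns before replication happens. A sketch under invented names (the `Node` class and `drain` method are illustrative, not Velocity's API):

```python
from collections import deque

class Node:
    def __init__(self, name):
        self.name, self.store, self.queue = name, {}, deque()

    def put(self, key, value, replicas):
        """Apply locally, queue the operation, return before replicating."""
        self.store[key] = value
        self.queue.append((key, value, replicas))
        return "ok"                      # control returns to the client here

    def drain(self):
        """Replication agent: propagate queued puts to the other nodes."""
        while self.queue:
            key, value, replicas = self.queue.popleft()
            for r in replicas:
                r.store[key] = value

c1, c2, c3 = Node("cache1"), Node("cache2"), Node("cache3")
c2.put("K2", "V2", [c1, c3])   # the client routed the put to the primary
c2.drain()                     # replication runs after the put already returned
```

Returning before replicating keeps put latency low, at the cost of a window in which only the primary holds the new value.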
Routed Get: a second Velocity client's Get(K2) is routed via its routing table to Cache2, the primary for (K2, V2). An operations queue on the primary is used for notifications, for bringing up a new secondary, and so on.
Key-to-region-to-node mapping in the Velocity service:
- Keys are bucketized into regions: "Cust1", "Cust2", "Cust33" fall into default regions (Default Region 1 .. Default Region 256), while named regions such as "ToyRegion" (holding "Toy101", "Toy102") and "BoxRegion" (holding "Box101") group related keys explicitly.
- Each region name is hashed into a region ID.
- Region ID ranges (partitions: 0 - 1000, 1001 - 2000, ..., xxx - Maxint) are mapped to nodes.
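The two-step mapping (region name → region ID, region ID range → node) can be sketched as follows; the ID space, partition ranges, and node names are all invented for illustration:

```python
import hashlib

MAX_ID = 2**16
# partition table: (low, high, owning node) -- made-up ranges
PARTITIONS = [(0, 21845, "cache1"), (21846, 43690, "cache2"), (43691, MAX_ID, "cache3")]

def region_id(region: str) -> int:
    """Hash a region name into the region ID space."""
    return int(hashlib.sha1(region.encode()).hexdigest(), 16) % (MAX_ID + 1)

def node_for(region: str) -> str:
    """Look up which node owns the partition containing this region ID."""
    rid = region_id(region)
    for lo, hi, node in PARTITIONS:
        if lo <= rid <= hi:
            return node
    raise ValueError(rid)

# All keys in "ToyRegion" hash through the same region ID, so they
# land on the same node -- which is what makes region-scoped
# operations (queries, bulk gets) single-node operations.
assert node_for("ToyRegion") == node_for("ToyRegion")
```

Grouping keys into a region trades some distribution granularity for locality: everything in the region can be fetched or queried with one hop.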
Local cache: each Velocity client can keep a local cache in front of its routing table. A Put(K2, V2) is routed through to the primary node (Cache2); a later Get(K2) from a client that already holds K2 in its local cache is served locally, without touching the cache tier.
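A sketch of that local-cache fast path (the class and the dict standing in for the routed cache tier are invented for illustration):

```python
class LocalCachingClient:
    """Check the local cache before routing a Get to the cache tier."""
    def __init__(self, cluster):
        self.cluster = cluster      # stand-in for the routed cache tier
        self.local = {}

    def get(self, key):
        if key in self.local:       # local hit: no network hop
            return self.local[key]
        value = self.cluster[key]   # miss: routed get, then populate locally
        self.local[key] = value
        return value

cluster = {"K2": "V2"}
client = LocalCachingClient(cluster)
assert client.get("K2") == "V2"     # first get goes to the cluster
cluster["K2"] = "V2-newer"          # the local copy can lag the cluster
assert client.get("K2") == "V2"     # second get is served locally
```

The second assertion shows the trade-off: the local cache can serve stale data until it is invalidated or expires, which is why notifications (covered later) matter.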
Routed Put with high availability: each node is primary for some data and secondary for others (Cache2 is primary for (K2,V2) and secondary for (K1,V1) and (K3,V3); Cache1 is secondary for (K2,V2) and (K3,V3); Cache3 is secondary for (K1,V1) and (K2,V2)).
1. Using its routing table, the Velocity client routes the Put(K2, V2) to Cache2, the primary.
2. Cache2 queues the Put operation and puts locally.
3. The replication agent propagates the operation to the secondaries (Cache1 and Cache3).
4. Cache2 waits for a quorum of acks, then returns control to the client.
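The difference from the non-HA put is step 4: the primary blocks until enough secondaries acknowledge. A sketch with plain dicts as replicas (names and the synchronous loop are illustrative; a real system replicates in parallel and lets the remaining replicas finish asynchronously):

```python
def quorum_put(primary, secondaries, key, value, quorum):
    """Apply on the primary, replicate, return only after `quorum` acks."""
    primary[key] = value
    acks = 0
    for store in secondaries:   # synchronous stand-in for parallel replication
        store[key] = value
        acks += 1
        if acks >= quorum:
            return True         # enough replicas acked; control returns now
    return acks >= quorum

primary, s1, s2 = {}, {}, {}
assert quorum_put(primary, [s1, s2], "K2", "V2", quorum=1)
assert s1["K2"] == "V2"         # the first secondary acked before the return
```

Waiting for a quorum means a successful put survives the failure of the primary, since at least one secondary is guaranteed to hold the value.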
Failover: Cache4 hosts the partition manager (PM) with the global partition map (GPM); each node keeps a local partition map and runs a replication agent and a reconfiguration agent.
1. The cluster detects the failure of Cache2 and notifies the PM (on Cache4).
2. The PM analyzes the secondaries of all partitions for which Cache2 was primary and elects new primaries. Here it picks Cache1 as the new primary for (K2,V2), sends messages to the secondary caches (Cache1 and Cache3), and updates the GPM.
3. Cache1 polls the remaining secondaries to ensure it has the latest data; otherwise it gives up primary ownership.
4. Cache1 initiates reconfiguration; after reconfiguration, Cache1 is primary for (K1,V1) and (K2,V2).
Embedded (in-process) mode: the Velocity client and server components run as part of the application process, with each application instance holding the cached data (K1,V1; K2,V2; K3,V3).
- Avoids serialization and network costs, providing high-performance, low-latency access.
- Guaranteeing locality and load balancing is tricky, so this mode is better suited for replicated caches.
Notifications (cluster layout as before: Cache1 primary for K1,V1; Cache2 for K2,V2; Cache3 for K3,V3):
1. The application registers a notification for keys "a" and "b".
2. The Velocity client maps the keys to partition ranges.
3. The client polls the required nodes.
4. The nodes return the list of changes.
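The steps above can be sketched as a polling loop that only contacts nodes owning a registered key; the change logs, owner function, and key names are invented for illustration:

```python
def poll_changes(registered_keys, change_logs, key_owner):
    """Poll only the nodes that own a registered key; gather their changes."""
    nodes_to_poll = {key_owner(k) for k in registered_keys}
    changes = []
    for node in sorted(nodes_to_poll):
        changes += [k for k in change_logs[node] if k in registered_keys]
    return changes

# per-node logs of recently changed keys (illustrative)
change_logs = {"cache1": ["a"], "cache2": ["b", "x"], "cache3": ["y"]}
owner = lambda k: {"a": "cache1", "b": "cache2"}.get(k, "cache3")

# cache3 is never polled, and cache2's change to "x" is filtered out
assert poll_changes({"a", "b"}, change_logs, owner) == ["a", "b"]
```

Mapping registrations to partition ranges first keeps the polling traffic proportional to the keys the application cares about, not to the cluster size.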
Velocity components:
- Cache API and service layer: cache API, cache service, object manager, region management, policy management, notification management, query processor.
- Client layer: cache API, local cache, dispatch manager, federated query processor.
- Distributed manager: dispatch manager, distributed object manager, federated query processor.
- Local store: in-memory data manager (hash tables, B-trees), DM API.
- Common availability substrate (CAS): local partition map, replication agent, reconfiguration agent, routing table; distributed components for failure detection, reliable messaging, and raw transport; cluster substrate (fabric).
- Administration and monitoring: cache monitors, tools integration.
Version-based update (two clients access the same item and both update it):
Time | Client1 | Client2 (different thread or process)
T0 | CacheItem item = catalog.GetCacheItem("PlayerRegion", "Zune"); | CacheItem item = catalog.GetCacheItem("PlayerRegion", "Zune");
T1 | ((ZuneObject)item.Object).inventory--; |
T2 | | catalog.Put("PlayerRegion", "Zune", item.Object, item.Version);
T3 | catalog.Put("PlayerRegion", "Zune", item.Object, item.Version); // version mismatch; client must retry |
The second client gets in first: its Put succeeds because the item version matches, and the version is atomically incremented. When the first client then tries its Put, it fails because the versions no longer match, and it must re-get the item and retry.
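The mechanism behind this optimistic scheme is a compare-and-bump on the stored version. A sketch (the class, key, and values are invented; Velocity's actual API is the CacheItem/Put shown above):

```python
class VersionedCache:
    """Optimistic concurrency: a put succeeds only if the caller's version
    matches the stored version, which is then atomically bumped."""
    def __init__(self):
        self.items = {}     # key -> (value, version)

    def get(self, key):
        return self.items[key]

    def put(self, key, value, version):
        _, current = self.items.get(key, (None, 0))
        if version != current:
            return False    # version mismatch: caller must re-get and retry
        self.items[key] = (value, current + 1)
        return True

cache = VersionedCache()
cache.items["Zune"] = ("inventory=5", 1)
_, ver1 = cache.get("Zune")                 # client 1 reads version 1
_, ver2 = cache.get("Zune")                 # client 2 reads the same version
assert cache.put("Zune", "inventory=4", ver2)       # client 2 gets in first
assert not cache.put("Zune", "inventory=4", ver1)   # client 1 must retry
```

No locks are held between the get and the put, so readers never block; conflicting writers simply lose the race and retry.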
Pessimistic locking on item K1:
- Client1: GetAndLock("k1") succeeds and returns a lock handle.
- Client2: GetAndLock("k1") on the same item fails while the lock is held.
- Client3: Get("k1") — a regular Get still succeeds.
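A sketch of those semantics: a lock handle gates writes, while plain reads bypass the lock entirely. The class and method names are invented stand-ins for Velocity's GetAndLock/PutAndUnlock calls:

```python
class LockableCache:
    """Pessimistic locking: get_and_lock hands out a lock handle; a second
    get_and_lock on the same key fails, but a plain get still succeeds."""
    def __init__(self):
        self.store, self.locks, self.next_handle = {}, {}, 1

    def get(self, key):
        return self.store[key]              # regular get ignores locks

    def get_and_lock(self, key):
        if key in self.locks:
            return None                     # already locked by someone else
        handle, self.next_handle = self.next_handle, self.next_handle + 1
        self.locks[key] = handle
        return self.store[key], handle

    def put_and_unlock(self, key, value, handle):
        if self.locks.get(key) != handle:
            return False                    # wrong or stale handle
        self.store[key] = value
        del self.locks[key]
        return True

cache = LockableCache()
cache.store["k1"] = "v1"
value, handle = cache.get_and_lock("k1")    # client 1 takes the lock
assert cache.get_and_lock("k1") is None     # client 2's lock attempt fails
assert cache.get("k1") == "v1"              # client 3's plain get succeeds
assert cache.put_and_unlock("k1", "v2", handle)
```

Letting plain gets ignore the lock is a deliberate choice: readers that can tolerate a possibly-about-to-change value are never blocked by writers.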
Scale-out on the fabric and CAS: data nodes (100-105) each hold primary (P) and secondary (S) partitions on top of the fabric layer (ring topology, failure detector, cluster leader elector) and CAS (replication agent, reconfiguration agent, local partition map). Designated data-and-master nodes (106, 107) additionally run partition management: the partition manager, a load balancer/placement advisor, and the global partition map, which is persisted in a GPM store. Application nodes run the Velocity components and reach the cluster through the Velocity client, whose routing table sits on the CAS client.
Ring example with nodes at positions 2, 17, 30, 40, 46, 50, 64, 76, 83, 90, 98, 103, 120, 135, 151, 174, 180, 200, 210, 218, 225, 250.
Routing table at node 64:
- Successor = 76, Predecessor = 50
- Neighborhood = (83, 76, 50, 46)
- Routing nodes = (200, 2, 30, 46, 50, 64, 64, 64, 64, 64, 83, 98, 135, 200)
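The successor, predecessor, and neighborhood entries can be derived directly from the sorted ring. A sketch that reproduces the table at node 64 (the neighborhood layout — two nodes on each side, ordered as in the slide — is an assumption inferred from the listed values):

```python
def ring_route(nodes, me):
    """Build a node's view of the ring: successor, predecessor, and the
    two nearest neighbors on each side."""
    ring = sorted(nodes)
    i = ring.index(me)
    succ = ring[(i + 1) % len(ring)]
    pred = ring[(i - 1) % len(ring)]
    neighborhood = [ring[(i + 2) % len(ring)], succ, pred, ring[(i - 2) % len(ring)]]
    return succ, pred, neighborhood

nodes = [2, 17, 30, 40, 46, 50, 64, 76, 83, 90, 98, 103, 120, 135,
         151, 174, 180, 200, 210, 218, 225, 250]
succ, pred, hood = ring_route(nodes, 64)
assert (succ, pred) == (76, 50)
assert hood == [83, 76, 50, 46]    # matches the routing table at node 64
```

The longer-range routing nodes (200, 2, 30, ...) follow the usual consistent-hashing idea of keeping pointers at increasing distances around the ring so lookups take a logarithmic number of hops.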
Distributed query, fanned out: the application runs

from toy in catalog () where toy.ToyPrice > 300 select toy;

The Velocity client (cache API, local cache, dispatch manager, federated query processor) dispatches the query to the query processor, object manager, and in-memory data manager on every cache node, then merges the results. Cache1 holds Toy1 (500) in its primary regions, Cache3 holds Toy2 (350) and Toy3 (400), and Cache2 holds ToyRegion with Toy4 (100).
Region-scoped query: the application runs

from toy in catalog.GetRegion ("ToyRegion") where toy.ToyPrice > 300 select toy;

Because the query names a region, the Velocity client routes it only to the node that owns ToyRegion instead of fanning it out to every cache node.
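The filter itself is simple; what the region buys you is that it runs on one node. A sketch of the predicate over an invented in-memory region (the toy names and prices mirror the slide, the data structure is illustrative):

```python
# A hypothetical single node's region store: region name -> (item, price) pairs
regions = {
    "ToyRegion": [("Toy1", 500), ("Toy2", 350), ("Toy3", 400), ("Toy4", 100)],
}

def query_region(region_name, min_price):
    """Region-scoped query: only the node owning the region is touched."""
    return [name for name, price in regions[region_name] if price > min_price]

assert query_region("ToyRegion", 300) == ["Toy1", "Toy2", "Toy3"]
```

In the fanned-out form of the query, each node would evaluate the same predicate over its own partitions and the client's federated query processor would merge the partial results.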
Financial HPC scenario with Velocity: a split method partitions keys from the central market data store (~1 TB of tick data) into job inputs. Calculation operations read market data through the Velocity data cache and write scratch results to the Velocity intermediate store; a rollup operation combines the intermediate results into final results, which land in the final results store.
The same pipeline with computation co-located: each calculation operation runs on a Velocity node, so calculations read market data and write their scratch/intermediate results locally before the rollup operation produces the final results for the final results store.
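The split → calculate → rollup shape of this pipeline can be sketched in a few lines; the tick data, the per-key calculation (an average), and the rollup are all invented for illustration:

```python
# Stand-in for the central market data store (~1 TB of tick data in reality)
tick_data = {"MSFT": [28.1, 28.4, 27.9], "AAPL": [95.0, 96.2]}

def split(store):
    """Split method: one job input per key."""
    return list(store.keys())

def calculate(key, store):
    """Calculation operation: produce one scratch result per key."""
    prices = store[key]
    return sum(prices) / len(prices)    # e.g. an average price

def rollup(intermediate):
    """Rollup operation: combine intermediate results into final results."""
    return dict(intermediate)

jobs = split(tick_data)
intermediate = [(k, calculate(k, tick_data)) for k in jobs]  # kept in the cache
final = rollup(intermediate)
```

Co-locating the calculation on the node that caches its slice of tick data turns the expensive step, reading market data, into a local memory access.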
Cloud scenario: a Windows Azure service role uses the Velocity client to access the Velocity cache in front of storage (SSDS).
Web scenario: an ASP.NET application uses the Velocity client to access the Velocity cache in front of storage (SSDS).
Hosted-cache scenario: an ASP.NET application uses the Velocity client to access the Velocity caching service in front of storage (SSDS).
Please fill out your evaluation for this session. This session will be available as a recording at: www.microsoftpdc.com
© 2008 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.