Presentation transcript:

Software Engineer, #MongoDBDays

Notes to the presenter
Themes for this presentation:
– Horizontal scale is the only viable approach
– Cover the scenarios for when to shard
– Explain the mechanics
Themes:
– MongoDB makes this easy
– MongoDB includes this functionality for free
– Best approach to scaling with consistency

Agenda
– Scaling Data
– MongoDB's Approach
– Architecture
– Configuration
– Mechanics

Examining Growth
User Growth
– 1995: 0.4% of the world’s population
– Today: 30% of the world is online (~2.2B)
– Emerging Markets & Mobile
Data Set Growth
– Facebook’s data set is around 100 petabytes
– 4 billion photos taken in the last year (4x a decade ago)

Read/Write Throughput Exceeds I/O

Working Set Exceeds Physical Memory

Vertical Scalability (Scale Up)

Horizontal Scalability (Scale Out)

Data Store Scalability
Custom Hardware
– Oracle
Custom Software
– Facebook + MySQL
– Google

Data Store Scalability Today
MongoDB Auto-Sharding
A data store that is
– 100% Free
– Publicly available
– Open-source
– Horizontally scalable
– Application independent

Partitioning
– User defines the shard key
– Shard key defines a range of data
– Key space is like points on a line
– A range is a segment of that line

Data Distribution
– Initially 1 chunk
– Default max chunk size: 64 MB
– MongoDB automatically splits & migrates chunks when the max is reached
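The default chunk size can be changed cluster-wide; a minimal sketch from the mongo shell connected to a mongos, with 32 MB used purely as an illustrative value:

  // The "chunksize" document in the config database controls the max chunk size (in MB)
  use config
  db.settings.save({ _id: "chunksize", value: 32 })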

Routing and Balancing
– Queries are routed to specific shards
– MongoDB balances the cluster
– MongoDB migrates data to new nodes

MongoDB Auto-Sharding
Minimal effort required
– Same interface as a single mongod
Two steps
– Enable sharding for a database
– Shard a collection within the database

What is a Shard?
– A shard is a node of the cluster
– A shard can be a single mongod or a replica set

Meta Data Storage
Config Server
– Stores cluster chunk ranges and locations
– Can have only 1 or 3 (production must have 3)
– Not a replica set
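To see what the config servers actually hold, a small sketch run through a mongos (these are standard collections in the config database):

  use config
  db.shards.find()                     // the shards and their hosts
  db.chunks.find().limit(5).pretty()   // chunk ranges and which shard owns each
  db.databases.find()                  // which databases have sharding enabled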

Routing and Managing Data
Mongos
– Acts as a router / balancer
– No local data (persists to the config database)
– Can have 1 or many

Sharding infrastructure

Example Cluster
Don’t use this setup in production!
– Only one config server (no fault tolerance)
– Shard not in a replica set (low availability)
– Only one mongos and shard (no performance improvement)
– Useful for development or demonstrating configuration mechanics

Starting the Configuration Server
mongod --configsvr
Starts a configuration server on the default port (27019)

Start the mongos Router
mongos --configdb <config server host>:27019
For 3 configuration servers:
mongos --configdb <host1>:<port>,<host2>:<port>,<host3>:<port>
This is always how to start a new mongos, even if the cluster is already running

Start the shard database
mongod --shardsvr
Starts a mongod with the default shard port (27018)
The shard is not yet connected to the rest of the cluster
The shard may have already been running in production

Add the Shard
On mongos:
sh.addShard("<host>:27018")
Adding a replica set:
sh.addShard("<replica set name>/<host1>:<port>,<host2>:<port>")

Verify that the shard was added
db.runCommand({ listshards: 1 })
{ "shards" : [ { "_id" : "shard0000", "host" : "<host>:27018" } ], "ok" : 1 }
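The sh.status() helper prints the same information in a more readable summary:

  sh.status()   // shards, sharded databases, and chunk distribution per collection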

Enabling Sharding
Enable sharding on a database:
sh.enableSharding("<database>")
Shard a collection with the given key:
sh.shardCollection("<database>.people", { "country": 1 })
Use a compound shard key to prevent duplicates:
sh.shardCollection("<database>.cars", { "year": 1, "uniqueid": 1 })
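A minimal end-to-end sketch, using a hypothetical mydb.people collection and then checking how the documents spread out:

  sh.enableSharding("mydb")                             // hypothetical database name
  sh.shardCollection("mydb.people", { "country": 1 })
  use mydb
  for (var i = 0; i < 10000; i++) {
    db.people.insert({ country: (i % 2 ? "US" : "DE"), n: i })
  }
  db.people.getShardDistribution()                      // per-shard document and chunk counts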

Tag Aware Sharding
Tag aware sharding allows you to control the distribution of your data
Tag a range of shard keys:
sh.addTagRange(<namespace>, <minimum>, <maximum>, <tag>)
Tag a shard:
sh.addShardTag(<shard name>, <tag>)
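A sketch with hypothetical shard names, tags, and namespace, pinning one range of country values to an "EU"-tagged shard (tag ranges include the minimum and exclude the maximum):

  sh.addShardTag("shard0000", "EU")   // hypothetical shard names and tags
  sh.addShardTag("shard0001", "US")
  sh.addTagRange("mydb.people", { country: "DE" }, { country: "DF" }, "EU")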

Partitioning Remember it's based on ranges

Chunk is a section of the entire range

Chunk splitting
– A chunk is split once it exceeds the maximum size
– There is no split point if all documents have the same shard key
– A chunk split is a logical operation (no data is moved)

Balancing
– The balancer runs on mongos
– Once the difference in chunks between the most dense shard and the least dense shard is above the migration threshold, a balancing round starts
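The balancer can be inspected or paused from the shell, which is useful around maintenance windows:

  sh.getBalancerState()    // is balancing enabled?
  sh.isBalancerRunning()   // is a balancing round in progress right now?
  sh.stopBalancer()        // disable balancing and wait for in-flight migrations to finish
  sh.startBalancer()       // re-enable balancing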

Acquiring the Balancer Lock
The balancer on mongos takes out a "balancer lock"
To see the status of these locks:
use config
db.locks.find({ _id: "balancer" })

Moving the chunk
– The mongos sends a moveChunk command to the source shard
– The source shard then notifies the destination shard
– The destination shard starts pulling documents from the source shard

Committing Migration
– When complete, the destination shard updates the config server
– Provides the new locations of the chunks

Cleanup
– The source shard deletes the moved data
  – Must wait for open cursors to either close or time out
  – NoTimeout cursors may prevent the release of the lock
– The mongos releases the balancer lock after the old chunks are deleted
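Past migrations can be reviewed afterwards in the changelog kept on the config servers; a sketch, assuming the changelog document shape used by 2.x clusters:

  use config
  db.changelog.find({ what: /moveChunk/ }).sort({ time: -1 }).limit(5)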

Cluster Request Routing
– Targeted Queries
– Scatter Gather Queries
– Scatter Gather Queries with Sort

Cluster Request Routing: Targeted Query

Routable request received

Request routed to appropriate shard

Shard returns results

Mongos returns results to client

Cluster Request Routing: Non-Targeted Query

Non-Targeted Request Received

Request sent to all shards

Shards return results to mongos

Mongos returns results to client

Cluster Request Routing: Non-Targeted Query with Sort

Non-Targeted request with sort received

Request sent to all shards

Query and sort performed locally

Shards return results to mongos

Mongos merges sorted results

Mongos returns results to client
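Whether a query was targeted or scattered can be checked from the shell with explain(); a sketch against the hypothetical mydb.people collection sharded on { country: 1 }:

  db.people.find({ country: "US" }).explain()                        // targeted: predicate contains the shard key
  db.people.find({ n: 42 }).explain()                                // scatter-gather: every shard is queried
  db.people.find({ n: { $gt: 0 } }).sort({ country: 1 }).explain()   // scatter-gather with sort: shards sort locally, mongos merges

The explain() output lists which shards participated in the query.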

Shard Key
– Shard key is immutable
– Shard key values are immutable
– Shard key must be indexed
– Shard key is limited to 512 bytes in size
– Shard key is used to route queries
  – Choose a field commonly used in queries
– Only the shard key can be unique across shards
  – The `_id` field is only unique within an individual shard
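Two practical consequences of these rules, sketched against the same hypothetical collection sharded on { country: 1 } (the email field is made up for the example):

  db.people.insert({ country: "US", name: "Ada" })                    // ok: routed by the "country" shard key
  db.people.update({ name: "Ada" }, { $set: { country: "DE" } })      // rejected: shard key values are immutable
  db.people.ensureIndex({ country: 1, email: 1 }, { unique: true })   // unique indexes must be prefixed by the shard key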

Shard Key Considerations
– Cardinality
– Write Distribution
– Query Isolation
– Reliability
– Index Locality

Example: Email Storage
{ _id: ObjectId(), user: 123, time: Date(), subject: “...”, recipients: [], body: “...”, attachments: [] }
Most common scenario, can be applied to 90% of cases
Each document can be up to 16 MB
Each user may have GBs of email storage
Most common query: get a user’s emails sorted by time
Indexes on {_id}, {user, time}, {recipients}
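A hedged setup for this workload, using a hypothetical mail.messages namespace and the { user, time } shard key that the comparison below favours:

  sh.enableSharding("mail")                                   // hypothetical database name
  sh.shardCollection("mail.messages", { user: 1, time: 1 })   // also creates the { user, time } index on an empty collection
  use mail
  db.messages.ensureIndex({ recipients: 1 })                  // secondary index from the slide
  db.messages.find({ user: 123 }).sort({ time: -1 })          // the common query stays targeted to that user's chunks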

Example: Email Storage – shard key comparison

Shard Key   | Cardinality | Write Scaling | Query Isolation | Reliability          | Index Locality
_id         | Doc level   | One shard     | Scatter/gather  | All users affected   | Good
hash(_id)   | Hash level  | All shards    | Scatter/gather  | All users affected   | Poor
user        | Many docs   | All shards    | Targeted        | Some users affected  | Good
user, time  | Doc level   | All shards    | Targeted        | Some users affected  | Good

Read/Write Throughput Exceeds I/O

Working Set Exceeds Physical Memory

Sharding Enables Scale
MongoDB’s Auto-Sharding
– Easy to Configure
– Consistent Interface
– Free and Open-Source

What’s next?
– Hash-Based Sharding in MongoDB 2.4 (2:50pm)
– Webinar: Indexing and Query Optimization (May 22nd)
– Online Education Program
– MongoDB User Group Resources

Software Engineer, #MongoDBDays