1
PolyBase: Query Hadoop with ease. Sahaj Saini, SQL Server, Microsoft
2
Please silence cell phones
3
Agenda: What is PolyBase? Why do customers need it? How does it work? Demo. Q&A.
4
Why?
5
“Our mission is to empower every person and every organization on the planet to achieve more” - Satya Nadella
6
All the interest in Big Data: an increased number and variety of data sources generating large quantities of data; the realization that data is “too valuable” to delete; and a dramatic decline in the cost of hardware, especially storage.
7
The Hadoop Ecosystem
8
Hadoop Evolution: initially, MapReduce for insights from HDFS-resident data; more recently, SQL-like data warehouse technologies on HDFS, e.g. Hive, Impala, HAWQ, Spark/Shark.
9
What if you use both an RDBMS and Hadoop?
10
What is PolyBase?
11
Big Picture: PolyBase provides a T-SQL language extension for combining data from both worlds.
12
PolyBase in SQL Server 2016
13
The PolyBase journey: 2012-2014, PolyBase in SQL Server PDW V2 (the Analytics Platform System); 2015, PolyBase in Azure SQL Data Warehouse and in SQL Server 2016 CTP2 and CTP3; 2016, SQL Server 2016 RTM.
14
Example 1: Auto Insurance - usage-based insurance. Combining non-relational sensor data from cars (kept in Hadoop) with structured customer data (kept in APS) gives the ability to adjust policies based on driver behavior: ‘pay-as-you-drive’ driver discounts and policy adjustments. Status: in production.
15
PolyBase Demo SQL Server 2016
16
Example 2: Wind Turbine Manufacturer - turbine monitoring. Analyzing sensor data from wind turbines (kept in Hadoop) combined with relational turbine data (kept in SQL Server) gives the ability to do change detection, proactive maintenance, and reporting. Status: in development.
17
PolyBase Use Cases
18
How does PolyBase work?
19
Step 1: Set up a Hadoop cluster. Hortonworks or Cloudera distributions, Hadoop 2.0 or above, Linux or Windows, on-premises or in Azure. (Diagram: a Hadoop cluster with its Namenode and HDFS file system.)
20
Or use an Azure Storage account. Azure Storage Blob (ASB) exposes an HDFS layer; PolyBase reads and writes from ASB using Hadoop APIs. There is no compute push-down support for ASB. A sketch of pointing PolyBase at ASB follows.
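A minimal sketch, assuming a hypothetical storage account 'myaccount' with a container 'mycontainer'; the credential name is also a placeholder, and a database master key must already exist (see the secure-Hadoop sketch later):

-- Placeholder credential holding the storage account access key.
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH IDENTITY = 'polybase_user', SECRET = '<storage account access key>';

-- External data source over the blob container; PolyBase reads and
-- writes it through the same Hadoop APIs it uses for HDFS.
CREATE EXTERNAL DATA SOURCE MyAzureStorage WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://mycontainer@myaccount.blob.core.windows.net',
    CREDENTIAL = AzureStorageCredential
);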
21
Step 2: Install SQL Server. Select the PolyBase feature, which adds two new services: the PolyBase Engine and the PolyBase Data Movement Service (DMS). Prerequisite: download and install a JRE.
22
Step 3: Scale-out.
1. Install multiple SQL Server instances with PolyBase.
2. Choose one as the Head Node (it runs both the PolyBase Engine and the PolyBase DMS).
3. Configure the remaining instances as Compute Nodes: (a) run a stored procedure, as sketched below; (b) shut down the PolyBase Engine; (c) restart the PolyBase DMS.
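A minimal sketch of step 3a, following the CTP documentation for sp_polybase_join_group; the machine name, port, and instance name below are placeholders:

-- Run on each compute node to join it to the head node's PolyBase group.
-- Arguments: head node machine name, DMS control channel port
-- (16450 by default), and the head node's SQL Server instance name.
EXEC sp_polybase_join_group 'HEADNODE', 16450, 'MSSQLSERVER';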
23
After Step 3: a PolyBase group for scale-out computation. The head node contains the SQL Server instance to which PolyBase queries are submitted; the compute nodes are used for scale-out query processing on external data.
24
Step 4: Choose your Hadoop flavor. Supported distributions in CTP3: Cloudera CDH 5.1 on Linux; Hortonworks HDP 2.0, 2.1 & 2.2 on Linux; Hortonworks HDP 2.0 & 2.2 on Windows Server; Azure blob storage (ASB). What happens under the covers? PolyBase loads the right client jars to connect to Hadoop. Different configuration values map to the various Hadoop flavors: for example, value 4 stands for HDP 2.0 on Windows or ASB, value 5 for HDP 2.0 on Linux, value 6 for CDH 5.1 on Linux, and value 7 for HDP 2.1/2.2 on Linux and Windows or ASB. The value is set with sp_configure, as sketched below.
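A minimal sketch of selecting HDP 2.1/2.2 on Linux (value 7 in the mapping above); the SQL Server and PolyBase services must be restarted for the change to take effect:

-- Tell PolyBase which Hadoop distribution's client jars to load.
EXEC sp_configure 'hadoop connectivity', 7;
RECONFIGURE;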
25
After Step 4. (Diagram: the PolyBase group connected to the Hadoop cluster's Namenode and HDFS file system.)
26
PolyBase Design
27
Under the hood: exploiting compute resources of Hadoop clusters with push-down computation
28
HDFS bridge in DMS: uses Hadoop RecordReaders/RecordWriters to read/write standard HDFS file types.
29
Under the hood: exploiting compute resources of Hadoop clusters with push-down computation
30
Data moves between the clusters in parallel. (Diagram: SQL Server 2016 nodes exchanging data with the Hadoop cluster's Namenode and HDFS file system.)
31
Under the hood: exploiting compute resources of Hadoop clusters with push-down computation
32
Creating External Tables: an external data source is created once per Hadoop cluster, an external file format once per file format, and each external table points at an HDFS file path. A sketch of the three DDL statements follows.
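A minimal sketch of the three statements; the Namenode address, the pipe-delimited format, and the TPC-H-style Customer schema are illustrative assumptions:

-- Once per Hadoop cluster: where the Namenode lives.
CREATE EXTERNAL DATA SOURCE MyHadoopCluster WITH (
    TYPE = HADOOP,
    LOCATION = 'hdfs://10.0.0.1:8020'    -- placeholder Namenode address
);

-- Once per file format: how the files should be parsed.
CREATE EXTERNAL FILE FORMAT PipeDelimitedText WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = '|')
);

-- One per external data set: a schema over an HDFS file path.
CREATE EXTERNAL TABLE Customer (
    c_custkey   BIGINT,
    c_name      VARCHAR(25),
    c_nationkey INT,
    c_acctbal   DECIMAL(15, 2)
) WITH (
    LOCATION = '/tpch/customer/',        -- HDFS directory or file
    DATA_SOURCE = MyHadoopCluster,
    FILE_FORMAT = PipeDelimitedText
);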
33
Creating External Tables (secure Hadoop): additionally, a credential is created once per Hadoop user and attached to the external data source; the file format and HDFS file path work as before. A sketch follows.
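A minimal sketch following the slide's per-user credential model; the identity, secret, and Namenode address are placeholders, and exact secure-cluster configuration varies by distribution and version:

-- Once per database: a master key to protect the credentials.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- Once per Hadoop user: the identity PolyBase presents to the cluster.
CREATE DATABASE SCOPED CREDENTIAL HadoopUser1
WITH IDENTITY = 'hadoop_user', SECRET = '<password>';

-- The credential is attached to the external data source.
CREATE EXTERNAL DATA SOURCE MySecureHadoopCluster WITH (
    TYPE = HADOOP,
    LOCATION = 'hdfs://10.0.0.1:8020',   -- placeholder Namenode address
    CREDENTIAL = HadoopUser1
);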
34
PolyBase Query Example #1

-- select on external table (data in HDFS)
SELECT * FROM Customer
WHERE c_nationkey = 3 AND c_acctbal < 0;

A possible execution plan:
1. CREATE temp table T on the SQL compute nodes.
2. IMPORT FROM HDFS: the HDFS Customer file is read into T in parallel.
3. EXECUTE QUERY: SELECT * FROM T WHERE T.c_nationkey = 3 AND T.c_acctbal < 0.
35
Under the hood: exploiting compute resources of Hadoop clusters with push-down computation
36
Big Picture Takeaway: SQL operations on HDFS data are pushed into Hadoop as MapReduce jobs, with a cost-based decision on how much computation to push. (Diagram: a PolyBase query splits into a Map job running against the HDFS blocks and work done in the database.)
37
Cost-based Decision (for split-based query execution)
The major factor in the decision is data-volume reduction. Hadoop takes 20-30 seconds to spin up a Map job (spin-up time varies by distribution and OS), so the cardinality of the predicate matters: there is no push-down for scenarios where SQL can execute in under 20-30 seconds without it. Creating statistics on the external table helps the optimizer make this decision; they are not auto-created (see the sketch below). Queries can contain both “pushable” and “non-pushable” expressions and predicates: pushable ones are evaluated on the Hadoop side, and non-pushable ones are processed on the SQL side. Aggregate functions (sum, count, …) are partially pushed; JOINs are never pushed and always execute on the SQL side. (Architecture diagram: your apps, PowerPivot, and PowerView on top of the PDW engine service; PolyBase comprises the storage layer (PPAX), the HDFS bridge as part of DMS, and the job submitter; external tables, external data sources, and external file formats describe the data.)
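Since statistics on external tables are never auto-created, a minimal sketch of creating them by hand on the external Customer table (the column choice is illustrative):

-- Statistics give the optimizer the cardinality estimates it needs
-- for the split-based execution decision.
CREATE STATISTICS Customer_acctbal_stats
ON Customer (c_acctbal) WITH FULLSCAN;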
38
PolyBase Query Example #2

-- select and aggregate on external table (data in HDFS)
SELECT AVG(c_acctbal) FROM Customer
WHERE c_acctbal < 0
GROUP BY c_nationkey;

Execution plan, part 1: run an MR job on Hadoop that applies the filter and computes the aggregate on Customer. What happens here? Step 1: the query optimizer compiles the predicate into Java. Step 2: the engine submits the Map job to the Hadoop cluster. The output is left in hdfsTemp. (Diagram: hdfsTemp with per-nation partial results, e.g. FRA, UK.)
39
PolyBase Query Example #2 (continued)

-- select and aggregate on external table (data in HDFS)
SELECT AVG(c_acctbal) FROM Customer
WHERE c_acctbal < 0
GROUP BY c_nationkey;

Execution plan:
1. Run MR job on Hadoop: apply the filter and compute the aggregate on Customer; output left in hdfsTemp.
2. CREATE temp table T on the SQL compute nodes.
3. IMPORT hdfsTemp: read hdfsTemp into T.
4. RETURN OPERATION: read from T and do the final aggregation.
The query optimizer made a cost-based decision on which operators to push; here, the predicate and aggregate were pushed into the Hadoop cluster as a Map job.
40
Query Capabilities
41
Query Capabilities (1): Combine relational and external data with SELECT … FROM.
1. Querying external tables.
2. Joining external tables with regular SQL tables, as sketched below.
3. Pushing compute for basic expressions and aggregates.
(Diagram: external tables referring to data in two HDP Hadoop clusters, combined with a SQL table.)
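A minimal sketch of capability 2, joining the external Customer table from earlier with a hypothetical local dbo.Orders table:

-- Join HDFS-resident customers (external) with a regular SQL Server table.
SELECT c.c_name, SUM(o.o_totalprice) AS total_spend
FROM Customer AS c          -- external table (data in HDFS)
JOIN dbo.Orders AS o        -- regular SQL Server table (hypothetical)
  ON o.o_custkey = c.c_custkey
GROUP BY c.c_name;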
42
Query Capabilities (2): Push-down computation. Compute can be pushed either at the data source level or on a per-query basis using query hints, as sketched below.
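A minimal sketch of the per-query hints in SQL Server 2016; note that push-down also requires the external data source to name the cluster's resource manager (the RESOURCE_MANAGER_LOCATION option, omitted from the earlier sketches):

-- Force the filter to run in Hadoop as a MapReduce job...
SELECT * FROM Customer WHERE c_acctbal < 0
OPTION (FORCE EXTERNALPUSHDOWN);

-- ...or forbid push-down and stream the rows to SQL Server instead.
SELECT * FROM Customer WHERE c_acctbal < 0
OPTION (DISABLE EXTERNALPUSHDOWN);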
43
Query Capabilities (3): Multiple user IDs. Credential support for multiple user IDs associated with an external data source.
44
Query Capabilities (4): Seamless BI integration
45
Import Scenario - Persistent Storage. SELECT INTO: 1. importing data from Hadoop or Azure Storage into persistent storage; 2. ‘ETL’-type processing possible via T-SQL. (Diagram: an external table referring to data in HDP Hadoop clusters, and a new SQL table created.) A sketch follows.
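A minimal sketch of the import, reusing the external Customer table from earlier; the destination table name is a placeholder:

-- Materialize HDFS-resident rows as a regular SQL Server table.
SELECT *
INTO dbo.LocalCustomer      -- new SQL table created by the statement
FROM Customer               -- external table (data in HDFS)
WHERE c_acctbal < 0;        -- optional 'ETL'-style filtering in T-SQL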
46
Export Scenario - Data Aging to Hadoop/Azure. INSERT INTO: 1. exporting SQL data into Hadoop or Azure Storage through an external table used for aging data; 2. ‘ETL’-type processing possible via T-SQL. A sketch follows.
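A minimal sketch of aging data out, assuming a hypothetical external table AgedOrders created over an HDFS path with the DDL pattern shown earlier, and a placeholder cutoff date; in SQL Server 2016, export must first be enabled via sp_configure:

-- Allow INSERT into external tables.
EXEC sp_configure 'allow polybase export', 1;
RECONFIGURE;

-- Push cold rows out to Hadoop; the files land under the
-- external table's HDFS LOCATION.
INSERT INTO AgedOrders
SELECT *
FROM dbo.Orders
WHERE o_orderdate < '2010-01-01';   -- placeholder aging cutoff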
47
Acknowledgments
Dr. David DeWitt, for letting me use his material to explain the PolyBase technology. Dr. Artin Avanes, for selecting and building the demo use case. And our team in the Gray Systems Lab, Madison and Aliso Viejo.
48
Thank You