
1 INSTITUTE OF COMPUTING TECHNOLOGY BigDataBench: a Big Data Benchmark Suite from Internet Services Lei Wang, Jianfeng Zhan, Chunjie Luo, Yuqing Zhu, Qiang Yang, Yongqiang He, Wanling Gao, Zhen Jia, Yingjie Shi, Shujie Zhang, Gang Lu, Kent Zhang, Xiaona Li, and Bizhu Qiu HPCA 2014

2 Orlando, 2014.2.18 HPCA 2014 Why Big Data Benchmarking? Measuring big data systems and architectures quantitatively

3 Orlando, 2014.2.18 HPCA 2014 What is BigDataBench?
An open-source big data benchmarking project: http://prof.ict.ac.cn/BigDataBench/
6 real-world data sets, used to generate (4V) big data
19 workloads: OLTP, Cloud OLTP, OLAP, and offline analytics; the same workloads with different implementations

4 Orlando, 2014.2.18 HPCA 2014 Executive Summary
Big data benchmarks: do we know enough about big data benchmarking?
Big data workload characterization: what are the differences from traditional workloads?
Exploring the best big data architectures: brawny-core, wimpy multi-core, or wimpy many-core?

5 Orlando, 2014.2.18 HPCA 2014 Outline
Benchmarking Methodology and Decision
Big Data Workload Characterization
Evaluating Hardware Systems with Big Data
Conclusion

6 Orlando, 2014.2.18 HPCA 2014 Methodology
The 4V of big data and system/architecture characteristics feed into BigDataBench, which is then iteratively refined.

7 Orlando, 2014.2.18 HPCA 2014 Methodology (Cont'd)
Investigate typical application domains
Diverse data sets: data sources (text data, graph data, table data, extended...); data types (structured, semi-structured, unstructured); big data sets preserving 4V; BDGS: big data generation tools
Diverse workloads: application types (OLTP, Cloud OLTP, OLAP, offline analytics); basic & important operations and algorithms (extended...); representative software stacks (extended...)
Diverse data sets and diverse workloads together form the BigDataBench big data workloads

8 Orlando, 2014.2.18 HPCA 2014 Top Sites on the Web
Search engine, social network, and electronic commerce services account for 80% of the page views of all Internet services.
More details at http://www.alexa.com/topsites/global;0

9 Orlando, 2014.2.18 HPCA 2014 BigDataBench Summary
19 workloads: (Cloud) OLTP, OLAP, and offline analytics
Application domains: search engine, social network, e-commerce
Software stacks: MPI, Shark, Impala, NoSQL, ...
Six real-world data sets: Google web graph, Facebook social network, e-commerce transactions, Wikipedia entries, ProfSearch person resumes, Amazon movie reviews
BDGS (Big Data Generator Suite) for scalable data

10 Orlando, 2014.2.18 HPCA 2014 Outline
Benchmarking Methodology and Decision
Big Data Workload Characterization
Evaluating Hardware Systems with Big Data
Conclusion

11 Orlando, 2014.2.18 HPCA 2014 Big Data Workloads Analyzed: input data size varying from 32GB to 1TB

12 Orlando, 2014.2.18 HPCA 2014 Other Benchmarks Compared
HPCC: representative HPC benchmark suite (7 benchmarks)
PARSEC: CMP (multi-threaded) benchmark suite (12 benchmarks)
SPEC CPU: SPECFP and SPECINT

13 Orlando, 2014.2.18 HPCA 2014 Metrics
User-perceivable metrics: OLTP services: requests per second (RPS); Cloud OLTP: operations per second (OPS); OLAP and offline analytics: data processed per second (DPS)
Micro-architecture characteristics: hardware performance counters

14 Orlando, 2014.2.18 HPCA 2014 Experimental Configurations
Testbed configurations
Fifteen nodes: 1 master + 14 slaves
Data input size: 32GB~1TB
Each node: 2 x Xeon E5645, 16GB memory, 8TB disk
Network: 1Gb Ethernet
CPU type: Intel Xeon E5645, 6 cores @ 2.40GHz
L1D cache: 6 x 32KB; L1I cache: 6 x 32KB; L2 cache: 6 x 256KB; L3 cache: 12MB
Software configurations
OS: CentOS 5.5 with Linux kernel 2.6.34
Stacks: Hadoop 1.0.2, HBase 0.94.5, Hive 0.9, MPICH2 1.5, Nutch 1.1, and RUBiS 5.0

15 Orlando, 2014.2.18 HPCA 2014 Instruction Breakdown (chart groups: Data Analytics, Services)
More integer instructions (fewer floating point instructions): the average ratio of integer to floating point instructions is 75
FP instructions: X87 + SSE FP (X87, SSE_Pack_Float, SSE_Pack_Double, SSE_Scalar_Float, and SSE_Scalar_Double)
Integer instructions: Total_Ins - FP_Ins - Branch_Ins - Store_Ins - Load_Ins
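The breakdown above is plain arithmetic over hardware performance counter values. A minimal Python sketch of that arithmetic follows; the counter values are hypothetical placeholders, not measured numbers from the paper.

def instruction_breakdown(total_ins, x87, sse_pack_float, sse_pack_double,
                          sse_scalar_float, sse_scalar_double,
                          branch_ins, store_ins, load_ins):
    # FP instructions = X87 plus all SSE FP flavors, as defined on the slide
    fp_ins = (x87 + sse_pack_float + sse_pack_double
              + sse_scalar_float + sse_scalar_double)
    # Integer instructions = everything that is not FP, branch, store, or load
    int_ins = total_ins - fp_ins - branch_ins - store_ins - load_ins
    return fp_ins, int_ins

# Hypothetical counter values for one workload run
fp, integer = instruction_breakdown(
    total_ins=1_000_000_000, x87=2_000_000, sse_pack_float=1_000_000,
    sse_pack_double=1_000_000, sse_scalar_float=3_000_000, sse_scalar_double=1_000_000,
    branch_ins=180_000_000, store_ins=120_000_000, load_ins=260_000_000)
print("integer/FP ratio:", integer / fp)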

16 Orlando, 2014.2.18 HPCA 2014 Floating Point Operation Intensity (E5310) (chart groups: Data Analytics, Services)
Definition: total number of floating point instructions divided by total number of memory access bytes in a run of the workload
Very low floating point operation intensity: two orders of magnitude lower than in traditional workloads
CPU type: Intel Xeon E5310, 4 cores @ 1.6GHz
L1 cache: 4 x 32KB; L2 cache: 2 x 4MB; L3 cache: none

17 Orlando, 2014.2.18 HPCA 2014 Floating Point Operation Intensity (chart groups: Data Analytics, Services)
Floating point operation intensity on the E5645 is higher than that on the E5310

18 Orlando, 2014.2.18 HPCA 2014 Integer Operation Intensity (chart groups: Data Analytics, Services)
Integer operation intensity is of the same order of magnitude as in traditional workloads
Integer operation intensity on the E5645 is higher than that on the E5310: the L3 cache is effective and bandwidth improves

19 Orlando, 2014.2.18 HPCA 2014 Possible Reasons (Xeon E5645 vs. Xeon E5310)
Technique improvements of the Xeon E5645:
More cores per processor: six cores in the Xeon E5645 vs. four cores in the Xeon E5310
Deeper cache hierarchy (L1~L3 vs. L1~L2): the L3 cache is effective in decreasing memory access traffic for big data workloads
Larger interconnect bandwidth: the Xeon E5645 adopts Intel QuickPath Interconnect (QPI) to eliminate Front Side Bus bottlenecks [ASPLOS 2012]
Hyper-threading technology: hyper-threading can improve performance by factors of 1.3~1.6x for scale-out workloads

20 Orlando, 2014.2.18 HPCA 2014 Cache Behaviors (chart groups: Data Analytics, Services)
Higher L1I cache misses than in traditional workloads
Data analytics workloads have better L2 cache behaviors than service workloads, with the exception of BFS
Good L3 cache behaviors
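Cache behavior here is reported as misses per kilo instructions (MPKI, the metric named in the characterization summary slide). A minimal sketch of that computation, with hypothetical counter values, is:

def mpki(misses, retired_instructions):
    # misses normalized to one thousand retired instructions
    return misses * 1000.0 / retired_instructions

# Hypothetical L1I counters for one run
print(mpki(misses=45_000_000, retired_instructions=2_000_000_000))  # 22.5 MPKI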

21 Orlando, 2014.2.18 HPCA 2014 TLB Behaviors (chart groups: Data Analytics, Services)
Higher ITLB misses than in traditional workloads

22 Orlando, 2014.2.18 HPCA 2014 Computation Intensity (integer operations)
X axis, integer operations per byte of memory access: (total number of integer instructions) / (total memory access bytes); higher means more integer operations are executed between two memory accesses
Y axis, integer operations per byte received from the network: (total number of integer instructions) / (total bytes received from the network); higher means more integer operations are executed on the same received bytes
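A minimal sketch of the two ratios plotted on this slide, assuming the raw counts (integer instructions, memory traffic in bytes, bytes received from the network) have already been collected; the example values are hypothetical.

def computation_intensity(integer_ins, mem_access_bytes, net_recv_bytes):
    per_mem_byte = integer_ins / mem_access_bytes   # X axis: ops between two memory accesses
    per_net_byte = integer_ins / net_recv_bytes     # Y axis: ops per received network byte
    return per_mem_byte, per_net_byte

print(computation_intensity(integer_ins=4.3e11, mem_access_bytes=2.1e11, net_recv_bytes=6.4e9))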

23 Orlando, 2014.2.18 HPCA 2014 Big Data Workload Characterization Summary
Data movement dominated computing: low computation intensity
Cache behaviors (Xeon E5645): very high L1I MPKI; the L3 cache is effective
Diverse workload behaviors: computation/communication vs. computation/memory-access ratios differ across workloads

24 Orlando, 2014.2.18 HPCA 2014 Outline
Benchmarking Methodology and Decision
Big Data Workload Characterization
Evaluating Hardware Systems with Big Data (Y. Shi, S. A. McKee et al., Performance and Energy Efficiency Implications from Evaluating Four Big Data Systems, submitted to IEEE Micro)
Conclusion

25 Orlando, 2014.2.18 HPCA 2014 State-of-the-Art Big Data System Architectures
Big data system & architecture trends: wimpy many-core processors, wimpy multi-core processors, brawny-core processors
Hardware designers: what are the best big data systems and architectures in terms of both performance and energy efficiency?
Data center administrators: how to choose appropriate hardware for big data applications?

26 Orlando, 2014.2.18 HPCA 2014 Evaluated Platforms
Xeon E5310 (brawny-core) -> scale-up -> Xeon E5645 (brawny-core); Atom D510 (wimpy multi-core) -> scale-out -> TileGx36 (wimpy many-core)
Basic information:
Model | Xeon E5645 | Xeon E5310 | Atom D510 | TileGx36
No. of processors | 2 | 1 | 1 | 1
No. of cores/CPU | 6 | 4 | 2 | 36
Frequency | 2.4GHz | 1.6GHz | 1.66GHz | 1.2GHz
L1 cache (I/D) | 32KB/32KB | 32KB/32KB | 32KB/24KB | 32KB/32KB
L2 cache | 256KB*6 | 4096KB*2 | 512KB*2 | 256KB*36
L3 cache | 12MB | None | None | None
TDP | 80W | 80W | 13W | 45W
Architectural characteristics:
Pipeline depth | 16 | 14 | 16 | 5
Superscalar width | 4 | 4 | 2 | 3
Instruction set architecture | x86 | x86 | x86 | MIPS
Hyper-threading | Yes | No | Yes | No
Out-of-order execution | Yes | Yes | No | No
Dedicated floating point unit | Yes | Yes | Yes | No

27 Orlando, 2014.2.18 HPCA 2014 Chosen Workloads from BigDataBench
Application types: offline analytics (Sort, Wordcount, Grep, Naïve Bayes, K-means); realtime analytics (Select Query, Aggregation Query, Join Query)
Time complexity: O(n*logn), O(n), O(m*n), O(n)
Map operations: Quicksort; string comparison & integer calculation; statistics computation; distance computation; string comparison; string comparison & integer calculation; string comparison
Reduce operations: merge sort; combination; merge; none; combination; cross product
Reduce input / map input: 1, 0.067, 1.85e-6, 1.98e-5, 2.64e-5, N/A, 0.20, 0.19
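To make the map/reduce columns concrete, here is a minimal Hadoop-streaming-style sketch of the Wordcount pattern (string comparison and integer calculation on the map side, combination on the reduce side). It illustrates the workload structure only and is not the BigDataBench implementation.

import sys
from itertools import groupby

def map_phase(lines):
    # Map: split text and emit (word, 1) pairs
    for line in lines:
        for word in line.split():
            yield word, 1

def reduce_phase(pairs):
    # Reduce ("combination"): sum the counts per word; pairs are sorted by key,
    # mimicking the shuffle/sort step that Hadoop performs between map and reduce
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    for word, count in reduce_phase(map_phase(sys.stdin)):
        print(word, count, sep="\t")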

28 Orlando, 2014.2.18 HPCA 2014 Experimental Configurations
Software stack: Hadoop 1.0.2
Cluster configuration: Xeon- and Atom-based systems: 1 master + 4 slaves; Tilera system: 1 master + 2 slaves
Data size: 500MB, 2GB, 8GB, 32GB, 64GB, 128GB
Apples-to-apples comparison: deploy the systems with the same network and disk configurations; provide about 1GB of memory for each hardware thread/core (see the sizing sketch below); adjust the Hadoop parameters to optimize performance
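A minimal sketch of the "about 1GB of memory per hardware thread/core" sizing rule mentioned above. The function name and the cap on concurrent task slots are assumptions for illustration, not the exact Hadoop parameters used in the paper.

def task_slots_and_heap(node_memory_gb, hardware_threads):
    # Give each concurrent task roughly 1GB: bound slots by memory or by threads
    slots = min(hardware_threads, node_memory_gb)
    heap_mb_per_task = int(node_memory_gb * 1024 / slots)
    return slots, heap_mb_per_task

# Hypothetical Xeon E5645 node: 2 sockets x 6 cores x 2 hardware threads, 16GB memory
print(task_slots_and_heap(node_memory_gb=16, hardware_threads=24))  # (16, 1024)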

29 Orlando, 2014.2.18 HPCA 2014 Metrics
Performance: data processed per second (DPS), where DPS = data input size / running time
Energy efficiency: data processed per joule (DPJ), where DPJ = data input size / energy consumption
DPS and DPJ are reported per processor
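A minimal sketch of both metrics as defined above, including the per-processor normalization; the input size, running time, energy, and processor count are hypothetical.

def dps(data_input_bytes, running_time_s):
    return data_input_bytes / running_time_s   # data processed per second

def dpj(data_input_bytes, energy_joules):
    return data_input_bytes / energy_joules    # data processed per joule

processors = 2                                 # e.g. a dual-socket Xeon E5645 node
run_bytes = 32 * 2**30                         # a 32GB input
print("DPS per processor:", dps(run_bytes, 1800) / processors)
print("DPJ per processor:", dpj(run_bytes, 250_000) / processors)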

30 Orlando, 2014.2.18 HPCA 2014 General Observations (average DPS and DPJ comparisons)
I/O-intensive workload (Sort): the many-core TileGx36 achieves the best performance and energy efficiency; the brawny-core processors do not provide performance advantages.
CPU-intensive and floating-point-dominated workloads (Bayes & K-means): brawny-core processors show obvious performance advantages with energy efficiency close to that of wimpy-core processors.
Other workloads: no platform consistently wins in terms of both performance and energy efficiency.
The average numbers are reported only for data sizes larger than 8GB (the systems are not fully utilized on small data sizes).

31 Orlando, 2014.2.18 HPCA 2014 Improvements from Scaling Out the Wimpy Core (TileGx36 vs. Atom D510)
The TileGx36 core is wimpier than the Atom D510: it adopts a MIPS-derived VLIW instruction set, does not support hyper-threading, has fewer pipeline stages, and does not have dedicated floating point units.
The TileGx36 integrates more cores on the NoC (network on chip): 36 cores in the TileGx36 vs. 4 cores in the Atom D510.

32 Orlando, 2014.2.18 HPCA 2014 Improvements from Scaling Out the Wimpy Core (TileGx36 vs. Atom D510): DPS and DPJ comparisons
I/O-intensive workload (Sort): the TileGx36 shows a 4.1x performance improvement and a 1.01x energy improvement (on average).
CPU-intensive and floating-point-dominated workloads (Bayes & K-means): the TileGx36 shows a 2.5x performance advantage and 0.7x energy efficiency (on average).
Other workloads: the TileGx36 shows a 2.5x performance improvement and a 1.03x energy improvement (on average).

33 Orlando, 2014.2.18 HPCA 2014 Improvements from Scaling Out the Wimpy Core (TileGx36 vs. Atom D510)
The TileGx36 core is wimpier than the Atom D510: it adopts a MIPS-derived VLIW instruction set, does not support hyper-threading, has fewer pipeline stages, and does not have dedicated floating point units.
The TileGx36 integrates more cores on the NoC (network on chip): 36 cores in the TileGx36 vs. 4 cores in the Atom D510.
Scaling out the wimpy core can bring a performance advantage by improving execution parallelism.
Simplifying the wimpy cores and integrating more cores on the NoC is an option for big data workloads.

34 Orlando, 2014.2.18 HPCA 2014 Scale-up the Brawny Core (Xeon E5645) vs. Scale-out the Wimpy Core (TileGx36): DPS and DPJ comparisons
I/O-intensive workload (Sort): the TileGx36 shows a 1.2x performance improvement and a 1.9x energy improvement (on average).
CPU-intensive and floating-point-dominated workloads (Bayes & K-means): the E5645 shows a 4.2x performance improvement and a 2.0x energy improvement (on average).
Other workloads: the E5645 shows a performance advantage, but with no consistent energy improvement.

35 Orlando, 2014.2.18 HPCA 2014 Hardware Evaluation Summary
No one-size-fits-all solution: none of the microprocessors consistently wins in terms of both performance and energy efficiency for all of our big data workloads.
One-size-fits-a-bunch solution: there are different classes of big data workloads, and each class realizes better performance and energy efficiency on a different architecture.

36 Orlando, 2014.2.18 HPCA 2014 Outline
Benchmarking Methodology and Decision
Big Data Workload Characterization
Evaluating Hardware Systems with Big Data
Conclusion

37 Orlando, 2014.2.18 HPCA 2014 Conclusion
An open-source big data benchmark suite with a data-centric benchmarking methodology: http://prof.ict.ac.cn/BigDataBench
Big data workload characterization: data movement dominated computing; diverse behaviors; benchmarks must include diversity of data and workloads
Eschew one-size-fits-all solutions: tailor system designs to specific workload requirements.

38 Orlando, 2014.2.18 HPCA 2014 Thanks

