
Lecture 1: Big Data + Cloud Computing. Zhang Qi (张奇), Fudan University. COMP630030 Data Intensive Computing.




1 Lecture 1: Big Data + Cloud Computing. Zhang Qi (张奇), Fudan University. COMP630030 Data Intensive Computing

2 [image slide]

3 [image slide]

4 [image slide]

5 How much data?
– Wayback Machine (Internet Archive) has 2 PB + 20 TB/month (2006)
– Google processes 20 PB a day (2008)
– "All words ever spoken by human beings" ~ 5 EB
– NOAA has ~1 PB climate data (2007)
– CERN's LHC will generate 15 PB a year (2008)
"640K ought to be enough for anybody."

6 Happening everywhere! [figure: data-intensive sources in many fields: molecular biology (cancer) via microarray chips, particle events (LHC) via particle colliders, simulations (Millennium), and network traffic (spam) over fiber optics, with per-source volumes on the order of 300M/day to 1B]

7 [image slide] Photo: Maximilien Brice, © CERN

8 [image slide]

9 On May 17, 2013, Alibaba Group took its last IBM minicomputer offline at Alipay.

10 Example: Wikipedia Anthropology
Experiment
– Download the entire revision history of Wikipedia
– 4.7 M pages, 58 M revisions, 800 GB
– Analyze editing patterns and trends
Computation
– Hadoop on a 20-machine cluster
Finding: an increasing fraction of edits is work indirectly related to the articles themselves.
Kittur, Suh, Pendleton (UCLA, PARC), "He Says, She Says: Conflict and Coordination in Wikipedia," CHI 2007
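To make the Hadoop computation concrete, here is a minimal Hadoop Streaming-style sketch in Python (not the authors' code) that counts revisions per page. It assumes a hypothetical input format of one tab-separated revision record per line with the page title in the first field; in a real job the mapper and reducer would be separate scripts passed to Hadoop Streaming.

```python
#!/usr/bin/env python
# Counts revisions per page, Hadoop Streaming style.
# Assumes a hypothetical dump format: page_title<TAB>revision_id<TAB>timestamp ...
import sys

def mapper(stream):
    # Emit one (page_title, 1) pair per revision record.
    for line in stream:
        fields = line.rstrip("\n").split("\t")
        if fields and fields[0]:
            print("%s\t1" % fields[0])

def reducer(stream):
    # Hadoop Streaming delivers mapper output sorted by key,
    # so we can sum counts for each page with a single pass.
    current, total = None, 0
    for line in stream:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print("%s\t%d" % (current, total))
            current, total = key, 0
        total += int(value)
    if current is not None:
        print("%s\t%d" % (current, total))

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "map"
    (mapper if role == "map" else reducer)(sys.stdin)
```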

11 Example: Scene Completion
Image database grouped by semantic content
– 30 different Flickr.com groups
– 2.3 M images total (396 GB)
Select candidate images most suitable for filling the hole
– Classify images with gist scene detector [Torralba]
– Color similarity
– Local context matching
Computation
– Index images offline
– 50 min. scene matching, 20 min. local matching, 4 min. compositing
– Reduces to 5 minutes total by using 5 machines
Extension
– Flickr.com has over 500 million images …
Hays, Efros (CMU), "Scene Completion Using Millions of Photographs," SIGGRAPH 2007
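The candidate-selection step can be pictured with a small sketch (purely illustrative, not the authors' pipeline): given precomputed scene descriptors for the database, rank images by descriptor distance to the query. The descriptors below are random placeholders standing in for gist-plus-color features.

```python
import numpy as np

# Hypothetical precomputed descriptors: one row per database image
# (in the paper these would be gist descriptors plus color information).
db_descriptors = np.random.rand(1000, 512).astype(np.float32)
query_descriptor = np.random.rand(512).astype(np.float32)

# Euclidean distance from the query scene to every database image.
dists = np.linalg.norm(db_descriptors - query_descriptor, axis=1)

# Indices of the 20 most similar candidate images for filling the hole.
top_candidates = np.argsort(dists)[:20]
print(top_candidates)
```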

12 Example: Web Page Analysis
Experiment
– Use a web crawler to gather 151 M HTML pages weekly, 11 times
– Generated 1.2 TB of log information
– Analyze page statistics and change frequencies
Systems challenge
– "Moreover, we experienced a catastrophic disk failure during the third crawl, causing us to lose a quarter of the logs of that crawl."
Fetterly, Manasse, Najork, Wiener (Microsoft, HP), "A Large-Scale Study of the Evolution of Web Pages," Software-Practice & Experience, 2004

13 [figure: a sequencer produces many short, overlapping reads (e.g., GATGCTTACTATGCGGGCCCC, CGGTCTAATGCTTACTATGC, AATGCTTACTATGCGGGCCCCTT, …) from a subject genome; the unknown genome must be reconstructed or mapped from these reads]

14 DNA Sequencing
The genome of an organism encodes genetic information in a long sequence of 4 DNA nucleotides: ATCG
– Bacteria: ~5 million bp (base pairs)
– Humans: ~3 billion bp
Current DNA sequencing machines can generate 1-2 Gbp of sequence per day, in millions of short reads (25-300 bp)
– Shorter reads, but much higher throughput
– Per-base error rate estimated at 1-2% (Simpson et al., 2009)
Recent studies of entire human genomes have used 3.3 (Wang et al., 2008) and 4.0 (Bentley et al., 2008) billion 36 bp reads
– ~144 GB of compressed sequence data
[example reads: ATCTGATAAGTCCCAGGACTTCAGT, GCAAGGCAAACCCGAGCCCAGTTT, TCCAGTTCTAGAGTTTCACATGATC, GGAGTTAGTAAAAGTCCACATTGAG]

15 [figure: short subject reads (e.g., GCTTATCTAT, ATCTATGCGG, CTATGCGGGC) aligned against the reference sequence CGGTCTAGATGCTTAGCTATGCGGGCCCCTT; aligned reads may differ from the reference at individual bases]

16 [figure: the same subject reads tiled along the reference sequence CGGTCTAGATGCTTATCTATGCGGGCCCCTT, showing overlapping read coverage across the whole reference]
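As a toy illustration of read mapping, the sketch below scans a reference for the best placement of each read while allowing a few mismatches. It is a naive stand-in for what tools like CloudBurst do at scale, using the reference and reads from the slide above.

```python
def count_mismatches(read, ref, pos):
    """Number of mismatching bases when read is placed at ref[pos:]."""
    return sum(1 for i, base in enumerate(read) if base != ref[pos + i])

def map_read(read, ref, max_mismatches=2):
    """Return (position, mismatches) of the best alignment, or None."""
    best = None
    for pos in range(len(ref) - len(read) + 1):
        mm = count_mismatches(read, ref, pos)
        if best is None or mm < best[1]:
            best = (pos, mm)
    if best is not None and best[1] <= max_mismatches:
        return best
    return None

reference = "CGGTCTAGATGCTTATCTATGCGGGCCCCTT"
reads = ["GCTTATCTAT", "TTATCTATGC", "ATCTATGCGG", "CTATGCGGGC"]
for r in reads:
    print(r, map_read(r, reference))
```

Real read mappers avoid this quadratic scan by indexing the reference (seed-and-extend), which is exactly the part CloudBurst parallelizes with MapReduce.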

17 Example: Bioinformatics
Evaluate running time on a local 24-core cluster
– Running time increases linearly with the number of reads
Michael Schatz, "CloudBurst: Highly Sensitive Read Mapping with MapReduce," Bioinformatics, 2009

18 Example: Data Mining
A del.icio.us crawl yields a bipartite graph covering 802,739 web pages and 1,021,107 tags.
Haoyuan Li, Yi Wang, Dong Zhang, Ming Zhang, Edward Y. Chang, "PFP: Parallel FP-Growth for Query Recommendation," RecSys 2008: 107-114
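The parallel FP-Growth algorithm itself is beyond a slide, but the kind of signal it mines can be sketched with a toy co-occurrence count over (page, tags) data. This is a simplified stand-in, not the PFP algorithm, and the input below is invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy input: each page with the set of tags users assigned to it.
page_tags = {
    "page1": {"python", "programming", "tutorial"},
    "page2": {"python", "webdev"},
    "page3": {"programming", "tutorial", "webdev"},
}

# Count how often each pair of tags co-occurs on the same page.
pair_counts = Counter()
for tags in page_tags.values():
    for a, b in combinations(sorted(tags), 2):
        pair_counts[(a, b)] += 1

# Frequent pairs are candidates for tag/query recommendation.
print(pair_counts.most_common(3))
```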

19 Example: Information Flow 19

20 Example: Opinion Leader 20

21 There’s nothing like more data! s/inspiration/data/g; (Banko and Brill, ACL 2001) (Brants et al., EMNLP 2007) 21

22 22

23 What is Cloud Computing?
1. First write down your own opinion of "cloud computing," whatever comes to mind.
2. Questions: What? Who? Why? How? Pros and cons?
3. The most important question: what does it have to do with us?
From "Introduction to Cloud Computing and Technical Issues," Eom, Hyeonsang

24 What Is Cloud Computing?
Internet computing
– Computation done through the Internet
– No concern about any maintenance or management of actual resources
Shared computing resources
– As opposed to local servers and devices
Comparable to the Grid
Infrastructure, Web applications, specialized raw computing services
From "Introduction to Cloud Computing and Technical Issues," Eom, Hyeonsang

25 Cloud Computing Resources
Large pool of easily usable and accessible virtualized resources
– Dynamic reconfiguration for adjusting to different loads (scales)
– Optimization of resource utilization
Pay-per-use model
– Guarantees offered by the infrastructure provider by means of customized SLAs (Service Level Agreements)
Set of features
– Scalability, pay-per-use utility model, and virtualization
From "Introduction to Cloud Computing and Technical Issues," Eom, Hyeonsang

26 Technical Key Points
– User interaction interface: how users of the cloud interface with it
– Services catalog: services a user can request
– System management: management of available resources
– Provisioning tool: carving out systems from the cloud to deliver the requested service
– Monitoring and metering: tracking usage of the cloud
– Servers: virtual or physical servers managed by system administrators
From "Introduction to Cloud Computing and Technical Issues," Eom, Hyeonsang

27 Key Characteristics (1/2)
Cost savings for resources
– Cost is greatly reduced, as initial and recurring expenses are much lower than in traditional computing
– Maintenance cost is reduced, as a third party maintains everything from running the cloud to storing data
Platform, location, and device independence
– Suitable for businesses of all sizes, in particular small and mid-sized ones
From "Introduction to Cloud Computing and Technical Issues," Eom, Hyeonsang

28 Key Characteristics (2/2) Scalable services and applications – Achieved through server virtualization technology Redundancy and disaster recovery 28 From “Introduction to Cloud Computing and Technical Issues” Eom, Hyeonsang

29 Timeline 29 From “Introduction to Cloud Computing and Technical Issues” Eom, Hyeonsang

30 Cloud Computing Architecture
Front end
– End user, client, or any application (e.g., a web browser)
Back end (cloud services)
– Network of servers with any computer program and data storage system
It is usually assumed that clouds have infinite storage capacity for any software available on the market.

31 [image slide]

32 Cloud Service Taxonomy
Layer
– Software-as-a-Service (SaaS)
– Platform-as-a-Service (PaaS)
– Infrastructure-as-a-Service (IaaS)
– Data Storage-as-a-Service (DaaS)
– Communication-as-a-Service (CaaS)
– Hardware-as-a-Service (HaaS)
Type
– Public cloud
– Private cloud
– Inter-cloud

33 Infrastructure as a Service 33

34 What Is Infrastructure-as-a-Service (IaaS)?
Characteristics
– Utility computing and billing model
– Automation of administrative tasks
– Dynamic scaling
– Desktop virtualization
– Policy-based services
– Internet connectivity

35 Use Scenario for IaaS 35

36 Business Example: Amazon EC2 36

37 Amazon EC2 CLI (Command Line Interface)
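The slide shows the EC2 command-line tools; an equivalent action expressed in Python with the boto3 SDK (not part of the original slides, and the AMI ID below is a placeholder) looks roughly like this.

```python
import boto3

# Create an EC2 client; credentials come from the environment or ~/.aws.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single small instance from a placeholder AMI ID.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched", instance_id)

# Later, stop the instance to avoid further pay-per-use charges.
ec2.stop_instances(InstanceIds=[instance_id])
```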

38 Amazon EC2 Automated Management 38

39 Data Storage as a Service (DaaS)
Definition
– Delivery of data storage as a service, including database-like services, often billed on a utility computing basis
Examples
– Database: Amazon SimpleDB, Google App Engine's BigTable datastore
– Network attached storage: MobileMe iDisk, Nirvanix CloudNAS
– Synchronization: Live Mesh Live Desktop component, MobileMe push functions
– Web service: Amazon Simple Storage Service (S3), Nirvanix SDN

40 Amazon S3 40
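As a small illustration of storage-as-a-service, here is a boto3 sketch that writes and reads an object in S3. The bucket name is a placeholder, and boto3 itself postdates these slides; it simply stands in for whatever S3 client the slide depicts.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical bucket name

# Store an object: the key is the object's name inside the bucket.
s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"Hello, S3!")

# Retrieve it again; Body is a streaming object.
obj = s3.get_object(Bucket=bucket, Key="hello.txt")
print(obj["Body"].read().decode("utf-8"))
```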

41 Platform as a Service 41

42 PaaS Example: Google App Engine
A service that lets users deploy their Web applications on Google's highly scalable architecture
– Provides a sandbox for Java and Python applications that can be reached over the Internet
– Provides Java and Python APIs for persistently storing and managing data (using the Google Query Language, GQL)

43 Google App Engine 43

44 Google App Engine (Python Example) 44
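The slide presumably shows its own screenshot; as a minimal sketch of the classic App Engine Python 2.7 runtime (webapp2 request handlers plus the ndb datastore API), a handler that stores and lists entities would look roughly like this.

```python
import webapp2
from google.appengine.ext import ndb

class Greeting(ndb.Model):
    # Entity stored in the App Engine datastore.
    content = ndb.StringProperty()

class MainPage(webapp2.RequestHandler):
    def get(self):
        # Persist a greeting, then list up to five stored greetings.
        Greeting(content="Hello from App Engine").put()
        greetings = Greeting.query().fetch(5)
        self.response.headers["Content-Type"] = "text/plain"
        self.response.write("\n".join(g.content for g in greetings))

app = webapp2.WSGIApplication([("/", MainPage)], debug=True)
```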

45 Software-as-a-Service (SaaS) Definition – Software deployed as a hosted service and accessed over the Internet Features – Open, Flexible – Easy to Use – Easy to Upgrade – Easy to Deploy 45

46 Software as a Service 46

47 Human as a Service 47

48 Administration/Business Support 48

49 Cloud Architecture -> Cloud Players 49

50 Players 50

51 Players: Providers 51

52 Players: Cloud Intermediaires 52

53 Players: Application Providers 53

54 [image slide]

55 Web-Scale Problems?
Don't hold your breath:
– Biocomputing
– Nanocomputing
– Quantum computing
– …
It all boils down to…
– Divide-and-conquer
– Throwing more hardware at the problem
Simple to understand… a lifetime to master…

56 Divide and Conquer [figure: the "Work" is partitioned into chunks w1, w2, w3; each chunk goes to a "worker" that produces a partial result r1, r2, r3; the partial results are combined into the final "Result"]
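A minimal Python sketch of the partition/combine idea, using a process pool as the workers (illustrative only): the chunks play the role of w1, w2, w3 and the partial sums the role of r1, r2, r3.

```python
from multiprocessing import Pool

def worker(chunk):
    # Each worker handles one partition w_i and produces a partial result r_i.
    return sum(chunk)

def partition(data, n):
    # Split the work into n roughly equal chunks.
    size = (len(data) + n - 1) // n
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    work = list(range(1_000_000))
    with Pool(3) as pool:
        partials = pool.map(worker, partition(work, 3))  # w1, w2, w3 -> r1, r2, r3
    result = sum(partials)  # combine step
    print(result)
```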

57 Different Workers Different threads in the same core Different cores in the same CPU Different CPUs in a multi-processor system Different machines in a distributed system 57

58 Choices, Choices, Choices Commodity vs. "exotic" hardware Number of machines vs. processors vs. cores Bandwidth of memory vs. disk vs. network Different programming models

59 Flynn's Taxonomy (Instructions × Data)
– SISD (Single Instruction, Single Data): single-threaded process
– SIMD (Single Instruction, Multiple Data): vector processing
– MISD (Multiple Instruction, Single Data): pipeline architecture
– MIMD (Multiple Instruction, Multiple Data): multi-threaded programming

60 SISD [diagram: a single processor executes one instruction stream over a single stream of data items]

61 SIMD [diagram: a single instruction stream operates on many data elements (D0 … Dn) in lockstep]
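In software terms, SIMD corresponds to vectorized operations where one instruction is applied across many data elements at once. A rough NumPy illustration (NumPy is not mentioned on the slides, it just makes the contrast visible):

```python
import numpy as np

data = np.arange(1_000_000, dtype=np.float64)

# Scalar, SISD-style: one instruction stream, one data element at a time.
scalar_result = 0.0
for x in data[:1000]:          # only a slice, to keep the Python loop short
    scalar_result += x * 2.0

# Vectorized, SIMD-style: the same multiply is applied to the whole array
# at once and can map onto hardware vector instructions.
vector_result = (data * 2.0).sum()
print(scalar_result, vector_result)
```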

62 MIMD [diagram: multiple processors each execute their own instruction stream over their own data]

63 Memory Typology: Shared [diagram: several processors connected to one shared memory]

64 Memory Typology: Distributed [diagram: each processor has its own local memory; processors communicate over a network]

65 Memory Typology: Hybrid [diagram: processors within a node share memory; the nodes are connected by a network]

66 Parallelization Problems How do we assign work units to workers? What if we have more work units than workers? What if workers need to share partial results? How do we aggregate partial results? How do we know all the workers have finished? What if workers die? What is the common theme of all of these problems? 66

67 General Theme?
Parallelization problems arise from:
– Communication between workers
– Access to shared resources (e.g., data)
Thus, we need a synchronization system!
This is tricky:
– Finding bugs is hard
– Fixing bugs is even harder

68 Managing Multiple Workers
Difficult because we (often)
– don't know the order in which workers run
– don't know where the workers are running
– don't know when workers interrupt each other
Thus, we need:
– Semaphores (lock, unlock)
– Condition variables (wait, notify, broadcast)
– Barriers
Still, lots of problems:
– Deadlock, livelock, race conditions, ...
Moral of the story: be careful!
– Even trickier if the workers are on different machines
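These primitives map directly onto, for example, Python's threading module. A small single-machine sketch (illustrative only) showing a lock protecting shared state and a barrier holding workers until all have finished their update:

```python
import threading

lock = threading.Lock()              # mutual exclusion for the shared counter
barrier = threading.Barrier(4)       # all workers wait here before continuing
counter = 0

def worker(worker_id):
    global counter
    with lock:                       # protect the shared counter
        counter += 1
    barrier.wait()                   # nobody proceeds until all have incremented
    print("worker", worker_id, "sees counter =", counter)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```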

69 Patterns for Parallelism Parallel computing has been around for decades Here are some “design patterns” … 69

70 Master/Slaves [diagram: one master node dispatches work to many slave nodes]

71 Producer/Consumer Flow [diagram: producers (P) feed work to consumers (C), which in turn produce for the next stage, forming a flow]

72 Work Queues [diagram: producers (P) place work items (W) on a shared queue; consumers (C) take items off the queue]
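A shared work queue decouples producers from consumers. A minimal sketch with Python's thread-safe queue.Queue (illustrative only; a sentinel value tells each consumer when to stop):

```python
import queue
import threading

work_queue = queue.Queue()

def producer(n_items):
    for i in range(n_items):
        work_queue.put(i)            # W: put work items on the shared queue

def consumer(name):
    while True:
        item = work_queue.get()      # blocks until an item is available
        if item is None:             # sentinel: no more work for this consumer
            work_queue.task_done()
            break
        print(name, "processed item", item)
        work_queue.task_done()

consumers = [threading.Thread(target=consumer, args=("C%d" % i,)) for i in range(2)]
for c in consumers:
    c.start()
producer(10)
for _ in consumers:
    work_queue.put(None)             # one sentinel per consumer
work_queue.join()                    # wait until every item has been processed
for c in consumers:
    c.join()
```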

73 Rubber Meets Road
From patterns to implementation:
– pthreads, OpenMP for multi-threaded programming
– MPI for cluster computing
– …
The reality:
– Lots of one-off solutions, custom code
– Write your own dedicated library, then program with it
– Burden on the programmer to explicitly manage everything
MapReduce to the rescue!
– (for next time)
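For the MPI route, one option is the Python binding mpi4py, which exposes the same message-passing model; a tiny sketch (assuming an MPI installation and launch under mpiexec, e.g. `mpiexec -n 4 python sum.py`):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()       # this process's id
size = comm.Get_size()       # total number of processes

# Each process computes a partial sum over its own slice of the work.
partial = sum(range(rank, 1000, size))

# Reduce the partial results to a single total on rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print("total =", total)
```

Even in this tiny example the programmer handles partitioning and communication explicitly, which is exactly the burden MapReduce takes over.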

74 Questions? 74

