HAMS Technologies 1



2 HAMS Technologies » Consider that you have a big task that can be divided into four independent tasks. Also consider that you can spawn any number of processes, but one process can communicate with exactly one other process at a time. We have one master process responsible for distributing the tasks among four different worker processes. » We will now look at the different possible communication topologies and analyze the behavior of each:
» Direct (star): Master Node connected directly to Worker-1, Worker-2, Worker-3 and Worker-4.
» Binary tree: Master Node connected to Intermediate Communication Node-1 (serving Worker-1 and Worker-2) and Intermediate Communication Node-2 (serving Worker-3 and Worker-4).
» Exponential tree: Master Node connected to Worker-1, Intermediate-1 and Intermediate-2; Intermediate-1 serves Worker-2; Intermediate-2 serves Worker-3 and Intermediate-3; Intermediate-3 serves Worker-4.

Direct (star) topology
Master Node connected directly to Worker-1, Worker-2, Worker-3 and Worker-4. Communication happens as follows:
At time unit 1: Submit job-1, Master Node to Worker-1
At time unit 2: Submit job-2, Master Node to Worker-2
At time unit 3: Submit job-3, Master Node to Worker-3
At time unit 4: Submit job-4, Master Node to Worker-4
At time unit 5: Receive job-1, Worker-1 to Master Node
At time unit 6: Receive job-2, Worker-2 to Master Node
At time unit 7: Receive job-3, Worker-3 to Master Node
At time unit 8: Receive job-4, Worker-4 to Master Node
Total time for communication = 8 units
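The star-topology schedule above can be sketched in code (our own sketch, not part of the original slides; the function name `star_time` is ours): the master performs one send per time unit and then one receive per time unit, so X workers cost 2X units.

```python
# Sketch: total communication time in the direct (star) topology,
# where the master talks to exactly one worker per time unit.
def star_time(num_workers: int) -> int:
    time = 0
    for _ in range(num_workers):  # submit one job per time unit
        time += 1
    for _ in range(num_workers):  # receive one result per time unit
        time += 1
    return time

print(star_time(4))  # 8 units, matching the schedule above
```

For 4 workers this reproduces the 8 units of the slide; the master is the bottleneck because every one of the 2X transfers passes through it.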

Binary tree topology
Master Node connected to Intermediate Communication Node-1 (Worker-1 and Worker-2) and Intermediate Communication Node-2 (Worker-3 and Worker-4). Communication happens as follows:
At time unit 1: Submit job-1 and job-2, Master Node to Intermediate-1
At time unit 2: Submit job-3 and job-4, Master Node to Intermediate-2 AND Submit job-1, Intermediate-1 to Worker-1
At time unit 3: Submit job-3, Intermediate-2 to Worker-3 AND Submit job-2, Intermediate-1 to Worker-2
At time unit 4: Submit job-4, Intermediate-2 to Worker-4 AND Receive job-1, Worker-1 to Intermediate-1
At time unit 5: Receive job-2, Worker-2 to Intermediate-1 AND Receive job-3, Worker-3 to Intermediate-2
At time unit 6: Receive job-1 and job-2, Intermediate-1 to Master Node AND Receive job-4, Worker-4 to Intermediate-2
At time unit 7: Receive job-3 and job-4, Intermediate-2 to Master Node
Total time for communication = 7 units
Major problems with the binary topology:
1. Communication is not balanced.
2. At every stage many nodes sit idle, either waiting for data or waiting for a busy neighbouring node to become free. We therefore suggest moving to an exponential kind of tree.
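The binary-tree schedule can be encoded and checked mechanically (a sketch of ours with shortened node names: `int1`/`int2` for the intermediate nodes, `w1`..`w4` for the workers). The check confirms the one-communication-at-a-time rule: no process appears in two transfers in the same time unit, and the last transfer happens at time unit 7.

```python
# Each time unit maps to the (sender, receiver) pairs active in that unit.
binary_schedule = {
    1: [("master", "int1")],                   # submit job-1 and job-2
    2: [("master", "int2"), ("int1", "w1")],   # submit job-3/4; job-1
    3: [("int2", "w3"), ("int1", "w2")],
    4: [("int2", "w4"), ("w1", "int1")],       # receive job-1
    5: [("w2", "int1"), ("w3", "int2")],
    6: [("int1", "master"), ("w4", "int2")],   # receive job-1 and job-2
    7: [("int2", "master")],                   # receive job-3 and job-4
}

# One-communication-at-a-time rule: a node may appear at most once per unit.
for t, events in binary_schedule.items():
    nodes = [n for pair in events for n in pair]
    assert len(nodes) == len(set(nodes)), f"conflict at time unit {t}"

print(max(binary_schedule))  # 7 time units in total
```

Encoding the schedule this way also makes the imbalance visible: units 1 and 7 use only a single link, while the intermediate units keep two links busy.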

Exponential tree (with variation) topology
Master Node connected to Worker-1, Intermediate-1 and Intermediate-2; Intermediate-1 serves Worker-2; Intermediate-2 serves Worker-3 and Intermediate-3; Intermediate-3 serves Worker-4. Communication happens as follows:
At time unit 1: Send job-3 and job-4, Master Node to Intermediate-2
At time unit 2: Send job-2, Master Node to Intermediate-1 AND Send job-4, Intermediate-2 to Intermediate-3
At time unit 3: Send job-1, Master Node to Worker-1 AND Send job-2, Intermediate-1 to Worker-2 AND Send job-3, Intermediate-2 to Worker-3 AND Send job-4, Intermediate-3 to Worker-4
At time unit 4: Receive job-1, Worker-1 to Master Node AND Receive job-2, Worker-2 to Intermediate-1 AND Receive job-3, Worker-3 to Intermediate-2 AND Receive job-4, Worker-4 to Intermediate-3
At time unit 5: Receive job-2, Intermediate-1 to Master Node AND Receive job-4, Intermediate-3 to Intermediate-2
At time unit 6: Receive job-3 and job-4, Intermediate-2 to Master Node
Total time for communication = 6 units

Comparison of the time taken by the different topologies (scheduling 4 jobs to 4 worker processes):
Direct: 8 units. One process (the Master Node) sends and receives all the jobs, so the Master Node becomes the bottleneck. In general, for X worker processes this approach takes 2X time units.
Binary: 7 units. The communication work is divided, but many processes have to wait for a neighbouring process to become free, which consumes extra time. In general, for X worker processes this approach takes 2X - 1 time units.
Exponential kind of tree: 6 units. Communication is smooth in this case. In general, for X worker processes this approach takes 2(X - 1) time units.
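The three closed forms from the comparison can be evaluated side by side (a small sketch of ours; the function names are our own):

```python
def direct_time(x: int) -> int:
    return 2 * x          # master sends X jobs, then receives X results

def binary_time(x: int) -> int:
    return 2 * x - 1      # one unit saved by overlapping sends and receives

def exponential_time(x: int) -> int:
    return 2 * (x - 1)    # more links active in parallel per time unit

for x in (4, 8, 16):
    print(x, direct_time(x), binary_time(x), exponential_time(x))
# For x = 4 this prints 4 8 7 6, matching the table above.
```

The gap between the topologies stays constant (one unit each), so the main practical win of the tree topologies is relieving the master-node bottleneck rather than the asymptotic count.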

7 Thank you
Kindly drop us a mail at the address mentioned below for any suggestions or clarifications. We would like to hear from you.
HAMS Technologies