By: Chris Klumph and Kody Willman

Contents:
- Types of Heuristics
- References
- Terminology
- Static Mappings (example mappings; graph/chromosome mappings; tree mapping)
- Dynamic Mappings: Immediate mode (5 heuristics), Batch mode (3 heuristics)

Types of Static Mappings
Static mapping is used when you know in advance what tasks are to be scheduled, and you just need to choose the best way to map them onto machines.
- Opportunistic Load Balancing (OLB)
- Minimum Execution Time (MET)
- Minimum Completion Time (MCT)
- Min-Min
- Max-Min
- Duplex
- Genetic Algorithms (GA)
- Simulated Annealing (SA)
- Genetic Simulated Annealing (GSA)
- Tabu
- A*

Example: a table of expected execution times for Tasks 1-6 on Machines A, B, and C (the numeric values appeared only in the original slide's table and are not recoverable here). This example is used to demonstrate the usage of six of the static heuristics: OLB, Min-Min, Max-Min, MET, MCT, and Duplex.

Static Heuristics details: Opportunistic Load Balancing (OLB)
Assigns each task, in arbitrary order, to the machine expected to be available next, regardless of the task's execution time on that machine. The goal is to keep all machines as busy as possible; optimization is not a concern. Its advantage is its simplicity, but it can result in poor makespans.
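Because the slide's numeric table did not survive, here is a minimal Python sketch of OLB under stated assumptions: a hypothetical etc[task][machine] matrix of expected execution times, machines that start idle, and function/variable names of my own choosing.

```python
def olb(etc, num_machines):
    """Opportunistic Load Balancing: assign each task, in arrival order,
    to the machine expected to be available next, ignoring execution time."""
    ready = [0.0] * num_machines           # when each machine becomes free
    mapping = []
    for times in etc:                      # etc[task][machine] = execution time
        m = min(range(num_machines), key=lambda j: ready[j])  # next available
        mapping.append(m)
        ready[m] += times[m]               # load the chosen machine
    return mapping, max(ready)             # assignment and resulting makespan

# Hypothetical expected-execution-time matrix (6 tasks x 3 machines).
etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19],
       [13, 8, 17], [12, 13, 10], [13, 16, 9]]
print(olb(etc, 3))
```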

Static Heuristics details: Minimum Execution Time (MET)
Assigns each task, in arbitrary order, to the machine with the best expected execution time for that task, regardless of that machine's availability. The goal is to give each task its best machine, but this can cause severe load imbalance across machines: if tasks 1-20 all run best on machine A, they are all assigned to it, while machine B, best for no task, gets none.
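A minimal sketch of MET under the same assumptions as before (hypothetical etc matrix, illustrative names); the sample data is chosen to make the imbalance visible.

```python
def met(etc, num_machines):
    """Minimum Execution Time: give each task, in arbitrary order, to its
    fastest machine, regardless of how loaded that machine already is."""
    ready = [0.0] * num_machines
    mapping = []
    for times in etc:
        m = min(range(num_machines), key=lambda j: times[j])  # fastest machine
        mapping.append(m)
        ready[m] += times[m]
    return mapping, max(ready)

# With these hypothetical values machine 0 is fastest for every task, so all
# tasks pile onto it and the imbalance shows up as a large makespan.
etc = [[5, 9, 12], [4, 10, 11], [6, 9, 13], [5, 8, 12]]
print(met(etc, 3))
```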

Static Heuristics details: Minimum Completion Time (MCT)
Assigns each task, in arbitrary order, to the machine with the minimum expected completion time for that task. A drawback is that some tasks end up assigned to machines that do not have the minimum execution time for them.
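A minimal MCT sketch, same assumptions as the previous examples (hypothetical etc matrix, illustrative names); the only change from OLB is that the choice considers ready time plus execution time.

```python
def mct(etc, num_machines):
    """Minimum Completion Time: assign each task, in arbitrary order, to the
    machine with the earliest expected completion time (ready + execution)."""
    ready = [0.0] * num_machines
    mapping = []
    for times in etc:
        m = min(range(num_machines), key=lambda j: ready[j] + times[j])
        mapping.append(m)
        ready[m] += times[m]
    return mapping, max(ready)

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19],
       [13, 8, 17], [12, 13, 10], [13, 16, 9]]   # hypothetical values
print(mct(etc, 3))
```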

Static Heuristics details: Min-Min
Consider all the unmapped tasks, with the known set of minimum completion times. The task with the overall minimum completion time is selected and assigned to the corresponding machine. Update the machine ready times, and repeat until all tasks are mapped. In short: for all unmapped tasks, find the minimum completion time and assign. (Similar to MCT, but it considers all tasks together rather than one at a time.)
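A minimal Min-Min sketch under the same assumptions (hypothetical etc matrix, illustrative names); note that unlike MCT it re-examines every unmapped task on every pass.

```python
def min_min(etc, num_machines):
    """Min-Min: repeatedly find, over all unmapped tasks, the task/machine
    pair with the overall minimum completion time, assign it, and update."""
    ready = [0.0] * num_machines
    unmapped = set(range(len(etc)))
    mapping = {}
    while unmapped:
        t, m, ct = min(
            ((t, m, ready[m] + etc[t][m])
             for t in unmapped for m in range(num_machines)),
            key=lambda x: x[2])
        mapping[t] = m
        ready[m] = ct                      # machine busy until this task is done
        unmapped.remove(t)
    return mapping, max(ready)

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19],
       [13, 8, 17], [12, 13, 10], [13, 16, 9]]   # hypothetical values
print(min_min(etc, 3))
```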

Static Heuristics details: Max-Min
Consider all the unmapped tasks, with the known set of minimum completion times. The task whose minimum completion time is the overall maximum is selected and assigned to the corresponding machine. Update the machine ready times, and repeat until all tasks are mapped. In short: for all unmapped tasks, find the maximum of the minimum completion times and assign.
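A minimal Max-Min sketch, same assumptions as above; it differs from Min-Min only in which task is pulled from the batch first.

```python
def max_min(etc, num_machines):
    """Max-Min: like Min-Min, but among each task's minimum completion
    times, pick the task whose minimum is largest and assign it first."""
    ready = [0.0] * num_machines
    unmapped = set(range(len(etc)))
    mapping = {}
    while unmapped:
        # each task's best (completion time, machine) under current loads
        best = {t: min((ready[m] + etc[t][m], m) for m in range(num_machines))
                for t in unmapped}
        t = max(best, key=lambda t: best[t][0])   # largest of the minimums
        ct, m = best[t]
        mapping[t] = m
        ready[m] = ct
        unmapped.remove(t)
    return mapping, max(ready)

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19],
       [13, 8, 17], [12, 13, 10], [13, 16, 9]]   # hypothetical values
print(max_min(etc, 3))
```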

Static Heuristics details: Duplex
A literal combination of the Min-Min and Max-Min heuristics: it performs both and uses the better solution. This exploits the conditions under which either heuristic performs well, at a negligible overhead.
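A minimal Duplex sketch. The shared helper _greedy_batch is my own refactoring (both heuristics differ only in the task-selection rule), and the etc values remain hypothetical.

```python
def _greedy_batch(etc, num_machines, pick_largest):
    """Shared Min-Min / Max-Min loop: assign one task per pass based on the
    per-task minimum completion times under the current machine loads."""
    ready = [0.0] * num_machines
    unmapped = set(range(len(etc)))
    mapping = {}
    while unmapped:
        best = {t: min((ready[m] + etc[t][m], m) for m in range(num_machines))
                for t in unmapped}
        choose = max if pick_largest else min
        t = choose(best, key=lambda t: best[t][0])
        ct, m = best[t]
        mapping[t] = m
        ready[m] = ct
        unmapped.remove(t)
    return mapping, max(ready)

def duplex(etc, num_machines):
    """Duplex: run both Min-Min and Max-Min, keep the better makespan."""
    results = [_greedy_batch(etc, num_machines, pick_largest=False),  # Min-Min
               _greedy_batch(etc, num_machines, pick_largest=True)]   # Max-Min
    return min(results, key=lambda r: r[1])

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19],
       [13, 8, 17], [12, 13, 10], [13, 16, 9]]   # hypothetical values
print(duplex(etc, 3))
```

Since both underlying heuristics are cheap, running them back to back roughly doubles the cost while guaranteeing the result is never worse than either alone.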

Static Heuristics details: Genetic Algorithms (GA)
A technique used to search large solution spaces.
Pseudo code:
  initial population generation;
  evaluation;
  while (stopping criteria not met)
    selection;
    crossover;
    mutation;
    evaluation;
  output best solution;
The initial population is made of 200 randomly generated chromosomes, either with a uniform distribution or seeded with a Min-Min solution. The stopping criterion is usually 1,000 total iterations, or 150 iterations with no change in the best solution.
Evaluation - determines which chromosomes are better and keeps them for subsequent populations.
Selection - duplicates the better chromosomes and deletes the others.
Crossover - selects two random chromosomes and random point(s) within them, then swaps the data between those points. Each chromosome is chosen for crossover with 60% probability.
Mutation - randomly selects a chromosome, randomly selects a task within it, and reassigns that task to a new machine. Each chromosome is chosen for mutation with 40% probability.
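A sketch of this GA loop, assuming a chromosome is simply a list giving each task's machine index. The rates and stopping criteria (200, 60%, 40%, 1000, 150) come from the slide; everything else (function names, the even-population selection scheme, the etc values, the small demo parameters) is illustrative.

```python
import random

def makespan(chrom, etc, num_machines):
    """Makespan of a chromosome: a list giving each task's machine index."""
    ready = [0.0] * num_machines
    for t, m in enumerate(chrom):
        ready[m] += etc[t][m]
    return max(ready)

def ga(etc, num_machines, pop_size=200, max_iters=1000, patience=150):
    n = len(etc)
    # random uniform initial population (could instead be seeded with Min-Min)
    pop = [[random.randrange(num_machines) for _ in range(n)]
           for _ in range(pop_size)]
    best = min(pop, key=lambda c: makespan(c, etc, num_machines))[:]
    stale = 0
    for _ in range(max_iters):
        if stale >= patience:          # no change in best solution for too long
            break
        # selection: keep the better half, duplicated (pop_size must be even)
        pop.sort(key=lambda c: makespan(c, etc, num_machines))
        half = pop[:pop_size // 2]
        pop = [c[:] for c in half] + [c[:] for c in half]
        # crossover: with 60% probability a pair swaps tails after a random cut
        for i in range(0, pop_size - 1, 2):
            if random.random() < 0.6:
                cut = random.randrange(1, n)
                pop[i][cut:], pop[i + 1][cut:] = pop[i + 1][cut:], pop[i][cut:]
        # mutation: with 40% probability reassign one random task in a chromosome
        for chrom in pop:
            if random.random() < 0.4:
                chrom[random.randrange(n)] = random.randrange(num_machines)
        cand = min(pop, key=lambda c: makespan(c, etc, num_machines))
        if makespan(cand, etc, num_machines) < makespan(best, etc, num_machines):
            best, stale = cand[:], 0
        else:
            stale += 1
    return best, makespan(best, etc, num_machines)

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19],
       [13, 8, 17], [12, 13, 10], [13, 16, 9]]   # hypothetical values
print(ga(etc, 3, pop_size=40, max_iters=200, patience=30))  # small demo run
```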

Static Heuristics details: Simulated Annealing (SA)
An iterative technique that considers only one possible solution (mapping) for each metatask at a time, using the same chromosome representation as the GA. Poorer solutions may be accepted, which lets the search explore more of the solution space. A cooling schedule makes it progressively harder to accept a poorer solution the longer the search runs: early on a poorer solution may be accepted with around 50% probability, and after each mutation the system temperature is reduced to 90% of its previous value.
Example: trying to find the minimum on the graph. Starting in Area 1, the nearest minimum is in Area 2, but that is not the overall lowest; by accepting a solution temporarily poorer than Area 2's, the search could find the actual overall minimum in Area 5. This allows possibilities in other parts of the solution space to be found.
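A minimal annealing sketch under assumptions: the starting temperature equals the initial makespan, acceptance uses the standard exponential rule, and cooling multiplies the temperature by 0.9 each step, matching the slide's 90% figure. Names and the etc values are illustrative.

```python
import math, random

def makespan(chrom, etc, num_machines):
    ready = [0.0] * num_machines
    for t, m in enumerate(chrom):
        ready[m] += etc[t][m]
    return max(ready)

def simulated_annealing(etc, num_machines, cooling=0.9, iters=1000):
    n = len(etc)
    current = [random.randrange(num_machines) for _ in range(n)]
    cur_cost = makespan(current, etc, num_machines)
    temp = cur_cost                        # initial temperature (an assumption)
    best, best_cost = current[:], cur_cost
    for _ in range(iters):
        neighbor = current[:]
        neighbor[random.randrange(n)] = random.randrange(num_machines)  # mutate
        cost = makespan(neighbor, etc, num_machines)
        # always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature cools
        if cost <= cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
            current, cur_cost = neighbor, cost
        if cur_cost < best_cost:
            best, best_cost = current[:], cur_cost
        temp *= cooling                    # reduce temperature to 90% each step
    return best, best_cost

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19],
       [13, 8, 17], [12, 13, 10], [13, 16, 9]]   # hypothetical values
print(simulated_annealing(etc, 3))
```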

Static Heuristics details: Genetic Simulated Annealing (GSA)
A combination of SA and GA: it follows procedures similar to the GA, but uses a cooling process similar to SA for accepting or rejecting new chromosomes. Each iteration the system temperature is reduced to 90% of its current value, which makes it progressively harder for poorer solutions to be accepted.
Example: trying to find the minimum on the graph. Starting in Area 1, the nearest minimum is in Area 2, but that is not the overall lowest; by accepting a temporarily poorer solution the search could find the actual overall minimum in Area 5. This allows possibilities in other solution spaces to be found, but the cooling process means that after a certain time no more poorer possibilities will be accepted.
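A sketch under a loudly flagged assumption: the acceptance rule below (a mutated child replaces its parent only if its makespan is less than the parent's plus the current temperature) is one plausible reading of the slide's SA-style cooling, not a confirmed detail of the original. All names and values are illustrative.

```python
import random

def makespan(chrom, etc, num_machines):
    ready = [0.0] * num_machines
    for t, m in enumerate(chrom):
        ready[m] += etc[t][m]
    return max(ready)

def gsa(etc, num_machines, pop_size=40, iters=200, cooling=0.9):
    n = len(etc)
    pop = [[random.randrange(num_machines) for _ in range(n)]
           for _ in range(pop_size)]
    # ASSUMPTION: start the temperature at the population's average makespan
    temp = sum(makespan(c, etc, num_machines) for c in pop) / pop_size
    for _ in range(iters):
        for i, parent in enumerate(pop):
            child = parent[:]
            child[random.randrange(n)] = random.randrange(num_machines)  # mutate
            # SA-style acceptance: a child may be somewhat worse than its
            # parent, within the current temperature; cooling tightens this
            if (makespan(child, etc, num_machines)
                    < makespan(parent, etc, num_machines) + temp):
                pop[i] = child
        temp *= cooling                    # reduce temperature each iteration
    best = min(pop, key=lambda c: makespan(c, etc, num_machines))
    return best, makespan(best, etc, num_machines)

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19],
       [13, 8, 17], [12, 13, 10], [13, 16, 9]]   # hypothetical values
print(gsa(etc, 3))
```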

Static Heuristics details: Tabu
This search keeps track of the regions of the solution space that have already been searched, so as not to repeat a search near those areas. It uses the same chromosome representation as the GA approach. Beginning with a random mapping with uniform distribution, it performs short hops to find a local minimum, then performs long hops to see if there is a better minimum somewhere else in the solution space.
Example: trying to find the minimum on the graph. Starting in Area 1, make short hops (one area at a time) to find the local minimum in Area 2. Make a long hop to Area 6 and find the local minimum in Area 5, which is lower than Area 2's. Make a long hop to Area 10, whose local minimum is bigger than Area 5's. Conclude that Area 5 holds the minimum for the solution space.
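A rough sketch of the short-hop/long-hop idea. The "differ in at least half the assignments" region test in far_from_tabu is an invented stand-in for the slide's unspecified distance measure, and all names, limits, and etc values are illustrative.

```python
import random

def makespan(chrom, etc, num_machines):
    ready = [0.0] * num_machines
    for t, m in enumerate(chrom):
        ready[m] += etc[t][m]
    return max(ready)

def tabu_search(etc, num_machines, long_hops=10, short_hops=200):
    n = len(etc)
    tabu = []                              # regions (mappings) already searched
    def far_from_tabu(chrom):
        # crude region test: differ from every recorded mapping in at least
        # half of the task assignments
        return all(sum(a != b for a, b in zip(chrom, seen)) >= n // 2
                   for seen in tabu)
    best, best_cost = None, float("inf")
    for _ in range(long_hops):
        # long hop: restart in a region not already recorded as searched
        chrom = [random.randrange(num_machines) for _ in range(n)]
        for _ in range(50):
            if far_from_tabu(chrom):
                break
            chrom = [random.randrange(num_machines) for _ in range(n)]
        # short hops: single-task reassignments toward a local minimum
        cost = makespan(chrom, etc, num_machines)
        for _ in range(short_hops):
            neighbor = chrom[:]
            neighbor[random.randrange(n)] = random.randrange(num_machines)
            c = makespan(neighbor, etc, num_machines)
            if c < cost:
                chrom, cost = neighbor, c
        tabu.append(chrom)                 # record the searched region
        if cost < best_cost:
            best, best_cost = chrom[:], cost
    return best, best_cost

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19],
       [13, 8, 17], [12, 13, 10], [13, 16, 9]]   # hypothetical values
print(tabu_search(etc, 3))
```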

Static Heuristics details: A*
This search technique is based on a µ-nary tree, beginning at a root node representing the null solution. As the tree grows, nodes representing partial mappings (subsets of the tasks mapped to machines) are generated, each child having one more task mapped than its parent. After generating its µ children, the parent becomes inactive. A limit is kept on the number of active nodes to bound the maximum execution time. The children are evaluated to find the best partial mapping; the best children in turn become parents, and the process continues until a complete best mapping is found or the active-node limit forces the search to settle.
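A compact sketch of this tree search, assuming the evaluation function f is the partial makespan (a valid lower bound, since adding tasks never reduces the makespan) and a hypothetical cap of 64 active nodes; a_star and the pruning policy are illustrative.

```python
import heapq

def a_star(etc, num_machines, max_active=64):
    """Beam-limited A*-style search over partial mappings. Each node maps a
    prefix of the task list; f = partial makespan, a lower bound on any
    completion of that node. Pruning bounds time but can lose optimality."""
    n = len(etc)
    # node: (f, machine ready times, number of tasks mapped, mapping so far)
    open_list = [(0.0, (0.0,) * num_machines, 0, ())]
    while open_list:
        f, ready, depth, mapping = heapq.heappop(open_list)
        if depth == n:
            return list(mapping), f        # best-bound complete mapping
        for m in range(num_machines):      # generate the node's children
            new_ready = list(ready)
            new_ready[m] += etc[depth][m]
            heapq.heappush(open_list, (max(new_ready), tuple(new_ready),
                                       depth + 1, mapping + (m,)))
        if len(open_list) > max_active:    # cap the number of active nodes
            open_list = heapq.nsmallest(max_active, open_list)
            heapq.heapify(open_list)

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19],
       [13, 8, 17], [12, 13, 10], [13, 16, 9]]   # hypothetical values
print(a_star(etc, 3))
```

With max_active large enough that nothing is pruned, the first complete mapping popped is optimal, because f never overestimates the true makespan of any completion.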

Types of Dynamic Mappings
Dynamic mapping is used when the mapper must account for a changing situation while mapping: tasks arrive over time rather than being fully known in advance.
Immediate Mode Mappings:
- Minimum Completion Time
- Minimum Execution Time
- Switching Algorithm
- K-Percent Best
- Opportunistic Load Balancing
Batch Mode Mappings:
- Min-Min
- Max-Min
- Sufferage
- Notes on Batch Mode Mapping

Dynamic Immediate Mode Mapping: Minimum Completion Time
Assigns each task to the machine that results in that task's earliest completion time. This is a fast-greedy heuristic and is considered a benchmark for immediate mode. It is similar to the static mapping of the same name, but differs in that it must react to a changing situation instead of a fixed task list. Takes only O(m) time to map a given task.

Dynamic Immediate Mode Mapping: Minimum Execution Time
Assigns each task to the machine that performs that task's computation in the least amount of time; also known as Limited Best Assignment. It does not consider machine ready times, which can cause a severe imbalance in loads across the machines. It is a very simple heuristic, needing only O(m) time to find the machine that has the minimum execution time.

Dynamic Immediate Mode Mapping: Switching Algorithm
Uses the MCT and MET heuristics in a cyclic fashion depending on the load distribution across the machines; takes O(m) time. The purpose is to use MET to push tasks out quickly, then use MCT to smooth things out and balance load across the machines. A load balance index (or lower and upper limits on it) determines when to switch from one heuristic to the other.

Dynamic Immediate Mode Mapping: K-Percent Best
Considers only a subset of machines while mapping a task: the k percent of machines with the best execution times for it. The task is assigned to the machine in that subset that provides the earliest completion time. The purpose is to avoid putting the current task onto a machine that might be more suitable for a yet-to-come task ("foresight"). For each task, O(m log m) time is spent ranking the machines to form the subset and O(m) assigning the task, so KPB takes O(m log m) overall. (A sketch of KPB follows below.)
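A minimal KPB sketch under the usual assumptions (hypothetical etc matrix; the kpb name and k_percent parameter are mine). It shows how KPB interpolates between the two simpler heuristics.

```python
import math

def kpb(etc, num_machines, k_percent=50):
    """K-Percent Best: rank machines by execution time for each arriving
    task, keep the best k percent, then assign by earliest completion time
    within that subset."""
    k = max(1, math.ceil(num_machines * k_percent / 100))
    ready = [0.0] * num_machines
    mapping = []
    for times in etc:
        subset = sorted(range(num_machines), key=lambda m: times[m])[:k]  # O(m log m)
        m = min(subset, key=lambda j: ready[j] + times[j])    # MCT within subset
        mapping.append(m)
        ready[m] += times[m]
    return mapping, max(ready)

# k_percent = 100 reduces KPB to MCT; very small k_percent approaches MET.
etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19],
       [13, 8, 17], [12, 13, 10], [13, 16, 9]]   # hypothetical values
print(kpb(etc, 3, k_percent=50))
```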

Dynamic Immediate Mode Mapping: Opportunistic Load Balancing
Assigns a task to the machine that becomes available next, without considering the execution time of the task on that machine. If multiple machines become available at the same time, one is chosen arbitrarily. Depending on the implementation, the mapper may need to examine all m machines to find the one that will be available next, so it takes O(m) time to find the assignment.

Dynamic Batch Mode Mapping: Min-Min
Begins by scheduling the tasks that change the expected machine ready-time status by the least amount, assigning each such task so it finishes at its earliest completion time. The percentage of tasks assigned to their best machine is higher in Min-Min than in the other batch mode heuristics. It takes O(S^2 m) time, where S is the average meta-task size; because it is iterative, it checks all tasks against all machines on each pass.

Dynamic Batch Mode Mapping: Max-Min
Once the machine that provides the earliest completion time has been found for every task, the task with the maximum earliest completion time is selected and assigned to the corresponding machine. This has the same complexity as Min-Min, O(S^2 m). It is likely to do better than Min-Min in cases where there are many more short tasks than long tasks, because it can fill the shorter tasks in around the longer tasks to balance the system.

Dynamic Batch Mode Mapping (cont.): Sufferage
Based on the idea of giving a machine to the task that would "suffer" most in terms of expected completion time if it did not receive that machine. For example, with two machines and two tasks: M1 runs t1 in 20 and t2 in 50, while M2 runs t1 in 25 and t2 in 90. Pairing M1-t1 with M2-t2 gives an overall time of 110, while pairing M1-t2 with M2-t1 gives 75. By not simply giving each task its individual minimum, you get a better overall time; t2 does not "suffer" as much. The complexity of this heuristic makes its total time to completion O(wSm), where 1 ≤ w ≤ S. (A sketch follows this slide.)

Other notes on Batch Mode
Two mapping strategies are used:
- Regular time interval: map meta-tasks every ten seconds, avoiding redundant mapping.
- Fixed count: map meta-task M when one of the following conditions is met: an arriving task makes M larger than or equal to a predetermined number K; or all K tasks have arrived, a task finishes, and the number of tasks yet to begin is larger than or equal to K.
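A simplified sketch of the sufferage idea using the slide's two-task numbers. Real batch-mode Sufferage also arbitrates conflicts when several tasks in a batch want the same machine within one pass, which this sketch skips; names are illustrative.

```python
def sufferage(etc, num_machines):
    """Sufferage: among unmapped tasks, first serve the task that would
    suffer most (largest gap between its best and second-best completion
    times) if denied its best machine."""
    ready = [0.0] * num_machines
    unmapped = set(range(len(etc)))
    mapping = {}
    while unmapped:
        best_t, best_m, best_suff = None, None, -1.0
        for t in unmapped:
            # completion times for task t on every machine, best first
            cts = sorted((ready[m] + etc[t][m], m) for m in range(num_machines))
            suff = cts[1][0] - cts[0][0] if num_machines > 1 else cts[0][0]
            if suff > best_suff:
                best_t, best_m, best_suff = t, cts[0][1], suff
        mapping[best_t] = best_m
        ready[best_m] += etc[best_t][best_m]
        unmapped.remove(best_t)
    return mapping, max(ready)

# The slide's example: M1 runs t1=20, t2=50; M2 runs t1=25, t2=90.
etc = [[20, 25], [50, 90]]   # etc[task][machine]
print(sufferage(etc, 2))     # t2's sufferage (40) beats t1's (5), so t2 gets
                             # M1 first and t1 then goes to M2
```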

Terminology
Makespan - the total completion time of an assignment of jobs; the quantity the mapper tries to minimize.
Metatask - a set of independent, non-communicating tasks.
Fast-greedy - makes the locally optimal choice at each stage, without looking back at previous results.
O(m) time - see Big-O notation, which describes how the size of the input affects an algorithm's running time.
Mapping heuristics are like Tetris: fit the best task (piece) into the best machine (slot) for the optimal result (score). Any black spaces are wasted computational resources.

References
M. Maheswaran, S. Ali, H. J. Siegel, D. A. Hensgen, and R. F. Freund, "Dynamic Mapping of a Class of Independent Tasks onto Heterogeneous Computing Systems," Journal of Parallel and Distributed Computing, vol. 59, 1999.
T. D. Braun, H. J. Siegel, N. Beck, L. L. Boloni, M. Maheswaran, A. I. Reuther, J. P. Robertson, M. D. Theys, and B. Yao, "A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Computing Systems," Journal of Parallel and Distributed Computing, vol. 61, 2001.

Howard Jay Siegel
He is a professor in the School of Electrical and Computer Engineering at Colorado State University. He is a fellow of the IEEE and a fellow of the ACM. He received two BS degrees from MIT and an MA, MSE, and PhD from Princeton University. Professor Siegel has coauthored over 280 technical papers, has co-edited seven volumes, and wrote the book Interconnection Networks for Large-Scale Parallel Processing. He was a Coeditor-in-Chief of the Journal of Parallel and Distributed Computing and served on the editorial boards of the IEEE Transactions on Parallel and Distributed Systems and the IEEE Transactions on Computers. He was Program Chair/Co-Chair of three conferences, General Chair/Co-Chair of four conferences, and Chair/Co-Chair of four workshops. He is an international keynote speaker and tutorial lecturer and a consultant for government and industry.
End of Presentation