FreeCell Solitaire Optimization. Team Solitaire: Chapp Brown, Kylie Beasley.


The Game
- 8 columns: initially, columns 1-4 contain 7 cards each and columns 5-8 contain 6 cards each.
- All cards are dealt face up.
- Ace has a value of 1; Jack, Queen, and King have values of 11, 12, and 13, respectively.
- 4 foundations. Cards cannot be removed from a foundation, and cards must be placed into foundations in ascending order from Ace to King.

The Game, cont.
- 4 FreeCells, each of which can hold one card of any value.
- Cards can be stacked by placing them onto a card that is one value higher and of the opposite suit color.
- To move a stack of n cards, the number of open free cells plus the number of empty columns must add up to at least n-1.
- The objective of the game is to move all 52 cards into the foundations.

To Move a Card or a Stack of n Cards
- N_c ≤ C_f + 1, where N_c = number of cards being moved and C_f = number of open free cells.
- V_bc = V_dc - 1, where V = numerical value of a card, bc = base card of the stack, and dc = destination card.
- SC_bc ≠ SC_dc, where SC = suit color.
- Within the stack: V_s0 = V_s1 - 1, V_s1 = V_s2 - 1, ..., V_s(n-2) = V_s(n-1) - 1.
- And SC_s0 ≠ SC_s1, SC_s1 ≠ SC_s2, ..., SC_s(n-2) ≠ SC_s(n-1).
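
To make these rules concrete, here is a minimal Python sketch of the legality checks; the Card class, the RED_SUITS set, and the function names are illustrative assumptions, not part of the original presentation.

from dataclasses import dataclass

RED_SUITS = {"H", "D"}  # hearts and diamonds are red; clubs and spades are black

@dataclass(frozen=True)
class Card:
    value: int   # 1 (Ace) .. 13 (King)
    suit: str    # "H", "D", "C", or "S"

    @property
    def color(self) -> str:
        return "red" if self.suit in RED_SUITS else "black"

def can_stack(card: Card, dest: Card) -> bool:
    # A card may be placed on a destination card that is one value higher
    # and of the opposite suit color (V_card = V_dest - 1, SC_card != SC_dest).
    return card.value == dest.value - 1 and card.color != dest.color

def can_move_stack(stack: list[Card], dest: Card, open_free_cells: int) -> bool:
    # stack[0] is the base card (the card that lands on dest); every later card
    # sits on the previous one, so it must be one value lower and the opposite color.
    if len(stack) > open_free_cells + 1:      # N_c <= C_f + 1
        return False
    ordered = all(can_stack(stack[i + 1], stack[i]) for i in range(len(stack) - 1))
    return ordered and can_stack(stack[0], dest)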

Goal State
Foundations: [KH, KC, KD, KS]
FreeCells: []
Columns: []
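
For illustration, a hypothetical board-state representation and goal test might look like the following sketch; the field names and the encoding of foundations as top-card values are assumptions, not the presenters' code.

from dataclasses import dataclass, field

@dataclass
class Board:
    # Foundations record the value of the top card per suit; 13 (King) means complete.
    foundations: dict = field(default_factory=lambda: {"H": 0, "C": 0, "D": 0, "S": 0})
    free_cells: list = field(default_factory=list)                    # at most 4 cards
    columns: list = field(default_factory=lambda: [[] for _ in range(8)])

def is_goal(board: Board) -> bool:
    # Goal state: every foundation topped by a King, no cards left anywhere else.
    return (all(top == 13 for top in board.foundations.values())
            and not board.free_cells
            and all(not column for column in board.columns))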

Problem Statement
Find an algorithm F which maximizes P, where:
- P = F(M) / size(M)
- M is the set of the original 32,000 Microsoft deals.
- F(M) is the number of deals in M that the algorithm successfully solves, where a solution is a path of nodes in the search tree that reaches the goal state.
- P is therefore the fraction of deals the algorithm solves.

Breadth-First Search
A breadth-first search (BFS) approach to FreeCell would solve the game in the fewest possible moves. The root node of the tree represents the initial board state. The search generates all board states that are one move away from the initial state; these become new leaf nodes, and the algorithm repeats the same process for each of them, level by level. Because so many board states must be stored, memory runs out before a solution is found for any deal of the game: even looking only a few moves ahead, the search tree is already enormous.
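
A minimal BFS sketch over board states might look like this; the successors and is_goal functions are assumed to exist (for example as sketched above), and states are assumed to be hashable.

from collections import deque

def bfs_solve(start, successors, is_goal):
    # Explore board states level by level; parent links double as the visited set.
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            path = []                      # walk the parent links back to the root
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))    # shortest move sequence, root first
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None                            # in practice memory runs out long before this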

Table (values garbled in transcription): search tree size for board #14 as a function of the number of moves looked ahead, up to 16 moves; by 16 moves the tree has grown to on the order of 10^13 states.

Depth-First Search
Like breadth-first search, depth-first search (DFS) uses a search tree and recursion. The search starts at the root node and generates a single possible first move. From that state it generates one possible next move, and so on. If it runs out of moves to make, it backtracks up the tree one move, looks for a different move from that state, and continues. Using a depth-bound of six lets the search look 6 moves ahead without running out of memory.
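
A depth-limited DFS of this kind can be sketched as follows, again assuming hypothetical successors and is_goal functions.

def depth_limited_dfs(state, successors, is_goal, depth_bound=6, path=None):
    # Explore one move at a time, backtracking when the bound is reached
    # or no further moves exist; returns a path to a goal state or None.
    if path is None:
        path = [state]
    if is_goal(state):
        return path
    if depth_bound == 0:
        return None
    for nxt in successors(state):
        result = depth_limited_dfs(nxt, successors, is_goal, depth_bound - 1, path + [nxt])
        if result is not None:
            return result
    return None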

Depth-First Search, cont.
None of the original 32,000 deals can be solved in six moves, and on the majority of deals a depth-bound of 7 exhausts physical memory.

Heineman’s Staged Deepening (HSD)
- The second-best algorithm for solving FreeCell deals to date.
- A modified version of DFS.
- Takes advantage of several unique properties of FreeCell that significantly cut down storage requirements.

Useful Unique Properties of FreeCell
- Many moves in FreeCell cannot be undone: you cannot move a card off a foundation after placing it there, and you cannot move a card off an unsorted column and then move it back.
- In most cases there are multiple ways to arrive at the solution of a given deal.
These two properties allow the HSD algorithm to periodically throw out old board states once the array of board states reaches a length of 200,000.

Heineman’s Staged Deepening Heuristic (HSDH)
Heineman’s staged deepening algorithm uses a single heuristic to evaluate board states: for each foundation pile, locate within the columns the next card that should be placed there, and count the cards found on top of it. Return the sum of these 4 counts, multiplied by two if there are no available FreeCells or empty columns. The lower the value returned by the heuristic, the “better” the board state is judged to be. Heineman also stored each board state as the integer returned by the heuristic, further reducing the storage requirements.
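
Under the hypothetical Board representation sketched earlier, the heuristic described above could be computed roughly as follows; this is a sketch of our reading of HSDH, not Heineman's code, and columns are assumed to be ordered bottom to top.

def hsdh(board) -> int:
    # For each foundation, find the next needed card in the columns and count
    # the cards sitting on top of it; lower totals indicate better board states.
    total = 0
    for suit, top_value in board.foundations.items():
        needed = top_value + 1                        # next card this foundation needs
        if needed > 13:
            continue                                  # this foundation is already complete
        for column in board.columns:
            for depth, card in enumerate(column):
                if card.suit == suit and card.value == needed:
                    total += len(column) - depth - 1  # cards lying on top of it
    open_free_cells = 4 - len(board.free_cells)
    empty_columns = sum(1 for column in board.columns if not column)
    if open_free_cells == 0 or empty_columns == 0:    # penalty when mobility is gone
        total *= 2
    return total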

So how does it work?
1. Run a DFS with depth-bound 6; evaluate all board states exactly 6 moves away from the original board state and store them in an array kept sorted by heuristic value.
2. Take the best board state from the array and repeat.
3. If the array reaches a size of 200,000, clear it, retaining only enough information to backtrack and report the path from the original board state to the current board state.
4. When a path to the solution is found, return the path from the original board state to the goal state.

Pseudo Code
T ← initial state
while T not empty do
    s ← remove best state in T according to heuristic value
    U ← all states exactly k moves away from s, discovered by DFS
    T ← merge(T, U)
    // merge maintains T sorted by heuristic value
    // merge overwrites nodes in T with newer nodes from U of equal heuristic value
    if size of transposition table ≥ N then
        clear transposition table
    end if
    if goal ∈ T then
        return path to goal
    end if
end while
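
A runnable rendering of this loop, under the same assumptions as the earlier sketches (and simplifying how the cleared table is handled), might look like the following.

import heapq
from itertools import count

def hsd_solve(start, states_k_moves_away, hsdh, is_goal, max_table=200_000):
    # states_k_moves_away(s) is assumed to return the states exactly k moves
    # from s, found by a depth-bounded DFS; hsdh is the heuristic (lower = better).
    tie = count()                                   # tie-breaker so the heap never compares states
    frontier = [(hsdh(start), next(tie), start)]    # T, ordered by heuristic value
    parent = {start: None}                          # transposition table + backtracking info
    while frontier:
        if len(parent) >= max_table:
            # The real algorithm retains just enough information to report the
            # path from the original deal; this sketch simply restarts the table
            # from the current best state.
            best = heapq.heappop(frontier)
            frontier = [best]
            parent = {best[2]: None}
        _, _, state = heapq.heappop(frontier)
        for nxt in states_k_moves_away(state):
            if nxt in parent:
                continue
            parent[nxt] = state
            if is_goal(nxt):                        # reconstruct the path back to the root
                path = []
                while nxt is not None:
                    path.append(nxt)
                    nxt = parent[nxt]
                return list(reversed(path))
            heapq.heappush(frontier, (hsdh(nxt), next(tie), nxt))
    return None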

Disadvantages of HSD
- Because the algorithm throws out old board states, it may discard an important board state that is needed to reach the goal state.
- It is also possible to get stuck in an infinite loop, since the algorithm cannot check whether it has already discovered a specific board state.

Genetic Algorithm
A genetic algorithm mimics natural selection. A candidate solution to a problem is represented as a genome. Genomes are subjected to processes such as crossover and mutation, producing new “generations” of solutions, and those generations are subjected to selection pressure that keeps the good solutions alive and weeds out the bad ones.
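
As a generic illustration of these ideas (not the authors' implementation), a minimal GA over real-valued weight vectors could look like this; the population size, mutation rate, and genome length are placeholder assumptions, and lower fitness is treated as better to match the node-reduction measure described later.

import random

def evolve(fitness, genome_len=9, pop_size=50, generations=100, mutation_rate=0.1):
    # Rank individuals by fitness, keep the better half, and refill the
    # population with one-point crossover plus Gaussian mutation.
    population = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)                     # selection pressure (lower = better)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [w + random.gauss(0, 0.1) if random.random() < mutation_rate else w
                     for w in child]                     # mutation
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)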

Genetic Algorithm, cont.
Elyasaf, Hauptman, and Sipper wished to improve Heineman’s staged deepening algorithm. They created a new set of heuristics, but none of them was individually better than HSDH, so they combined them in the hope that the combination would outperform HSDH on its own. Each heuristic was multiplied by a weight and the results added together; to find good weight combinations, they turned to a genetic algorithm.

Genetic Algorithm Heuristics
Each heuristic's value was normalized by dividing by its maximum value, so every heuristic returns a value between 0 and 1. These normalized values were then multiplied by their weights and added together.
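
That weighted combination can be sketched as follows; the heuristic functions, their assumed maximum values, and the weight vector are all placeholders.

def combined_heuristic(board, heuristics, weights, max_values):
    # Normalize each heuristic into [0, 1] by dividing by its maximum value,
    # then weight and sum the results.
    return sum(weight * h(board) / maximum
               for h, weight, maximum in zip(heuristics, weights, max_values))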

Fitness for the Solutions
How did Elyasaf et al. determine the fitness of evolving individuals? They first ran HSD on each deal to determine how many search nodes HSD needed to solve it. If HSD could not solve the deal, the deal was assigned a node requirement of 1000 nodes (twice the longest path HSD used successfully). The fitness of an evolving individual was then the node reduction relative to HSD:
Fitness = nodes required by the individual / nodes required by HSD
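
In code form, this fitness measure amounts to something like the sketch below; the 1000-node fallback follows the description above, and lower values mean a larger node reduction, i.e. a better solver.

HSD_FAIL_COST = 1000  # node requirement assigned to deals that HSD itself cannot solve

def fitness(individual_nodes: int, hsd_nodes) -> float:
    # Node reduction relative to HSD: nodes the evolved solver needed divided
    # by the nodes plain HSD needed (or the fallback cost if HSD failed).
    baseline = hsd_nodes if hsd_nodes is not None else HSD_FAIL_COST
    return individual_nodes / baseline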

Heuristics, cont.
This is the genome for the best solver produced by their GA:
1) DifferenceToGoal
2) DifferenceToNextStepHome
3) Free-Cells
4) DifferenceFromTop
5) LowestHomeCard
6) UppestHomeCard
7) NumOfWell-Arranged
8) DifferenceHome
9) BottomCardSum

Genetic Algorithm Problems
- If the number of deals the solutions were exposed to was too large, the solutions took a very long time to evolve.
- If they were exposed to only a single deal or a few deals, the solutions quickly became good at solving those deals, but not at solving other deals.

Coevolution
Elyasaf et al. addressed this problem with coevolution: a population of problems evolves alongside the population of solutions. An individual in the problem population was a set of six deals, and the problem population got a new generation for every five generations of the solutions.
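
The scheduling of the two populations could be sketched like this; solver_step and problem_step stand in for one generation of selection, crossover, and mutation, and the problem-population size is an assumption.

import random

def coevolve(solution_pop, deal_pool, solver_step, problem_step,
             generations=100, deals_per_individual=6, problem_every=5):
    # A problem individual is a set of six deals; the problem population gets a
    # new generation once for every five generations of the solution population.
    problem_pop = [random.sample(deal_pool, deals_per_individual) for _ in range(20)]
    for gen in range(generations):
        solution_pop = solver_step(solution_pop, problem_pop)
        if (gen + 1) % problem_every == 0:
            problem_pop = problem_step(problem_pop, solution_pop)
    return solution_pop, problem_pop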

Comparing Algorithms

Name                Average Time (seconds)   Solved
BFS                 N/A                      No solutions
DFS                 N/A                      No solutions
Staged Deepening    44                       96.4%
Genetic Algorithm   3                        98.36%

Why 380? This relates to CSC 380 because it uses a couple of blind searches, breadth-first and depth-first, as well as a modified depth-first search. It also involves the analysis and comparison of these algorithms with respect to both time and memory requirements.

Future Work Specifically for solving FreeCell, one could use hand-crafted heuristics, break them down into components, and use a genetic algorithm to evolve those components.

Discussion Questions Why are blind searches, such as Breadth-First Search and Depth-First Search, not feasible algorithms for solving FreeCell? What technique did Heineman use to keep his computer from exhausting its physical memory in his Staged Deepening algorithm? What characteristics of FreeCell allowed him to do this? Why was it important for the problem set to evolve alongside the heuristics of Elyasaf, Hauptman and Sipper’s genetic algorithm?

Works Cited
[1] Heineman, G. (2009, January 17). January Column: Algorithm to Solve FreeCell Solitaire Games. Retrieved April 6, 2015, from algorithm.html
[2] Mol, M. (2015, February 13). Deal cards for FreeCell. Retrieved April 6, 2015.
[3] Rules to Freecell. (n.d.). Retrieved April 6, 2015.
[4] Elyasaf, A., Hauptman, A., & Sipper, M. (2011). GA-FreeCell: Evolving Solvers for the Game of FreeCell. Retrieved April 10, 2015, from Elyasaf-Hauptmann-Sipper/Elyasaf-Hauptmann-Sipper-Paper.pdf