
1 A Novel Approach for Library Materials Acquisition using Discrete Particle Swarm Optimization
Ana Wu (aw81212n@pace.edu), Daniel A. Sabol (ds11298w@pace.edu)

2 Contents
- Problem: Library Materials Acquisition
- Solutions: Discrete Particle Swarm Optimization (DPSO), Simulated Annealing (SA), Multi-threading
- Experiments

3 Problem
Acquire materials for multiple departments.
n materials: material 1, material 2, material 3, … , material n
- Each material has its own cost
- Each material belongs to a specific category
m departments: dept. 1, dept. 2, dept. 3, … , dept. m
- Each department owns a budget
- Each department has a preference value (ranging from 0 to 1) for each material
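
The entities above map naturally onto two small records. A minimal sketch in Python, with illustrative names (Material and Department are not from the slides):

```python
# Minimal data model for the problem: n materials (cost, category) and
# m departments (budget, preference in [0, 1] for each material).
from dataclasses import dataclass, field

@dataclass
class Material:
    name: str
    cost: float
    category: str

@dataclass
class Department:
    name: str
    budget: float
    preferences: dict = field(default_factory=dict)  # material name -> value in [0, 1]
```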

4 Problem: Simple example
Materials and costs: Book A $100, Book B $45, Book C $70, Book D $60, Book E $38
Categories: Science, Art, Social
Departments and budgets: Computer Science $550, Business $880, Art $660
Preference values (only the non-empty cells of the slide's table):
- Computer Science: 0.7 (Book A), 0.4, 0.5
- Business: 0.3 (Book A), 1
- Art: 0.6, 0.9

5 Problem
If one material is acquired by more than one department, its cost is apportioned among those departments in proportion to their preference values.
Example: Book A costs $100; the preferences are Computer Science 0.7, Business 0.3, Art none.
- Computer Science pays 100 * (0.7 / (0.7 + 0.3)) = 70
- Business pays 100 * (0.3 / (0.7 + 0.3)) = 30
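
A minimal sketch of this apportionment rule in Python; apportion_cost is an illustrative name, not code from the paper:

```python
def apportion_cost(cost, preferences):
    """Split a material's cost among the departments that acquire it,
    in proportion to their preference values.
    preferences: dict of department -> preference value (> 0)."""
    total = sum(preferences.values())
    return {dept: cost * p / total for dept, p in preferences.items()}

# Book A from the slide: $100 shared by Computer Science (0.7) and Business (0.3)
print(apportion_cost(100, {"Computer Science": 0.7, "Business": 0.3}))
# -> {'Computer Science': 70.0, 'Business': 30.0}
```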

6 Problem: Constraints, Purpose, Objectives
Constraints
- Budget constraint: each department's expenses cannot exceed its budget
- Category constraint: the number of materials acquired in each category has an upper bound and a lower bound
Purpose
- Determine which materials should be acquired under the budget and category constraints, to meet the requirements of the various departments
Objectives
- Maximize the average preference and the budget execution rate
- A higher objective value means a better solution
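
The two constraints can be checked independently of the objective. A minimal sketch, assuming Python dictionaries keyed by department and by category (not the paper's data layout):

```python
def is_feasible(expenses, budgets, category_counts, lower, upper):
    """expenses/budgets: per-department totals; category_counts/lower/upper:
    per-category counts and bounds. Returns True if a candidate solution
    satisfies both the budget and the category constraints."""
    within_budget = all(expenses[d] <= budgets[d] for d in budgets)
    within_bounds = all(lower[k] <= category_counts[k] <= upper[k] for k in lower)
    return within_budget and within_bounds
```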

7 Problem: Library Materials Acquisition
- A generalized version of the knapsack problem; NP-hard
- For large-scale instances, no algorithm can produce the optimal solution within a reasonable amount of time: when n = 100, a supercomputer would need about 10^10 years to enumerate all solutions
- Only heuristic optimization algorithms can help find a good solution
- Heuristics cannot guarantee that the solution found is optimal; examples include Simulated Annealing, Tabu Search, and DPSO

8 Objective function
Objective: maximize a combination of the average preference and the budget execution rate.
- Average preference: a weighted sum of the preference values of the acquired materials
- Budget execution rate: the actual expenses of all departments divided by the total budget of all departments
Symbols
n    Number of materials
m    Number of departments
q    Number of categories
pij  Preference value of department j for material i
ci   Cost of material i
Bj   Budget of department j
CUk  Upper bound on the number of materials in category k
CLk  Lower bound on the number of materials in category k
bik  1 if material i belongs to category k; 0 otherwise
xij  1 if material i is acquired by department j; 0 otherwise
zi   1 if material i is acquired by at least one department; 0 otherwise
aij  Actual cost paid by department j for material i
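
Using these symbols, the model can be written out as below. This is a sketch of a plausible formalization, not the paper's exact formulas: the weights w1 and w2 and the normalization of the average preference are assumptions.

```latex
\max \; f \;=\; w_1 \,
  \frac{\sum_{i=1}^{n}\sum_{j=1}^{m} p_{ij}\, x_{ij}}
       {\sum_{i=1}^{n}\sum_{j=1}^{m} x_{ij}}
 \;+\; w_2 \,
  \frac{\sum_{i=1}^{n}\sum_{j=1}^{m} a_{ij}}
       {\sum_{j=1}^{m} B_j}
\qquad \text{(average preference + budget execution rate)}

\text{subject to} \quad
\sum_{i=1}^{n} a_{ij} \le B_j \;\; (j = 1,\dots,m), \qquad
CL_k \le \sum_{i=1}^{n} b_{ik}\, z_i \le CU_k \;\; (k = 1,\dots,q),

\text{where} \quad
a_{ij} \;=\; c_i \, \frac{p_{ij}\, x_{ij}}{\sum_{j'=1}^{m} p_{ij'}\, x_{ij'}}, \qquad
z_i \;=\; \max_j x_{ij}.
```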

9 Solution-DPSO: Discrete Particle Swarm Optimization
- Proposed by Kennedy (a social psychologist) and Eberhart (an electrical engineer) in 1995
- Inspired by observations of the behavior of flocking birds and schooling fish

10 Solution-DPSO: Flocking birds
- While foraging, birds flock together and arrange themselves in specific shapes by sharing information about food sources
- Each bird is associated with a position, a velocity, and a fitness value (how far it is from the destination)
- The movement of each bird is influenced by its own experience and that of its peers:
  - itself: it remembers the best position it has found so far (pbest)
  - peers: it knows the best position any member of the flock has found so far (gbest)

11 Solution-DPSO: Discrete Particle Swarm Optimization (DPSO)
The idea mirrors a bird flock:
- Particle: a bird
- Solution: the position of a bird
- pbest: the best (nearest) position a bird has reached so far
- gbest: the global best position found by any bird in the swarm
- Every particle moves toward its pbest and the gbest

12 Solution-DPSO: Process
1. Randomly initialize the position and velocity of each particle.
2. Loop:
   - For each particle, evaluate the fitness value; if it finds a better solution, update pbest (and gbest if needed).
   - For each particle, update its velocity using pbest and gbest: newV = w*V + c1*r1*(pbest - P) + c2*r2*(gbest - P)
   - For each particle, update the position: newP = P + newV
3. End the loop when a stopping criterion is met, such as the maximum number of iterations.
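
A minimal Python sketch of one iteration of this update rule, in the standard continuous PSO form; the paper's discrete encoding and its fitness function are not reproduced here, so fitness() is a placeholder:

```python
import random

W, C1, C2, VMAX = 0.7, 2.0, 2.0, 6.0  # inertia weight, learning rates, velocity cap

def pso_step(positions, velocities, pbest, gbest, fitness):
    """One iteration of the update rule on the slide, for all particles.
    positions/velocities/pbest: lists of per-particle vectors; gbest: one vector."""
    for i in range(len(positions)):
        p, v = positions[i], velocities[i]
        r1, r2 = random.random(), random.random()
        for d in range(len(p)):
            # newV = w*V + c1*r1*(pbest - P) + c2*r2*(gbest - P), clamped to +/- VMAX
            v[d] = W * v[d] + C1 * r1 * (pbest[i][d] - p[d]) + C2 * r2 * (gbest[d] - p[d])
            v[d] = max(-VMAX, min(VMAX, v[d]))
            # newP = P + newV
            p[d] += v[d]
        # keep the best position this particle has seen (maximization)
        if fitness(p) > fitness(pbest[i]):
            pbest[i] = list(p)
    return max(pbest, key=fitness)  # candidate for the new gbest
```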

13 Solution-DPSO: Drawback and our improvement
Drawback of DPSO: premature convergence
- All particles converge to the current best solution
- Experiment: in the case with 50 materials, all particles converge at around the 200th iteration
Our improvement: Simulated Annealing (neighbor solutions)
- Every time the swarm converges, we run SA to enlarge the exploration range

14 Solution-DPSO+SA: Detailed steps
1. When convergence occurs, use the current best solution as the initial solution for SA.
2. Run the SA algorithm.
3. After SA ends, assign its result to the position of the first particle.
4. Reset the other particles by randomly regenerating their positions and velocities.
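
A minimal Python sketch of these four steps; random_neighbor(), random_particle(), the cooling schedule, and the step count are illustrative assumptions, not the paper's settings:

```python
import math
import random

def sa_refine(solution, fitness, random_neighbor, t0=1.0, cooling=0.95, steps=200):
    """Simulated annealing starting from `solution` (maximization).
    random_neighbor(s) must return a nearby candidate solution."""
    current = best = solution
    t = t0
    for _ in range(steps):
        candidate = random_neighbor(current)
        delta = fitness(candidate) - fitness(current)
        # always accept improvements; accept worse moves with probability exp(delta / t)
        if delta >= 0 or random.random() < math.exp(delta / t):
            current = candidate
            if fitness(current) > fitness(best):
                best = current
        t *= cooling  # cool down
    return best

def restart_swarm(positions, velocities, gbest, fitness, random_neighbor, random_particle):
    """Steps 1-4 above: refine gbest with SA, give the result to the first
    particle, and re-randomize the rest of the swarm."""
    refined = sa_refine(gbest, fitness, random_neighbor)
    positions[0] = refined
    for i in range(1, len(positions)):
        positions[i], velocities[i] = random_particle()
    return refined
```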

15

16 Solution-DPSO+SA+Multithreading
To improve performance, each particle runs on its own thread.
Each thread:
- Generates and updates its particle's velocity and position
- Evaluates the fitness value
- Calculates the probability
- Sends its solution to the main process

17 Solution-DPSO+SA+Multithreading
The main process:
- Checks whether the whole algorithm has converged
- Runs SA and re-dispatches the particles when convergence occurs
- Broadcasts the current best solution
- Controls the iterations
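
A minimal Python sketch of the threading layout on slides 16-17 (not the authors' implementation): one worker thread per particle reports its solutions over a queue, while the main thread tracks the global best, detects convergence, and triggers the SA restart. pso_step_one(), has_converged(), and restart_swarm() are placeholder callables.

```python
import queue
import threading

results = queue.Queue()

def particle_worker(particle_id, iterations, pso_step_one):
    """One thread per particle: update velocity/position, evaluate fitness,
    and send (id, solution, fitness) to the main process."""
    for _ in range(iterations):
        solution, fit = pso_step_one(particle_id)
        results.put((particle_id, solution, fit))

def main_process(n_particles, iterations, pso_step_one, has_converged, restart_swarm):
    """Main thread: track the global best, detect convergence, trigger the SA
    restart, and bound the number of iterations."""
    threads = [threading.Thread(target=particle_worker,
                                args=(i, iterations, pso_step_one))
               for i in range(n_particles)]
    for t in threads:
        t.start()
    best_solution, best_fit = None, float("-inf")
    for _ in range(n_particles * iterations):
        _, solution, fit = results.get()
        if fit > best_fit:
            best_solution, best_fit = solution, fit  # new gbest to broadcast
        if has_converged():
            restart_swarm(best_solution)             # SA + re-dispatch the particles
    for t in threads:
        t.join()
    return best_solution, best_fit
```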

18 Experiments: Setup and cases
Setup
- Intel Core i7-4810MQ 2.80 GHz CPU, 16 GB RAM
- 50 particles, cognitive learning rate 2.0, social learning rate 2.0, maximum velocity 6.0
Cases
- Case I: 20 materials, 3 departments, 3 categories
- Case II: 50 materials, 3 departments, 3 categories
- Case III: 100 materials, 10 departments, 10 categories

19 Experiments: The comparison of DPSO and DPSO+SA

20 Experiments: The performance of DPSO+SA in single-threaded and multi-threaded execution

21 Experiments: The execution time of DPSO+SA in single-threaded and multi-threaded execution

22 Thank you!

