
1 PARALLEL IMPLEMENTATIONS OF OPTIMIZING NEURAL NETWORKS
Advisor: Prof. 梁廷宇 | Presenter: 紀欣呈 | Student ID: 1095319128 | Program: Electro-Optical Engineering, first-year master's class
ANDREA DI BLAS, ARUN JAGOTA, RICHARD HUGHEY
Baskin School of Engineering, University of California at Santa Cruz, Santa Cruz, California

2 Outline
- Introduction
- The Optimizing Neural Network
- Two Parallel Implementations
- Flowchart
- Fine-grain Kestrel Implementation
- “SIMD Phase Programming Model” MasPar Implementation
- Results
- Conclusions

3 Introduction
- A Hopfield neural network approach to the maximum clique problem, an NP-hard problem on graphs.
- One can easily trade execution time for solution quality.
- The neural approach does not require backtracking.

4 THE OPTIMIZING NEURAL NETWORK

5 One binary node s_i ∈ {0, 1} per vertex of the graph G = (V, E).
Weights: w_ij = 0 if vertices i and j are connected by an edge, w_ij = −2 otherwise (w_ii = 0).
The nodes are initialized to a starting state s.
The input to node i is u_i = Σ_j w_ij s_j + b_i.
Any serial update of the form s_i ← 1 if u_i > 0, else s_i ← 0:

6 THE OPTIMIZING NEURAL NETWORK
Minimizes the network energy E(s) = −(1/2) Σ_i Σ_j w_ij s_i s_j − Σ_i b_i s_i.
The node to update is picked by random roulette-wheel selection.
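A minimal Python sketch of the network above, assuming the HcN-style weights just shown (0 on edges, −2 on non-edges) and biases initialized to b_i = 1; the proportional-to-bias roulette wheel is an assumption, and all function names are illustrative rather than taken from the paper.

    import random

    def build_weights(n, edges):
        # w[i][j] = 0 if (i, j) is an edge (or i == j), -2 otherwise.
        w = [[0 if i == j else -2 for j in range(n)] for i in range(n)]
        for i, j in edges:
            w[i][j] = w[j][i] = 0
        return w

    def node_input(w, s, b, i):
        # u_i = sum_j w_ij * s_j + b_i
        return sum(w[i][j] * s[j] for j in range(len(s))) + b[i]

    def energy(w, s, b):
        # E(s) = -1/2 * sum_ij w_ij s_i s_j - sum_i b_i s_i
        n = len(s)
        quad = sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
        return -0.5 * quad - sum(b[i] * s[i] for i in range(n))

    def roulette_pick(b):
        # Assumed criterion: pick node i with probability proportional to b_i.
        r = random.uniform(0.0, sum(b))
        acc = 0.0
        for i, bi in enumerate(b):
            acc += bi
            if r <= acc:
                return i
        return len(b) - 1

    def serial_update(w, s, b, i):
        # s_i <- 1 if u_i > 0, else 0; such a serial update never increases E.
        s[i] = 1 if node_input(w, s, b, i) > 0 else 0

With these weights and b_i = 1, a node can turn on only when no non-neighbor is active, so the stable states are exactly the maximal cliques of G.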

7 TWO PARALLEL IMPLEMENTATIONS

8 [figure]

9 Flowchart of the main loop: for r = 1 to Restart, call find_a_clique(); if r < Restart, call normalize_bias() and continue with the next restart; when r = Restart, stop.
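The same control flow in Python, under the assumptions of the earlier sketch; find_a_clique and normalize_bias are passed in as parameters because the flowchart gives only their names, not their bodies.

    def optimize(w, b, restarts, find_a_clique, normalize_bias):
        # For r = 1 .. Restart: search, then adapt the biases between restarts.
        best = []
        for r in range(1, restarts + 1):
            clique = find_a_clique(w, b)
            if len(clique) > len(best):
                best = clique
            if r < restarts:               # "r < Restart" branch of the flowchart
                normalize_bias(b, clique)  # adapt biases, then next restart
        return best                        # "r = Restart": stop

Raising the Restart parameter is exactly the time-for-quality trade-off mentioned in the introduction.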

10 Flowchart of find_a_clique(): iterate until the test K = 0 succeeds, then return.

11 Fine-grain Kestrel Implementation
- Kestrel is a 512-PE linear SIMD array on a single PCI board for NT/Linux/OSF platforms.
- A "classic" fine-grain implementation, with one network node per PE.
- The largest graph that can be solved with the current Kestrel system has 512 vertices.
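A rough NumPy sketch of why the one-node-per-PE mapping pays off (array lanes stand in for Kestrel PEs; the real machine is programmed quite differently): each PE keeps its node's running input, so a state change at one node refreshes every input in a single lockstep step instead of an O(n) serial loop.

    import numpy as np

    def init_inputs(W, s, b):
        # Each PE j computes its starting input u_j = sum_i w_ji * s_i + b_j.
        return W @ s + b

    def broadcast_update(W, u, s, i, new_si):
        # Node i's new state is broadcast; all PEs apply, in lockstep,
        # u_j += w_ji * (s_i' - s_i), then node i's PE stores the new state.
        u += W[:, i] * (new_si - s[i])
        s[i] = new_si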

12 “SIMD Phase Programming Model” MasPar Implementation
- The MasPar is a SIMD two-dimensional PE array with toroidal wraparound, composed of 1K, 2K, 4K, 8K, or 16K PEs.
- Each PE has 64 KB of local memory.
- Adaptation can also be performed locally within each PE, but global adaptation has proved better for a small number of restarts.
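Under SPPM each PE can, in effect, run a complete copy of the search on its own restart (the 64 KB of local memory is what lets a PE hold the whole problem). A coarse Python stand-in using a process pool, purely illustrative rather than MasPar code:

    from multiprocessing import Pool

    def sppm_restarts(seeds, run_one_restart, workers=8):
        # Each worker plays the role of one PE: an independent run of the
        # search with its own random seed. The best clique across workers
        # wins; global adaptation would then update one shared bias vector.
        with Pool(workers) as pool:
            cliques = pool.map(run_one_restart, seeds)
        return max(cliques, key=len)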

13 Results

           Kestrel     MasPar         Serial
  Clock    20 MHz      12.5 MHz       143 MHz
  PEs      512         4,096          1
  RAM      —           64 KB per PE   256 MB

14 Results

15 [figure]

16 Conclusions
- The MasPar SPPM implementation offers a more flexible approach, because all available PEs can be active and contribute to the solution at the same time.
- The maximum clique problem has applications in many fields, and its parallel neural implementation can extend the scale of problems that can be solved efficiently.

17 Thank you for your attention. Q & A

