1
Parallel Inversion of Polynomial Matrices
Alina Solovyova-Vincent, Frederick C. Harris, Jr., M. Sami Fadali
2
Overview
Introduction
Existing algorithms
Busłowicz’s algorithm
Parallel algorithm
Results
Conclusions and future work
3
Definitions
A polynomial matrix is a matrix which has polynomials in all of its entries:
H(s) = H_n s^n + H_{n-1} s^{n-1} + H_{n-2} s^{n-2} + … + H_0,
where the H_i are constant r × r matrices, i = 0, …, n.
4
Definitions
Example:
H(s) = [ s+2    s^3 + 3s^2 + s ]
       [ s^3    s^2 + 1        ]
n = 3 – degree of the polynomial matrix
r = 2 – size of the matrix H
H_0 = [ 2  0 ]    H_1 = [ 1  1 ]    …
      [ 0  1 ]          [ 0  0 ]
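To make the definition concrete, here is a minimal C sketch (added for illustration, not the authors' code) that stores the example above as an array of constant coefficient matrices and evaluates H(s) at a point by Horner's rule; the array layout and function names are assumptions of this sketch.

    #include <stdio.h>

    #define N 3   /* n: degree of the polynomial matrix */
    #define R 2   /* r: size of each coefficient matrix */

    /* H(s) = H3*s^3 + H2*s^2 + H1*s + H0, stored as Hc[i] = H_i.
     * The coefficients encode the example above:
     *   H(s) = [ s+2    s^3+3s^2+s ]
     *          [ s^3    s^2+1      ]                      */
    static const double Hc[N + 1][R][R] = {
        { {2, 0}, {0, 1} },   /* H0 */
        { {1, 1}, {0, 0} },   /* H1 */
        { {0, 3}, {0, 1} },   /* H2 */
        { {0, 1}, {1, 0} }    /* H3 */
    };

    /* Evaluate H(s0) entrywise by Horner's rule:
     * out = ((H3*s0 + H2)*s0 + H1)*s0 + H0               */
    static void eval_poly_matrix(double s0, double out[R][R])
    {
        for (int row = 0; row < R; row++)
            for (int col = 0; col < R; col++) {
                double acc = Hc[N][row][col];
                for (int i = N - 1; i >= 0; i--)
                    acc = acc * s0 + Hc[i][row][col];
                out[row][col] = acc;
            }
    }

    int main(void)
    {
        double Hs[R][R];
        eval_poly_matrix(2.0, Hs);   /* H(2) for the example above */
        for (int row = 0; row < R; row++)
            printf("%8.2f %8.2f\n", Hs[row][0], Hs[row][1]);
        return 0;
    }

Compiled with any C compiler, this prints H(2) for the example matrix.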
5
Definitions
H^-1(s) – the inverse of the matrix H(s).
One way to calculate it:
H^-1(s) = adj H(s) / det H(s)
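As a concrete illustration (added here; it is not on the original slide), applying this formula to the 2 × 2 example from slide 4 gives

    adj H(s) = [  s^2 + 1     -(s^3 + 3s^2 + s) ]
               [ -s^3          s + 2            ]

    det H(s) = (s + 2)(s^2 + 1) - s^3 (s^3 + 3s^2 + s)
             = -s^6 - 3s^5 - s^4 + s^3 + 2s^2 + s + 2

so H^-1(s) is a polynomial (numerator) matrix divided by a scalar polynomial (the determinant), i.e. a rational matrix.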
6
Definitions
A rational matrix can be expressed as the ratio of a numerator polynomial matrix and a denominator scalar polynomial.
7
Who Needs It???
Multivariable control systems
Analysis of power systems
Robust stability analysis
Design of linear decoupling controllers
… and many more areas.
8
Existing Algorithms
Leverrier’s algorithm (1840) – [sI − H]^-1, the resolvent matrix
Exact algorithms
Approximation methods
9
The Selection of the Algorithm
Before Busłowicz’s algorithm (1980):
Large degree of polynomial operations
Lengthy calculations
Not very general
After:
Some improvements, at the cost of increased computational complexity
10
Busłowicz’s Algorithm
Benefits:
More general than previously proposed methods
Requires only operations on constant matrices
Suitable for computer programming
Drawback: the irreducible form cannot be ensured in general
11
Details of the Algorithm
Available upon request
12
Challenges Encountered (sequential)
Several inconsistencies in the original paper.
13
Challenges Encountered (parallel)
Dependent loops:
for (i = 2; i < r+1; i++) {
    for (k = 0; k < n*i+1; k++) {
        calculations requiring R[i-1][k]
    }
}
Complexity: O(n^2 r^4)
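One way to handle this dependence (a sketch under assumptions, not the authors' implementation: the update rule, array shape, and sizes below are placeholders) is to keep the outer i-loop sequential and parallelize only the independent k-loop, for example with OpenMP, which places an implicit barrier at the end of each parallel inner loop:

    #include <stdio.h>

    #define NDEG 4   /* n (placeholder value) */
    #define RDIM 5   /* r (placeholder value) */

    int main(void)
    {
        /* R[i][k]: placeholder table; the real algorithm stores constant
         * matrices here, a double is used only to show the loop pattern. */
        static double R[RDIM + 1][NDEG * RDIM + 1] = {{0.0}};
        for (int k = 0; k < NDEG + 1; k++)
            R[1][k] = 1.0;                        /* dummy initial data */

        for (int i = 2; i < RDIM + 1; i++) {      /* dependent: stays sequential */
            #pragma omp parallel for              /* k iterations are independent */
            for (int k = 0; k < NDEG * i + 1; k++) {
                double acc = 0.0;                 /* stand-in for the main calculations */
                for (int j = 0; j < NDEG * (i - 1) + 1; j++)
                    acc += R[i - 1][j];           /* reads only row i-1 */
                R[i][k] = acc / (k + 1);
            }
            /* implicit barrier: row R[i] is complete before iteration i+1 */
        }

        printf("R[%d][0] = %f\n", RDIM, R[RDIM][0]);
        return 0;
    }

Compiled with a flag such as gcc -fopenmp the inner loop runs in parallel; without it the pragma is ignored and the code runs sequentially.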
14
Challenges Encountered (parallel)
Loops of variable length:
for (k = 0; k < n*i+1; k++) {
    for (ll = 0; ll < min+1; ll++) {
        main calculations
    }
}
The inner bound (min) varies with k.
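Because the inner trip count varies with k, splitting the k range into equal contiguous blocks gives processes very different amounts of work. The small C sketch below is purely illustrative: it assumes a triangular trip count (k + 1 inner iterations for index k), which is only a stand-in for the actual bound, and compares the work a block split and a cyclic split would assign to each of four processes:

    #include <stdio.h>

    /* Assumed inner trip count for index k (a stand-in for the real
     * "min" bound, which also varies with k in the algorithm). */
    static long inner_iters(int k) { return k + 1; }

    int main(void)
    {
        const int kmax  = 100;   /* illustrative n*i+1 */
        const int procs = 4;
        long block[4] = {0}, cyclic[4] = {0};
        int chunk = (kmax + procs - 1) / procs;

        for (int k = 0; k < kmax; k++) {
            block[k / chunk]  += inner_iters(k);   /* contiguous block split */
            cyclic[k % procs] += inner_iters(k);   /* round-robin split      */
        }
        for (int p = 0; p < procs; p++)
            printf("process %d: block %ld   cyclic %ld\n", p, block[p], cyclic[p]);
        return 0;
    }

Whether the thesis implementation balances the work this way is not stated on the slides; the point is only that a naive block split of k is lopsided when the inner loop length varies.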
15
Shared and Distributed Memory
Main differences: synchronization of the processes
Shared memory: barrier
Distributed memory: data exchange
for (i = 2; i < r+1; i++) {
    calculations requiring R[i-1]
    /* synchronization point */
}
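In the distributed memory version the synchronization point becomes an explicit data exchange: each process computes its share of row i and all processes swap their pieces before starting i+1. The MPI sketch below shows only that pattern (the placeholder update, the array sizes, and the choice of MPI_Allgather are assumptions of this sketch, not the authors' message-passing scheme):

    #include <stdio.h>
    #include <mpi.h>

    #define RDIM 5     /* r (placeholder value) */
    #define COLS 64    /* entries per row; assumed divisible by the process count */

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (COLS % size != 0) {                  /* keep the sketch simple */
            if (rank == 0) fprintf(stderr, "process count must divide %d\n", COLS);
            MPI_Finalize();
            return 1;
        }

        static double R[RDIM + 1][COLS];
        for (int k = 0; k < COLS; k++) R[1][k] = 1.0;   /* dummy initial data */

        int chunk = COLS / size;
        double local[COLS];                      /* this process's piece of row i */

        for (int i = 2; i < RDIM + 1; i++) {
            for (int j = 0; j < chunk; j++) {    /* local share of the calculations */
                int k = rank * chunk + j;
                local[j] = 0.5 * R[i - 1][k] + 1.0;     /* placeholder update */
            }
            /* data exchange = the synchronization point marked above:
             * after this call every process holds the complete row R[i]. */
            MPI_Allgather(local, chunk, MPI_DOUBLE,
                          R[i],  chunk, MPI_DOUBLE, MPI_COMM_WORLD);
        }

        if (rank == 0) printf("R[%d][0] = %f\n", RDIM, R[RDIM][0]);
        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched with mpirun, each rank exchanges its slice once per i iteration, which is the distributed-memory counterpart of the shared-memory barrier.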
16
Platforms
Distributed memory platforms:
SGI O2 NOW – MIPS R5000, 180 MHz
P IV NOW – GHz
P III Cluster – 1 GHz
P IV Cluster – Xeon, 2.2 GHz
17
Platforms
Shared memory platforms:
SGI Power Challenge 10000 – 8 MIPS R10000
SGI Origin 2000 – 16 MIPS R… MHz
18
Understanding the Results
n – degree of the polynomial (≤ 25)
r – size of the matrix (≤ 25)
Sequential algorithm – O(n^2 r^5)
Average of multiple runs
Unloaded platforms
19
Sequential Run Times (n=25, r=25)
Platform               Time (sec)
SGI O2 NOW
P IV NOW               22.94
P III Cluster          26.10
P IV Cluster           18.75
SGI Power Challenge    913.99
SGI Origin 2000        552.95
20
Results – Distributed Memory
Speedup:
SGI O2 NOW – slowdown
P IV NOW – minimal speedup
21
Speedup (P III & P IV Clusters)
22
Results – Shared Memory
Excellent results!!!
23
Speedup (SGI Power Challenge)
24
Speedup (SGI Origin 2000) Superlinear speedup!
25
Run times (SGI Power Challenge)
8 processors
26
Run times (SGI Origin 2000), n = 25
27
Run times (SGI Power Challenge)
28
Efficiency
Processors             2       4       6       8       16      24
P III Cluster          89.7%   76.5%   61.3%   58.5%   40.1%   25.0%
P IV                   88.3%   68.2%   49.9%   46.9%   26.1%   15.5%
SGI Power Challenge    99.7%   98.2%   97.9%   95.8%   n/a
SGI Origin 2000        99.9%   98.7%   99.0%   93.8%
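Efficiency here is presumably the usual parallel efficiency, E = speedup / number of processors (the slides do not spell out the definition). For example, 95.8% on 8 processors of the SGI Power Challenge corresponds to a speedup of about 0.958 × 8 ≈ 7.7, consistent with the near-linear speedup reported for the shared memory platforms.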
29
Conclusions
We have performed an exhaustive survey of the available algorithms;
We have implemented the sequential version of Busłowicz’s algorithm;
We have implemented two versions of the parallel algorithm;
We have tested the parallel algorithm on 6 different platforms;
We have obtained excellent speedup and efficiency in a shared memory environment.
30
Future Work
Study the behavior of the algorithm for larger problem sizes (distributed memory).
Re-evaluate message passing in the distributed memory implementation.
Extend Busłowicz’s algorithm to inverting multivariable polynomial matrices H(s1, s2, …, sk).
31
Questions