Progress Report—11/13 宗慶

Problem Statement: Find kernels of large, sparse linear systems over GF(2)
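As a toy illustration of the problem (not part of the original slides), the sketch below finds one kernel vector of a small GF(2) matrix by dense Gaussian elimination, with each row packed into a Python int so that row addition over GF(2) is a single XOR. For the huge, sparse matrices targeted here dense elimination is infeasible, which is exactly why the iterative methods on the later slides are needed; the function name is hypothetical.

```python
def gf2_kernel_vector(rows, ncols):
    """Return one nonzero kernel vector (as a bitmask, bit j = component j)
    of the GF(2) matrix whose rows are given as ints, or None if the
    kernel is trivial."""
    pivots = {}                       # pivot column -> fully reduced row
    for row in rows:
        r = row
        for c, p in pivots.items():   # clear all existing pivot columns
            if (r >> c) & 1:
                r ^= p
        if r == 0:
            continue                  # row was linearly dependent
        c = r.bit_length() - 1        # leading bit becomes the new pivot
        for c2 in pivots:             # back-substitute into earlier rows
            if (pivots[c2] >> c) & 1:
                pivots[c2] ^= r
        pivots[c] = r
    free = [c for c in range(ncols) if c not in pivots]
    if not free:
        return None                   # full column rank: trivial kernel
    f = free[0]
    x = 1 << f                        # set one free variable to 1
    for c, p in pivots.items():       # read dependent variables off the RREF
        if (p >> f) & 1:
            x |= 1 << c
    return x
```

Each returned vector satisfies every row equation: the bitwise AND of any row with x has even popcount.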

Well-Known Solutions
Block Lanczos Algorithm
Block Wiedemann Algorithm

Published in 1994: "Solving Homogeneous Linear Equations Over GF(2) via Block Wiedemann Algorithm" by Don Coppersmith. Coppersmith proposed a block version of the Wiedemann algorithm that takes advantage of the ability to perform simultaneous operations on blocks of vectors.

Wiedemann algorithm: based on the fact that when a square matrix is repeatedly applied to a vector, the resulting sequence of vectors is linearly recurrent.
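That recurrence can be checked concretely. The toy sketch below (not from the slides; the matrix, vectors, and function names are hypothetical) builds the scalar sequence s_i = u · (A^i v) for a small GF(2) matrix and recovers its minimal recurrence with Berlekamp–Massey, which is the core of the (non-blocked) Wiedemann algorithm; polynomials are stored as Python int bitmasks.

```python
def berlekamp_massey_gf2(s):
    """Minimal GF(2) recurrence for the bit sequence s. Returns (C, L):
    connection polynomial C (bit k = coefficient of x^k, C_0 = 1) such that
    s[n] = XOR_{k=1..L} C_k * s[n-k] for all n in [L, len(s))."""
    C, B = 1, 1          # current / previous connection polynomial
    L, m = 0, 1          # recurrence length / shift applied to B
    for n in range(len(s)):
        d = s[n]         # discrepancy of C at position n
        for k in range(1, L + 1):
            d ^= ((C >> k) & 1) & s[n - k]
        if d:
            T = C
            C ^= B << m  # cancel the discrepancy
            if 2 * L <= n:
                L, B, m = n + 1 - L, T, 1
                continue
        m += 1
    return C, L

def matvec(rows, x):
    """y = A x over GF(2); rows and x are bitmasks."""
    return sum(((bin(r & x).count("1") & 1) << i) for i, r in enumerate(rows))

# Toy 4x4 matrix and projection vectors (hypothetical example data).
A = [0b0110, 0b1011, 0b0001, 0b1100]
u, v = 0b1001, 0b0111
s, w = [], v
for _ in range(8):                        # 2n terms suffice for an n x n A
    s.append(bin(u & w).count("1") & 1)   # s_i = u . (A^i v)
    w = matvec(A, w)
C, L = berlekamp_massey_gf2(s)
# Every later term is determined by the previous L terms:
for n in range(L, len(s)):
    assert s[n] == (sum(((C >> k) & 1) & s[n - k] for k in range(1, L + 1)) & 1)
```

In the full algorithm, the connection polynomial evaluated at A and applied to v then yields a kernel vector with high probability.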

Advantages of Blocking
Parallel implementation
Faster sequential running time
Better probability of success [Villard 1997]
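The blocking advantage can be made concrete: if 64 vectors are stored "bitsliced" (one machine word per matrix row, where bit t of word j is component j of vector t), a single XOR advances all 64 vectors at once. A toy sketch, not from the slides, with hypothetical names:

```python
def matmul_block(rows, block):
    """Apply a GF(2) matrix (rows as bitmasks) to many vectors at once.
    block[j] is a word whose bit t is component j of vector t; each XOR
    below therefore updates one component of every vector simultaneously."""
    out = []
    for r in rows:
        acc, j = 0, 0
        while r:
            if r & 1:
                acc ^= block[j]   # one word operation serves all vectors
            r >>= 1
            j += 1
        out.append(acc)
    return out
```

Column t of the result equals the ordinary matrix-vector product with column t of the block, so one pass replaces 64 separate matvecs.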

Block Wiedemann Algorithm

Three-tier Parallelism on Cell blades
SIMD on the SPEs
Heterogeneous multi-cores per node
MPI between nodes

Available Resources
Implementation of the block Lanczos algorithm (OpenMP)
LinBox: a C++ template library for exact, high-performance linear algebra computation with dense, sparse, and structured matrices over the integers and over finite fields
Many papers
Cell blades, currently connected via Ethernet; ITRI plans to purchase InfiniBand

Challenges that await me
Symbolic computation: mapping the algebra onto the computer
Data structures: how to represent the data efficiently
Many similar variants of the block Wiedemann algorithm