1
MA237: Linear Algebra I
Chapters 1 and 2: What have we learned?
2
Chapter 1: Systems of Linear Equations
1.1 Definition of Vector Spaces; Vector Spaces of Matrices
1.2 Linear Systems: Ax = B, where A is m × n (m = # of equations, n = # of variables, so x is n × 1)
1.3 Gaussian Elimination: solving linear systems (sketched in code below)
1.4 Column Space (solvability) and Nullspace (solution size)
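Since the slides only name the method, here is a minimal sketch of Gaussian elimination with partial pivoting in Python. It assumes a square, nonsingular A (the general m × n case additionally tracks free variables), and the test system is made up for illustration.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by forward elimination with partial pivoting,
    then back substitution. Assumes A is square and nonsingular."""
    A = A.astype(float)   # work on copies so the caller's data is untouched
    b = b.astype(float)
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest remaining pivot into row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gaussian_elimination(A, b))  # matches np.linalg.solve(A, b)
```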
3
Chapter 2: Linear Independence and Dimension
2.1 The test for linear dependence: the dependency equation
2.2 Dimension: basis of a vector space
2.3 Row Space and the Rank-Nullity Theorem: rank + nullity = n (numerical check below)
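As a quick sanity check of the Rank-Nullity Theorem, the following made-up 3 × 4 example confirms rank + nullity = n, where n is the number of columns (variables):

```python
import numpy as np

# A 3 x 4 matrix whose third row is the sum of the first two,
# so its rank is 2 and its nullity should be n - 2 = 2.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 2.0, 1.0, 2.0]])
n = A.shape[1]                      # number of columns = # of variables
rank = np.linalg.matrix_rank(A)
nullity = n - rank
print(rank, nullity, rank + nullity == n)   # 2 2 True
```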
4
Advanced Directions in Linear Algebra
5
Solving Linear Systems
Applications in:
- Chemistry
- Coding Theory
- Cryptography
- Image Analysis
- Control Systems
- Economics
- Genetics
6
Gaussian Elimination
Carl Friedrich Gauss, 1777 - 1855
7
Methods of Conjugate Gradients for Solving Linear Systems
Hestenes and Stiefel, 1952
Gaussian Elimination
Carl Friedrich Gauss, 1777 - 1855
8
Methods of Conjugate Gradients for Solving Linear Systems
First paragraph: The advent of electronic computers in the middle of the 20th century stimulated a flurry of activity in developing numerical algorithms that could be applied to computational problems much more difficult than those solved in the past. The work described [in this paper] was done at the Institute for Numerical Analysis, a part of NBS on the campus of UCLA [2]. This institute was an incredibly fertile environment for the development of algorithms that might exploit the potential of these new automatic computing engines, especially algorithms for the solution of linear systems and matrix eigenvalue problems. Some of these algorithms are classified today under the term Krylov Subspace Iteration, and this paper describes the first of these methods to solve linear systems.
http://nvl.nist.gov/pub/nistpubs/sp958-lide/081-085.pdf
9
Methods of Conjugate Gradients for Solving Linear Systems
Second paragraph: At this time, there were two commonly used types of algorithms for solving linear systems. The first, like Gauss elimination, modified a tableau of matrix entries in a systematic way in order to compute the solution. These methods were finite, but required a rather large amount of computational effort with work growing as the cube of the number of unknowns. The second type of algorithm used “relaxation techniques” to develop a sequence of iterates converging to the solution. Although convergence was often slow, these algorithms could be terminated, often with a reasonably accurate solution estimate, whenever the human “computers” ran out of time.
http://nvl.nist.gov/pub/nistpubs/sp958-lide/081-085.pdf
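The classic Jacobi iteration is one example of such a relaxation technique. A minimal sketch follows; the test system is made up, and convergence is guaranteed here because the matrix is strictly diagonally dominant:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Classic relaxation iteration: split A into its diagonal D and
    remainder R, then repeat x <- D^{-1} (b - R x). Can be stopped
    early with a usable rough estimate of the solution."""
    x = np.zeros(len(b)) if x0 is None else x0.astype(float)
    D = np.diag(A)            # diagonal entries of A
    R = A - np.diagflat(D)    # off-diagonal remainder
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # strictly diagonally dominant
b = np.array([1.0, 2.0])
print(jacobi(A, b))   # matches np.linalg.solve(A, b)
```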
10
Methods of Conjugate Gradients for Solving Linear Systems
Third paragraph: The ideal algorithm would be one that had finite termination but, if stopped early, would give a useful approximate solution. Hestenes and Stiefel succeeded in developing an algorithm with exactly these characteristics, the method of conjugate gradients.
http://nvl.nist.gov/pub/nistpubs/sp958-lide/081-085.pdf
11
Methods of Conjugate Gradients for Solving Linear Systems
Fourth paragraph: The algorithm itself is beautiful, with deep connections to optimization theory, the Padé table, and quadratic forms. It is also a computational gem, the standard algorithm used today to solve large sparse systems of equations, involving symmetric (or Hermitian) positive definite matrices, for which matrix modification methods are impractical.
http://nvl.nist.gov/pub/nistpubs/sp958-lide/081-085.pdf
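The algorithm is compact enough to show in full. The following is a textbook-style sketch of conjugate gradients for a symmetric positive definite system; it uses a toy dense matrix, whereas the large sparse applications the paragraph describes would replace `A @ p` with a sparse matrix-vector product:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Hestenes-Stiefel conjugate gradients for a symmetric positive
    definite A. In exact arithmetic it terminates in at most n steps,
    yet each intermediate x is already a useful approximation."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # first search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # next A-conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD test matrix
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # matches np.linalg.solve(A, b)
```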
12
Cutting Edge…
A new algorithm for solving large inhomogeneous linear system of algebraic equations
S. Ramasesha, 1990
13
Abstract
An algorithm based on a small matrix approach to the solution of a system of inhomogeneous linear algebraic equations is developed and tested in this short communication. The solution is assumed to lie in an initial subspace and the dimension of the subspace is augmented iteratively by adding the component of the correction vector obtained from the Jacobi scheme on the coefficient matrix A (AᵀA, if the matrix A is nondefinite) that is orthogonal to the subspace. If the dimension of the subspace becomes inconveniently large, the iterative scheme can be restarted. The scheme is applicable to both symmetric and nonsymmetric matrices. The small matrix is symmetric (nonsymmetric), if the coefficient matrix is symmetric (nonsymmetric). The scheme has rapid convergence even for large nonsymmetric sparse systems.
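A rough sketch of how such a scheme might look, based only on this abstract and not on the paper itself: keep an orthonormal basis V of the growing subspace, solve the projected "small matrix" system (VᵀAV)y = Vᵀb for the best estimate in span(V), and expand V with the part of the Jacobi correction orthogonal to span(V). The starting subspace, the stopping rules, and the test matrix below are all guesses.

```python
import numpy as np

def small_matrix_solve(A, b, tol=1e-10, max_dim=30):
    """Hedged reconstruction from the abstract, not the paper's exact
    algorithm: iteratively enlarge a subspace V, solving the small
    projected system V^T A V y = V^T b for the best estimate in span(V)."""
    n = len(b)
    d = np.diag(A)                            # Jacobi diagonal D
    V = (b / np.linalg.norm(b)).reshape(n, 1) # guessed initial subspace
    for _ in range(max_dim):
        S = V.T @ (A @ V)                     # the "small matrix"
        y = np.linalg.solve(S, V.T @ b)
        x = V @ y
        r = b - A @ x                         # residual of the estimate
        if np.linalg.norm(r) < tol:
            break
        c = r / d                             # Jacobi-style correction
        c -= V @ (V.T @ c)                    # keep only the part orthogonal to V
        nc = np.linalg.norm(c)
        if nc < tol:                          # nothing new to add
            break
        V = np.hstack([V, (c / nc).reshape(n, 1)])
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
print(small_matrix_solve(A, b))  # close to np.linalg.solve(A, b)
```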
14
Cosmology
Late universe dynamics with scale-independent linear couplings in the dark sector
Quercellini et al., 2008
15
Late universe dynamics with scale-independent linear couplings in the dark sector
Page 4: The paper derives the linear evolution equation X′ = JX + C (15).
http://arxiv.org/PS_cache/arxiv/pdf/0803/0803.1976v2.pdf
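Equation (15) is a constant-coefficient linear system, so it can be propagated in closed form over each step via the matrix exponential. A generic sketch follows; J, C, and the initial state are placeholder values for illustration, not the paper's:

```python
import numpy as np
from scipy.linalg import expm

# For X' = J X + C with constant J and C, variation of constants gives
# X(t+h) = e^{Jh} X(t) + J^{-1} (e^{Jh} - I) C   (assuming J invertible).
J = np.array([[-1.0, 0.5], [0.0, -2.0]])   # placeholder coupling matrix
C = np.array([0.1, 0.2])                   # placeholder constant term
X = np.array([1.0, 1.0])                   # placeholder initial state
h = 0.01                                   # step size
E = expm(J * h)                            # one-step propagator
K = np.linalg.solve(J, (E - np.eye(2)) @ C)
for _ in range(1000):                      # evolve to t = 10
    X = E @ X + K
print(X)   # approaches the fixed point -J^{-1} C since J is stable
```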