A section of Algebra that deals with vectors and matrices.
See: LinearAlgebraVsNumericalAnalysis MatrixAnalysis
The above definition is somewhat misleading, although in practice I suppose most people do view it this way. If anyone cares, it would be more accurate to say that 'Linear Algebra' is the study of the algebra of linear transformations. Most elementary linear algebra does indeed involve linear transformations between vector or inner-product spaces over the real or complex field (i.e. 'vectors and matrices'), but there is a lot of power in further abstraction (and matrices are more a notational convenience than an object of study)...
For more depth, see http://en.wikipedia.org/wiki/Linear_algebra.
An important part of LinearAlgebra is actually concerned with things like FunctionSpaces?, which are infinite-dimensional and thus difficult to represent in a computer.
- That is somewhat misleading. Power series are infinite-dimensional in the common model where each term is a basis vector of the total infinite-dimensional space; each term is linearly independent of all the others (and orthogonal, though in general not orthonormal, under a suitable inner product). Yet both the convergent and the divergent varieties are, in some sense, representable in a computer, with pretty much the sole exception being the very slowly convergent/divergent series. (A small sketch of this appears after this thread of comments.)
- Generating functions are a subset of those notions. -- Doug
- We do have need to represent very high-dimensional spaces on computers at times, e.g. in cluster analysis via factor analysis or neural nets or whatever, but in practice those high-dimensional practical problems tend to be only loosely related to the infinite-dimensional spaces of the theory (Hilbert, Banach, etc.). -- DougMerritt
- "Projection" from larger spaces to smaller is mentioned repeatedly here, usually but not always in the context of relational database projection. Absolutely all such things are potentially in the domain of projective geometry, which classically concerns 2D perspective views of 3D objects, but which more generally covers really any dimensional reduction -- and in the context of the discussion on this page, projection means truncation of an infinite series (since we're modeling the infinite series as an infinite dimensional space, and computer computation as a finite dimensional space).
- Thus projection is inherently a question of, equally, convergence/divergence of infinite series, remainder/accuracy upon truncation of a series (image of projection from a higher dimensional space), etc. -- DougMerritt
- I have struggled to communicate these kinds of issues previously on wiki. It is true, but not at all widely appreciated, that questions of solvability of systems, of convergence/divergence of systems, and of the rate/nature of that convergence/divergence are all addressed by modern 20th-century analytic topology. It doesn't matter what the system is, nor whether it is finite or infinite, nor whether we're talking about a TuringMachine or a root of a polynomial: all of these things have been unified in the domain of abstract math -- which is not to say "solved", just "unified".
- Nonlinear systems in particular are, in general, not solvable. There are of course many examples of linear systems that are also not effectively soluble, e.g. irreducible polynomials that do not split in the algebraic domain but which may have e.g. Aurifeuillian factorizations (universal over all polynomials, specific to a particular polynomial, etc.) over particular fields, e.g. given certain values of x, whereas other quite similar polynomials may have irreducible complexity equal to factorial(degree of the equation) and thus may not admit solutions based on their projected Galois group. (I am, I confess, speaking beyond my mathematical abilities here, but that's not the same as saying things that are wrong... I have had algorithms "succeed" by being lucky in having order 5, yet with other parameters of the same algorithm be "unlucky" by having order 5! = 120; it often depends on values; few algorithms are inherently sub-exponential in these areas.) -- DougMerritt
- Regarding practical issues, many people have found workable solutions to their isolated problems involving highly non-linear systems by taking advantage of the differential (infinitesimal) linearity of those systems in the immediate neighborhood of zeroes/singularities and/or units. (Newton's method, sketched after this thread, is a simple instance of the idea.) -- DougMerritt
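To make the power-series remark above concrete, here is a minimal sketch in plain Python (the helper names are mine, nothing standard): the Taylor coefficients of exp form an "infinite-dimensional" vector, and keeping only the first n of them is a perfectly workable finite representation whose remainder we can watch shrink.

  import math

  def exp_coefficients(n):
      """First n Taylor coefficients of exp about 0."""
      return [1.0 / math.factorial(k) for k in range(n)]

  def evaluate(coeffs, x):
      """Evaluate the truncated series sum(c_k * x**k) via Horner's rule."""
      acc = 0.0
      for c in reversed(coeffs):
          acc = acc * x + c
      return acc

  for n in (4, 8, 16):
      approx = evaluate(exp_coefficients(n), 1.0)
      print(n, approx, abs(approx - math.e))   # the remainder shrinks as n grows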
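The "projection as dimensional reduction" point also has a purely finite-dimensional analogue that is easy to play with. A sketch assuming NumPy (the matrix and vector are arbitrary test data, not anything from the discussion): project a vector in R^5 onto a 2-dimensional subspace; the residual plays the same role as the remainder of a truncated series.

  import numpy as np

  rng = np.random.default_rng(0)
  A = rng.standard_normal((5, 2))   # columns span a 2-D subspace of R^5
  b = rng.standard_normal(5)        # the vector to be projected

  # Orthogonal projection onto span(A), computed via least squares
  # (equivalent to applying P = A (A^T A)^{-1} A^T).
  coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
  image = A @ coeffs                # what the lower-dimensional view keeps
  residual = b - image              # what the reduction throws away

  print(np.linalg.norm(image), np.linalg.norm(residual))
  print(A.T @ residual)             # ~ zero: the residual is orthogonal to the subspace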
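And the last remark, about exploiting the local (differential) linearity of a nonlinear system near its zeroes, is essentially what Newton's method does. A toy sketch in plain Python (the example equation is my own choice):

  def newton(f, fprime, x0, steps=8):
      """Solve f(x) = 0 by repeatedly linearizing f at the current point."""
      x = x0
      for _ in range(steps):
          # Local linear model: f(x + h) ~= f(x) + fprime(x) * h; pick h to zero it.
          x = x - f(x) / fprime(x)
      return x

  # Nonlinear equation x**3 - 2 = 0; near the root, the linear model is excellent.
  root = newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x0=1.0)
  print(root, root**3)              # ~1.2599, ~2.0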
Crud... by saying too much I may sound fringe. I am merely appealing to (1) modern topology, all three branches of it, and (2) its known applicability to other mathematical fields. I do not claim anything beyond that, but that in itself is huge, and not that widely known.
CategoryTheory treats these issues more simply, but I am still a novice in that better approach to things; CategoryTheory is appealing because it seems to make so very much simple, with morphism diagrams, much as Feynman diagrams simplified particle interactions. -- DougMerritt
Linear systems in general are those with two operations, call them function F and operator '*' (scaling by a number), such that for all scalars a, b and all x, y in the space:

  F(a*x + b*y) = a*F(x) + b*F(y)
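A throwaway check of that condition in NumPy (the particular matrix, vectors, and scalars are arbitrary illustrations): matrix multiplication satisfies it, while a componentwise square does not.

  import numpy as np

  M = np.array([[2.0, 1.0], [0.0, 3.0]])
  F = lambda v: M @ v               # a linear map
  G = lambda v: v ** 2              # a nonlinear map, for contrast

  x, y = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
  a, b = 4.0, -1.5

  print(np.allclose(F(a * x + b * y), a * F(x) + b * F(y)))   # True
  print(np.allclose(G(a * x + b * y), a * G(x) + b * G(y)))   # False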
LinearAlgebra addresses the basics of some such systems, but the underlying issue of "linearity" is vastly larger and pervades many branches of pure and applied math. There are relatively few branches of mathematics where linearity has no importance. -- DougMerritt
See LinearAlgebraPackage (LAPACK) for information about implementing LinearAlgebra in FortranLanguage or CeePlusPlus.
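For a quick feel of what those routines do without writing Fortran or C++, they can also be reached from Python; a sketch assuming NumPy and SciPy are installed (NumPy's solver is itself a thin wrapper over LAPACK's gesv, and SciPy exposes the raw routine):

  import numpy as np
  from scipy.linalg import lapack

  A = np.array([[4.0, 1.0], [1.0, 3.0]])
  b = np.array([[1.0], [2.0]])               # one right-hand side, as a column

  x_np = np.linalg.solve(A, b)               # NumPy delegates to LAPACK gesv internally

  # The same LAPACK routine through SciPy's low-level wrapper.
  lu, piv, x_lapack, info = lapack.dgesv(A, b)

  print(x_np.ravel(), x_lapack.ravel(), info)   # identical solutions; info == 0 on success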
See also GeometricalVectors AffineTransformation LinearTransformation
CategoryMath