
Linear algebra is the branch of mathematics concerning linear equations such as

$a_1 x_1 + \cdots + a_n x_n = b,$

linear functions such as

$(x_1, \ldots, x_n) \mapsto a_1 x_1 + \cdots + a_n x_n,$

and their representation through matrices and vector spaces.

Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental to modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis may basically be viewed as the application of linear algebra to spaces of functions. Linear algebra is also used in most sciences and engineering, because it allows modeling many natural phenomena, and efficiently computing with such models. For nonlinear systems, which cannot be modeled with linear algebra, linear algebra is often used as a first-order approximation.





History

From the study of determinants and matrices to modern linear algebra

The study of linear algebra first emerged from the introduction of determinants, used for solving systems of linear equations. Determinants were considered by Leibniz in 1693, and subsequently, in 1750, Gabriel Cramer used them to give explicit solutions of linear systems, now called Cramer's rule. Later, Gauss further developed the theory of solving linear systems by using Gaussian elimination, which was initially listed as an advancement in geodesy.

The study of matrix algebra first emerged in England in the mid-1800s. In 1844, Hermann Grassmann published his "Theory of Extension", which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for "womb". While studying compositions of linear transformations, Arthur Cayley was led to define matrix multiplication and inverses. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. He was also aware of the connection between matrices and determinants, and wrote "There will be much to say about matrix theory which should, in my opinion, precede the theory of determinants".

In 1882, Hüseyin Tevfik Pasha wrote a book titled "Linear Algebra". The first modern and more precise definition of a vector space was introduced by Peano in 1888; by 1900, a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century, when many ideas and methods of previous centuries were generalized as abstract algebra. The use of matrices in quantum mechanics, special relativity, and statistics helped spread the subject of linear algebra beyond pure mathematics. The development of computers led to increased research in efficient algorithms for Gaussian elimination and matrix decompositions, and linear algebra became an essential tool for modeling and simulations.

The origin of many of these ideas is discussed in the articles on determinants and Gaussian elimination.

Education history

Linear algebra first appeared in American graduate textbooks in the 1940s and in undergraduate textbooks in the 1950s. Following work by the School Mathematics Study Group, U.S. high schools asked 12th-grade students to do "matrix algebra, formerly reserved for college" in the 1960s. In France during the 1960s, educators attempted to teach linear algebra through finite-dimensional vector spaces in the first year of secondary school. This was met with a backlash in the 1980s that removed linear algebra from the curriculum. In 1993, the U.S.-based Linear Algebra Curriculum Study Group recommended that undergraduate linear algebra courses be given an application-based "matrix orientation" as opposed to a theoretical orientation. Reviews of the teaching of linear algebra call for stress on visualization and geometric interpretation of theoretical ideas, and for including the jewel in the crown of linear algebra, the singular value decomposition (SVD), since so many other disciplines use it. To better suit 21st-century applications, such as data mining and uncertainty analysis, linear algebra can be based upon the SVD instead of Gaussian elimination.




Scope of study

Vector spaces

The main structures of linear algebra are vector spaces. A vector space over a field F (often the field of real numbers) is a set V equipped with two binary operations satisfying the following axioms. Elements of V are called vectors, and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w. The second operation, scalar multiplication, takes any scalar a and any vector v and outputs a new vector av. In the list below, let u, v and w be arbitrary vectors in V, and a and b scalars in F.
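
Explicitly, the axioms are the following:

  • Associativity of addition: $u + (v + w) = (u + v) + w$.
  • Commutativity of addition: $u + v = v + u$.
  • Identity element of addition: there exists an element $0 \in V$, called the zero vector, such that $v + 0 = v$ for all $v \in V$.
  • Inverse elements of addition: for every $v \in V$, there exists an element $-v \in V$ such that $v + (-v) = 0$.
  • Distributivity of scalar multiplication with respect to vector addition: $a(u + v) = au + av$.
  • Distributivity of scalar multiplication with respect to field addition: $(a + b)v = av + bv$.
  • Compatibility of scalar multiplication with field multiplication: $a(bv) = (ab)v$.
  • Identity element of scalar multiplication: $1v = v$, where $1$ denotes the multiplicative identity in $F$.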

The first four axioms are those of V being an abelian group under vector addition. Elements of a vector space may have various natures; for example, they can be sequences, functions, polynomials or matrices. Linear algebra is concerned with properties common to all vector spaces.

Linear transformations

Similarly to the theories of other algebraic structures, linear algebra studies mappings between vector spaces that preserve the vector-space structure. Given two vector spaces V and W over a field F, a linear transformation (also called linear map, linear mapping or linear operator) is a map

$T: V \to W$

that is compatible with addition and scalar multiplication:

$T(u + v) = T(u) + T(v), \quad T(av) = aT(v)$

for any vectors $u, v \in V$ and scalar $a \in F$.

Equivalently, for any vectors $u, v \in V$ and scalars $a, b \in F$:

$T(au + bv) = T(au) + T(bv) = aT(u) + bT(v).$

When a bijective linear mapping exists between two vector spaces (that is, every vector from the second space is associated with exactly one in the first), we say that the two spaces are isomorphic. Because an isomorphism preserves the linear structure, two isomorphic vector spaces are "essentially the same" from the linear algebra point of view. One essential question in linear algebra is whether a mapping is an isomorphism or not, and this question can be answered by checking if the determinant is nonzero. If a mapping is not an isomorphism, linear algebra is interested in finding its range (or image) and the set of elements that are mapped to zero, called the kernel of the mapping.
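
For finite-dimensional spaces, these checks are easy to carry out numerically. The following is a minimal sketch using NumPy and SciPy; the matrix A is an arbitrary illustrative example, not one from the text:

```python
import numpy as np
from scipy.linalg import null_space

# A linear map R^3 -> R^3, encoded as a matrix in the standard basis.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [2.0, 2.0, 1.0]])

# The map is an isomorphism iff det(A) != 0 (up to floating-point roundoff).
print(np.linalg.det(A))          # ~0.0, so A is not an isomorphism

# Since A is singular, its kernel (null space) is nontrivial.
print(null_space(A))             # an orthonormal basis of the kernel, as columns

# The dimension of the range (image) is the rank of A.
print(np.linalg.matrix_rank(A))  # 2: the image is a plane in R^3
```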

Linear transformations have geometric significance. For example, 2 × 2 real matrices describe the standard planar mappings that preserve the origin.

Subspaces, span, and bases

Again, in analogy with theories of other algebraic objects, linear algebra is interested in subsets of vector spaces that are themselves vector spaces; these subsets are called linear subspaces. For example, the range and kernel of a linear mapping are both subspaces, and are thus often called the range space and the null space; these are important examples of subspaces. Another important way of forming a subspace is to take linear combinations of a set of vectors $v_1, v_2, \ldots, v_k$:

$a_1 v_1 + a_2 v_2 + \cdots + a_k v_k,$

where $a_1, a_2, \ldots, a_k$ are scalars. The set of all linear combinations of vectors $v_1, v_2, \ldots, v_k$ is called their span, which forms a subspace.

A linear combination of any system of vectors with all zero coefficients is the zero vector of V. If this is the only way to express the zero vector as a linear combination of $v_1, v_2, \ldots, v_k$, then these vectors are linearly independent. Given a set of vectors that span a space, if any vector w is a linear combination of the other vectors (and so the set is not linearly independent), then the span would remain the same if we removed w from the set. Thus, a set of linearly dependent vectors is redundant in the sense that some linearly independent subset will span the same subspace. Therefore, we are mostly interested in linearly independent sets of vectors that span a vector space V, which we call bases of V. Any set of vectors that spans V contains a basis, and any linearly independent set of vectors in V can be extended to a basis. It turns out that if we accept the axiom of choice, every vector space has a basis; nevertheless, this basis may be unnatural, and indeed, may not even be constructible. For instance, there exists a basis for the real numbers, considered as a vector space over the rationals, but no explicit basis has been constructed.
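
Linear independence can be tested by computing the rank of the matrix whose columns are the vectors in question. A small sketch with NumPy; the vectors are arbitrary examples, with $v_3 = v_1 + v_2$ chosen so that the set is dependent:

```python
import numpy as np

# Three vectors in R^3; v3 = v1 + v2, so the set is linearly dependent.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 3.0])

M = np.column_stack([v1, v2, v3])

# The vectors are independent iff the rank equals the number of vectors.
print(np.linalg.matrix_rank(M))                          # 2 < 3: dependent

# Removing the redundant vector leaves an independent set with the same span.
print(np.linalg.matrix_rank(np.column_stack([v1, v2])))  # 2: {v1, v2} spans the same subspace
```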

Any two bases of a vector space V have the same cardinality, which is called the dimension of V. The dimension of a vector space is well-defined by the dimension theorem for vector spaces. If a basis of V has a finite number of elements, V is called a finite-dimensional vector space. If V is finite-dimensional and U is a subspace of V, then dim U ≤ dim V. If $U_1$ and $U_2$ are subspaces of V, then

$\dim(U_1 + U_2) = \dim U_1 + \dim U_2 - \dim(U_1 \cap U_2).$
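
This dimension formula can be checked numerically. In the sketch below (an illustrative setup, not from the source text) the subspaces are given by spanning columns, and the intersection dimension is obtained from the null space of the block matrix $[U_1 \mid -U_2]$, whose null vectors pair coefficients producing the same vector in both subspaces:

```python
import numpy as np
from scipy.linalg import null_space

# Two subspaces of R^3, each given by linearly independent spanning columns.
U1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # the xy-plane
U2 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # the yz-plane

dim_U1 = np.linalg.matrix_rank(U1)                    # 2
dim_U2 = np.linalg.matrix_rank(U2)                    # 2
dim_sum = np.linalg.matrix_rank(np.hstack([U1, U2]))  # dim(U1 + U2) = 3

# Each null vector of [U1 | -U2] gives coefficients a, b with U1 a = U2 b,
# i.e. a vector lying in both subspaces.
dim_cap = null_space(np.hstack([U1, -U2])).shape[1]   # dim(U1 ∩ U2) = 1

print(dim_sum == dim_U1 + dim_U2 - dim_cap)           # True: 3 == 2 + 2 - 1
```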

One often restricts consideration to finite-dimensional vector spaces. A fundamental theorem of linear algebra states that all vector spaces of the same dimension are isomorphic, giving an easy way of characterizing isomorphism.

Matrix theory

A particular basis $\{v_1, v_2, \ldots, v_n\}$ of V allows one to construct a coordinate system in V: the vector with coordinates $(a_1, a_2, \ldots, a_n)$ is the linear combination

$a_1 v_1 + a_2 v_2 + \cdots + a_n v_n.$

The condition that $v_1, v_2, \ldots, v_n$ span V guarantees that each vector v can be assigned coordinates, whereas the linear independence of $v_1, v_2, \ldots, v_n$ ensures that these coordinates are unique (i.e. there is only one linear combination of the basis vectors that is equal to v). In this way, once a basis of a vector space V over F has been chosen, V may be identified with the coordinate n-space $F^n$. Under this identification, addition and scalar multiplication of vectors in V correspond to addition and scalar multiplication of their coordinate vectors in $F^n$. Furthermore, if V and W are an n-dimensional and m-dimensional vector space over F, and a basis of V and a basis of W have been fixed, then any linear transformation T: V → W may be encoded by an m × n matrix A with entries in the field F, called the matrix of T with respect to these bases. Two matrices that encode the same linear transformation in different bases are called similar. Matrix theory replaces the study of linear transformations, which were defined axiomatically, by the study of matrices, which are concrete objects. This major technique distinguishes linear algebra from theories of other algebraic structures, which usually cannot be parameterized so concretely.
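
The following NumPy sketch illustrates this encoding with an arbitrary map on $R^2$: the same transformation is expressed in two different bases, yielding similar matrices $B = P^{-1}AP$:

```python
import numpy as np

# The matrix of a linear map T on R^2 with respect to the standard basis.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Change of basis: the columns of P are the new basis vectors written
# in the old (standard) basis.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The matrix of the same map T with respect to the new basis.
B = np.linalg.inv(P) @ A @ P
print(B)

# A and B are similar, so they share basis-independent quantities:
print(np.linalg.det(A), np.linalg.det(B))   # both 6.0
print(np.trace(A), np.trace(B))             # both 5.0
```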

There is an important distinction between the coordinate n-space $R^n$ and a general finite-dimensional vector space V. While $R^n$ has a standard basis $\{e_1, e_2, \ldots, e_n\}$, a vector space V typically does not come equipped with such a basis and many different bases exist (although they all consist of the same number of elements, equal to the dimension of V).

One of the main applications of matrix theory is the calculation of determinants, a central concept in linear algebra. While determinants could be defined in a basis-free manner, they are usually introduced via a specific representation of the mapping; the value of the determinant does not depend on the specific basis. It turns out that a mapping has an inverse if and only if the determinant has an inverse (every nonzero real or complex number has an inverse). If the determinant is zero, then the null space is nontrivial. Determinants have other applications, including a systematic way of seeing if a set of vectors is linearly independent (we write the vectors as the columns of a matrix, and if the determinant of that matrix is zero, the vectors are linearly dependent). Determinants can also be used to solve systems of linear equations (see Cramer's rule), but in real applications, Gaussian elimination is a faster method.
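
The sketch below contrasts the two approaches on an arbitrary 2 × 2 system: Cramer's rule first, then library Gaussian elimination via np.linalg.solve:

```python
import numpy as np

# An arbitrary 2x2 system A x = b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

# Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with
# its i-th column replaced by b.
d = np.linalg.det(A)
x_cramer = np.array([
    np.linalg.det(np.column_stack([b, A[:, 1]])) / d,
    np.linalg.det(np.column_stack([A[:, 0], b])) / d,
])

# Gaussian elimination (LU factorization), the method used in practice.
x_gauss = np.linalg.solve(A, b)

print(x_cramer, x_gauss)   # both approximately [0.8, 1.4]
```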

Eigenvalues and eigenvectors

In general, the action of a linear transformation may be quite complex. Attention to low-dimensional examples gives an indication of the variety of their types. One strategy for a general n-dimensional transformation T is to find "characteristic lines" that are invariant sets under T. If v is a nonzero vector such that Tv is a scalar multiple of v, then the line through 0 and v is an invariant set under T, and v is called a characteristic vector or eigenvector. The scalar $\lambda$ such that $Tv = \lambda v$ is called a characteristic value or eigenvalue of T.

To find an eigenvector or an eigenvalue, we note that

$Tv - \lambda v = (T - \lambda \, \text{I})v = 0,$

where I is the identity matrix. For there to be nontrivial solutions to that equation, det(T − λI) = 0. The determinant is a polynomial, and so the eigenvalues are not guaranteed to exist if the field is R. Thus, we often work with an algebraically closed field such as the complex numbers when dealing with eigenvectors and eigenvalues, so that an eigenvalue will always exist. It would be especially nice if, given a transformation T taking a vector space V into itself, we could find a basis for V consisting of eigenvectors. If such a basis exists, we can easily compute the action of the transformation on any vector: if $v_1, v_2, \ldots, v_n$ are linearly independent eigenvectors of a mapping T of an n-dimensional space, with (not necessarily distinct) eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, and if $v = a_1 v_1 + \cdots + a_n v_n$, then

$T(v) = T(a_1 v_1) + \cdots + T(a_n v_n) = a_1 T(v_1) + \cdots + a_n T(v_n) = a_1 \lambda_1 v_1 + \cdots + a_n \lambda_n v_n.$

Such a transformation is called a diagonalizable matrix, since in the eigenbasis the transformation is represented by a diagonal matrix. Because operations like matrix multiplication, matrix inversion, and determinant calculation are simple on diagonal matrices, computations involving matrices are much simpler if we can bring the matrix to a diagonal form. Not all matrices are diagonalizable (even over an algebraically closed field).
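
A short NumPy sketch of diagonalization; the symmetric matrix A is an arbitrary example, chosen to be diagonalizable:

```python
import numpy as np

# An arbitrary diagonalizable matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues and eigenvectors; the columns of P are eigenvectors.
lam, P = np.linalg.eig(A)
print(lam)                           # eigenvalues 3 and 1

# In the eigenbasis, the transformation is diagonal: D = P^{-1} A P.
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))               # diag(3, 1)

# Computations simplify in diagonal form, e.g. powers: A^k = P D^k P^{-1}.
k = 5
print(P @ np.diag(lam ** k) @ np.linalg.inv(P))
print(np.linalg.matrix_power(A, k))  # same result
```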

Inner-product spaces

Besides these basic concepts, linear algebra also studies vector spaces with additional structure, such as an inner product. The inner product is an example of a bilinear form, and it gives the vector space a geometric structure by allowing for the definition of length and angles. Formally, an inner product is a map

$\langle \cdot, \cdot \rangle : V \times V \to F$

that satisfies the following three axioms for all vectors u, v, w in V and all scalars a in F:

  • Conjugate symmetry:
$\langle u, v \rangle = \overline{\langle v, u \rangle}.$

Note that in R, it is symmetric.

  • Linearity in the first argument:
$\langle au, v \rangle = a \langle u, v \rangle.$
$\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle.$
  • Positive-definiteness:
$\langle v, v \rangle \geq 0$ with equality only for $v = 0$.

We can define the length of a vector v in V by

$\|v\|^2 = \langle v, v \rangle,$

and we can prove the Cauchy-Schwarz inequality:

$|\langle u, v \rangle| \leq \|u\| \cdot \|v\|.$

In particular, the quantity

$\frac{|\langle u, v \rangle|}{\|u\| \cdot \|v\|} \leq 1,$

and so we can call this quantity the cosine of the angle between the two vectors.

Two vectors are orthogonal if $\langle u, v \rangle = 0$. An orthonormal basis is a basis in which all the basis vectors have length 1 and are orthogonal to each other. Given any finite-dimensional vector space, an orthonormal basis can be found by the Gram-Schmidt procedure. Orthonormal bases are particularly easy to deal with, since if $v = a_1 v_1 + \cdots + a_n v_n$, then $a_i = \langle v, v_i \rangle$.
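
A minimal sketch of the Gram-Schmidt procedure in NumPy (classical Gram-Schmidt, written for clarity rather than numerical robustness; the input vectors are arbitrary and assumed linearly independent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        # Remove the components of v along the orthonormal vectors built so far.
        w = v - sum(np.dot(v, q) * q for q in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

# An arbitrary linearly independent set in R^3.
q1, q2, q3 = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                           np.array([1.0, 0.0, 1.0]),
                           np.array([0.0, 1.0, 1.0])])

# In an orthonormal basis, coordinates are inner products: a_i = <v, q_i>.
v = np.array([2.0, 3.0, 4.0])
coords = [np.dot(v, q) for q in (q1, q2, q3)]
print(sum(a * q for a, q in zip(coords, (q1, q2, q3))))   # recovers v
```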

The inner product facilitates the construction of many useful concepts. For example, given a transformation T, we can define its Hermitian conjugate T* as the linear transformation satisfying

$\langle Tu, v \rangle = \langle u, T^{*} v \rangle.$

If T satisfies TT* = T*T, we call T normal. It turns out that normal matrices are precisely the matrices that have an orthonormal system of eigenvectors that span V.
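
A quick numerical check of normality and of the orthonormal eigenvector system (a sketch; the real symmetric matrix A, which is automatically normal, is an arbitrary example):

```python
import numpy as np

# A real symmetric matrix, hence normal: A A* = A* A.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
A_star = A.conj().T
print(np.allclose(A @ A_star, A_star @ A))     # True: A is normal

# Normal matrices admit an orthonormal basis of eigenvectors.
lam, Q = np.linalg.eigh(A)                     # eigh handles symmetric/Hermitian A
print(np.allclose(Q.T @ Q, np.eye(2)))         # True: columns are orthonormal
print(np.allclose(Q @ np.diag(lam) @ Q.T, A))  # True: A = Q diag(lam) Q*
```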



Some main useful theorems

  • A matrix is invertible, or non-singular, if and only if the linear map represented by the matrix is an isomorphism.
  • Any vector space over a field F of dimension n is isomorphic to $F^n$ as a vector space over F.
  • Corollary: any two vector spaces over F of the same finite dimension are isomorphic to each other.
  • A linear map is an isomorphism if and only if the determinant is nonzero.



Applications

Because of the ubiquity of vector spaces, linear algebra is used in many fields of mathematics, the natural sciences, computer science, and the social sciences. Below are just some examples of applications of linear algebra.

Solution of linear systems

Consider, for example, the following system of linear equations:

$2x + y - z = 8 \quad (L_1)$
$-3x - y + 2z = -11 \quad (L_2)$
$-2x + y + 2z = -3 \quad (L_3)$

The Gaussian-elimination algorithm is as follows: eliminate x from all equations below $L_1$, and then eliminate y from all equations below $L_2$. This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for.

In the example, x is eliminated from $L_2$ by adding $(3/2)L_1$ to $L_2$. x is then eliminated from $L_3$ by adding $L_1$ to $L_3$. Formally:

$L_2 + \tfrac{3}{2} L_1 \to L_2$
$L_3 + L_1 \to L_3$

The result is:

$2x + y - z = 8$
$\tfrac{1}{2} y + \tfrac{1}{2} z = 1$
$2y + z = 5$
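
The solution can be verified with a few lines of NumPy; np.linalg.solve applies Gaussian elimination through an LU factorization, and the system is the one written above:

```python
import numpy as np

# The example system above, in matrix form.
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

# Gaussian elimination (via LU factorization), then back-substitution.
print(np.linalg.solve(A, b))   # [ 2.  3. -1.]  ->  x = 2, y = 3, z = -1
```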

Source of the article: Wikipedia
