Vectors and Linear Algebra
Most of the content on this page is adapted from the HyperPhysics website (Georgia State University), MIT OpenCourseWare, and Wikipedia.
This page is a complete study guide for learning Vectors and Linear Algebra.
Basic Vector Math
Basic Vector Operations
Both a magnitude and a direction must be specified for a vector quantity, in contrast to a scalar quantity, which can be quantified with just a number. Any number of vector quantities of the same type (i.e., same units) can be combined by basic vector operations.
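As a minimal sketch of these operations, two vectors with the same units can be added component-wise and scaled by a number. The helper names `vec_add` and `vec_scale` below are my own illustration, not from the original page.

```python
# Hypothetical helpers illustrating basic vector operations component-wise.

def vec_add(a, b):
    """Add two vectors of the same dimension component-wise."""
    return tuple(x + y for x, y in zip(a, b))

def vec_scale(k, a):
    """Multiply every component of a vector by the scalar k."""
    return tuple(k * x for x in a)

a = (2.0, 4.0)
b = (1.0, -3.0)

total = vec_add(a, b)        # (3.0, 1.0)
doubled = vec_scale(2.0, a)  # (4.0, 8.0)
```

Because addition happens per component, the same functions work unchanged for 3-D (or higher) vectors.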
Graphical Vector Addition

Example of Vector Components

Polar Form Example

Combining Vector Components

Resolving a Vector Into Components
Note: computing the direction angle with θ = tan^{−1}(y/x) fails when the x-component is zero (angle exactly 90°); two-argument arctangent functions such as atan2 handle that case correctly.
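A minimal sketch of resolving a vector into components, assuming the angle is measured counterclockwise from the positive x-axis; the function name `components` is my own, not from the original page.

```python
import math

def components(magnitude, angle_deg):
    """Resolve a vector given as (magnitude, angle in degrees) into (x, y)."""
    theta = math.radians(angle_deg)
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

# A vector of magnitude 10 at 30 degrees above the x-axis:
x, y = components(10.0, 30.0)  # x ≈ 8.660, y ≈ 5.0
```

Unlike the tangent-based inverse mentioned in the note above, the forward resolution with cosine and sine has no special case at 90°: the x-component simply comes out (numerically) zero.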
Magnitude and Direction from Components

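Going the other way, magnitude and direction can be recovered from the components. This sketch uses the standard library's `hypot` and `atan2`; the function name `magnitude_direction` is my own.

```python
import math

def magnitude_direction(x, y):
    """Recover (magnitude, angle in degrees) from (x, y) components."""
    r = math.hypot(x, y)                   # sqrt(x**2 + y**2)
    theta = math.degrees(math.atan2(y, x)) # atan2 handles x == 0, unlike atan(y / x)
    return r, theta

r, theta = magnitude_direction(3.0, 4.0)  # r = 5.0, theta ≈ 53.13°
```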
Vector Addition, Two Vectors

Vector Addition, Three Vectors

Vector Addition, Four Vectors

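Whether two, three, or four vectors are being added, the procedure is the same: sum each component across all the vectors. A minimal sketch, with the helper name `vec_sum` my own invention:

```python
def vec_sum(*vectors):
    """Sum any number of same-dimension vectors component-wise."""
    return tuple(sum(column) for column in zip(*vectors))

two = vec_sum((1, 2), (3, 4))                     # (4, 6)
three = vec_sum((1, 2), (3, 4), (-2, 5))          # (2, 11)
four = vec_sum((1, 2), (3, 4), (-2, 5), (0, -1))  # (2, 10)
```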
Coordinate Systems
Cartesian coordinate system
The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the lines.
In three dimensions, three perpendicular planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space.
Polar coordinate system
In a polar coordinate system, a point in the plane is specified by its distance r from the origin and the angle θ measured from a fixed reference direction.
Vectors as Matrices
It is sometimes useful for systems of equations and groups of vectors to be arranged in the form of a matrix.
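As a sketch of the idea, a small system of equations can be stored row by row as a nested list, one row per equation; the names `augmented`, `coefficients`, and `constants` below are my own illustration, not from the original page.

```python
# The system
#   2x + 3y = 8
#    x -  y = 1
# stored as an augmented matrix [A | b], one row per equation.
augmented = [
    [2, 3, 8],
    [1, -1, 1],
]

coefficients = [row[:2] for row in augmented]  # [[2, 3], [1, -1]]
constants = [row[2] for row in augmented]      # [8, 1]
```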
Vector Products
It is also sometimes useful to use a matrix when dealing with vector multiplication.
Matrix notation
The definition of the cross product can also be represented by the determinant of a formal matrix:

    a × b = | i    j    k   |
            | a_1  a_2  a_3 |
            | b_1  b_2  b_3 |

This determinant can be computed using Sarrus' rule or cofactor expansion.
Using Sarrus' rule, it expands to

    a × b = (a_2 b_3 i + a_3 b_1 j + a_1 b_2 k) − (a_3 b_2 i + a_1 b_3 j + a_2 b_1 k)

Using cofactor expansion along the first row instead, it expands to

    a × b = (a_2 b_3 − a_3 b_2) i − (a_1 b_3 − a_3 b_1) j + (a_1 b_2 − a_2 b_1) k

which gives the components of the resulting vector directly.
Geometric meaning
The magnitude of the cross product can be interpreted as the positive area of the parallelogram having a and b as sides (see Figure 1):

    ‖a × b‖ = ‖a‖ ‖b‖ sin θ

Indeed, one can also compute the volume V of a parallelepiped having a, b and c as sides by using a combination of a cross product and a dot product, called the scalar triple product (see Figure 2):

    V = |a · (b × c)|

Figure 2 demonstrates that this volume can be found in two ways, showing geometrically that the identity holds that a "dot" and a "cross" can be interchanged without changing the result. That is:

    a · (b × c) = (a × b) · c
Because the magnitude of the cross product goes by the sine of the angle between its arguments, the cross product can be thought of as a measure of “perpendicularness” in the same way that the dot product is a measure of “parallelness”. Given two unit vectors, their cross product has a magnitude of 1 if the two are perpendicular and a magnitude of zero if the two are parallel.
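The cofactor-expansion formula above translates directly into code. A minimal sketch with hypothetical helper names `cross` and `dot`, checking the perpendicularity property and the scalar triple product:

```python
def cross(a, b):
    """Cross product of two 3-D vectors (cofactor expansion formula)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    """Dot product of two same-dimension vectors."""
    return sum(x * y for x, y in zip(a, b))

a, b = (1, 0, 0), (0, 1, 0)
n = cross(a, b)  # (0, 0, 1): perpendicular to both a and b
assert dot(n, a) == 0 and dot(n, b) == 0

# Scalar triple product: volume of the parallelepiped on a, b, c.
c = (0, 0, 2)
volume = dot(a, cross(b, c))  # 2
```

Note the unit vectors a and b are perpendicular, so their cross product is itself a unit vector, matching the "perpendicularness" interpretation in the text.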
Determinants
In algebra, the determinant is a special number associated with any square matrix. The fundamental geometric meaning of a determinant is a scale factor or coefficient for measure when the matrix is regarded as a linear transformation. Thus a 2 × 2 matrix with determinant 2 when applied to a set of points with finite area will transform those points into a set with twice the area. Determinants are important both in calculus, where they enter the substitution rule for several variables, and in multilinear algebra.
When its scalars are taken from a field F, a matrix is invertible if and only if its determinant is nonzero; more generally, when the scalars are taken from a commutative ring R, the matrix is invertible if and only if its determinant is a unit of R. Determinants are not as well-behaved for non-commutative rings.
The determinant of a matrix A is denoted det(A), or without parentheses: det A. An alternative notation, used for compactness, especially in the case where the matrix entries are written out in full, is to denote the determinant of a matrix by surrounding the matrix entries by vertical bars instead of the usual brackets or parentheses. Thus

    | a  b |
    | c  d |

denotes the determinant of the matrix

    ( a  b )
    ( c  d )
For a fixed nonnegative integer n, there is a unique determinant function for the n×n matrices over any commutative ring R. In particular, this unique function exists when R is the field of real or complex numbers.
Interpretation as the area of a parallelogram
The 2×2 matrix

    A = ( a  b )
        ( c  d )

has determinant

    det A = ad − bc
If A is a 2×2 matrix, its determinant det A can be viewed as the oriented area of the parallelogram with vertices at (0,0), (a,b), (a + c, b + d), and (c,d). The oriented area is the same as the usual area, except that it is negative when the vertices are listed in clockwise order.
Further, the parallelogram itself can be viewed as the unit square transformed by the matrix A. The assumption here is that a linear transformation is applied to row vectors as the vector-matrix product x^{T}A^{T}, where x is a column vector. The parallelogram in the figure is obtained by multiplying matrix A (which stores the coordinates of our parallelogram) with each of the row vectors (1, 0) and (0, 1) in turn. These row vectors define the vertices of the unit square. With the more common matrix-vector product Ax, the parallelogram has vertices at (0, 0), (a, c), (a + b, c + d), and (b, d) (note that Ax = (x^{T}A^{T})^{T}).
Thus when the determinant equals one, the matrix represents an equi-areal (area-preserving) mapping.
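The oriented-area interpretation can be checked numerically. A minimal sketch using the ad − bc formula; the helper name `det2` is my own:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

# Oriented area of the parallelogram spanned by the rows (2, 0) and (1, 3):
area = det2(2, 0, 1, 3)   # 6

# Listing the rows in the opposite order reverses the orientation,
# flipping the sign of the determinant:
flipped = det2(1, 3, 2, 0)  # -6
```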
3-by-3 matrices
The determinant of a 3×3 matrix

    A = ( a  b  c )
        ( d  e  f )
        ( g  h  i )

is given by

    det A = a(ei − fh) − b(di − fg) + c(dh − eg)
The rule of Sarrus is a mnemonic for this formula: when copies of the first two columns of the matrix are written beside it, the determinant is the sum of the products along the three diagonal northwest-to-southeast lines of elements, minus the sum of the products along the three diagonal southwest-to-northeast lines:

    | a  b  c | a  b
    | d  e  f | d  e
    | g  h  i | g  h

    det A = aei + bfg + cdh − ceg − afh − bdi

This mnemonic does not carry over into higher dimensions.
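The rule of Sarrus is short enough to transcribe directly. A minimal sketch; the function name `det3` is my own:

```python
def det3(m):
    """Determinant of a 3x3 matrix (list of rows) via the rule of Sarrus."""
    (a, b, c), (d, e, f), (g, h, i) = m
    # NW-to-SE diagonals minus SW-to-NE diagonals:
    return (a * e * i + b * f * g + c * d * h) - (c * e * g + a * f * h + b * d * i)

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
value = det3(m)  # -3
```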
Other Properties of Determinants
Switching two rows in a matrix will change the sign (+/−) of the determinant.
Row replacement operations (adding a constant multiple of one row to another) do not change the determinant.
Scaling a row (multiplying or dividing a row by a scalar) multiplies or divides the determinant by that same scalar.
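These three properties can be verified on a small example. A sketch using the 2×2 formula ad − bc; the names `det2`, `swapped`, `replaced`, and `scaled` are my own:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

m = [[2, 1], [4, 3]]  # det = 2

# Row swap flips the sign:
swapped = [m[1], m[0]]  # det = -2

# Row replacement (R2 -> R2 - 2*R1) leaves the determinant unchanged:
replaced = [m[0], [m[1][0] - 2 * m[0][0], m[1][1] - 2 * m[0][1]]]  # det = 2

# Scaling R1 by 3 scales the determinant by 3:
scaled = [[3 * x for x in m[0]], m[1]]  # det = 6
```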
Linear Algebra and Matrices
Groups of multiple vectors and systems of polynomial equations can be placed inside a matrix, which can be useful for a variety of operations. A polynomial is anything of the form P(x) = a_0 + a_1x + a_2x^{2} + a_3x^{3} + a_4x^{4} + …, where a_0 through a_n are constants. Examples are y = x^{2} + 2x − 4 and s = 3t^{4} − 7.
It’s important to understand that the linear independence of x and x^{2} in polynomial equations conforms to the same matrix mathematics as vectors that contain linearly independent components such as x, y, and z.
Is the system consistent? This is the first question to ask when you look at any matrix, system of polynomials, or set of vectors. You may know from algebra that to solve for N unknowns you need at least N equations, and even then a solution isn’t guaranteed. Two equations may represent the same line, or two or more vectors may not be linearly independent; in other words, some vectors can be generated as multiples or combinations of the others. Two vectors are linearly dependent if and only if one is a constant multiple of the other. For example, a = (2, 4) and b = (4, 8) lie along the same line if you draw them on a graph, so they are linearly dependent. Similarly, s = (2, 1, 3, 5) and t = (4, 2, 6, 10) are constant multiples of the same vector, since t = 2s, so they are linearly dependent.
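The constant-multiple test for two vectors can be coded without dividing by any component (which would break on zeros). A sketch; the function name `dependent` is my own:

```python
import math

def dependent(a, b):
    """True if one vector is a constant multiple of the other (two vectors only).

    a and b are dependent iff a_i * b_j == a_j * b_i for every pair of
    positions, which avoids dividing by possibly-zero components.
    """
    return all(math.isclose(a[i] * b[j], a[j] * b[i])
               for i in range(len(a)) for j in range(i + 1, len(a)))

d1 = dependent((2, 4), (4, 8))              # True: b = 2a
d2 = dependent((2, 1, 3, 5), (4, 2, 6, 10)) # True: t = 2s
d3 = dependent((2, 4), (4, 9))              # False: not parallel
```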
Beyond two vectors, it becomes trickier to tell whether a set of vectors is linearly independent.
So if M < N (fewer equations than unknowns), the system is underdetermined and cannot have a unique solution.
An [M x N] matrix is (M rows by N columns). Here is a 4×3 (4-by-3) matrix, for example:

    ( a_11  a_12  a_13 )
    ( a_21  a_22  a_23 )
    ( a_31  a_32  a_33 )
    ( a_41  a_42  a_43 )
Row Operations
Echelon Form and the Identity Matrix
Invertible Matrix Theorem