Some Basic Linear Algebra
Version 0.8 for Maple V R7.
Load the linear algebra package. Below is the list of routines. You can get
help on any of them by typing ?command_name and then hitting the <ENTER> key.
> with(linalg);
Warning, the protected names norm and trace have been redefined and unprotected
Entering Matrices and Vectors
Enter a 2 by 3 matrix:
> A1 := matrix(2,3,[1,2,3,4,5,6]);
These are viewed as column vectors even though they are written horizontally.
> v1 := vector([5,3,1]); v2 := vector([3,4]);
Matrix Multiplication, Evalm, Transpose, Determinant, Inverses
Enclose matrix computations within the function call evalm.
&* is the matrix multiplication operator.
> evalm(A1 &* v1 + v2);
> A2 := evalm(A1 &* transpose(A1) );
Ordinary scalar multiplication is indicated with a *.
> evalm(2*A2);
> det(A2);
Calculate the inverse of a matrix.
evalm(A2^(-1)) would also work here.
> A2_inv := inverse(A2);
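As a quick check (not part of the original worksheet), multiplying A2 by its inverse
should produce the 2 by 2 identity matrix:
> evalm(A2 &* A2_inv);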
The Map Function - Applying a function to every entry of a matrix.
Convert to floating point.
> map(evalf, A2_inv);
See the worksheet "Entering Matrices" for more information on constructing matrices.
Bases, Rank and Subspaces Associated to a Matrix
Compute a basis for the range of A1.
> colspace(A1);
Basis for the kernel.
> nullspace(matrix(3,3,[[1,2,3],[2,4,6],[3,6,9]]));
> rank(A2);
Find a basis for the subspace spanned by a set of vectors.
> basis({vector([1,0,1]),vector([1,0,5]),vector([3,0,1])});
Reduced row echelon form.
> rref(A1);
Solution of a Linear System
> linsolve(matrix(2,2,[a,b,c,d]),vector([e,f]));
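To see that the symbolic solution is correct, one can substitute it back into the
system (this check is not in the original worksheet; M and s are scratch names):
> M := matrix(2,2,[a,b,c,d]): s := linsolve(M, vector([e,f])):
> map(simplify, evalm(M &* s));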
Dot Product and Cross Product
Dot product.
> dotprod(v1, vector([1,4,8]));
Cross product in R^3.
> crossprod(v1,vector([1,0,0]));
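The cross product is orthogonal to both of its factors; as a check not in the
original worksheet, its dot product with v1 should be 0:
> dotprod(v1, crossprod(v1,vector([1,0,0])));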
Rows, Columns, and Submatrices
Copy all the entries of A1 into the matrix B. The upper left corner of A1
goes into the (2,1) entry of B.
> B := matrix(3,3); copyinto(A1,B,2,1);
Row 1 of A1.
> row(A1,1);
Column 1 of A1.
> col(A1,1);
Construct a submatrix of A1 using rows 1..2 and column 1.
> submatrix(A1,1..2,1..1);
Norms
The norm function uses absolute value signs, which is somewhat inconvenient for
algebraic simplification.
> norm(vector([x,y]),2);
So it may be better to define your own vector norm function:
> vnorm := u -> sqrt(dotprod(u,u,'orthogonal'));
> vnorm(vector([x,y]));
The default norm is the infinity norm.
> norm(vector([x,y]));
Using Maple's Solve Command to Find the Set of Matrices Which Commute with a Given One
Matrix equations can be converted into sets of equations and then solved.
For example consider the problem of finding all matrices B which commute with the matrix A2 above (i.e. A2 B - B A2 = 0) :
> B := matrix(2,2,[a,b,c,d]);
> mat_with_unknowns := evalm( A2 &* B - B &* A2);
Now we solve for the unknown entries a,b,c, and d.
(This relies on the fact that Maple's solve command ordinarily wants a set of equations as its
first argument. But if given a first argument which is a set of algebraic expressions,
it interprets these as a set of equations by viewing each expression as equal to 0.)
> soln1 := solve(convert(mat_with_unknowns,set),{a,b,c,d});
This globally assigns the unknowns to the values given in soln1.
> assign(soln1);
Even though print(B) does not show the substituted expressions, the entries of B
(e.g. the upper left corner a) really do have the values specified in the solution.
> B[1,1]; print(B);
Fully evaluate all the entries of B.
> print(map(eval,B));
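As a final check (not part of the original worksheet), recomputing the commutator
after the assignment should give the zero matrix:
> map(simplify, evalm(A2 &* B - B &* A2));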
Gram Schmidt, Matrix Exponential
Convert a list of vectors into a list of orthogonal vectors. The first k vectors in
each list (for any k) span the same subspace.
> GramSchmidt([vector([1,1,1]),vector([1,2,3]),vector([1,4,5])]);
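To confirm that the vectors returned are pairwise orthogonal (a check not in the
original worksheet; ortho is just a scratch name):
> ortho := GramSchmidt([vector([1,1,1]),vector([1,2,3]),vector([1,4,5])]):
> dotprod(ortho[1],ortho[2]); dotprod(ortho[1],ortho[3]); dotprod(ortho[2],ortho[3]);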
Matrix exponential.
> exponential(matrix(2,2,[0,theta,-theta,0]));
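This exponential is the rotation matrix through angle theta, so its determinant
should simplify to 1 (a check not in the original worksheet):
> simplify(det(exponential(matrix(2,2,[0,theta,-theta,0]))));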
Calculus of R^n and Vector Valued Functions
The derivative of a function of several variables.
> deriv := jacobian([exp(x)*cos(y),exp(x)*sin(y)],[x,y]);
A function to evaluate anything at (x,y)=(1,2).
> eval_at_1_2 := a -> subs(x=1,y=2,a);
Substitute (x,y) = (1,2) for each entry of deriv.
> map(eval_at_1_2,deriv);
> r := (x^2+y^2+z^2)^(1/2);
Curl of a vector field.
> curl([x/r^3,y/r^3,z/r^3],[x,y,z]);
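This inverse-square field also has zero divergence away from the origin; diverge is
the companion linalg routine (this example is not in the original worksheet):
> simplify(diverge([x/r^3,y/r^3,z/r^3],[x,y,z]));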
Eigenvalues, Eigenvectors, and the SVD
A sequence of symbolic eigenvalues.
> evals_of_A2 := eigenvals(A2);
Extract the first eigenvalue.
> evals_of_A2[1];
Convert the first eigenvalue to a floating point number.
> evalf(evals_of_A2[1]);
Convert evals_of_A2 to a list, and change each entry to floating point.
> map(evalf, [evals_of_A2]);
Just doing
map(evalf, evals_of_A2);
would result in an error, because evals_of_A2 is an expression sequence rather than a
list, so it expands into several separate arguments to map.
Directly calculate numerical eigenvalues.
> evalf(Eigenvals(A2));
Get the eigenvectors as well as the eigenvalues. Assign the eigenvectors of A2 to the
columns of w.
(NOTE: Use a fresh variable instead of w if you run into difficulties here.)
> evalf(Eigenvals(A2,w));
> print(w);
> evalm(inverse(w) &* A2 &* w);
Check the eigenvectors and eigenvalues.
> charpoly(evalm(transpose(A1)&*A1),lambda);
> singularvals(A1);
Compute the singular value decomposition of A1. (So transpose(U) A1 V will have nearly 0
entries except for those on the diagonal.)
> evalf(Svd(A1,U,V));
The orthogonal matrices computed above.
> print(U,V);
Check the SVD above.
> evalm(transpose(U) &* A1 &* V);