Basiclinalg.mws

Some Basic Linear Algebra

Version 0.8 for Maple V Release 7.

Load the linear algebra package. Below is the list of routines. You can get
help on any of them by typing ?command_name and then hitting the <ENTER> key.

> with(linalg);

Warning, the protected names norm and trace have been redefined and unprotected

[BlockDiagonal, GramSchmidt, JordanBlock, LUdecomp,...

Entering Matrices and Vectors

Enter a 2 by 3 matrix:

> A1 := matrix(2,3,[1,2,3,4,5,6]);

A1 := matrix([[1, 2, 3], [4, 5, 6]])

Enter two vectors. These are treated as column vectors even though they are displayed horizontally.

> v1 := vector([5,3,1]); v2 := vector([3,4]);

v1 := vector([5, 3, 1])

v2 := vector([3, 4])
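
Individual entries of a vector or matrix can be accessed by indexing (a small added illustration, not part of the original worksheet):

> v1[2]; A1[2,3];

3

6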

Matrix Multiplication, Evalm, Transpose, Determinant, Inverses

Enclose matrix computations within the function call evalm.
&* is the matrix multiplication operator.

> evalm(A1 &* v1 + v2);

vector([17, 45])

> A2 := evalm(A1 &* transpose(A1) );

A2 := matrix([[14, 32], [32, 77]])

Ordinary scalar multiplication is indicated with a *.

> evalm(2*A2);

matrix([[28, 64], [64, 154]])

> det(A2);

54

Calculate the inverse of a matrix.
evalm(A2^(-1)) would also work here.

> A2_inv := inverse(A2);

A2_inv := matrix([[77/54, -16/27], [-16/27, 7/27]])...
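
As a quick added check (not in the original worksheet), multiplying A2 by its inverse should give the identity matrix:

> evalm(A2_inv &* A2);

matrix([[1, 0], [0, 1]])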

The Map Function - Applying a function to every entry of a matrix.

Convert to floating point.

> map(evalf, A2_inv);

matrix([[1.425925926, -.5925925926], [-.5925925926,...
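
map works with any one-argument function, including one defined on the fly. For instance, squaring every entry of A2 (an added illustration):

> map(x -> x^2, A2);

matrix([[196, 1024], [1024, 5929]])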

See the worksheet "Entering Matrices" for more information on constructing matrices.

Bases, Rank and Subspaces Associated to a Matrix


Compute a basis for the row space of A1 (the range of transpose(A1)).

> colspace(transpose(A1));

{vector([1, 0, -1]), vector([0, 1, 2])}

Basis for the kernel (null space) of a 3 by 3 matrix.

> nullspace(matrix(3,3,[[1,2,3],[2,4,6],[3,6,9]]));

{vector([-2, 1, 0]), vector([-3, 0, 1])}

> rank(A2);

2
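
As an added cross-check of the rank-nullity theorem, the 3 by 3 matrix used in the kernel example above has rank 1, which together with its 2-dimensional null space accounts for all 3 columns:

> rank(matrix(3,3,[[1,2,3],[2,4,6],[3,6,9]]));

1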

Find a basis for the subspace spanned by a set of vectors.

> basis({vector([1,0,1]),vector([1,0,5]),vector([3,0,1])});

{vector([3, 0, 1]), vector([1, 0, 1])}

Reduced row echelon form.

> rref(A1);

matrix([[1, 0, -1], [0, 1, 2]])

Solution of a Linear System

> linsolve(matrix(2,2,[a,b,c,d]),vector([e,f]));

vector([(b*f-e*d)/(-a*d+c*b), (-a*f+c*e)/(-a*d+c*b)...
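
Here is a concrete instance, added for illustration, using the matrix A2 from above. The unique solution of A2 x = (1, 2) should be:

> linsolve(A2, vector([1,2]));

vector([13/54, -2/27])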

Dot Product and Cross Product

Dot product.

> dotprod(v1, vector([1,4,8]));

25

Cross product in R^3.

> crossprod(v1,vector([1,0,0]));

vector([0, 1, -3])
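
The cross product is orthogonal to both of its factors, which can be verified with dotprod (an added check):

> dotprod(crossprod(v1,vector([1,0,0])), v1);

0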

Rows, Columns, and Submatrices

Copy all the entries of A1 into the matrix B. The upper left corner of A1
goes into the (2,1) entry of B.

> B := matrix(3,3); copyinto(A1,B,2,1);

B := array(1 .. 3,1 .. 3,[])

matrix([[`?`[1,1], `?`[1,2], `?`[1,3]], [1, 2, 3], ...

Row 1 of A1.

> row(A1,1);

vector([1, 2, 3])

Column 1 of A1.

> col(A1,1);

vector([1, 4])

Construct a submatrix of A1 using rows 1..2 and column 1.

> submatrix(A1,1..2,1..1);

matrix([[1], [4]])
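
The package also provides delrows and delcols for removing rows or columns. For example, deleting column 2 of A1 (an added illustration) should leave:

> delcols(A1, 2..2);

matrix([[1, 3], [4, 6]])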

Norms

The norm function uses absolute values, which is somewhat inconvenient for algebraic
simplification.

> norm(vector([x,y]),2);

sqrt(abs(x)^2+abs(y)^2)

So it may be better to define your own vector norm function:

> vnorm := u -> sqrt(dotprod(u,u,'orthogonal'));

vnorm := proc (u) options operator, arrow; sqrt(dot...

> vnorm(vector([x,y]));

sqrt(x^2+y^2)

The default norm is the infinity norm.

> norm(vector([x,y]));

max(abs(y),abs(x))
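
Using the vnorm function defined above, you can normalize a vector to unit length (an added example):

> evalm(vector([3,4]) / vnorm(vector([3,4])));

vector([3/5, 4/5])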

Using Maple's Solve Command to Find the Set of Matrices Which Commute with a Given One

Matrix equations can be converted into sets of equations and then solved.
For example, consider the problem of finding all matrices B which commute with the matrix A2 above (i.e. A2 B - B A2 = 0):

> B := matrix(2,2,[a,b,c,d]);

B := matrix([[a, b], [c, d]])

> mat_with_unknowns := evalm( A2 &* B - B &* A2);

mat_with_unknowns := matrix([[32*c-32*b, -63*b+32*d...

Now we solve for the unknown entries a, b, c, and d.
(Maple's solve command ordinarily expects a set of equations as its first argument. If it is
given a set of algebraic expressions instead, it interprets each expression as an equation
set equal to 0.)

> soln1 := solve(convert(mat_with_unknowns,set),{a,b,c,d});

soln1 := {d = d, a = a, b = 32/63*d-32/63*a, c = 32...

This globally assigns a, b, c, and d the values given in soln1.

> assign(soln1);

Even though print(B) does not show the substituted expressions, the entries of B
(e.g. the upper left corner a) really do have the values specified in the solution.

> B[1,1]; print(B);

a

matrix([[a, b], [c, d]])

Fully evaluate all the entries of B.

> print(map(eval,B));

matrix([[a, 32/63*d-32/63*a], [32/63*d-32/63*a, d]]...
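
As an added verification (B_soln is a name introduced here, not in the original worksheet), substituting the solved entries back in should make A2 B - B A2 the zero matrix:

> B_soln := map(eval,B): evalm(A2 &* B_soln - B_soln &* A2);

matrix([[0, 0], [0, 0]])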

Gram Schmidt, Matrix Exponential

Convert a list of vectors into a list of orthogonal vectors. For each k, the first k
vectors of both lists span the same subspace.

> GramSchmidt([vector([1,1,1]),vector([1,2,3]),vector([1,4,5])]);

[[1, 1, 1], [-1, 0, 1], [-1/3, 2/3, -1/3]]
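
GramSchmidt does not normalize the vectors. To get an orthonormal list you can divide each vector by its length, for example with the vnorm function defined earlier (an added sketch); each resulting vector then has length 1.

> map(v -> evalm(v/vnorm(v)), GramSchmidt([vector([1,1,1]),vector([1,2,3]),vector([1,4,5])]));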

The matrix exponential.

> exponential(matrix(2,2,[0,theta,-theta,0]));

matrix([[cos(theta), sin(theta)], [-sin(theta), cos...
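
The result is a rotation matrix. As an added check (R is a name introduced here), multiplying it by its transpose and simplifying should give the identity, using cos^2 + sin^2 = 1:

> R := exponential(matrix(2,2,[0,theta,-theta,0])): map(simplify, evalm(R &* transpose(R)));

matrix([[1, 0], [0, 1]])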

Calculus of R^n and Vector Valued Functions

The derivative (Jacobian matrix) of a vector valued function of several variables.

> deriv := jacobian([exp(x)*cos(y),exp(x)*sin(y)],[x,y]);

deriv := matrix([[exp(x)*cos(y), -exp(x)*sin(y)], [...

A function to evaluate anything at (x,y) = (1,2).

> eval_at_1_2 := a -> subs(x=1,y=2,a);

eval_at_1_2 := proc (a) options operator, arrow; su...

Substitute (x,y) = (1,2) into each entry of deriv.

> map(eval_at_1_2,deriv);

matrix([[exp(1)*cos(2), -exp(1)*sin(2)], [exp(1)*si...

Define r, the distance from the origin.

> r := (x^2+y^2+z^2)^(1/2);

r := sqrt(x^2+y^2+z^2)

Curl of a vector field.

> curl([x/r^3,y/r^3,z/r^3],[x,y,z]);

vector([0, 0, 0])
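
The same inverse-square field is also divergence-free away from the origin; the package's diverge command can check this (an added example, which may need simplify):

> simplify(diverge([x/r^3,y/r^3,z/r^3],[x,y,z]));

0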

Eigenvalues, Eigenvectors, and the SVD

An expression sequence of symbolic eigenvalues.

> evals_of_A2 := eigenvals(A2);

evals_of_A2 := 91/2+1/2*sqrt(8065), 91/2-1/2*sqrt(8...

Extract the first eigenvalue from the sequence.

> evals_of_A2[1];

91/2+1/2*sqrt(8065)

Convert the first eigenvalue to a floating point number.

> evalf(evals_of_A2[1]);

90.40267252

Convert evals_of_A2 to a list, and change each entry to floating point.

> map(evalf, [evals_of_A2]);

[90.40267252, .59732748]

Just doing map(evalf, evals_of_A2); would result in an error, because evals_of_A2 is
an expression sequence rather than a list.

Directly calculate numerical eigenvalues.

> evalf(Eigenvals(A2));

vector([.5973274715, 90.40267253])

Get the eigenvectors as well as the eigenvalues. Assign the eigenvectors of A2 to the
columns of w.
(NOTE: Use a fresh variable instead of w if you run into difficulties here.)

> evalf(Eigenvals(A2,w));

vector([.59732748, 90.40267254])

> print(w);

matrix([[.9223657801, .3863177030], [-.3863177030, ...

> evalm(inverse(w) &* A2 &* w);

matrix([[.5973274696, .65e-8], [0., 90.40267253]])

The computation above checks the eigenvectors and eigenvalues: conjugating A2 by w gives a (nearly) diagonal matrix with the eigenvalues on the diagonal.

The characteristic polynomial of transpose(A1) A1.

> charpoly(evalm(transpose(A1)&*A1),lambda);

lambda^3-91*lambda^2+54*lambda
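
The nonzero roots of this polynomial are exactly the eigenvalues of A2 found above, since transpose(A1) A1 and A1 transpose(A1) = A2 share their nonzero eigenvalues (an added cross-check):

> solve(lambda^2 - 91*lambda + 54 = 0, lambda);

91/2+1/2*sqrt(8065), 91/2-1/2*sqrt(8065)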

> singularvals(A1);

[1/2*sqrt(182+2*sqrt(8065)), 1/2*sqrt(182-2*sqrt(80...
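
Converting these to floating point (an added step) should reproduce, to within roundoff, the numerical singular values computed by Svd below (about 9.508032 and .7728696):

> map(evalf, singularvals(A1));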

Compute the singular value decomposition of A1. (So transpose(U) A1 V will have nearly 0
entries except for those on the diagonal.)

> evalf(Svd(A1,U,V));

vector([9.508032001, .7728696357])

The orthogonal matrices computed above.

> print(U,V);

matrix([[-.3863177031, -.9223657801], [-.9223657801...

Check the SVD above.

> evalm(transpose(U) &* A1 &* V);

matrix([[9.508032000, .1e-8, 0.], [-.5e-9, .7728696...
