% In this lab we will see how Matlab handles eigenvalues, orthogonal
% projections, least squares, and applications.

% First we enter a matrix and find the eigenvalues.
% (Remember, each line below is a separate command to execute.)
a = [1 2 3;4 5 6;7 8 9]
eig(a)
% If you call the same command but supply two output names, you get a
% matrix s of eigenvectors and a diagonal matrix lambda of eigenvalues.
[s,lambda] = eig(a)
% The columns of s are the eigenvectors, so s is the matrix which
% diagonalizes a. You can check this as follows.
inv(s)*a*s
% Here is a matrix with complex eigenvalues.
a = [0 1;-2 -2]
[s,lambda] = eig(a)
inv(s)*a*s
% Finally, we'll see what happens when the matrix is not diagonalizable.
a = [5 -2;2 1]
[s,lambda] = eig(a)
% The eigenvalue 3 is repeated. Note that the columns of s are
% linearly dependent, so s is not invertible.
inv(s)

% Next we will compute some orthogonal projections. Consider the plane
% in four-dimensional space spanned by the vectors X1 = (1,2,3,4) and
% X2 = (-1,0,3,5). We will project the vector b = (5,4,3,2) onto this
% subspace. First define matrices a and b.
a = [1 2 3 4;-1 0 3 5]'
b = [5 4 3 2]'
% Note the transpose symbols ' in the previous commands.
% We want the given vectors to appear as columns.
% The projection will be the vector p = a*xhat, where xhat solves the
% normal equations:
%     a'*a*xhat = a'*b
% We will solve them using Matlab's \ command.
a1 = a'*a
b1 = a'*b
xhat = a1\b1
p = a*xhat
e = b-p
% Actually, it is not necessary to go through all this to solve the
% normal equations in Matlab. When a has more rows than columns, the
% command a\b automatically solves a'*a*x = a'*b. In this way Matlab
% always comes up with an answer, even for unsolvable linear systems.
% Compare the result with the xhat computed above.
a\b
xhat
% It is easy to use Matlab to compute projection matrices using the
% formula P = A*(A'*A)^(-1)*A', which is a bit awkward by hand. The
% matrix A should contain a basis for the subspace on which you want
% to project.
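% As a quick check on this formula, note that any projection matrix
% satisfies P' = P and P*P = P (projecting twice changes nothing).
% You can verify both properties with the matrix a defined above.
P = a*inv(a'*a)*a';
P - P'
P*P - P
% Both differences should be zero, up to roundoff.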
% Let's try finding the projection matrix onto the subspace of R4 above.
a'*a
P = a*inv(a'*a)*a'
% Now you can project any vector b onto the subspace and find the
% complementary vector e.
p = P*b
e = b-p
% You can check that e is orthogonal to the vectors X1 and X2 by
% multiplying it by a'.
a'*e
% Well, it's almost zero!

% Next, we will use the least squares method to do some curve fitting.
% Suppose you want to fit a line of the form y = C + Dx to the (x,y)
% data points below:
%     (14,17) (47,102) (8,31) (100,111) (18,28) (16,25) (6,13)
% Here x = exports and y = imports for several countries, in billions
% of dollars. First we get the data into column vectors called x and y.
x = [14 47 8 100 18 16 6]'
y = [17 102 31 111 28 25 13]'
% You can plot one vector against another using *'s as follows.
plot(x,y,'*')
% Next we formulate the equations for the coefficients C and D as a
% linear system AX = B.
a = [ones(7,1) x]
b = y
% Again, Matlab automatically uses least squares to solve it.
xhat = a\b
% Recall that the vector p = a*xhat gives the values of the fitted
% line. To compare, we'll plot both on the same graph, using o's for
% the fitted line.
p = a*xhat
plot(x,y,'*',x,p,'o')
% To assess the error of the fit you can look at the error vector
% e = b - p = y - p and compute its squared length.
e = b-p
e'*e
% Perhaps the root mean squared error is easier to compare to the
% sizes of the data themselves.
rms = sqrt(e'*e/7)
% Not a very good fit.

% Here are some data pairs (x,y) where x = day of year and y = time of
% sunrise at a certain location. The times are GMT (Greenwich Mean
% Time, with no daylight savings).
x = [1, 32, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335]'
y = [8.0, 7.6, 6.75, 5.67, 4.6, 3.8, 3.8, 4.5, 5.25, 6.0, 6.8, 7.6]'
plot(x,y,'*')
% It's pretty clear that a trigonometric fit is appropriate.
% We'll try y = A + B*sin(2*pi*x/365) + C*cos(2*pi*x/365).
% Matlab will actually take the sine and cosine of a vector
% componentwise, so we can set up our equations as follows.
a = [ones(12,1) sin(2*pi*x/365) cos(2*pi*x/365)]
b = y
xhat = a\b
p = a*xhat
% Here is the rms error.
e = b-p
rms = sqrt(e'*e/12)
% Here's how it looks.
plot(x,y,'*',x,p,'o')
% This time it's a pretty good fit.

% Next we turn to the Gram-Schmidt algorithm. Given a basis for Rn, we
% construct a matrix whose columns are the basis vectors. Then the QR
% factorization produces a matrix Q whose columns are an orthonormal
% basis obtained by the Gram-Schmidt algorithm.
a = [1 2 3;4 5 6;7 8 10]
[q,r] = qr(a)
% The columns of q form an orthonormal basis for R3. The product q*r
% should give back the original matrix.
q*r
% Suppose we want to find an orthonormal basis for the plane
% x + y + z = 0 in R3. We start with the known basis
% [-1 1 0]' and [-1 0 1]'.
a = [-1 1 0;-1 0 1]'
[q,r] = qr(a)
% The qr factorization gives an orthonormal basis for all of R3. The
% first two columns of q are the result of applying Gram-Schmidt to
% the two columns of a, so these first two columns of q are an
% orthonormal basis for our plane.
% Suppose we hadn't known a basis to begin with. Recall that you can
% use the command null to get a basis for the null space of a matrix.
% In this case, the plane is the nullspace of [1 1 1].
a = null([1 1 1])
[q,r] = qr(a)
% This shows that the output of null([1 1 1]) was already an
% orthonormal basis (but a different one than before).

% Finally, we will do a little discrete Fourier analysis. To get a
% large data set without typing, we'll create one by putting together
% trig functions with various amplitudes and frequencies. Don't forget
% the semicolons after these commands to suppress the large outputs.
% The vector "times" contains 1001 time values between 0 and 10.
% Then the "data" vector is an oscillating signal with 3 frequencies:
% 1, 2, and 7.
times = (0:1000)/100;
data = 2*sin(2*pi*times) - 3*sin(2*2*pi*times) + 4*cos(7*2*pi*times);
plot(times, data)
% The built-in fft() command computes the "fast" Fourier transform.
% To get the actual Fourier coefficients we need to divide by the
% number of data points.
four = fft(data)/1001;
% Let's look at the first few entries of the transform.
four(1:20)
% They are all complex numbers. To get an idea which are the important
% coefficients, we'll compute the amplitudes of the corresponding
% oscillations. This can be done by taking the absolute values of the
% complex Fourier coefficients.
amps = abs(four);
plot(amps)
% As you can see, there are prominent peaks in the amplitudes, meaning
% that certain frequencies are stronger than others in the data. To
% see which they are, we need to plot against frequency rather than
% just the index in the list on the horizontal axis. This can be done
% as follows.
freqs = (0:1000)/10;
plot(freqs,amps)
% To see where the peaks are, we will just plot the first part.
plot(freqs(1:100),amps(1:100))
% So the peaks are at frequencies 1, 2, and 7, as expected from the
% construction of the data.

% Here are some problems to try for yourself.
%
% 1. Start from a symmetric matrix such as
%    a = [2 -1 0;-1 2 -1;0 -1 2] and use the eig command to find a
%    matrix s whose columns are eigenvectors. You will find that the
%    eigenvectors are orthogonal. Check this by computing s'*s. Try
%    the same with a nonsymmetric matrix.
%
% 2. Here is some US population data. After entering it, try plotting
%    it and then fit it with a line y = C + Dx and with a parabola
%    y = C + Dx + Ex^2. For the latter problem you will need a matrix
%    a whose 3 columns consist of a vector of ones, the vector x, and
%    the vector x.^2, where the "dot square" command .^2 takes the
%    squares of the individual entries of a matrix.
%    x = [1820,1840,1860,1880,1900,1920,1940,1960,1980]
%    y = [9.6,17.1,31.4,50.2,76.2,106.0,132.2,179.3,226.5]
%
% 3. Can you figure out how to fit the population data above with an
%    exponential curve y = C*exp(D*x)? Hint: Try taking logs.
%
% 4. Add some normally distributed noise to the data signal above by
%    setting data2 = data + randn(1,1001). (Note that data is a row
%    vector, so the noise must be a row vector too.) Try finding the
%    amplitudes of the Fourier coefficients again. Are the peaks still
%    visible?
%
% 5. Find the projection matrix P for projection onto the line
%    x + 2y = 0 in R2. Then find the reflection matrix through the
%    line, R = 2*P - I (recall I = eye(2)). Finally, make a picture
%    showing the points below and their reflections:
%        (3,1) (5,1) (7,1) (9,3) (9,5) (9,7) (9,9)
%        (1,3) (1,5) (1,7) (1,9) (3,9) (5,9) (7,9)
%        (3,7) (7,7) (3,4) (4,3) (5,3) (6,3) (7,4)
%    If you define the right matrix you can compute the reflections of
%    all these points at the same time and then extract the vectors
%    containing their x and y coordinates for plotting.
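% One remark that may help with problem 5: any reflection matrix of
% the form R = 2*P - eye(2) satisfies R*R = eye(2), since reflecting a
% point twice returns it to where it started. Here is a small check
% using projection onto the x-axis (this is not the line from problem
% 5, just an illustration):
P = [1 0;0 0];
R = 2*P - eye(2)
R*R
% R*R should come out to the identity matrix eye(2).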