Homework 1
Someone’s scanner stopped working (that would be mine); my apologies for the delay in getting the solutions up!
Out of 110 points (10 points per problem).
Page 23, 3.
Page 28, 6.
Page 29, 7.
Page 61, 12.
Page 91, 10.
You were asked to run the script Zoom and plot the expanded polynomial x^6 - 6x^5 + 15x^4 - 20x^3 + 15x^2 - 6x + 1 for increasingly smaller neighborhoods around x = 1. The objective was to see how small numerical errors become relatively large under increased zoom, so it was expected that you would plot the unexpanded (x-1)^6 over the same intervals and briefly confirm in words that, while two algorithms may be mathematically equivalent, they can behave very differently in floating-point arithmetic.
[2 points for original code]
% Script file Zoom
close all
k = 0;
for delta = [.1 .01 .008 .007 .005 .003]
    x = linspace(1-delta, 1+delta, 100)';          % 100 points around x = 1
    y = x.^6 - 6*x.^5 + 15*x.^4 - 20*x.^3 + 15*x.^2 - 6*x + 1;   % expanded form
    k = k + 1;
    subplot(2,3,k)
    plot(x, y, x, zeros(1,100))                    % polynomial and the zero line
    axis([1-delta 1+delta -max(abs(y)) max(abs(y))])
end
[3 points for graphs; 1 point taken off if the two sets of graphs are not distinguished. Labeling is important!]
[2 points for altered code, or a statement that the same code was used with the expanded form replaced by the unexpanded form]
% Script file Zoom
close all
k = 0;
for delta = [.1 .01 .008 .007 .005 .003]
    x = linspace(1-delta, 1+delta, 100)';          % 100 points around x = 1
    y = (x - 1).^6;                                % unexpanded form
    k = k + 1;
    subplot(2,3,k)
    plot(x, y, x, zeros(1,100))                    % polynomial and the zero line
    axis([1-delta 1+delta -max(abs(y)) max(abs(y))])
end
[3 points for graphs]
The absolute and relative errors of this approximation were calculated using the script.
[1 point for printout of the file or parts of the file]
[6 points for printout of file, with function defined]
[3 points for table, with approximation to root at each iteration]
Again, we want to look at the “speed of convergence,” so printing out the errors to the root would also be helpful here.
[1 point for interpretation]
The root is found to be 3.1416, and a new digit of accuracy appears about every three iterations. Since bisection halves the bracketing interval at every step, gaining one decimal digit takes about log2(10) ≈ 3.3 iterations; this indicates linear convergence.
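For reference, a minimal bisection sketch along these lines (the function name bisect, the argument list, and the use of tan(x/4) - 1 on [2, 4] as the test problem are my assumptions, not necessarily the assigned setup):

% Function file bisect.m (illustrative sketch)
function [r, iter] = bisect(f, a, b, tol)
% Bisection method: assumes f(a) and f(b) have opposite signs.
fa = f(a);
for iter = 1:200
    c  = (a + b)/2;                     % midpoint of current bracket
    fc = f(c);
    fprintf('%3d  %.10f\n', iter, c)    % approximation to the root at each iteration
    if (b - a)/2 < tol || fc == 0
        break
    end
    if sign(fa) == sign(fc)
        a = c; fa = fc;                 % root lies in [c, b]
    else
        b = c;                          % root lies in [a, c]
    end
end
r = c;

% Example (assumed test problem): bisect(@(x) tan(x/4) - 1, 2, 4, 1e-6)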
[5 points for printout, with function defined]
[2 points for case 1 printout, with approximation to root at each iteration]
Case 1 is x0 = 2, x1 = 4. After 8 iterations, the result is 3.1416, the same as with the bisection method but reached much faster. (Use the relative or absolute error to see this.)
[2 points for case 2 printout, with approximation to root at each iteration]
Case 2 is x0 = 1, x1 = 2. After 8 iterations, the result is 3.1416, the same as with the bisection method but reached much faster.
[1 point for interpretation]
The important thing to notice about this method is that it converges even when the root is not in the interval spanned by the starting guesses (here the root π lies outside [1, 2]), not to mention that its convergence is faster!
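A minimal secant sketch consistent with this behavior (the function name secant, the argument list, and the stopping rule are my assumptions):

% Function file secant.m (illustrative sketch)
function [r, iter] = secant(f, x0, x1, tol, maxit)
% Secant method: needs two starting guesses but no sign change.
f0 = f(x0);
f1 = f(x1);
for iter = 1:maxit
    x2 = x1 - f1*(x1 - x0)/(f1 - f0);   % secant step
    fprintf('%3d  %.10f\n', iter, x2)   % approximation to the root at each iteration
    if abs(x2 - x1) < tol
        break
    end
    x0 = x1; f0 = f1;
    x1 = x2; f1 = f(x2);
end
r = x2;

% Case 2 above: secant(@(x) tan(x/4) - 1, 1, 2, 1e-10, 50)
% converges to 3.1416 even though the root pi is outside [1, 2].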
Page 30, 1. Create a Matlab code to perform Newton’s method.
[10 points for the code]
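A minimal sketch of such a Newton routine (the function name newton, the argument list, and the stopping rule are my assumptions):

% Function file newton.m (illustrative sketch)
function [r, iter] = newton(f, df, x0, tol, maxit)
% Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k).
x = x0;
for iter = 1:maxit
    dx = f(x)/df(x);                    % Newton correction
    x  = x - dx;
    fprintf('%3d  %.16f\n', iter, x)    % approximation to the root at each iteration
    if abs(dx) < tol
        break
    end
end
r = x;

% Problem 2 below: newton(@(x) tan(x/4) - 1, @(x) sec(x/4).^2/4, 2, 1e-15, 50)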
2. Find a zero of tan(x/4) - 1, with x0 = 2 and ε = 10^-15.
[9 points for printout of results, with approximation to root at each iteration]
Again, it is also helpful to include the relative error in the printout of the results.
[1 point for interpretation of results compared to the bisection method and the secant method]
All that needed to be stated here is that Newton’s method converges faster still than the secant method.
3. More fun with Newton’s method.
[1 point for reference to code or function, as well as appropriate labeling of cases]
[1 point for each of five cases, with approximation to root at each iteration]
Listing the number of iterations it takes to reach the root is helpful here, as you are asked to explain how the convergence behaves as a changes.
[4 points for interpretation of the problem explained above]
Notice that for a = 1 the number of correct digits almost doubles with each iteration, but this is no longer true as a gets smaller. If we look at the structure of the iteration, f(x) = x^2 - a^2 and df/dx = 2x, which means that x_(k+1) = x_k - (x_k^2 - a^2)/(2x_k). If we take the limit of this as a → 0, it is easy to see that x_(k+1) = x_k - (x_k^2 - a^2)/(2x_k) becomes x_(k+1) = x_k - x_k^2/(2x_k) = x_k - x_k/2 = x_k/2, which is a linear function: the error is merely halved at each step, so convergence at the double root x = 0 is only linear.
Your grader is a geek, so she thinks this is pretty cool.
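If you want to see the a → 0 limit numerically, here is a quick throwaway check (just the Newton step for f(x) = x^2, written out by hand):

% Newton on f(x) = x^2 (the a = 0 limit), starting from x = 1
x = 1;
for k = 1:6
    x = x - x^2/(2*x);                  % each step exactly halves x
    fprintf('%d  %.6f\n', k, x)
end
% prints 0.5, 0.25, 0.125, ... : one bit of accuracy per iteration, i.e. linear convergence.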
4. Again with the Newton Method.
[6 points for printout]
[4 points for interpretation]
So, in case you were wondering, Newton’s method is divergent here, as evidenced by the NaNs output by the computer.
Page 61, 12.
Page 91, 10.