# Kuliah Metode Numerik

Posted by sutanto, Sep 3

**ASSIGNMENT 1:**

Please submit your Numerical Methods assignment as follows:

1. Type the assignment in Word

2. Copy your text

3. Open http://sutanto.staff.uns.ac.id

4. Click "comment"

5. Paste your text from Word

If you would like the complete module, click here: MODUL METODE NUMERIK

Starting today, assignments for the Numerical Methods module can be submitted through this comment box. Thanks.

Please pass this on to the students of group A.

Worked Example 3.4.1 Use the second order Taylor method with steplength h = 0.5 to find an approximation to y(2), where y(x) is the solution of the IVP y' = y/(y - x), y(1) = 4.

Differentiating the ODE gives

y'' = (y - x y')/(y - x)^2 = y(y - 2x)/(y - x)^3.

Then the second order Taylor method is

y_{n+1} = y_n + h y'_n + (h^2/2) y''_n,

where

y'_n = y_n/(y_n - x_n) and y''_n = y_n(y_n - 2x_n)/(y_n - x_n)^3.

Now h = 0.5, x0 = 1 and y0 = 4, so

y1 = 4 + 0.5(1.333333) + 0.125(0.296296) = 4.703704,

y2 = 4.703704 + 0.5(1.468208) + 0.125(0.243712) = 5.468272.

Our estimate of y(2) is y2 = 5.468272.

Results for the solution of the IVP using the second order Taylor method follow. Comparing these with the earlier results obtained using Euler's method with the same steplength shows that they are much more accurate.

For Euler's method, subtracting equation (3.3.2) from (3.3.1) gives

e_{n+1} = e_n + h[f(x_n, y(x_n)) - f(x_n, y_n)] + (h^2/2) y''(c_n)

= e_n + h (∂f/∂y)(x_n, ξ_n)(y(x_n) - y_n) + (h^2/2) y''(c_n)

= [1 + h (∂f/∂y)(x_n, ξ_n)] e_n + (h^2/2) y''(c_n), (3.3.3)

where the first term is the propagated error and the second the local error.

Consider the IVP y' = 2x, y(0) = 0. The solution is y = x^2, so y'' = 2. Also, since f(x, y) = 2x, ∂f/∂y = 0. Substituting these values into the above equation gives

e_{n+1} = e_n + h^2.

Assuming that e0 = 0,

e1 = e0 + h^2 = h^2, e2 = e1 + h^2 = 2h^2, e3 = e2 + h^2 = 3h^2, ..., e_n = n h^2.

But x_n = nh, so e_n = x_n h, and if we look at a fixed value of x_n for different values of h (n will be different for each h), then the error varies proportionally with h.

It can be shown that for any IVP, if e0 = 0 then |e_n| → 0 as h → 0, or equivalently y_n → y(x_n). Thus Euler's method is convergent. Also, since e_n = O(h), we say Euler's method is a first order method.

The local truncation error of Euler's method is O(h^2), but the accumulation of these O(h^2) errors results in a global error of O(h). For any method, the order of the global truncation error (and hence of the method) is one less than the order of the local truncation error.
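The O(h) behaviour is easy to check numerically. The following minimal Python sketch applies Euler's method to the IVP y' = 2x, y(0) = 0 discussed above; for this problem the global error at a fixed x_n is exactly x_n·h:

```python
def euler(f, x0, y0, h, n):
    """Euler's method: y_{k+1} = y_k + h*f(x_k, y_k)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# y' = 2x, y(0) = 0 has exact solution y = x^2, and the text shows
# that the global error is e_n = x_n * h for this problem.
for h in (0.1, 0.05, 0.025):
    n = round(1.0 / h)                       # integrate to x = 1
    err = 1.0 - euler(lambda x, y: 2 * x, 0.0, 0.0, h, n)
    print(h, err)                            # error ~ h: halving h halves it
```

Halving h halves the error, confirming that Euler's method is first order.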


At the end of the elimination phase, the equations are

For the elimination phase and for the back-substitution for each RHS.

The equations are now solved by back-substitution. The last equation gives

The ith equation is

Or

Solving for gives, for

In the pseudo-code, the back-substitution is

x[n] = b[n]/a[n,n]

For i = n-1 downto 1 do

sum = b[i]

For j = i+1 to n do

sum = sum - a[i,j]*x[j]

End For

x[i] = sum/a[i,i]

End For
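The pseudo-code translates directly into a runnable routine. The sketch below is plain Python with 0-based indexing; the 3 × 3 upper-triangular test system is an illustrative example, not one from the notes:

```python
def back_substitute(a, b):
    """Back-substitution for an upper-triangular system a x = b
    (a direct transcription of the pseudo-code above, 0-based)."""
    n = len(b)
    x = [0.0] * n
    x[n - 1] = b[n - 1] / a[n - 1][n - 1]
    for i in range(n - 2, -1, -1):
        s = b[i]
        for j in range(i + 1, n):
            s -= a[i][j] * x[j]
        x[i] = s / a[i][i]
    return x

# Illustrative system: 2x1 + x2 + x3 = 7, 3x2 + 2x3 = 12, 4x3 = 12,
# whose solution is (1, 2, 3).
U = [[2.0, 1.0, 1.0], [0.0, 3.0, 2.0], [0.0, 0.0, 4.0]]
rhs = [7.0, 12.0, 12.0]
print(back_substitute(U, rhs))   # -> [1.0, 2.0, 3.0]
```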

It can be shown that the total number of multiplications and divisions is

Carrying out the back-substitution gives

The round-off errors introduced by the four digit arithmetic have resulted in a small (0.1%) error in x1 but a massive (200%) error in x2. (Note the small pivot.)

Suppose we interchange the equations before solving them :

E2 – 0.0003683 E1 → E2

Back-substitution gives x2 = 1.000 and x1 = 10.00. Thus the simple expedient of interchanging equations has resulted in an approximate solution that is correct to the 4 digits used in the calculation.

This example illustrates a pivoting strategy known as partial pivoting, where the equations are interchanged to make the pivot as large as possible. In the kth step the smallest p is determined such that

|a_pk| = max { |a_ik| : k ≤ i ≤ n },

and equations p and k are swapped. This strategy ensures that all the multipliers have a magnitude of less than 1.
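As a sketch of the whole procedure (elimination with the row interchanges just described, followed by back-substitution) in plain Python; the test system is that of Worked Example 4.4.1:

```python
def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting, then back-substitution.
    A sketch of the strategy described above (0-based indexing)."""
    a = [row[:] for row in A]
    b = b[:]
    n = len(b)
    for k in range(n - 1):
        # partial pivoting: pick the first row p >= k with the largest |a[p][k]|
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if p != k:
            a[k], a[p] = a[p], a[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]          # multiplier, |m| <= 1 by pivoting
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # back-substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / a[i][i]
    return x

# The system of Worked Example 4.4.1; its solution is (9.75, 0.25, -0.5).
A = [[1.0, -1.0, 3.0], [2.0, 0.0, 7.0], [-1.0, -3.0, 3.0]]
b = [8.0, 16.0, -12.0]
print(gauss_solve(A, b))
```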

Worked Example 4.4.1 Solve the following system of linear equations using Gaussian elimination with partial pivoting:

x1 - x2 + 3x3 = 8

2x1 + 7x3 = 16

-x1 - 3x2 + 3x3 = -12

Since |a21| = max {|a11|, |a21|, |a31|}, E1 and E2 are interchanged. The equations are then

Performing the indicated operations gives the equivalent system

E2 ↔ E3

Equations 2 and 3 must be interchanged since |a32| = max {|a22|, |a32|}.

E3 + E2 → E3

Self Assessment Exercises 4.4

1. Use Gaussian elimination with partial pivoting to solve the equations

2x1 + 3x2 - x3 = 5

4x1 + 4x2 - 3x3 = 3

2x1 - 3x2 + x3 = 1

2. Solve the following system of equations using Gaussian elimination with partial pivoting :

2x1 + x2 - 3x3 = 5

-4x1 + 3x2 - 8x3 = 2

2x1 - 3x2 + 10x3 = -10

4.5 Scaled partial pivoting

Partial pivoting is not the complete answer. For example, the equations considered in the previous section, but with the first equation multiplied by 10^5, are

Solving these equations by Gaussian elimination with partial pivoting using four digit floating point arithmetic gives the same inaccurate solution as obtained for the original equations with no pivoting, because the equations are not interchanged (|a11| > |a21|). Although the pivot is not small, it is small compared with the other coefficients in the pivot equation.

The problem is overcome by calculating the scale factors

s_i = max { |a_ij| : 1 ≤ j ≤ n }, i = 1, 2, ..., n,

and at the kth step of elimination (k = 1, 2, ..., n-1) finding the least value of p such that

|a_pk| / s_p = max { |a_ik| / s_i : k ≤ i ≤ n },

then interchanging equations p and k (the scale factors s_p and s_k are also interchanged). This strategy is often called scaled partial pivoting.

The approximate value of y(2) is y2 = 5.464395.

The results of using the Improved Euler method with h = 0.5 to solve the IVP y' = y/(y - x), y(1) = 4, are given below. For this problem the errors are slightly smaller than those for the second order Taylor method.

IMPROVED EULER METHOD

Enter Xo, Yo 1,4

Enter step-length h .5

Enter number of steps 4

Enter output frequency 1

X Y EXACT ERROR

1.000 4.000000 4.000000 0.000000

1.500 4.701755 4.701562 -0.000193

2.000 5.464396 5.464102 -0.000294

2.500 6.275253 6.274917 -0.000336

3.000 7.123449 7.123106 -0.000343

The second order methods described require two evaluations of the function f(x, y). Third and fourth order methods requiring 3 and 4 function evaluations respectively can be found, but at least 6 function evaluations are needed for a method of order 5. This is probably the reason why fourth order methods are so popular. The following fourth order method, often called the Classical Runge-Kutta method, is perhaps the most widely used:

k1 = f(x_n, y_n),

k2 = f(x_n + h/2, y_n + (h/2) k1),

k3 = f(x_n + h/2, y_n + (h/2) k2),

k4 = f(x_n + h, y_n + h k3),

y_{n+1} = y_n + (h/6)(k1 + 2 k2 + 2 k3 + k4).
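A minimal Python sketch of the classical Runge-Kutta method follows. The test problem is the running IVP y' = y/(y - x), y(1) = 4; it can be checked that its exact solution is y = x + √(x^2 + 8), which is used here to measure the error:

```python
from math import sqrt

def rk4(f, x0, y0, h, n):
    """Classical fourth order Runge-Kutta method."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

f = lambda x, y: y / (y - x)
approx = rk4(f, 1.0, 4.0, 0.5, 4)        # four steps of h = 0.5: x = 1 to 3
exact = 3.0 + sqrt(3.0**2 + 8.0)         # exact y(3)
print(approx, abs(exact - approx))       # error far below the order 2 methods
```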

1.4 Floating point form

Any base b number x can be put into the form

x = ±(.d1 d2 d3 ...)_b × b^E, with d1 ≠ 0.

The number of digits that can be stored in a computer is limited. Suppose only p digits can be used for the mantissa. If the number of digits is larger than p, the mantissa must be chopped (truncated) or rounded back to p digits. If the mantissa has fewer than p digits it is filled with 0's. The floating point representation of x is denoted by fl(x), and fl(x) = ±(.d1 d2 ... dp)_b × b^E.

The exponent E is stored as a base b integer and is restricted to some interval M1 ≤ E ≤ M2. The smallest positive number, other than 0, is

S = (.100...0)_b × b^M1 = b^(M1-1),

and the largest is

L = (.(b-1)(b-1)...(b-1))_b × b^M2 ≈ b^M2.

There is a finite number of possible floating point or machine numbers, the number depending on b, p, M1 and M2. The positive machine numbers for b = 2, p = 3, M1 = -1 and M2 = 1 are shown on the real number line in Figure 1.1.

FIGURE 1.1. The positive binary numbers with three digit mantissas and exponents -1, 0 or 1.

If rounding is used, fl(x) is the closest machine number to x, but if chopping is used fl(x) is the next machine number in the direction of 0 (see Figure 1.2). In both cases the error of conversion is called the round-off error. Round-off errors are also incurred whenever floating point numbers are used in a calculation.

FIGURE 1.2. Conversion of a number x to the floating point number fl(x) by rounding or chopping.

If the result of any operation has a magnitude less than S, underflow is said to have occurred (the number is usually set to 0), but if it has a magnitude larger than L, overflow occurs and the program generally "bombs" when an attempt is made to use the result.

The values of b, p, M1, M2, the type of conversion (rounding or chopping) and the handling of underflow and overflow are all compiler dependent. For example, the IEEE standard uses b = 2, with p = 24 for single precision and p = 53 for double precision.


The first method (computing x_i = x_{i-1} + h) has the advantage of requiring fewer arithmetic operations as, for each i, only one + is needed rather than the one + and one × of the second method (x_i = x_0 + i h). However, the second method gives a more accurate result, because only two round-off errors are incurred in the calculation of each x_i, while i - 1 round-off errors are accumulated in the first method.

The calculation of z = √(x^2 + y^2) may cause overflow for large x and y even though z is smaller than the largest possible number. This overflow can be avoided by writing z = |x| √(1 + (y/x)^2) for |x| ≥ |y|.

1.9 Error propagation

If a calculation is performed exactly using numbers that contain errors, the answer will contain a propagated error. Suppose an estimate of f(T) is calculated from an approximation A of T. The error of f(A) is

Error(f(A)) = f(T) - f(A)

= f'(c)(T - A), c between T and A (Mean Value Theorem)

≈ f'(A)(T - A)

= f'(A) Error(A).

The equivalent result for a function of 2 or more variables is

Error(f(xA, yA)) ≈ (∂f/∂x) Error(xA) + (∂f/∂y) Error(yA).

Generally only bounds for the errors in xA, yA, ... are known, so this result is usually used in the form

|Error(f(xA, yA))| ≤ |∂f/∂x| |Error(xA)| + |∂f/∂y| |Error(yA)|,

with the partial derivatives evaluated at (xA, yA).

In fact, the second order Taylor method with h = 0.5 has produced more accurate results than Euler's method with h = 0.05, and at considerably lower computational cost.

TAYLOR METHOD OF ORDER 2

Enter X0, Y0 1,4

Enter steplength 0.5

Enter number of steps 4

Enter output frequency 1

X Y EXACT ERROR

1.000 4.000000 4.000000 0.0000000

1.500 4.703704 4.701562 -.0021415

2.000 5.468272 5.464102 -.0041699

2.500 6.280657 6.274917 -.0057392

3.000 7.129893 7.123106 -.0067868

Self Assessment Exercises 3.4

1. Using the second order Taylor method with h = 0.1, calculate the approximate value of the solution of

y' = x^2 + y^2, y(3) = -1,

at x = 3.2.

2. Find an estimate of the solution of the IVP

at x = 1 using three steps of the second order Taylor method.

3.5 Runge-Kutta methods

The Runge-Kutta methods achieve high order, without the need of calculating (and evaluating) higher-order derivatives, by evaluating f(x, y) at more points.

Runge-Kutta methods are all of the form

y_{n+1} = y_n + h φ(x_n, y_n; h),

where φ(x, y; h) is an estimate of the "average" slope of the solution on the interval [x_n, x_{n+1}]. For methods of order 2,

φ(x, y; h) = w1 f(x, y) + w2 f(x + γ2 h, y + γ2 h f(x, y)).

The constants w1, w2 and γ2 are chosen so that the local truncation error

T_{n+1} = y(x_{n+1}) - y(x_n) - h φ(x_n, y(x_n); h)

is O(h^3). Equating the coefficients of h^0, h^1 and h^2 to zero leads to the equations

w1 + w2 = 1, w2 γ2 = 1/2.

For the Modified Euler method (w2 = 1, γ2 = 1/2), y_{n+1} is obtained by moving along a line whose slope is approximately the slope of the solution at the middle of the interval.

Worked Example 3.5.1 Find an approximation to the solution of the IVP

y' = y/(y - x), y(1) = 4,

at x = 2 using the Improved Euler method with steplength h = 0.5.

The Improved Euler method is

y_{n+1} = y_n + (h/2)(k1 + k2), k1 = f(x_n, y_n), k2 = f(x_n + h, y_n + h k1),

and since x0 = 1, y0 = 4, h = 0.5 and f(x, y) = y/(y - x), we get, for n = 0,

k1 = f(1, 4) = 1.333333,

x1 = x0 + h = 1 + 0.5 = 1.5,

k2 = f(1.5, 4.666667) = 1.473684,

y1 = 4 + 0.25(1.333333 + 1.473684) = 4.701754,

and for n = 1,

k1 = f(1.5, 4.701754) = 1.468493,

k2 = f(2, 5.436001) = 1.582072,

y2 = 4.701754 + 0.25(1.468493 + 1.582072) = 5.464395.
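The two steps of the worked example can be reproduced with a short program; a minimal Python sketch:

```python
def improved_euler(f, x0, y0, h, n):
    """Improved Euler method:
    y_{n+1} = y_n + (h/2)*[f(x_n, y_n) + f(x_n + h, y_n + h*f(x_n, y_n))]."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        y += 0.5 * h * (k1 + k2)
        x += h
    return y

f = lambda x, y: y / (y - x)
print(improved_euler(f, 1.0, 4.0, 0.5, 1))   # -> 4.701754..., the y1 above
print(improved_euler(f, 1.0, 4.0, 0.5, 2))   # -> 5.464395..., the y(2) estimate
```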



Worked Example 1.9.1 Find a bound for the error and relative error if the function e^x sin y is evaluated for x = 1.34 and y = 1.8, assuming all the digits of x and y are significant.

Denote f(x, y) = e^x sin y, xA = 1.34 and yA = 1.8. Since all digits are significant, |Error(xA)| ≤ 0.005 and |Error(yA)| ≤ 0.05. Now ∂f/∂x = e^x sin y and ∂f/∂y = e^x cos y.

Self Assessment Exercises 1.9

1. Find a bound for the error and relative error in calculating x sin x using the correctly rounded value xA = 0.986.

2. Find an approximate interval in which the true value of tan-1 lies when it is evaluated using the correctly rounded values xA = 3.3 and yA = 0.234.

Chapter 2

Solution of Nonlinear Equations

Introduction

This chapter is concerned with the solution of a nonlinear equation f(x) = 0. While some nonlinear equations can be solved exactly, most cannot.

If r is a real number such that f(r) = 0 then we say r is a root or solution of the equation f(x) = 0. We also call r a zero of the function f. If f(r) = 0 and f'(r) ≠ 0 then r is a simple root (see Figure 2.1).

Figure 2.1 A simple root r

However, if f(x) = (x - r)^m g(x) where g(r) ≠ 0 and m > 1 is an integer, we say that r is a root of multiplicity or order m. In this case f^(k)(r) = 0 for k = 0, 1, ..., m - 1 and f^(m)(r) ≠ 0.

Figure 2.2 Roots r of multiplicity m > 1 (m odd and m even).

The methods for solving nonlinear equations are iterative methods where a sequence of approximations to the solution is calculated. The iteration is terminated once a sufficiently accurate estimate is found.

An equation f(x) = 0 may have no roots, a finite number of roots, or infinitely many roots. The number of roots and their approximate value can often be found using a sketch.

pak..ini kalo ngirim tugas,dg email yg sama.ko tugas yang udah di upload pertama langsung hilang kalo ga ganti nama,gmn pak?

Worked Example 2.1.1 Determine the approximate values of the roots of x^2 + e^x = 4.

The equation can be written as e^x = 4 - x^2, so we sketch the functions y = e^x and y = 4 - x^2 on the same axes and estimate the x coordinates of the points of intersection.

Self Assessment Exercises 2.1

1. Using a sketch, determine the number and approximate value of the roots of cos x - x + 1 = 0.

2. Determine the approximate value of all of the roots of e^-x + cos x = 0.

2.2 Bisection method

If f(x) is continuous on [a, b] and f(a) and f(b) have opposite sign (f(a)f(b) < 0), then the equation f(x) = 0 has a root r in (a, b).

Suppose that we know that the root lies in the interval (a, b). We calculate c = (a + b)/2 (the midpoint of the interval), and if f(b)f(c) < 0 then r ∈ (c, b) so a is replaced by c; otherwise r ∈ (a, c) so b is replaced by c. We now have an interval half as long that still contains the root. Denote this interval by (a1, b1); the halving process is then repeated, and so on.
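A minimal Python sketch of this loop, using the f(b)f(c) test described in the text; the example equation, cos x - x + 1 = 0, is that of Self Assessment Exercise 2.1 and has its root near 1.3:

```python
from math import cos

def bisect(f, a, b, tol=1e-6):
    """Bisection: repeatedly halve a bracketing interval [a, b]
    with f(a)*f(b) < 0 until it is shorter than tol."""
    while b - a > tol:
        c = 0.5 * (a + b)
        if f(b) * f(c) < 0:   # sign change in (c, b): replace a by c
            a = c
        else:                 # root in (a, c): replace b by c
            b = c
    return 0.5 * (a + b)

r = bisect(lambda x: cos(x) - x + 1, 1.0, 2.0)
print(r)   # -> 1.28..., the root of Exercise 2.1 question 1
```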

sum = a(0)

x_power_i = 1

For i = 1 to n

x_power_i = x*x_power_i

sum = sum + a(i)*x_power_i

End For

Operation count: n +'s, 2n ×'s (no exponentiations).

sum = a(n)

For i = 1 to n

sum = sum*x + a(n-i)

End For

Operation count: n +'s, n ×'s.

The last algorithm is clearly the most efficient, and is equivalent to writing Pn(x) in the nested form

Pn(x) = a0 + x(a1 + x(a2 + ... + x(a_{n-1} + x a_n)...)).
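The nested form is the basis of the usual one-loop evaluation routine (Horner's method). A minimal Python sketch, with an illustrative polynomial:

```python
def horner(coeffs, x):
    """Evaluate p(x) = a_0 + a_1*x + ... + a_n*x^n in nested form:
    n multiplications and n additions. coeffs = [a_0, a_1, ..., a_n]."""
    s = coeffs[-1]                    # a_n
    for a in reversed(coeffs[:-1]):
        s = s * x + a                 # sum = sum*x + a_{n-i}
    return s

# Illustrative example: p(x) = 1 + 2x + 3x^2, so p(2) = 1 + 4 + 12 = 17.
print(horner([1, 2, 3], 2))   # -> 17
```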

1.3 Computer arithmetic

We use a base 10 (decimal) number system, but computers use binary numbers (base 2) or some system based on binary (octal (base 8) or hexadecimal (base 16)). Just as

376.21 = 3 x 10^2 + 7 x 10^1 + 6 x 10^0 + 2 x 10^-1 + 1 x 10^-2,

the base b number

(d3 d2 d1 d0 . d-1 d-2)_b = d3 x b^3 + d2 x b^2 + d1 x b^1 + d0 x b^0 + d-1 x b^-1 + d-2 x b^-2.

Here d0, d1, ... denote base b digits, that is, any of 0, 1, 2, ..., b - 1. For hexadecimal numbers A, B, C, D, E and F are used to represent the digits 10, 11, 12, 13, 14 and 15.

Worked Example 1.3.1 Express (A6.D2)16 as a decimal number.

(A6.D2)16 = 10 x 16^1 + 6 x 16^0 + 13 x 16^-1 + 2 x 16^-2

= 160 + 6 + 13/16 + 2/256

= 166 + 0.8125 + 0.0078125

= 166.8203125

It is easy to convert from decimal to base b. Suppose x is a decimal integer and

x = (d_n d_{n-1} ... d1 d0)_b = d_n x b^n + d_{n-1} x b^(n-1) + ... + d1 x b^1 + d0 x b^0.

Then dividing by b gives

x/b = d_n x b^(n-1) + d_{n-1} x b^(n-2) + ... + d1 + d0/b,

where the first terms form an integer and d0/b is a fraction. So d0 is the remainder when x is divided by b. Similarly, d1 is the remainder when the integer part of x/b is divided by b, and so on.

Now suppose that x is the decimal fraction (0 < x < 1)

x = (.d-1 d-2 ...)_b = d-1 x b^-1 + d-2 x b^-2 + ...

Multiplying by b gives

bx = d-1 + (d-2 x b^-1 + ...),

where d-1 is the integer part and the bracketed terms form the fraction, so d-1 is the integer part of bx. Repeated multiplication by b gives d-2, d-3, ...

Worked Example 1.3.2 Convert the number 25.59375 to binary.

Start with the integer part :

2) 25 remainder

2) 12 1 = d0

2) 6 0 = d1

2) 3 0 = d2

2) 1 1 = d3

0 1 = d4

Thus 25 = (11001)2.

The fractional part is now converted :

0 .59375

x2 integer part

1 .18750 1 = d – 1

x2

0 .37500 0 = d – 2

x2

0 .75000 0 = d – 3

x2

1 .50000 1 = d – 4

x2

1 .00000 1 = d – 5

Therefore 0.59375 = (.10011)2, and combining the two results gives

25.59375 = (11001.10011)2
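The repeated-division and repeated-multiplication procedures of the two conversions above can be combined into a short routine; a minimal Python sketch (the function name and the digit limit are illustrative choices):

```python
def to_base_2(x, frac_digits=8):
    """Convert a non-negative decimal number to a binary string using
    repeated division (integer part) and multiplication (fraction part)."""
    ipart, fpart = divmod(x, 1)
    ipart = int(ipart)
    digits = []
    while ipart:                      # remainders give d0, d1, ...
        ipart, d = divmod(ipart, 2)
        digits.append(str(d))
    int_str = ''.join(reversed(digits)) or '0'
    frac = []
    while fpart and len(frac) < frac_digits:
        fpart *= 2                    # integer parts of 2x give d-1, d-2, ...
        d, fpart = divmod(fpart, 1)
        frac.append(str(int(d)))
    return int_str + ('.' + ''.join(frac) if frac else '')

print(to_base_2(25.59375))   # -> 11001.10011, as in Worked Example 1.3.2
```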

Self Assessment Exercises 1.3

1. Find the decimal representation of the following numbers: (a) (110101.1101)2, (b) (34.27)8, and (c) (3D.9F)16.

2. Convert 46.703125 to (a) binary, (b) octal and (c) hexadecimal.



TABLE 1. Results of the iteration x_{n+1} = 0.4 + 0.6 x_n^(3/2)

 n      x_n        Error      x_n - x_{n-1}    ratio     Error estimate

 0    1.100000
 1    1.092214   -0.092214    -0.007786
 2    1.084877   -0.084877    -0.007337      0.942256     -0.119716
 3    1.077988   -0.077988    -0.006889      0.939000     -0.106046
10    1.041406   -0.041406    -0.004087      0.921220     -0.047793
20    1.015467   -0.015467    -0.001646      0.908072     -0.016255
30    1.005532   -0.005532    -0.000605      0.902908     -0.005628
40    1.001946   -0.001946    -0.000076      0.900359     -0.000682
50    1.000681   -0.000681    -0.000076      0.900359     -0.000682
60    1.000238   -0.000238    -0.000026      0.900125     -0.000238
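The tabulated iteration can be reproduced with a few lines of Python. Note that the iteration function g(x) = 0.4 + 0.6 x^(3/2) is inferred here from the tabulated values (it reproduces them to the printed precision), so treat it as an assumption:

```python
def fixed_point(g, x0, n):
    """Apply the fixed-point iteration x_{k+1} = g(x_k) n times."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

g = lambda x: 0.4 + 0.6 * x**1.5   # assumed form of the tabulated iteration
print(fixed_point(g, 1.1, 1))      # -> 1.092214..., matching the n = 1 row
print(fixed_point(g, 1.1, 60))     # slow linear convergence toward the root 1
```

The ratio column settling near 0.9 reflects g'(1) = 0.9, the linear convergence rate.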

2.4 Newton’s method

In Newton's method, an improved estimate x1 of the root of f(x) = 0 is calculated from an initial estimate x0 by finding the point where the tangent to f(x) at x = x0 cuts the x-axis (see Figure 2.7).

FIGURE 2.7. In Newton's method the function is approximated by its tangent.

Equating two expressions for the slope of the tangent gives

f'(x0) = (f(x0) - 0)/(x0 - x1),

and solving for x1 gives

x1 = x0 - f(x0)/f'(x0).

In general, x_{n+1} = x_n - f(x_n)/f'(x_n).
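A minimal Python sketch of Newton's method (the stopping rule is an illustrative choice; the test equation is that of Self Assessment Exercise 2.1, cos x - x + 1 = 0):

```python
from math import cos, sin

def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

r = newton(lambda x: cos(x) - x + 1, lambda x: -sin(x) - 1, 1.0)
print(r)   # converges to the root near 1.28 in a handful of iterations
```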

Worked Example 5.4.1 Find a bound for the error in using the cubic that interpolates sin πx at x = 0.2, 0.4, 0.6, 0.8 to approximate sin 0.45π.

Let f(x) = sin πx and denote x0 = 0.2, x1 = 0.4, x2 = 0.6 and x3 = 0.8. Since n = 3,

f(x) - p3(x) = (x - x0)(x - x1)(x - x2)(x - x3) f^(4)(ξx)/4!

= (x - 0.2)(x - 0.4)(x - 0.6)(x - 0.8)(π^4 sin πξx)/24.

Now, assuming x ∈ (0.2, 0.8), we have ξx ∈ (0.2, 0.8), and since |sin πξx| ≤ 1 for ξx ∈ (0.2, 0.8),

|f(x) - p3(x)| ≤ |(x - 0.2)(x - 0.4)(x - 0.6)(x - 0.8)| π^4/24,

and substituting in x = 0.45 gives

|Error| ≤ |(0.25)(0.05)(-0.15)(-0.35)| π^4/24 ≈ 0.0027.

The actual error is 0.002574 (see Worked Example 5.4.2), which clearly satisfies the calculated bound.

Since the error formula above involves a derivative of f it can not be used if the only data is a table of values. An estimate of the error can often be found by approximating the function by a polynomial of higher degree by using more data points. If the additional point xn+1 is used then

Error ≈ pn+1(x) – pn(x)

= (x – x0)(x – x1)…(x – xn)f[x0,x1,...,xn+1].

Worked Example 5.4.2 From the following data, use cubic interpolation to estimate f(0.45), and estimate the error of the approximation.

x 0.0 0.2 0.4 0.6 0.8 1.0

f(x) 0.00000 0.587785 0.951057 0.951057 0.587785 0.000000

Since we are to use a cubic, we need 4 data points. To keep the error as small as possible, we will use the four points closest to 0.45, that is 0.2, 0.4, 0.6 and 0.8. The divided difference table is:

xi     f(xi)       1st dd      2nd dd      3rd dd

0.2   0.587785
                  1.816360
0.4   0.951057               -4.540900
                  0.000000                0.000000
0.6   0.951057               -4.540900
                 -1.816360
0.8   0.587785

Hence the interpolating cubic is

p3(x) = 0.587785 + (x – 0.2){1.816360 + (x – 0.4)[(- 4.540900) + (x- 0.6)(0.000000)]}

and

f(0.45) ≈ p3(0.45) = 0.985114.

To obtain an estimate of the error, we add the next closest data point, x = 0.0, to the bottom of the table, giving:

xi     f(xi)       1st dd      2nd dd      3rd dd      4th dd

0.2   0.587785
                  1.816360
0.4   0.951057               -4.540900
                  0.000000                0.000000
0.6   0.951057               -4.540900                3.613540
                 -1.816360               -0.722708
0.8   0.587785               -4.251817
                  0.734733
0.0   0.000000

Thus

Error ≈ (x – x0)(x – x1)(x – x2)(x – x3)f[x0, x1, x2, x3, x4]

= (x – 0.2) (x –0.4) (x –0.6) (x – 0.8) (3.613540)

≈ 0.0024 for x = 0.45.

The given data is a tabulation of the function sin π x, so we can calculate the error. It is

sin 0.45π – p3 (0.45) = 0.002574,

so the error estimate is slightly low.
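The divided-difference table and the nested evaluation of p3 are easy to program; a minimal Python sketch using the four data points of the worked example:

```python
def divided_differences(xs, ys):
    """Return the Newton coefficients f[x0], f[x0,x1], ..., f[x0,...,xn],
    computed in place column by column."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the interpolating polynomial in nested form."""
    p = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        p = p * (x - xs[i]) + coef[i]
    return p

xs = [0.2, 0.4, 0.6, 0.8]                      # data of Worked Example 5.4.2
ys = [0.587785, 0.951057, 0.951057, 0.587785]
c = divided_differences(xs, ys)
print(c)                          # f[x0], f[x0,x1], f[x0,x1,x2], f[x0,...,x3]
print(newton_eval(xs, c, 0.45))   # -> 0.985114..., as in the worked example
```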

Self Assessment Exercises 5.4

1. Find a bound for the error at x = 5 of the polynomial that interpolates √x at x = 2, 3, 4 and 6.

2. Use cubic interpolation to approximate f(1.7) from the following data, and obtain an estimate of the error of the approximation by using an additional data point.

x 0 0.4 0.9 1.5 2.2 3.0

f(x) 1.00000 0.81873 0.63763 0.47237 0.33287 0.22313

Appendix A:

Answers to Self Assessment Exercises

Most of the answers have been calculated to a greater precision than the listed results. There may be a small difference between your calculated result and the answers given below.

Self Assessment Exercises 1.3 (page 4)

1. (a) 53.8125, (b) 28.359375, (c) 61.62109375.

2. (a) (101110.101101)2, (b) (56.55)8, (c) (2E.B4)16.

Self Assessment Exercises 1.4 (page 7)

1. Error = 0.00205, Rel. Error = 0.0012, 3 significant digits.

2. Error = -0.00040, Rel. Error = -0.000040, 4 significant digits.

Self Assessment Exercises 1.7 (page

1. f(x) = x /( √(900+x) + 30) ( ≈ x /60 for |x| <>1)

3. f(x) = sin x2 / (1 + cos x2).

Self Assessment Exercises 1.9 (page 10)

1. |Error| ≤ 0.00069, |Rel. Error| ≤ 0.00084.

2. The true value lies in the interval [2.602, 2.676].

Self Assessment Exercises 2.1 (page 12)

1. One root, r ≈ 1.3.

2. r1 ≈ 1.6, rk ≈ (2k – 1)π/2 k = 2, 3, 4, …

Self Assessment Exercises 2.2 (page 15)

1. r ≈ c3 = 0.88357.

2. r ≈ c4 = 1.90625.

3. |e5|≤ 0.00625

4. 19 (or more).


Thus there are infinitely many Runge-Kutta methods of order 2, depending on the choice of w2. One common method, known as the Improved Euler method, corresponds to w2 = 1/2 (with γ2 = 1), and is given by

y_{n+1} = y_n + (h/2)[f(x_n, y_n) + f(x_n + h, y_n + h f(x_n, y_n))].

However, it is usually written in the form

y_{n+1} = y_n + (h/2)(k1 + k2), k1 = f(x_n, y_n), k2 = f(x_n + h, y_n + h k1).

Note that y_n + h k1 is the Euler estimate of y_{n+1}. Denoting this estimate by y*_{n+1}, the value k2 = f(x_{n+1}, y*_{n+1}) is the slope of the solution of the ODE that passes through the point (x_{n+1}, y*_{n+1}). So y_{n+1} is obtained by moving along a line whose slope is the average of the slopes k1 and k2 (see Figure 3.3).

Figure 3.3 An illustration of the Improved Euler method.

Another common second order Runge-Kutta method, the Modified Euler method, corresponds to w2 = 1. This scheme is usually written

y_{n+1} = y_n + h f(x_n + h/2, y_n + (h/2) f(x_n, y_n)).

In this case y_{n+1} is obtained by moving along a line whose slope is approximately the slope of the solution at the middle of the interval.



To verify the results given above, consider the numerical solution of the IVP

y' = y/(y - x), y(1) = 4,

on [1, 3] using Euler's method. It is easily verified that the solution to this problem is y = x + √(x^2 + 8). The output below was produced by a simple program. It can be seen that, for each value of x, dividing the steplength by 10 reduces the error by a factor of approximately 10.

EULER'S METHOD

Enter X0, Y0 1,4

Enter step-length h 0.5

Enter number of steps 4

Enter output frequency 1

X       Y          EXACT      ERROR

1.000   4.000000   4.000000   0.0000000
1.500   4.666667   4.701562   0.0348954
2.000   5.403509   5.464102   0.0605931
2.500   6.197323   6.274917   0.0775943
3.000   7.035405   7.123106   0.0877004

Enter X0, Y0 1,4

Enter step-length h 0.05

Enter number of steps 40

Enter output frequency 10

X       Y          EXACT      ERROR

1.000   4.000000   4.000000   0.0000000
1.500   4.698277   4.701562   0.0032854
2.000   5.458434   5.464102   0.0056682
2.500   6.267671   6.274917   0.0072465
3.000   7.114901   7.123106   0.0082049

Enter X0, Y0 1,4

Enter step-length h 0.005

Enter number of steps 400

Enter output frequency 100

X       Y          EXACT      ERROR

1.000   4.000000   4.000000   0.0000000
1.500   4.701236   4.701562   0.0003257
2.000   5.463539   5.464102   0.0005627
2.500   6.274199   6.274917   0.0007186
3.000   7.122291   7.123106   0.0008149


3.4 Taylor methods

Euler's method is the first order Taylor method. Higher order methods are obtained by taking more terms in the Taylor series expansion of y(x_n + h). For example, since

y(x_{n+1}) = y(x_n) + h y'(x_n) + (h^2/2) y''(x_n) + (h^3/6) y'''(c_n),

a numerical approximation can be calculated using

y_{n+1} = y_n + h y'_n + (h^2/2) y''_n,

and the local truncation error is

T_{n+1} = (h^3/6) y'''(c_n).

As the truncation error has order 3, the above scheme is the second order Taylor method.

The pth order Taylor method is

y_{n+1} = y_n + h y'_n + (h^2/2!) y''_n + ... + (h^p/p!) y_n^(p),

with local truncation error

T_{n+1} = (h^(p+1)/(p+1)!) y^(p+1)(c_n).

In these schemes y_n^(k) denotes the kth derivative of y evaluated at (x_n, y_n). Now y' is given by the ODE (but do not forget that y is a function of x). For example, if y' = y/(y - x), differentiation using the quotient rule gives y'' = (y - x y')/(y - x)^2, so that substituting for y' gives y'' = y(y - 2x)/(y - x)^3. By substitution for the derivatives appearing in the RHSs, all of these derivatives could be written in terms of x and y alone, but in practice this is not necessary.
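A second order Taylor step is easy to program once y'' has been found by differentiating the ODE. A minimal Python sketch for the running example y' = y/(y - x), for which differentiation gives y'' = y(y - 2x)/(y - x)^3:

```python
def taylor2(fy, fy2, x0, y0, h, n):
    """Second order Taylor method:
    y_{n+1} = y_n + h*y'_n + (h**2/2)*y''_n."""
    x, y = x0, y0
    for _ in range(n):
        y += h * fy(x, y) + 0.5 * h**2 * fy2(x, y)
        x += h
    return y

yp  = lambda x, y: y / (y - x)                    # y' from the ODE
ypp = lambda x, y: y * (y - 2 * x) / (y - x)**3   # y'' by differentiating
print(taylor2(yp, ypp, 1.0, 4.0, 0.5, 2))   # -> 5.468271..., cf. Example 3.4.1
```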

Chapter 3

Solution of Ordinary Differential Equations

3.1 Introduction

We consider the solution of the initial value problem (IVP)

y' = f(x, y), y(a) = α.

While some such IVPs can be solved analytically (exactly), most cannot and so have to be "solved" numerically. We will see that the numerical methods that solve this single first order IVP can be used to solve a system of first order IVPs, and hence to solve an IVP of any order.

The first order ordinary differential equation (ODE) y' = f(x, y) has infinitely many solutions, and at the point (x, y) the value of f(x, y) gives the slope of the solution curve passing through that point. The initial condition (IC) y(a) = α usually singles out the one solution that passes through the point (a, α), so that the IVP has a unique solution.

For example, consider the IVP

y’ = y, y(0) = 2

The ODE y' = y has the general solution y = Ae^x, where A is an arbitrary constant. The IC y(0) = 2 singles out the one solution y = 2e^x.

However, not all IVPs have a unique solution: some have no solution while others have more than one solution. There are conditions on f(x, y) that guarantee a unique solution, and we will assume that these conditions apply.

Suppose that we require a solution to the above IVP on the interval [a, b]. We will determine approximate values of the solution at the equispaced points

x_n = a + nh, n = 0, 1, ..., N,

where the steplength h = (b - a)/N. Note that x0 = a and xN = b. The approximation of y(x_n) will be denoted by y_n. We have y0 = α from the IC, and will consider a number of methods of determining the approximate values y_n, n = 1, 2, ..., N. These methods fall into 2 categories: Taylor series methods and Runge-Kutta methods.

[Figure 3.1 here: the solution of the IVP through (x0, y0), the nearby solution of the ODE through (x1, y1), and the tangent of slope f(x0, y0) carrying the approximation one step of length h from x0 to x1.]

FIGURE 3.1. Approximation of the solution by Euler’s method.

3.2 Euler’s Method

Euler's method is the simplest numerical method of solving an IVP. It is based on locally approximating a solution of the ODE by its tangent (see Figure 3.1).

The slope of the tangent is

f(x0, y0) = (y1 - y0)/h,

and solving for y1 gives

y1 = y0 + h f(x0, y0).

The solution through (x1, y1) (not the solution of the IVP, but hopefully close to it) is then approximated by its tangent to give

y2 = y1 + h f(x1, y1).

This procedure is repeated, so the general rule is

y_{n+1} = y_n + h f(x_n, y_n), n = 0, 1, 2, ...

Note that not all of the calculated values need be output; if h is small then perhaps the solution is only required at every 20th or even 200th x_n, so the output frequency outf is given a value such as 20 or 200. There is no need to store all the x_n and y_n, so the old values are overwritten by the new values in each step.

Worked Example 3.2.1 Find the approximate value of the solution of the IVP

y' = y/(y - x), y(1) = 4,

at x = 2 using Euler's method with steplength h = 0.5.

From the IC we have x0 = 1 and y0 = 4, and since h = 0.5 we have, for n = 0,

y1 = y0 + h f(x0, y0) = 4 + 0.5(1.333333) = 4.666667, x1 = 1.5.

Similarly, for n = 1,

y2 = y1 + h f(x1, y1) = 4.666667 + 0.5(1.473684) = 5.403509.

Thus the approximate value of y(2) = y(x2) is y2 = 5.403509.
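The worked example is reproduced by a two-line loop; a minimal Python sketch:

```python
def euler(f, x0, y0, h, n):
    """Euler's method: y_{n+1} = y_n + h*f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: y / (y - x)
print(euler(f, 1.0, 4.0, 0.5, 2))   # -> 5.403508..., the y2 above
```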

Self Assessment Exercises 3.2

1. Using Euler's method with h = 0.1, calculate the approximate value of the solution of

y' = x^2 + y^2, y(3) = -1,

at x = 3.2.

2. Find an estimate of the solution of the IVP

y’ = , y(0) = 2,

at x = 1 using three steps of Euler's method.

3.3 Local and global truncation errors

An alternative derivation of Euler’s method is as follows:

y(xn+1) = y(xn + h)

= y(xn) + h y'(xn) + (h^2/2) y''(cn)

for xn < cn < xn+1 (Taylor's theorem). But y' = f(x,y), so

y(xn+1) = y(xn) + h f(xn, y(xn)) + (h^2/2) y''(cn)   (3.3.1)

The last term will usually be extremely small for small h, so it is omitted to give the approximation.

yn+1 = yn+ hf(xn,yn), (3.3.2)

which is, of course, Euler’s method.

The term that was omitted is called the local truncation error. The local truncation error is denoted by Tn+1, so for Euler's method

Tn+1 = (h^2/2) y''(cn).

In general, the local truncation error of the scheme

yn+1 = F(xn, yn, h)

is defined as

Tn+1 = y(xn+1) - F(xn, y(xn), h);

it is the amount by which the exact solution of the IVP fails to satisfy the numerical scheme.

The global truncation error is defined as

en+1 = y(xn+1) – yn+1

and is the accumulation of the local errors as they propagate from step to step. If y*(x) is the solution of the ODE that passes through the point (xn, yn) (the dotted curve in Figure 3.2), then

en+1 = y(xn+1) - yn+1

= (y(xn+1) - y*(xn+1)) + (y*(xn+1) - yn+1).

The first bracketed term is the propagated error and the second is the local error.


4.5 Scaled Partial Pivoting

Using this method, the equations of the above example would be interchanged, resulting in the calculated solution being exact.

Worked Example 4.5.1 Solve the following system of linear equations using Gaussian elimination with scaled partial pivoting :

The scale factors are and . Since

and are interchanged. Note that the scale factors are also interchanged.

The equations are then

Performing the indicated operations gives the equivalent system

Equations 2 and 3 must be interchanged since

Back-substitution gives the solution and
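The numbers of Worked Example 4.5.1 did not survive extraction, so the following is only a minimal sketch of the procedure described above, applied to a small hypothetical system in which the scale factors force a row interchange:

```python
# A sketch of Gaussian elimination with scaled partial pivoting:
# the pivot row maximizes |a[i][k]| / s[i], and when rows are
# interchanged their scale factors are interchanged too.
def solve_scaled_pivoting(A, b):
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    s = [max(abs(v) for v in row) for row in A]  # scale factors
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]) / s[i])
        if p != k:  # interchange equations and scale factors
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
            s[k], s[p] = s[p], s[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                       # back-substitution
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Illustrative system: the large entry 100000 makes the scale factor of
# the first row huge, so the second row is chosen as the pivot row.
x = solve_scaled_pivoting([[2.0, 100000.0], [1.0, 1.0]],
                          [100000.0, 2.0])
```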

Self Assessment Exercises 4.5

1. Solve the following equations by Gaussian elimination with scaled partial pivoting:

2. Use Gaussian eliminations with scaled partial pivoting to solve the equations

4.6 LU factorization

Solving the equations Ax = b by Gaussian elimination produces an equivalent system Ux = y, where U is an upper triangular matrix. If no row interchanges are made, and L is the lower triangular matrix with 1's on the diagonal and the multipliers used in the elimination below it,

then it can be shown that A = LU.

If such an L and U are known it is easy to solve the equations Ax = b for any RHS b. The equations can be written as (LU)x = b, or equivalently as L(Ux) = b. Putting Ux = y, the equations become Ly = b. These equations are solved for y (forward substitution), then Ux = y solved by back-substitution. Summarizing,

Ly = b is solved, then Ux = y solved.

If the multipliers are stored when the equations Ax = b are solved, the system of equations can be solved economically for any other RHS's. We will see that this situation arises in the method of residual correction. The multipliers may be stored in the matrix A in the positions normally taken by the zero elements. The multiplier mij is chosen to make the ij element zero, so mij is stored in the ij position. At the end of the elimination the coefficient matrix will then hold U in its upper triangle and the multipliers below the diagonal.

There are direct methods of finding the LU factorization of a matrix, and these can incorporate partial pivoting and scaled partial pivoting. The direct methods can be implemented so that the effect of round-off errors is further reduced.
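The factor-then-solve procedure (Ly = b by forward substitution, then Ux = y by back-substitution) can be sketched as follows. The matrix of Worked Example 4.6.1 did not survive extraction, so an illustrative matrix (the one from the Gaussian elimination example of Section 4.2) is used instead:

```python
# LU factorization by Gaussian elimination without row interchanges
# (L has unit diagonal and stores the multipliers), followed by the
# two triangular solves Ly = b and Ux = y.
def lu_factor(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]       # multiplier m_ik
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                  # forward substitution: Ly = b
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):      # back-substitution: Ux = y
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[2.0, 3.0, -1.0], [4.0, -3.0, 1.0], [2.0, 6.0, -4.0]]
L, U = lu_factor(A)
x = lu_solve(L, U, [5.0, 4.0, 1.0])
```

Once L and U are stored, any other right-hand side can be handled with just the two cheap triangular solves.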

Worked Example 4.6.1 Solve the following equations by first finding the LU factorization of the coefficient matrix :

Gaussian elimination (without any row interchanges) is used to reduce the coefficient matrix to an upper triangular matrix.

This is the matrix U. The lower triangular matrix L is

To solve the equations, Ly = b is first solved for y:


The equations Ux = y are now solved. They are

and back-substitution leads to the solution and

Self Assessment Exercises 4.6

1. Solve the following system of equations by first finding the LU factorization of the coefficient matrix.

2. Using Gaussian elimination without any equation interchanges, find the LU factorization of the matrix

And use it to solve the system of equations Ax = b where .

4.7 Vector and matrix norms

In order to discuss the errors that arise when solving linear equations using a computer we need a measure of the "size" of vectors and matrices. The size of a vector x is measured by a real number ||x|| called the norm of x. The norm of a vector must satisfy

1. ||x|| ≥ 0, with ||x|| = 0 if and only if x = 0

2. ||ax|| = |a| ||x|| for any real a

3. ||x + y|| ≤ ||x|| + ||y||

There are many vector norms. One that will be familiar is the Euclidean norm or 2-norm

||x||2 = (x1^2 + x2^2 + ... + xn^2)^(1/2)

We will use the maximum norm or ∞-norm

||x||∞ = max |xi|, 1 ≤ i ≤ n

Consistent matrix norms are defined in terms of vector norms:

||A|| = max ||Ax|| / ||x|| over x ≠ 0

This leads to

||A||∞ = max over i of (|ai1| + |ai2| + ... + |ain|) (the maximum row sum of magnitudes)

Matrix norms satisfy

1. ||A|| ≥ 0, with ||A|| = 0 if and only if A = 0

2. ||aA|| = |a| ||A|| for any real a

3. ||A + B|| ≤ ||A|| + ||B||

4. ||AB|| ≤ ||A|| ||B||

5. ||Ax|| ≤ ||A|| ||x||
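The two ∞-norms can be computed directly from their definitions; a small sketch (the vector and matrix here are illustrative):

```python
# Infinity-norms: maximum magnitude entry for a vector, maximum
# row sum of magnitudes for a matrix.
def vec_norm_inf(x):
    return max(abs(v) for v in x)

def mat_norm_inf(A):
    return max(sum(abs(v) for v in row) for row in A)

x = [3.0, -7.0, 2.0]
A = [[3.0, 1.0, -1.0], [1.0, 4.0, 1.0], [1.0, 2.0, -4.0]]
print(vec_norm_inf(x))  # 7.0
print(mat_norm_inf(A))  # 7.0  (row 3: |1| + |2| + |-4| = 7)
```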


If the iteration is written x(k+1) = G x(k) + c and the exact solution satisfies x = Gx + c, subtracting gives e(k+1) = G e(k). Thus

e(1) = G e(0)

e(2) = G e(1) = G(G e(0)) = G^2 e(0)

e(3) = G e(2) = G(G^2 e(0)) = G^3 e(0)

.

.

e(k) = G^k e(0)

Taking norms gives

||e(k)|| = ||G^k e(0)|| ≤ ||G||^k ||e(0)||.

Hence if ||G|| < 1, ||e(k)|| -> 0 as k -> ∞ for any e(0), so ||G|| < 1 is a sufficient condition that the iteration converges to the exact solution for any initial estimate x(0). (It is not a necessary condition; the iteration may converge even though ||G|| ≥ 1.)

Now the Jacobi method is

xi(k+1) = (bi - Σ j≠i aij xj(k)) / aii, i = 1, 2, ..., n,

so the matrix G (the iteration matrix) has entries

gij = -aij / aii for j ≠ i, gii = 0.

Therefore the Jacobi method will converge if

||G||∞ = max over i of Σ j≠i |aij| / |aii| < 1,

which is the case if

Σ j≠i |aij| < |aii| for i = 1, 2, ..., n, or equivalently

|aii| > Σ j≠i |aij| for i = 1, 2, ..., n.

A matrix with this property is said to be strictly diagonally dominant. Strict diagonal dominance of the coefficient matrix is a sufficient condition for the convergence of the Jacobi method, but it is not a necessary condition.

Note that if, in each equation, there is a coefficient whose magnitude is larger than the sum of the magnitudes of the other coefficients, and these coefficients are in different columns, the equations can be reordered to make the coefficient matrix strictly diagonally dominant.
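A sketch of the dominance test together with Jacobi iteration, applied to the strictly diagonally dominant system used with Gauss-Seidel later in this chapter (3x1 + x2 - x3 = 6, x1 + 4x2 + x3 = 8, x1 + 2x2 - 4x3 = 9):

```python
# Strict diagonal dominance check plus plain Jacobi iteration,
# starting from x(0) = 0.
def is_strictly_diagonally_dominant(A):
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

def jacobi(A, b, iterations):
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[3.0, 1.0, -1.0], [1.0, 4.0, 1.0], [1.0, 2.0, -4.0]]
b = [6.0, 8.0, 9.0]
assert is_strictly_diagonally_dominant(A)  # convergence is guaranteed
x = jacobi(A, b, 50)                       # converges to (1, 2, -1)
```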

Self Assessment Exercises 4.12

Determine whether or not the Jacobi method converges for the following systems of equations:

1. 4x1 - x2 - 2x3 = 7

2x1 - 5x2 - x3 = 1

-2x1 + x2 + 4x3 = 6.

2. 2x1 - x2 + 7x3 = 1

-8x1 - 3x2 + 2x3 = 2

2x1 - 7x2 + 4x3 = 3.

4.13 Gauss-Seidel iterative method

In Jacobi iteration, xi(k+1) is calculated from x1(k), x2(k), ..., xn(k), but at this stage the improved estimates x1(k+1), ..., xi-1(k+1) have already been calculated. In the Gauss-Seidel iterative method these "most recent" iterates are used in the calculation. As before, the ith equation is solved for xi to give

xi(k+1) = (bi - Σ j<i aij xj(k+1) - Σ j>i aij xj(k)) / aii

Here the normal convention is employed in that a sum has no terms if the initial value of the summing variable is greater than the final value, so that the first sum is ignored if i = 1 and the second sum is ignored if i = n.

Thus the Gauss-Seidel iteration is

xi(k+1) = (bi - Σ j<i aij xj(k+1) - Σ j>i aij xj(k)) / aii, i = 1, 2, ..., n, k = 0, 1, ....

A sufficient condition for convergence of this iteration is that A is strictly diagonally dominant. The Gauss-Seidel method usually converges faster than the Jacobi method,

often requiring about half as many iterations to achieve the same accuracy. It is also easier to program, as xi(k) is not needed once xi(k+1) has been calculated, so the "old" value can simply be overwritten.

Worked Example 4.13.1 Carry out two iterations of the Gauss-Seidel iterative method, starting with x(0) = 0, for the following system of equations:

3x1 + x2 - x3 = 6

x1 + 4x2 + x3 = 8

x1 + 2x2 - 4x3 = 9

Solving E1 for x1, E2 for x2 and E3 for x3 gives:

x1 = (6 - x2 + x3) / 3

x2 = (8 - x1 - x3) / 4

x3 = (x1 + 2x2 - 9) / 4

so that the Gauss-Seidel iterative scheme is

x1(k+1) = (6 - x2(k) + x3(k)) / 3

x2(k+1) = (8 - x1(k+1) - x3(k)) / 4

x3(k+1) = (x1(k+1) + 2x2(k+1) - 9) / 4

Using x(0) = 0, k = 0 gives

x1(1) = (6 - 0 + 0) / 3 = 2

x2(1) = (8 - 2 - 0) / 4 = 1.5

x3(1) = (2 + 3 - 9) / 4 = -1

and k = 1 gives

x1(2) = (6 - 1.5 - 1) / 3 = 1.166667

x2(2) = (8 - 1.166667 + 1) / 4 = 1.958333

x3(2) = (1.166667 + 3.916667 - 9) / 4 = -0.979167

Output from a program run for the problem of Worked Examples 4.11.1 and 4.13.1 follows, and shows that Gauss-Seidel iteration has converged faster than Jacobi iteration for this problem, attaining 10 decimal place accuracy in 14 iterations compared with 23 for Jacobi iteration.

SOLUTION OF LINEAR EQUATIONS BY GAUSS-SEIDEL ITERATION

Equations are

3.00 1.00 -1.00 : 6.00

1.00 4.00 1.00 : 8.00

1.00 2.00 -4.00 : 9.00

Error tolerance = 5.00D-11

K X1 X2 X3 ERROR BOUND

0 0.0000000000 0.0000000000 0.0000000000

1 2.0000000000 1.5000000000 -1.0000000000

2 1.1666666667 1.9583333333 -0.9791666667

3 1.0208333333 1.9895833333 -1.0000000000 1.042D-01

4 1.0034722222 1.9991319444 -0.9995659722 3.683D-03

5 1.0004340278 1.9997829861 -1.0000000000 6.445D-04

6 1.0000723380 1.9999819155 -0.9999909578 7.672D-05

7 1.0000090422 1.9999954789 -1.0000000000 1.343D-05

8 1.0000015070 1.9999996232 -0.9999998116 1.598D-06

9 1.0000001884 1.9999999058 -1.0000000000 2.797D-07

10 1.0000000314 1.9999999922 -0.9999999961 3.330D-08

11 1.0000000039 1.9999999980 -1.0000000000 5.827D-09

12 1.0000000007 1.9999999998 -0.9999999999 6.937D-10

13 1.0000000001 2.0000000000 -1.0000000000 1.214D-10

14 1.0000000000 2.0000000000 -1.0000000000 1.445D-11
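The run above can be sketched in a few lines; note how each "old" component is simply overwritten by its new value within the sweep:

```python
# Gauss-Seidel iteration for 3x1 + x2 - x3 = 6, x1 + 4x2 + x3 = 8,
# x1 + 2x2 - 4x3 = 9, starting from x(0) = 0.
def gauss_seidel(A, b, iterations):
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # overwrite the old value
    return x

A = [[3.0, 1.0, -1.0], [1.0, 4.0, 1.0], [1.0, 2.0, -4.0]]
b = [6.0, 8.0, 9.0]
x = gauss_seidel(A, b, 14)  # the run above reaches 10 places in 14 iterations
```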

Self Assessment Exercises 4.13

Using x(0) = 0, perform two iterations of the Gauss-Seidel method for the following systems of linear equations:

1. 4x1 - x2 + 2x3 = 7

2x1 - 5x2 + x3 = -1

-2x1 + x2 + 4x3 = 6.

2. 3x1 - x2 + x3 = 3

x1 - 5x2 + 2x3 = 0

2x1 + 2x2 + 5x3 = 1.

4.14 Termination of iteration

A common method of terminating the iteration is to stop when

||x(k+1) – x(k)|| ≤ є

Similarly, the Improved Euler method becomes

k1 = f(xn, yn)

k2 = f(xn + h, yn + h k1)

yn+1 = yn + (h/2)(k1 + k2)

The ability to solve systems is important because any ODE of order 2 or higher can be expressed as a system of first order ODE’s.

Worked Example 3.7.1 Using Euler's method with a steplength of h = 0.1, find an approximation to y(0.2) where y(x) is the solution of the second order IVP

y'' - y = x, y(0) = 1, y'(0) = 4

Setting u = y and v = y' gives

u' = v, u(0) = 1

v' = u + x, v(0) = 4

(The first ODE comes from equating two representations of y', and the second from the original ODE.) Thus f(x,u,v) = v and g(x,u,v) = u + x, so Euler's method becomes

un+1 = un + h vn

vn+1 = vn + h(un + xn)

Since x0 = 0, u0 = 1 and v0 = 4 we get, for n = 0,

u1 = u0 + h v0

= 1 + 0.1(4)

= 1.4

v1 = v0 + h(u0 + x0)

= 4 + 0.1(1 + 0)

= 4.1

x1 = x0 + h

= 0 + 0.1

= 0.1,

and for n = 1

u2 = u1 + h v1

= 1.4 + 0.1(4.1)

= 1.81

v2 = v1 + h(u1 + x1)

= 4.1 + 0.1(1.4 + 0.1)

= 4.25

x2 = x1 + h

= 0.1 + 0.1

= 0.2

Then y(0.2) ≈ u2 = 1.81 (and y'(0.2) ≈ v2 = 4.25).
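The same computation can be sketched for the system u' = v, v' = u + x; note that both components are updated from the old values of u and v:

```python
# Euler's method for a system of two first order ODEs:
# u' = f(x, u, v), v' = g(x, u, v).
def euler_system(f, g, x0, u0, v0, h, n):
    x, u, v = x0, u0, v0
    for _ in range(n):
        # update u and v simultaneously from the old values
        u, v = u + h * f(x, u, v), v + h * g(x, u, v)
        x = x + h
    return x, u, v

f = lambda x, u, v: v        # u' = v
g = lambda x, u, v: u + x    # v' = u + x
x2, u2, v2 = euler_system(f, g, 0.0, 1.0, 4.0, 0.1, 2)
print(round(u2, 6), round(v2, 6))  # 1.81 4.25
```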

Self Assessment Exercises 3.7

1. Use Euler's method with h = 0.1 to approximate y(3.2) where y(x) is the solution of

y'' - x y' + y = 3x, y(3) = 0, y'(3) = -2

2. Use the Improved Euler method with h = 0.1 to obtain an approximation to y(1.2) if y(x) is the solution of

y'' - x y' + x^2 y = 2, y(1) = 3, y'(1) = -1

Chapter 4

Solution of Systems of Linear Equations

4.1 Introduction

Consider the general system of n linear equations in the n variables x1, x2, ..., xn,

a11 x1 + a12 x2 + ... + a1n xn = b1

a21 x1 + a22 x2 + ... + a2n xn = b2

.

.

an1 x1 + an2 x2 + ... + ann xn = bn

where aij is the coefficient of xj in equation i and bi is the right-hand side of equation i. These equations can be written in matrix form as

Ax = b

Where

A = [aij]

Is the coefficient matrix,

x = [x1, x2, ..., xn]T

Is the unknown vector, and

b = [b1, b2, …,bn]T

Is the right-hand side vector.

If the coefficient matrix A is non-singular, that is det(A) ≠ 0, then its inverse A-1 exists and the matrix equation can be premultiplied by A-1 to give the unique solution

x = A-1 b

However, if A is singular (det(A) = 0) then the equations have either infinitely many solutions or no solution. Although we are interested in solving equations that have a unique solution, we have to guard against the other possibilities.

It is not practical to solve the equations by finding A-1, as it takes about four times the computational effort to find A-1 than it does to solve the equations. Similarly, Cramer's rule

xi = det(Ai) / det(A), i = 1, 2, ..., n,

where Ai is the matrix A with column i replaced by b, is impractical due to the high computational cost of calculating a determinant.

There are essentially two classes of methods for solving systems of linear equations, direct methods and iterative methods. Direct methods have the advantage that the amount of computation is fixed, and if exact arithmetic were used they would produce the exact answer. For iterative methods, the amount of computation depends on the work per iteration, the rate of convergence, and the required accuracy of the solution. Generally, iterative methods are more suited to large sparse systems where most of the elements of A are zero, although in some cases there are extremely efficient direct methods, such as the tridiagonal algorithm, which exploit the structure of the coefficient matrix A.

4.2 Gaussian Elimination

The central idea of Gaussian elimination is to change the system of linear equations Ax = b into an equivalent system Ux = y whose coefficient matrix U is upper triangular (all elements below the diagonal equal to zero). Such a system of equations is easily solved. Because our interest is in using a computer to solve the linear equations we must develop a systematic method.

Only the following operations may be used in Gaussian elimination:

1. Equation j can be multiplied by a constant λ and subtracted from equation i, and the resulting equation used in place of equation i. This operation will be denoted by Ei - λEj -> Ei.

2. Equations i and j can be swapped, and this will be denoted by Ei <-> Ej.

Clearly these operations will not change the solutions to the system of equations.

To illustrate the method of Gaussian elimination we consider the following system of equations:

2x1 + 3x2 - x3 = 5

4x1 - 3x2 + x3 = 4

2x1 + 6x2 - 4x3 = 1

Rather than write out the equations in full at each step, we record the coefficients and RHS's in the augmented matrix [A|b]. The augmented matrix for the above equations is

2 3 -1 5

4 -3 1 4

2 6 -4 1

The first step is to eliminate x1 from the second and third equations by subtracting a multiple of the first equation from them. This is accomplished by the operations E2 - 2E1 -> E2 and E3 - E1 -> E3.
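The full elimination-plus-back-substitution process on this system can be sketched as:

```python
# Basic Gaussian elimination (no pivoting) with back-substitution,
# applied to the example system above.
def gaussian_elimination(A, b):
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):                 # forward elimination
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier for Ei - m*Ek -> Ei
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                          # back-substitution
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

x = gaussian_elimination([[2.0, 3.0, -1.0], [4.0, -3.0, 1.0], [2.0, 6.0, -4.0]],
                         [5.0, 4.0, 1.0])
# x is approximately [1.5, 5/3, 3]
```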

TABLE 1. A comparison of numerical solutions of 2x^2 + bx - 10 = 0 with the exact solution. A box has been drawn around the digits that are incorrect.

b          QBasic          Turbo Pascal           Exact

-1.1       2.527915E+00    2.5279147787E+00       2.52791477868E+00
           -1.977915E+00   -1.9779147787E+00      -1.97791477868E+00

-11        6.294362E+00    6.2943617197E+00       6.29436171969E+00
           -7.943618E-01   -7.9436171969E-01      -7.94361719689E-01

-110       5.509076E+01    5.5090759323E+01       5.50907593226E+01
           -9.075928E-02   -9.0759322620E-02      -9.07593226428E-02

-1100      5.500091E+02    5.5000909076E+02       5.50009090759E+02
           -9.094238E-03   -9.0907583944E-03      -9.09075883292E-03

-11000     5.500001E+03    5.5000009091E+03       5.50000090909E+03
           -9.1765625E-04  -9.09090421E-04        -9.09090758828E-04

-110000    5.500000E+04    5.5000000091E+04       5.50000000909E+04
           0.000000E+00    -9.0897083282E-05      -9.09090907588E-05

-1100000   5.500000E+05    5.5000000001E+05       5.50000000009E+05
           0.000000E+00    -9.059060059E-06       -9.09090909076E-06

-11000000  5.500000E+06    5.5000000000E+06       5.50000000000E+06
           0.000000E+00    0.0000000000E+00       -9.09090909091E-07

drawn around the incorrect digits in the computed solutions. In both cases the larger root has been found to machine precision, but the number of correct digits in the second root diminishes as the magnitude of the coefficient of the linear term is increased. Due to the higher precision arithmetic, the Turbo Pascal result is more accurate than the corresponding QBasic result.

Obviously something is going wrong with the calculation of the second root. We will see later what this is, and see how a simple modification to the algorithm eliminates the problem.

1.1 Computational efficiency

The computational efficiency of algorithms can be compared by counting the number of multiplications, divisions, additions and subtractions required for each algorithm (i.e. by comparing their operation count).

As an example, consider the following algorithms for evaluating the polynomial

p(x) = a0 + a1 x + a2 x^2 + ... + an x^n

In each case it is assumed that the coefficients a0, a1, ..., an and x are given.

sum = a(0)

For i = 1 to n

sum = sum + a(i) * x^i

End For

Operation count: n +'s, n ×'s, n exponentiations.
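The usual cheaper alternative, presumably the one this comparison is building toward, is Horner's nested form, which needs only n multiplications and n additions and no exponentiations. A sketch of both, under that assumption:

```python
# Naive evaluation (n additions, n multiplications, n exponentiations)
# versus Horner's nested form (n additions, n multiplications).
def poly_naive(a, x):
    return sum(a[i] * x**i for i in range(len(a)))

def poly_horner(a, x):
    result = 0.0
    for coeff in reversed(a):   # p(x) = (...(an*x + a(n-1))*x + ...)*x + a0
        result = result * x + coeff
    return result

a = [1.0, -3.0, 0.0, 2.0]       # p(x) = 1 - 3x + 2x^3
print(poly_naive(a, 2.0))       # 11.0
print(poly_horner(a, 2.0))      # 11.0
```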

Chapter 1

Errors in Numerical Computation

1.1 Introduction

This course is concerned with the use of computers to "solve" mathematical problems. We use an algorithm, a well defined sequence of operations that leads to the approximate solution of the problem. In some cases the algorithm may give the exact solution if exact arithmetic is used, but as computers do not use exact arithmetic (except for integers) the final result will usually be approximate. We will compare different algorithms and their implementation with regard to efficiency and accuracy, and discuss how the error in the final result can be estimated.

Consider the numerical solution of the quadratic equation

a x^2 + b x + c = 0

We know the solution is

x = (-b ± sqrt(b^2 - 4ac)) / (2a)

and that there are 3 possibilities: 2 distinct real roots, 1 repeated root, or 2 complex roots. Of course we can determine which case applies from the sign of the discriminant b^2 - 4ac.

If we are only concerned with the real roots, then we could use the following algorithm:
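The algorithm listing itself did not survive extraction; a minimal sketch of the straightforward approach it presumably takes (test the discriminant, then apply the quadratic formula directly):

```python
# Direct real-root algorithm for a*x^2 + b*x + c = 0: branch on the
# sign of the discriminant, then use the quadratic formula as written.
import math

def real_roots(a, b, c):
    d = b * b - 4.0 * a * c          # discriminant
    if d < 0.0:
        return []                    # complex roots: no real solutions
    if d == 0.0:
        return [-b / (2.0 * a)]      # one repeated root
    r = math.sqrt(d)
    return [(-b + r) / (2.0 * a), (-b - r) / (2.0 * a)]

roots = real_roots(2.0, -1.1, -10.0)  # first row of Table 1
```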

The solutions given by this algorithm when coded in QBasic and Turbo Pascal and the exact solutions are given in Table 1 for a number of quadratic equations. In standard precision, which was used to produce the results, QBasic has about 7 decimal digit accuracy and Turbo Pascal has about 11 decimal digit accuracy. A box has been

SECOND ORDER TAYLOR METHOD

Enter X0,Y0 1,4

Enter step-length h .5

Enter number of steps 4

Enter output frequency 1

X Y EXACT ERROR

1.000 4.000000 4.000000 0.0000000

1.500 4.703704 4.701562 -.0021415

2.000 5.468272 5.464102 -.0041699

2.500 6.280657 6.274917 -.0057392

3.000 7.129893 7.123106 -.0067868
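The run above can be sketched directly. Differentiating y' = y/(y - x) gives y'' = (y^2 - 2xy)/(y - x)^3, and the second order Taylor step is yn+1 = yn + h y'n + (h^2/2) y''n:

```python
# Second order Taylor method for y' = y/(y - x), using the
# differentiated ODE y'' = (y^2 - 2xy)/(y - x)^3.
def taylor2(x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        yp = y / (y - x)                          # y'
        ypp = (y * y - 2 * x * y) / (y - x) ** 3  # y''
        y = y + h * yp + 0.5 * h * h * ypp
        x = x + h
    return x, y

x2, y2 = taylor2(1.0, 4.0, 0.5, 2)
print(round(y2, 6))  # 5.468272, as in the output above
```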

Self Assessment Exercises 3.4

1. Using the second order Taylor method with h = 0.1, calculate the approximate value of the solution of

y' = x^2 + y^2, y(3) = -1,

at x = 3.2.

2. Find an estimate of the solution of the IVP

at x = 1 using three steps of the second order Taylor method.

3.5 Runge-Kutta methods

Runge-Kutta methods are all of the form

yn+1 = yn + h φ(xn, yn; h)

where φ(x, y; h) is an estimate of the "average" slope of the solution on the interval [xn, xn+1]. For methods of order 2

φ = g1 k1 + g2 k2, k1 = f(xn, yn), k2 = f(xn + ah, yn + bh k1)

The constants g1, g2, a and b are chosen so that the local truncation error

Tn+1 = y(xn+1) - y(xn) - h φ(xn, y(xn); h)

is O(h^3). Equating the coefficients of h^0, h^1 and h^2 to zero leads to the equations

g1 + g2 = 1, g2 a = 1/2, g2 b = 1/2.

Thus there are infinitely many Runge-Kutta methods of order 2, depending on the choice of g2. One common method, known as the Improved Euler method, corresponds to g2 = 1/2 (so g1 = 1/2 and a = b = 1), and is given by

yn+1 = yn + (h/2)[f(xn, yn) + f(xn + h, yn + h f(xn, yn))]

However, it is usually written in the form

k1 = f(xn, yn)

k2 = f(xn + h, yn + h k1)

yn+1 = yn + (h/2)(k1 + k2)

Note that yn + h k1 is the Euler estimate of yn+1. Denoting this estimate by y*n+1, the value k2 is the slope of the solution of the ODE that passes through the point (xn+1, y*n+1). Now

yn+1 = yn + h (k1 + k2)/2

so yn+1 is obtained by moving along a line whose slope is the average of the slopes k1 and k2 (see Figure 3.3).

FIGURE 3.3. An illustration of the Improved Euler method.

Another common second order Runge-Kutta method, the Modified Euler method, corresponds to g2 = 1. This scheme is usually written

k1 = f(xn, yn)

yn+1 = yn + h f(xn + h/2, yn + (h/2) k1)

Worked Example 3.5.1 Find an approximation to the solution of the IVP

y' = y / (y - x), y(1) = 4

at x = 2 using the Improved Euler method with steplength h = 0.5.

The Improved Euler method is

k1 = f(xn, yn)

k2 = f(xn + h, yn + h k1)

yn+1 = yn + (h/2)(k1 + k2)

and since x0 = 1, y0 = 4, h = 0.5 and f(x,y) = y / (y - x) we get, for n = 0,

k1 = f(1, 4) = 1.333333

k2 = f(1.5, 4.666667) = 1.473684

y1 = 4 + 0.25(1.333333 + 1.473684) = 4.701754

x1 = x0 + h = 1 + 0.5 = 1.5,

and for n = 1

k1 = f(1.5, 4.701754) = 1.468493

k2 = f(2, 5.436001) = 1.582072

y2 = 4.701754 + 0.25(1.468493 + 1.582072) = 5.464395

The approximate value of y(2) is y2 = 5.464395.

The results of using the Improved Euler method with h = 0.5 to solve this IVP follow.

IMPROVED EULER METHOD

Enter Xo, Yo 1,4

Enter step-length h .5

Enter number of steps 4

Enter output frequency 1

X Y EXACT ERROR

1.000 4.000000 4.000000 0.000000

1.500 4.701755 4.701562 -0.000193

2.000 5.464396 5.464102 -0.000294

2.500 6.275253 6.274917 -0.000336

3.000 7.123449 7.123106 -0.000343
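The run above can be reproduced with a short Improved Euler loop:

```python
# Improved Euler (second order Runge-Kutta) method for
# y' = y/(y - x), y(1) = 4, with steplength h = 0.5.
def improved_euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        y = y + 0.5 * h * (k1 + k2)
        x = x + h
    return x, y

f = lambda x, y: y / (y - x)
_, y2 = improved_euler(f, 1.0, 4.0, 0.5, 2)
print(round(y2, 6))  # 5.464396
```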


1.4 Floating point form

Any base b number x can be put into the normalized floating point form

x = ±0.d1 d2 d3 ... × b^E, with d1 ≠ 0,

where each digit di satisfies 0 ≤ di ≤ b - 1 and E is an integer exponent.

The number of digits that can be stored in a computer is limited. Suppose only p digits can be used for the mantissa. If the number of digits is larger than p, the mantissa must be chopped (truncated) or rounded back to p digits. If the mantissa has fewer than p digits it is filled with 0's. The floating-point representation of x is denoted by fl(x).

The exponent E is stored as a base b integer and is restricted to some interval m ≤ E ≤ M. The smallest positive number is

S = (0.10...0)_b × b^m = b^(m-1),

and the largest is

L = (0.cc...c)_b × b^M (c = b - 1 is the largest digit).

There is a finite number of possible floating-point or machine numbers, the number depending on b, p, m and M. The positive machine numbers for b = 2, p = 3, m = -1 and M = 1 are shown on the real number line in Figure 1.1.

[Figure omitted.]

FIGURE 1.1. The positive binary numbers with three digit mantissas and exponents -1, 0 or 1.

If rounding is used, fl(x) is the closest machine number to x, but if chopping is used fl(x) is the next machine number in the direction of 0 (see Figure 1.2). In both cases the error of conversion is called the round-off error. Round-off errors are also incurred whenever floating point numbers are used in a calculation.

[Figure omitted.]

FIGURE 1.2. Conversion of a number to a floating point number by rounding or chopping.

If the result of any operation has a magnitude less than S, underflow is said to have occurred (the number is usually set to 0), but if it has a magnitude larger than L, overflow occurs and the program generally "bombs" when an attempt is made to use the result.

The values of b, p, m, M, the type of conversion (rounding or chopping) and the handling of underflow and overflow are all compiler dependent. For example, the IEEE standard for single precision reals uses 32 bit (4 byte) words with b = 2, p = 24, rounding, and an exponent range equivalent to binary exponents from -126 to 127.

1.5 Errors

It is often impossible or impractical to find the exact solution to a mathematical problem. For example, it may be impossible to write down a solution exactly, and impractical to solve a large number of linear equations exactly.

The error of computed value is defined as

Error = exact value – approximate value.

Note that the sign is retained; it tells us whether the approximation is too large or too small. The relative error is defined as

Relative Error = Error / exact value.

The following notation will be used:

xT = true or exact value

xA = approximation to xT

Error = xT - xA

Rel. Error = (xT - xA) / xT

Usually we will not be able to calculate the error because the exact value is not known, but will only be able to show that |Error| ≤ B for some number B, which is called an error bound. A relative error bound is defined similarly.

The approximate value xA has m significant digits with respect to xT if the (m+1)th digit of the error, counting from the first non-zero digit of xT, is < 5 and the previous digits of the error are all zero (the maximum such m is understood).

▌Worked Example 1.5.1 Calculate the error, relative error and number of significant digits of 125/46 as an approximation to e.

Error = e - 125/46

= 2.718282 - 2.717391

= 0.00089

Rel. Error = Error / e

= 0.00089 / 2.718282

= 0.00033

xT = 2.71|8281...

Error = 0.00|089

The vertical line is placed as far to the right as possible while keeping the next digit of the error < 5, and the number of digits of xT to the left of this line is the number of significant digits. Thus 125/46 = 2.717391... has 3 significant digits.▐
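The arithmetic of the worked example can be checked with a few lines:

```python
# Error and relative error of 125/46 as an approximation to e,
# as in Worked Example 1.5.1.
import math

x_true = math.e
x_approx = 125 / 46
error = x_true - x_approx
rel_error = error / x_true
print(round(error, 5))      # 0.00089
print(round(rel_error, 5))  # 0.00033
```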

Self Assessment Exercise 1.5

1. Find the error, relative error and number of significant digits of 1.73 as an approximation of √3.

2. Find the error, relative error and number of significant digits of 9.87 as an approximation of π^2.



1.6 Sources of errors

The numerical solution of a practical problem is usually affected by several different errors. The errors arise from:

1) Formulation of the mathematical model - usually some approximations or assumptions are made.

2) Blunders - normally associated with hand calculations, but can also be made in a program (test with cases where the answer is known).

3) Errors in data - any measured quantity will not be exact.

4) Round-off errors - result from using a finite number of digits in calculations.

5) Truncation errors - mathematical errors resulting from approximating an infinite process by a finite one (making the solution possible!).

Some of these errors are unavoidable; the aim of numerical analysis is to design algorithms that minimize their effect.

1.7 Loss of significance errors

Whenever two numbers of almost equal value are subtracted, significant digits are lost.

For example ;

13.13579

-13.13621

-00.00042

Lost significant digits

The problem of loss of significant digits can often be overcome by restructuring the calculation. Finding the roots of a quadratic is a good example. The roots are given by

x = (-b ± sqrt(b^2 - 4ac)) / (2a)

and if |4ac| << b^2 then sqrt(b^2 - 4ac) ≈ |b|, so for one of the two roots the numerator is the difference of two nearly equal numbers and significant digits are lost. Rationalizing the numerator gives the alternative form x = 2c / (-b ∓ sqrt(b^2 - 4ac)), which avoids the subtraction for that root.▌

Self Assessment Exercise 1.7

1. Suggest an alternative form of the given function that avoids loss of significance errors for small values of x.

2. Modify the given function so that loss of significance errors are avoided when it is evaluated for large x.

3. Express the given function in a form that overcomes the loss of significance error when it is evaluated for small values of x.

1.8 Errors in computer arithmetic

We have seen that a small error, called a round-off error, is usually incurred whenever a number is input into the computer or an arithmetic operation is performed. These errors cannot be avoided, but they can be reduced by using higher precision arithmetic.

The effect of round-off errors can often be reduced by careful design of the algorithm. As an example, consider a quantity that is used in a calculation inside a loop. Its value can be calculated in the following ways:


The Classical fourth order Runge-Kutta method is the most frequently used Runge-Kutta method:

k1 = f(xn, yn)

k2 = f(xn + h/2, yn + (h/2) k1)

k3 = f(xn + h/2, yn + (h/2) k2)

k4 = f(xn + h, yn + h k3)

yn+1 = yn + (h/6)(k1 + 2 k2 + 2 k3 + k4)

The solution of

y' = y / (y - x), y(1) = 4

by the Classical Runge-Kutta method follows. As expected for a method of order 4, the errors are extremely small.

CLASSICAL RUNGE-KUTTA METHOD

Enter X0,Y0 1,4

Enter step-length h .5

Enter number of steps 4

Enter output frequency 1

X Y EXACT ERROR

1.000 4.000000 4.000000 0.0000000

1.500 4.701564 4.701562 -0.0000021

2.000 5.464105 5.464102 -0.0000035

2.500 6.274921 6.274917 -0.0000042

3.000 7.123110 7.123106 -0.0000044
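A single Classical Runge-Kutta step can be sketched as follows; it reproduces the first step of the run above:

```python
# One Classical fourth order Runge-Kutta step for y' = f(x, y).
def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda x, y: y / (y - x)
y1 = rk4_step(f, 1.0, 4.0, 0.5)
print(round(y1, 6))  # 4.701564, as in the output above
```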

Self Assessment Exercises 3.5

1. Using the Improved Euler method with h = 0.1, calculate the approximate value of the solution of

at x = 3.2.

2. Find an estimate of the solution of the IVP

at x = 1 using three steps of the Modified Euler method.

3. Find an approximation to the solution of the IVP

at x = 2 using the Classical fourth order Runge-Kutta method with steplength h = 1.

3.6 Stability

A numerical procedure is stable if an error introduced at some stage of the calculation does not grow as the calculation proceeds. Sometimes instability is inherent in the problem, so that no numerical scheme can be stable. It can be seen from the diagram on page 33 that the error will grow as it is propagated if the solution curves of the ODE diverge, and this is the case if ∂f/∂y > 0. Thus if ∂f/∂y > 0, any numerical method of solving the IVP is inherently unstable.

However, not all instabilities are inherent. For example, suppose the IVP

y' = 30 - 2x^2 y, y(0) = 0

is solved using the Classical (fourth order) Runge-Kutta method. The exact solution decreases for x > 1, but if h = 0.1 is used the numerical solution increases rapidly for x > 4.2. Reducing h to 0.05 delays the rapid increase until about x = 5.8 (see Figure 3.4).

To examine the stability of a numerical scheme to solve an IVP we consider the “test problem”

y′ = λy, y(0) = 1,

which has the solution y(x) = e^(λx). Solving this problem using Euler’s method gives

yn+1 = yn + hf(xn, yn)

= yn + hλyn

= (1 + λh)yn.

As a consequence, if an error ε is introduced into yn, the resulting error in yn+1 will be (1 + λh)ε. Thus, for Euler’s method to be absolutely stable we require |1 + λh| ≤ 1, that is, −2 ≤ λh ≤ 0. If λ > 0 the error will grow, but since the relative error will not grow (the solution is also increasing) Euler’s method is said to be relatively stable. (If λ > 0 the ODE is inherently unstable.) If λh < −2 the error will oscillate with increasing amplitude, while the solution will decrease. In this case Euler’s method is unstable, and the numerical solution will bear no resemblance to the true solution.
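A minimal numerical check of the bound −2 ≤ λh ≤ 0, taking λ = −10 as an illustrative value:

```python
def euler(lmbda, h, steps, y0=1.0):
    # Euler's method applied to the test problem y' = lambda*y, y(0) = 1
    y = y0
    for _ in range(steps):
        y = (1 + lmbda * h) * y   # each step multiplies by the growth factor 1 + lambda*h
    return y

lmbda = -10.0
# lambda*h = -0.5 lies in [-2, 0]: stable, the iterates decay like the true solution
stable = euler(lmbda, 0.05, 40)
# lambda*h = -3 lies outside [-2, 0]: the iterates oscillate with growing magnitude
unstable = euler(lmbda, 0.3, 40)
print(stable, unstable)
```

With λh = −0.5 the iterates decay towards zero; with λh = −3 each step multiplies the value by −2, so the computed "solution" grows without bound.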

If the test problem is solved using the Improved Euler method, we get

k1 = f(xn, yn) = λyn

k2 = f(xn + h, yn + hk1) = λ(1 + λh)yn

so that

yn+1 = yn + (h/2)(k1 + k2) = (1 + λh + (λh)²/2)yn.

It turns out that all two-stage second order Runge-Kutta methods give the same result. (Try the Modified Euler method yourself!) They are absolutely stable if |1 + λh + (λh)²/2| ≤ 1, which holds if −2 ≤ λh ≤ 0; they are unstable if λh < −2. For λ > 0, Runge-Kutta and Taylor methods are relatively stable. In either case h should be small enough to ensure that the global truncation error remains small.
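The stability interval of the two-stage methods can be checked numerically from the growth factor R(z) = 1 + z + z²/2 with z = λh:

```python
# Growth factor of any two-stage, second-order Runge-Kutta method on y' = lambda*y:
# one step multiplies y_n by R(z) = 1 + z + z^2/2, where z = lambda*h.
def R(z):
    return 1 + z + z * z / 2

# |R(z)| <= 1 on the whole interval -2 <= z <= 0 (sampled here) ...
inside = all(abs(R(-2 + 0.001 * k)) <= 1 + 1e-12 for k in range(2001))
# ... but |R(z)| > 1 as soon as z drops below -2
outside = abs(R(-2.1)) > 1
print(inside, outside)
```

Both checks succeed: R(−2) = R(0) = 1 exactly, with |R| < 1 strictly inside the interval.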

If λ is negative but has a very large magnitude, the stability requirement may mean that an excessively small h must be used. There are special methods with very large intervals of stability that are suitable for such problems.

Returning to the earlier example,

y′ = 30 − 2x²y, y(0) = 0,

we have λ = ∂f/∂y = −2x², so a fourth order Runge-Kutta method will be stable if h is chosen so that λh stays inside the interval of absolute stability (approximately −2.785 < λh < 0):

2x²h < 2.785,

that is, if

h < 1.39/x².

To successfully compute the solution up to x = 6 requires

h < 1.39/36 ≈ 0.039.

Alternatively, for given h a fourth order Runge-Kutta method will become unstable if

2x²h > 2.785,

or

x > √(1.39/h).

For h = 0.1 it is unstable for x > 3.74, and if h = 0.05 unstable for x > 5.27. The instability becomes apparent for values of x slightly larger than these, but this is because the errors are initially very small and, although increasing in size at each step, cannot be seen until x is about 0.5 larger.

3.7 First order systems

For simplicity we consider only a system of two first order ODEs with their initial conditions, but what follows may be easily extended to any number of ODEs.

Consider the system

u′ = f(x, u, v), u(x0) = u0,

v′ = g(x, u, v), v(x0) = v0.

Using the notation y = (u, v)ᵀ, f = (f, g)ᵀ, this system may be written as

y′ = f(x, y), y(x0) = y0.

Any method of solving the IVP y′ = f(x, y), y(x0) = y0 can be used to solve the system by replacing the scalars y and f by the vectors y and f. For example, Euler’s method becomes

yn+1 = yn + hf(xn, yn).

Writing this vector equation in component form gives

un+1 = un + hf(xn, un, vn), vn+1 = vn + hg(xn, un, vn).
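The component form can be sketched as follows; the test system u′ = v, v′ = −u with u(0) = 1, v(0) = 0 (solution u = cos x) is a hypothetical example used only to check the implementation:

```python
import math

def euler_system(f, g, x0, u0, v0, h, steps):
    # Vector Euler's method for the system u' = f(x,u,v), v' = g(x,u,v)
    x, u, v = x0, u0, v0
    for _ in range(steps):
        # update both components from the OLD values (tuple assignment avoids
        # accidentally using the already-updated u when computing v)
        u, v = u + h * f(x, u, v), v + h * g(x, u, v)
        x += h
    return u, v

# Hypothetical test system: u' = v, v' = -u, whose solution is u = cos x
u, v = euler_system(lambda x, u, v: v, lambda x, u, v: -u, 0.0, 1.0, 0.0, 0.001, 1000)
print(u, math.cos(1.0))
```

The simultaneous update is the one detail worth stressing: each component must be advanced using the values from the start of the step.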

Self Assessment Exercises 2.3 (page 17)

1. x3 = 1.89432 .

2. For the first rearrangement |g′(r)| ≈ 14.1, so the iteration will not converge.

For the second rearrangement |g′(r)| ≈ 0.11, so the iteration will converge (rapidly).

Self Assessment Exercises 2.4 (page 22)

1. x1 = 2.076558, x2 = 1.910507 .

2. x1 = – 1.205264, x2 = – 1.205250 (iteration terminates); r ≈ – 1.20525 .

Self Assessment Exercises 2.5 (page 24)

1. x2 = 1.790125, x3 = 1.889121 .

2. x2 = –1.205012, x3 = –1.205239 (iteration terminates); r ≈ –1.205 .

Self Assessment Exercises 3.2 (page 31)

1. y1 = 0 , y(3.2) ≈ y2 = 0.916 .

2. y1 = 1.666667, y2 = 1.444444, y(1) ≈ y3 = 1.321637 .

Self Assessment Exercises 3.4 (page 37)

1. y1 = – 0.07 , y(3.2) ≈ y2 = 0.915760 .

2. y1 = 1.722222, y2 = 1.548208, y(1) ≈ y3 = 1.456614 .

Self Assessment Exercises 3.5 (page 41)

1. y1 = – 0.0195 , y(3.2) ≈ y2 = 1.017344 .

2. y1 = 1.722222, y2 = 1.546896, y(1) ≈ y3 = 1.453652 .

3. y(2) ≈ y1 = 5.464166 .

Self Assessment Exercises 3.7 (page 46)

1. u1 = – 0.2, v1 = – 1.7, y(3.2) ≈ u2 = – 0.37 .

2. u1 = 2.89, v1 = – 1.24145, y(1.2) ≈ u2 = 2.75154 .

Self Assessment Exercises 4.3 (page 54)

1. E2 – E1 → E2, E3 + E1 → E3; E3 + E2 → E3; x1 = –, x2 = –, x3 = –.

2. E2 + 2 E1 → E2, E3 – E1 → E3; E2 ↔ E3; x1 = 3, x2 = 2, x3 = – 1.

Self Assessment Exercises 4.4 (page 56)

1.E1↔E2; E2 –E1→E2, E3 –E1→E3; E2 ↔E3; E3 + E2→E3; x1 = , x2 =, x3 =.

2.E1↔E2; E2 + E1→E2, E3 + E1→E3; E3 + E2→E3; x1 =3, x2 =2, x3 = –1.

Self Assessment Exercises 4.5 (page 57)

1.E1↔E3; E2 – 2E1→E2, E3 – E1→E3; E2↔E3; E3 – E2→E3; x1 = , x2 = – 2, x3 = 2.

2.E1↔E2; E2 – E1→E2, E3 + 2E1→E3; E3 + E2→E3; x1 = 3, x2 = 2, x3 = –1.

Self Assessment Exercises 4.6 (page 60)

1. L = U =

2. L = U =

Self Assessment Exercises 4.7 (page 61)

1.||x||∞ = 4, ||A||∞ = 13.

2. ||Ax||∞ = 40 < 4 × 13.

Self Assessment Exercises 4.9 (page 63)

1.||x –||∞ ∕ ||x||∞ ≤ 4.2 .

2.||x –||∞ ∕ ||x||∞ ≤ 13.6 .

Self Assessment Exercises 4.10 (page 65)

1. x1 = 1, x2 = 1 (the exact solution, since the equations were solved exactly).

2. x1 = 19.837398, x2 = –11.869919 (the exact solution is x1 = 20, x2 = – 12).

Self Assessment Exercises 4.11 (page 67)

1.x = , x =, x =; x = , x =, x =. (Of course, decimals could have been used.)

2.x = 1, x = 0, x =; x =, x =, x = –.

Self Assessment Exercises 4.12 (page 70)

1.Will converge.

2. May converge or diverge, but the equations can be reordered to guarantee convergence by moving the first equation after the others.

Self Assessment Exercises 4.13 (page 72)

1.x =, x =, x =; x =, x =, x =. (Of course, decimals could have been used.)

2.x = 1, x =, x = –; x =, x =, x = –.

Self Assessment Exercises 5.2 (page 77)

1. p2(x) = 1 – 2x + 4x².

2.p3 (x) = –(x – 3)(x – 4)(x– 6) + (x – 2)(x – 4)(x – 6)

– (x – 2)(x – 3)(x – 6) + (x – 2)(x – 3)(x – 4);

Error = √5 – p3(5) ≈ 0.0022 .

Self Assessment Exercises 5.3 (page 79)

1. p2(x) = 1 – 2x + 4x².

2. p3(x) = 1.000000 + (x – 0){– 0.786939 + (x – 0.5)[0.309636 + (x – 1)(– 0.073232)]}; p3(1.4) ≈ 0.251518, Error ≈ – 0.0049 .

Self Assessment Exercises 5.4 (page 82)

1. 0.021

2. Using the closest data points 0.9, 1.5, 2.2 and 3.0 gives f(1.7) ≈ p3(1.7) = 0.42731 . Including the data point 0.4 leads to the estimate of 0.00012 for the error. (The data was generated from the function f(x) = e^(–x/2), so the exact value is 0.427415 and the actual error in the estimate is 0.00011 .)

Numerical Methods (MetNum) group:

1. Tatik Hariyanti M0107084

2. Triyono M0107086

Sir, I will just attach it to your email… because the graphs do not come through… thank you, sir.

Worked Example 5.2.1 Find the polynomial that interpolates the given function values.

Let

Li(x) = ∏_{j≠i} (x − xj) / (xi − xj), i = 0, 1, …, n.

Since n = 2, the interpolating polynomial is

p2(x) = f(x0)L0(x) + f(x1)L1(x) + f(x2)L2(x).
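The Lagrange form can be sketched in a few lines; as a check, the data points (0, 1), (1, 3), (2, 13) of Exercise 5.2.1 below are used:

```python
def lagrange_eval(xs, ys, x):
    # Evaluate the Lagrange interpolating polynomial
    #   p_n(x) = sum_i y_i * L_i(x),  L_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        total += yi * Li
    return total

xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 13.0]
# Reproduces the data at the nodes and interpolates in between
print([lagrange_eval(xs, ys, t) for t in (0.0, 1.0, 2.0, 0.5)])
```

At the nodes the basis property Li(xj) = δij guarantees the data values are reproduced exactly.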

Self Assessment Exercises 5.2

1. Find the polynomial that passes through the points (0,1), (1,3) and (2,13).

2. Find the polynomial that interpolates at the points x = 2, 3, 4 and 6, and find the error in using it to approximate the value at x = 5.

5.3. Newton’s divided difference formula

The Lagrange form of the interpolating polynomial requires a large number of arithmetic operations to evaluate, and it is not very easy to increase the degree of the polynomial by adding extra data points. We will see that Newton’s divided difference formula for the interpolating polynomial overcomes both of these difficulties.

78 Chapter 5 Approximation of function

We begin by defining the divided differences of the function f. The first divided difference of f is defined by

f[x0, x1] = (f(x1) − f(x0)) / (x1 − x0).

The second divided difference is

f[x0, x1, x2] = (f[x1, x2] − f[x0, x1]) / (x2 − x0),

and the third divided difference is

f[x0, x1, x2, x3] = (f[x1, x2, x3] − f[x0, x1, x2]) / (x3 − x0).

The kth divided difference is defined by

f[x0, x1, …, xk] = (f[x1, …, xk] − f[x0, …, xk−1]) / (xk − x0).

The divided differences are usually calculated and displayed in a divided difference table, as shown below (of course higher order differences are normally calculated; due to lack of space only the first four columns are given).

The first column of the table lists the points x0, x1, …, x5; the remaining columns hold the function values f(xi) and the first, second and third divided differences, each entry placed between the two entries from which it is computed.

The numerators of the divided differences are the differences of the adjacent values in the column to the left, and the denominators are the differences of the x values obtained by moving back along the diagonals.

Newton’s divided difference formula for the polynomial pn(x) that interpolates f at the points x0, x1, …, xn is

pn(x) = f[x0] + f[x0, x1](x − x0) + f[x0, x1, x2](x − x0)(x − x1) + … + f[x0, …, xn](x − x0)(x − x1)⋯(x − xn−1).


Note that the function value and the differences used in Newton’s divided difference formula are at the top of each column of the divided difference table. Also, the data in the difference table may be in any order. If it is required to increase the degree of the approximation, the new data values may simply be added to the bottom of the table and the divided differences calculated in the usual way. This ease of increasing the degree of the polynomial is an important advantage of Newton’s divided difference formula.

Newton’s divided difference formula is computationally much better than the Lagrange formula, but for efficiency it should be evaluated using the nested form

pn(x) = f[x0] + (x − x0){f[x0, x1] + (x − x1)[f[x0, x1, x2] + ⋯ + (x − xn−1)f[x0, …, xn]⋯]},

as this requires only n multiplications rather than the n(n + 1)/2 multiplications of the earlier form.

Worked Example 5.3.1 Use Newton’s divided difference formula to determine the polynomial that interpolates √x at x = 2, 3, 4 and 6.

The difference table is as follows :

xi f(xi) 1st dd 2nd dd 3rd dd

2 1.414214

0.317837

3 1.732051 -0.024944

0.267949 0.002636

4 2.000000 -0.014401

0.224745

6 2.449490

Here, for example,

f[x0, x1, x2] = (0.267949 − 0.317837) / (4 − 2) = −0.024944.

Thus

p3(x) = 1.414214 + 0.317837(x − 2) − 0.024944(x − 2)(x − 3) + 0.002636(x − 2)(x − 3)(x − 4).

The error of this approximation is shown in Figure 5.2. Note that the error is zero at the interpolation points.
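The table construction and the nested evaluation can be sketched as follows, using the data of this worked example:

```python
def divided_differences(xs, fs):
    # Returns the top edge of the divided difference table:
    # [f[x0], f[x0,x1], f[x0,x1,x2], ...], computed in place column by column
    coeffs = list(fs)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - k])
    return coeffs

def newton_eval(xs, coeffs, x):
    # Nested (Horner-like) evaluation: only n multiplications
    result = coeffs[-1]
    for c, xi in zip(reversed(coeffs[:-1]), reversed(xs[:-1])):
        result = result * (x - xi) + c
    return result

# Data of Worked Example 5.3.1: sqrt(x) at x = 2, 3, 4, 6
xs = [2.0, 3.0, 4.0, 6.0]
fs = [1.414214, 1.732051, 2.000000, 2.449490]
c = divided_differences(xs, fs)
print(c)                          # top edge of the table above
print(newton_eval(xs, c, 5.0))    # estimate of sqrt(5)
```

The computed coefficients match the top diagonal of the table (1.414214, 0.317837, −0.024944, 0.002636), and p3(5) reproduces the error of about 0.0022 quoted in the answers to Exercises 5.2.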

Self Assessment Exercises 5.3

1. Use Newton’s divided difference formula to find the polynomial that passes through the points (0,1), (1,3), and (2,13).

2. Use divided differences to find the polynomial that interpolates e^(–x) at x = 0, 0.5, 1 and 2. Use it to estimate e^(–1.4). What is the error of the approximation?


5.4 Error of the interpolating polynomial

Suppose that pn(x) is the polynomial of degree n that interpolates the function f at xi, i = 0, 1, …, n. If f and its first n + 1 derivatives are continuous on [a, b], then for x ∈ [a, b] the error is

f(x) − pn(x) = (f⁽ⁿ⁺¹⁾(ξ) / (n + 1)!) (x − x0)(x − x1)⋯(x − xn),

where ξ lies in the smallest interval containing x, x0, x1, …, xn. As the actual value of ξ is not known, this formula can normally only be used to give an error bound. However, it does confirm that the error is zero at the interpolation points.

Note that for equispaced points the function

(x − x0)(x − x1)⋯(x − xn)

gets quite large in magnitude near the ends of the interval that contains the points x0, x1, …, xn (see Figure 5.3). Therefore, if the data points are equispaced, avoid using the interpolating polynomial near the ends of the interval.


2.6 Regula-falsi method

The regula-falsi method, or method of false position, is a “hybrid” of the bisection and secant methods. We can consider it as the secant method modified in the following manner. The initial estimates x0 and x1 must bracket the root r (f(x0)f(x1) < 0), and once x2 is calculated in the usual way, x3 is calculated from x2 and whichever of x0 or x1 lies on the opposite side of r. In this way the new estimate is always calculated from values that bracket the root. In the situation shown in the diagram for the secant method, x3 would be calculated from x0 and x2.

The advantage of this method is that, like the bisection method, it will converge for any choice of the initial estimates as long as they bracket the solution. However, once the interval becomes small enough that f″(x) does not change sign on it, one end of the interval remains fixed and the length of the interval does not tend to 0 as n → ∞ (see the diagram for the secant method: one end of the interval would remain fixed). In this situation the convergence reduces to being linear, and may even be slower than for the bisection method.
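A minimal sketch of the method, tried on the equation x − 2 sin x = 0 used later in Section 2.8 (the bracket [1, 2.5] and the tolerances are assumptions):

```python
import math

def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    # Method of false position: the new estimate is the secant intersection,
    # and the bracket [a, b] is updated so that it always contains the root.
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "initial estimates must bracket the root"
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # secant intersection with the x axis
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc   # root lies in [a, c]
        else:
            a, fa = c, fc   # root lies in [c, b]
    return c

# Positive root of x - 2 sin x = 0, near 1.8955
root = regula_falsi(lambda x: x - 2 * math.sin(x), 1.0, 2.5)
print(root)
```

Since f″ > 0 on this interval, the right endpoint stays fixed and the convergence is linear, as described above, but the method still reaches the root reliably.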

2.7 Multiple roots

Newton’s method and the secant method both converge more slowly to a root that is not simple, and the bisection method and regula-falsi method cannot be used for multiple roots unless they have odd multiplicity. The stopping tests of Newton’s method and the bisection method are not valid for multiple roots.

However, Newton’s method can be modified to retain its quadratic convergence for multiple roots. If the multiplicity m of the root is known, then the iteration

xn+1 = xn − m f(xn)/f′(xn), n = 0, 1, …

will converge quadratically to the root for a sufficiently accurate initial estimate. Alternatively, the function f(x)/f′(x) has the same roots as f(x), but they are all simple. Applying Newton’s method to this function leads to the iteration

xn+1 = xn − f(xn)f′(xn) / ([f′(xn)]² − f(xn)f″(xn)), n = 0, 1, ….

Note that this iteration will also work for simple roots, but it is computationally more expensive because of the additional function value f″(xn) that must be calculated.
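A sketch of the multiplicity-m iteration on the hypothetical double root of f(x) = (x − 1)², compared with the ordinary method:

```python
def newton(f, df, x0, m=1, steps=20):
    # Modified Newton iteration x_{n+1} = x_n - m f(x_n)/f'(x_n);
    # m = 1 recovers the ordinary method.
    x = x0
    for _ in range(steps):
        if df(x) == 0:
            break   # already at (or numerically at) the root
        x = x - m * f(x) / df(x)
    return x

# Hypothetical example: f(x) = (x - 1)^2 has a double root at x = 1
f = lambda x: (x - 1) ** 2
df = lambda x: 2 * (x - 1)

plain = newton(f, df, 2.0, m=1, steps=10)     # linear: the error only halves each step
modified = newton(f, df, 2.0, m=2, steps=10)  # m = 2: lands on the root immediately
print(plain, modified)
```

With m = 2 the very first step takes x0 = 2 exactly to the root, while the unmodified iteration still carries an error of about 2⁻¹⁰ after ten steps.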

2.8 Some numerical results

Results for solving equations using the methods described are presented in this section. These results were obtained using simple programs written in QBasic. In order to minimize the effects of round-off errors, double precision arithmetic was used (giving about 14 significant decimal digits of accuracy).

First the programs were used to find the positive solution of the equation x − 2 sin x = 0. The root is simple, and has the value 1.89549426703… All methods succeeded in finding the root to the required accuracy. Newton’s method required the fewest iterations (but not the fewest function evaluations). The secant method achieved the required accuracy with one less iteration than the regula-falsi method. The bisection method required the most iterations.

Sir, my group’s numerical methods assignment has been sent to your email!

Please check it soon!

Thank you very much……

Sir, I have attached it to your email.

Sir, I will just attach it to your email, because some of the formulas are not readable.

I have attached my assignment to your email. Thank you.

Group: 1. Budi Agung Prasojo M0105020

2. Ismiyati D M0104037

3. Romadhona M0104060

MA011 November Examination, 1995 (continued)

(b) Calculate and . Use the inequality

to find a bound for the relative error in xA if = 0.55 and the relative error in bA is 0.4.

(c) Why are pivoting strategies used in the computer solution of systems of equations?

(d) Use the following LU factorization of the matrix A

A = LU =

to obtain the solution of the system Ax = b where b = (4,-3,1)T.

(e) Perform two iterations of the Gauss-Seidel method to find an approximate solution of the system Ax = b where

A = and b =

starting with the initial guess x(0) = (0,0,0).

(f) What is the difference between the Gauss-Seidel method and Jacobi method?

(9+3+1+7+8+1 = 29) .

Answers to 1995 Examination

1.(a) Error = 0.062, Rel. Error = 0.0017, 2 significant digits.

(b) f(x) = sin2x/(1+cos x), f(0.0500) = 0.00125.

(c) |Error| ≤ 0.0264.

(d) P2(x) = P2(1.5) = 1.36667,

Error ≈ -0.05833. More efficient, easier to add additional data points.

2. (a) (i) x1 = 0.78535, x2 = 0.89990, x3 = 0.95157.

(ii) Quadratic. In this case convergence is not quadratic because the root is a multiple root.

(b) x2 = 0.37912, x3 = 0.50026, x4 = 0.54425.

(c) Advantage: Newton’s method converges faster. Disadvantage: Need one more function evaluation per step (f’(x)).

(d) x1 = 1.09975, x2 = 1.12222, x3 = 1.10980, x4 = 1.11656. (The other rearrangement gives an iteration that does not converge.)

3. (a) y1 = 0.3125, y2 = 0.87357.

(b) No need to calculate derivatives for Runge-Kutta methods.

(c) (i) Put giving

y(1) = 1

z(1) = 3

(ii) Using h = 0.5 gives = 3.375, = 8.0756, y2 = 9.39694, z2 = 19.31864.

(iii) Error = 1.47619. Since the global error of a second order method is O(h²), halving the steplength would reduce the error by a factor of about 4.

The error of the fourth order method would be much smaller than that of the second order method.

4.(a)

.

(b)

(c) To reduce the accumulation of round-off errors.

(d) .

(e) , ,

(f) The Gauss-Seidel method uses the most recent values of the iterates.

RMIT

NOVEMBER EXAMINATION

MA011 – NUMERICAL METHOD A

Date : Thursday 3, November, 1994

Time: 9.30 a.m. to 11.30 a.m.

Time allowed: 2 hours

5 pages

1. Candidates should attempt three (3) questions only.

2. Each question is allotted 29 marks in the manner shown at the end of the question.

3.Electronic calculators and published RMIT Lecture Notes in Mathematics “Tables” may be taken into the examination room.

November Examination, 1994

1. (a) Find the error and relative error if is used as an approximation to

(b) For x close to the evaluation of

will result in a loss of significant digits.

Find a suitable rearrangement of f(x) to avoid this loss, and then evaluate f(0.750) using three-digit floating point arithmetic with rounding.

(Hint: )

(c) Given that

find a bound for the error in f(x, y) if the rounded values 8.1 and –0.91 are used for x and y respectively.

(d) Use Newton’s divided difference formula to find a polynomial of degree 3 that passes through the last 4 data points in the following table:

Evaluate it at x = 0.6 and estimate the error in the approximation by using all of the available data.

(5+7+6+11 = 29)


Sir, I did not get the assignment.


Sir, I have attached my assignment to your email. Thank you.



Now if K < 1, Kⁿ|x0 − r| → 0 and therefore xn → r as n → ∞.

Thus the fixed point iteration xn+1 = g(xn) converges to r for any x0 ∈ (r − δ, r + δ) if |g′(x)| ≤ K < 1 on this interval.

Worked Example 2.3.1 Two rearrangements of the equation e^(−x) = sin x are x = −log(sin x) and x = arcsin(e^(−x)). Determine whether the fixed point iteration will converge to the first positive solution r ≈ 0.6 for these rearrangements. If the iteration does converge, calculate x1, x2, x3 (use x0 = 0.6).

If g(x) = −log(sin x) then g′(x) = −cos x / sin x = −cot x, giving g′(r) ≈ g′(0.6) = −1.46. Since |g′(r)| is clearly larger than 1, the fixed point iteration xn+1 = g(xn) will not converge.

If g(x) = arcsin(e^(−x)) then g′(x) = −e^(−x)/√(1 − e^(−2x)), so g′(r) ≈ −0.66. As |g′(x)| is clearly less than 1 in a neighbourhood of the root, the fixed point iteration will converge.

Employing the iteration xn+1 = arcsin(e^(−xn)) with x0 = 0.6 gives

x1 = arcsin(e^(−0.6)) = 0.580942

x2 = arcsin(e^(−0.580942)) = 0.593627

x3 = arcsin(e^(−0.593627)) = 0.585145

Note that the convergence is rather slow (as expected from the large value of |g′(r)|).
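These iterates can be checked in a few lines (taking the convergent rearrangement to be x = arcsin(e^(−x)), consistent with the values listed above):

```python
import math

def fixed_point(g, x0, steps):
    # Simple iteration x_{n+1} = g(x_n); returns [x0, x1, ..., x_steps]
    xs = [x0]
    for _ in range(steps):
        xs.append(g(xs[-1]))
    return xs

# Convergent rearrangement from Worked Example 2.3.1
g = lambda x: math.asin(math.exp(-x))
xs = fixed_point(g, 0.6, 3)
print(xs[1:])   # x1, x2, x3
```

The iterates oscillate about the root, as expected from the negative value of g′(r).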

Self Assessment Exercises 2.3

1. Show that the iteration xn+1 = 2 sin xn will converge to the first positive root (r ≈ 1.9) of x − 2 sin x = 0. Using x0 = 1.9, find x1, x2, x3.

2. Two rearrangements of the equation e^(−x) = 2 cos x are x = −ln(2 cos x) and x = arccos(e^(−x)/2). Is either of these suitable for finding the root r ≈ 1.5 using a fixed point iteration?

Rate of convergence

Suppose x0, x1, x2, … is a sequence of approximations to a root r. If xn → r as n → ∞ we say the sequence is convergent.

The rate of convergence is important. If

lim(n→∞) |xn+1 − r| / |xn − r|^p = K

for positive constants K and p, we say that the sequence converges to r with order of convergence p. The larger the value of p and the smaller the value of K, the faster the convergence. If p = 1 and K < 1 the sequence is said to converge linearly, and if p = 2 the convergence is quadratic.

For the fixed point iteration xn+1 = g(xn) it was shown that xn+1 − r = g′(ξn)(xn − r), where ξn lies between xn and r. Thus

lim(n→∞) |xn+1 − r| / |xn − r| = lim(n→∞) |g′(ξn)| = |g′(r)|,

so simple iteration converges linearly if 0 < |g′(r)| < 1. (If g′(r) = 0 the order of convergence will be 2 or greater.)

The bisection method (or more precisely its error bound) converges linearly with K = 1/2.

Termination of the iteration

We would like to stop the iteration when xn is sufficiently close to r, that is, when |xn − r| ≤ ε for some prescribed tolerance ε. One criterion that is sometimes used is to stop when |f(xn)| ≤ ε, but this does not ensure that |xn − r| ≤ ε, as can be seen from the earlier diagrams. Another criterion that is often used is to stop when |xn − xn−1| ≤ ε. This also does not always ensure that |xn − r| ≤ ε.

For simple iteration

r − xn ≈ (g′(r) / (1 − g′(r)))(xn − xn−1),

so if g′(r) is close to 1 then |r − xn| will be much larger than |xn − xn−1|. However, g′(r) may be approximated by λn = (xn − xn−1) / (xn−1 − xn−2), and the iteration terminated when |λn(xn − xn−1) / (1 − λn)| ≤ ε.

These points are illustrated by the iteration xn+1 = 0.4 + 0.6…, which gave the results shown in Table 1. The iterates converge slowly to the root r = 1. The slow convergence is to be expected, since g′(1) = 0.9. The value of |xn − xn−1| is very much smaller than the error in xn. Note that λn tends to g′(1), and as it approaches this value the error estimate (last column) approaches the actual error.

No matter what criterion is used to terminate an iteration, the total number of iterations should be limited in case of program error, slow divergence, too small a tolerance, etc.
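The termination criterion above can be sketched as follows; g(x) = cos x, with fixed point r ≈ 0.739085, is a hypothetical test case not taken from the notes:

```python
import math

def iterate_with_estimate(g, x0, eps, max_iter=100):
    # Simple iteration terminated using the error estimate
    #   |lambda_n (x_n - x_{n-1}) / (1 - lambda_n)| <= eps,
    # where lambda_n = (x_n - x_{n-1}) / (x_{n-1} - x_{n-2}) approximates g'(r).
    xs = [x0, g(x0)]
    for _ in range(max_iter):   # always cap the number of iterations
        xs.append(g(xs[-1]))
        d1, d0 = xs[-1] - xs[-2], xs[-2] - xs[-3]
        lam = d1 / d0
        if abs(lam * d1 / (1 - lam)) <= eps:
            return xs[-1], len(xs) - 1
    return xs[-1], len(xs) - 1

# g(x) = cos x has the fixed point r = 0.739085..., with g'(r) about -0.67
x, n = iterate_with_estimate(math.cos, 1.0, 1e-6)
print(x, n)
```

Because the estimate multiplies the last step by λn/(1 − λn), it stops at roughly the prescribed accuracy rather than merely when successive iterates agree.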

Sir, is there a lecture tomorrow?

And should the assignments be handed in?

Thank you very much……

I was impressed reading this, like a powerful story, about the Numerical Methods Lecture: Math Motivation. I will post a link to this blog post on my coupon site. I will come back to read more.