1) Use Taylor expansions to show that (f(t+h) - f(t-h))/(2h) = f'(t) + O(h^2)
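A sketch of the Taylor-expansion argument for 1), expanding about t:

```latex
\begin{aligned}
f(t+h) &= f(t) + h f'(t) + \tfrac{h^2}{2} f''(t) + \tfrac{h^3}{6} f'''(t) + O(h^4),\\
f(t-h) &= f(t) - h f'(t) + \tfrac{h^2}{2} f''(t) - \tfrac{h^3}{6} f'''(t) + O(h^4).
\end{aligned}
```

Subtracting cancels the even-order terms, leaving f(t+h) - f(t-h) = 2h f'(t) + (h^3/3) f'''(t) + O(h^5); dividing by 2h gives

```latex
\frac{f(t+h)-f(t-h)}{2h} = f'(t) + \frac{h^2}{6} f'''(t) + O(h^4) = f'(t) + O(h^2).
```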
2) Let f(x) = e^x and t = 0. Plot the error (using Matlab) in the approximation of f'(t) by the finite difference from 1) for h = logspace(-1,-16,100). Explain your results.
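The attached .m file is not shown here; a minimal Python sketch of the same experiment (NumPy in place of Matlab, same h grid) might look like:

```python
import numpy as np

# Central-difference approximation of f'(t) for f(x) = e^x at t = 0,
# where the exact derivative is f'(0) = e^0 = 1.
f = np.exp
t = 0.0
exact = 1.0

# Same grid as Matlab's logspace(-1, -16, 100): 100 values from 1e-1 down to 1e-16.
h = np.logspace(-1, -16, 100)
approx = (f(t + h) - f(t - h)) / (2 * h)
err = np.abs(approx - exact)

# The error first decreases like h^2 (truncation error dominates), then grows
# roughly like eps/h once floating-point cancellation dominates for tiny h.
h_best = h[np.argmin(err)]
print(f"best h ~ {h_best:.2e}, smallest error ~ {err.min():.2e}")
```

Plotting err against h on log-log axes (`loglog(h, err)` in Matlab) shows the two regimes: a line of slope 2 for moderate h and a noisy line of slope -1 for very small h, with the minimum near h ~ 1e-5.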
3) Use calculus to show that the optimal h to use in the approximation in 2) - the one that gives the smallest error - is h_opt approximately C E^(1/3), where E ≈ 2.2 x 10^-16 is the machine precision. Does this result agree with the data you plotted in 2)?
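For 3), one standard balancing argument (the attached solution may argue it differently) models the total error as truncation plus rounding and minimizes over h:

```latex
\begin{aligned}
\text{err}(h) &\approx \underbrace{\frac{h^2}{6}\,|f'''(t)|}_{\text{truncation}}
  \;+\; \underbrace{\frac{\varepsilon\,|f(t)|}{h}}_{\text{rounding}},\\
0 = \frac{d\,\text{err}}{dh} &= \frac{h}{3}\,|f'''(t)| - \frac{\varepsilon\,|f(t)|}{h^2}
\quad\Longrightarrow\quad
h_{\mathrm{opt}} = \left(\frac{3\,|f(t)|}{|f'''(t)|}\right)^{1/3} \varepsilon^{1/3}
 = C\,\varepsilon^{1/3}.
\end{aligned}
```

For f(x) = e^x at t = 0, |f(0)| = |f'''(0)| = 1, so C = 3^(1/3) ≈ 1.44 and h_opt ≈ 1.44 x (2.2 x 10^-16)^(1/3) ≈ 9 x 10^-6, which is consistent with the location of the minimum of the plotted error in 2).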
I have posted the solution as a PDF, with the accompanying .m file attached.
This solution provides step-by-step explanations for how to solve the given calculus problems.