This can take a really long time if there is a sharp jump in the function somewhere, and for some functions, the error may never get small enough. To prevent that, there is a limit on the number of times the trapezoids are divided in half before the routine gives up.
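

To make the idea concrete, here is a rough sketch of that halve-and-compare loop written in the same BASIC style we've been using. This is not techBASIC's actual implementation of Math.trap; the name myTrap and its parameter list are made up for illustration, and it calls the same globally defined function f the rest of the chapter uses.


   FUNCTION myTrap (x0 AS DOUBLE, x1 AS DOUBLE, tol AS DOUBLE, maxSteps) AS DOUBLE
   DIM oldAns AS DOUBLE, newAns AS DOUBLE, sum AS DOUBLE, h AS DOUBLE
   ! Start with one big trapezoid that uses just the two endpoints.
   h = x1 - x0
   n = 1
   oldAns = h*(f(x0) + f(x1))/2
   newAns = oldAns
   done = 0
   FOR count = 1 TO maxSteps
     IF done = 0 THEN
       ! Refine the estimate by adding the midpoint of each current trapezoid.
       sum = 0
       FOR i = 1 TO n
         sum = sum + f(x0 + (i - 0.5)*h)
       NEXT
       newAns = oldAns/2 + h*sum/2
       ! Stop refining once two successive estimates agree to within tol.
       IF ABS(newAns - oldAns) < tol THEN done = 1
       oldAns = newAns
       h = h/2
       n = 2*n
     END IF
   NEXT
   ! If the loop runs out of steps, this sketch just returns the last
   ! estimate; the real routine gives up instead.
   myTrap = newAns
   END FUNCTION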


There are three ways we can deal with this problem. The first is to change the allowed error. Looking at the online help system, we see that Math.trap actually has five parameters, not just the three that we used. The fourth parameter is the allowed error, which defaults to 1e-6, giving about 5 digits of precision. Drop that to three digits of precision by changing the line


   ans = Math.trap(FUNCTION f, x0, x1)


to


   ans = Math.trap(FUNCTION f, x0, x1, 1e-4)


and try again. Now we get an answer, and we can also see the problem. There is a sharp discontinuity at x=1.


Well, this shouldn't really stop us from evaluating the integral; it will just cause the evaluation to take longer. Change the error back to 1e-6 and try increasing the number of allowed steps from the default of 16 to 25.


   ans = Math.trap(FUNCTION f, x0, x1, 1e-6, 25)


The program takes a lot longer to run (each additional step doubles the number of trapezoids, so allowing 25 steps can mean millions of them), but it does give an answer to five significant digits.


So does this do our calculus homework for us? Not really. We'll still need to show our work, but it does help make sure we got the right answer. When we solve the integral analytically, we get 41/3, or about 13.6667. This agrees nicely with the numeric result of 13.6666, giving us confidence that the analytic solution is correct.


Problems With Regular Functions


Let's try another one. This time we'll integrate the absolute value of the sine function, |sin(x)|, from 0 to 4π, by changing the function to


   FUNCTION f (x AS DOUBLE) AS DOUBLE
   f = ABS(SIN(x))
   END FUNCTION


and at the top of the program, the second limit of integration from


   x1 = 4


to


   x1 = 4*Math.PI


Run the program. OK, that's not good. Clearly the answer is not zero, but that's what we get to five significant digits. Remember how trapezoidal integration works, though? Taking the first, last, and middle points, f(x) is zero in each case. Divide the trapezoids in half and try again, and every new sample point is still a place where f(x) is zero, so two successive estimates agree perfectly, at zero, and the routine stops. That's why a program like the one we're using is nice. We can see from inspection that the answer is nonsense, and a little knowledge of how trapezoidal integration works tells us why. If it had only evaluated one more step, the integration routine would have found some nonzero points and continued on, giving a good answer.
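

For reference, we can work out by hand what the answer should be: the integral of sin(x) from 0 to π is -cos(π) + cos(0) = 2, and |sin(x)| just repeats that arch every π, so integrating from 0 to 4π covers four arches and gives 4 × 2 = 8.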


To solve this problem, we'll drop back to a less sophisticated version of the trapezoid rule, one that doesn't look at the difference between successive estimates to decide when to stop. Instead, it simply divides the trapezoids in half a specific number of times and returns that result. Math.trapF does this.
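

Before we change the program, here is what that fixed-step idea looks like in the same sketchy style as the earlier example. Again, myTrapF is a made-up name for illustration, not the real routine; it is just the previous sketch with the convergence test removed.


   FUNCTION myTrapF (x0 AS DOUBLE, x1 AS DOUBLE, nSteps) AS DOUBLE
   DIM est AS DOUBLE, sum AS DOUBLE, h AS DOUBLE
   h = x1 - x0
   n = 1
   est = h*(f(x0) + f(x1))/2
   ! Always perform exactly nSteps halvings, whether or not the
   ! estimate has stopped changing.
   FOR count = 1 TO nSteps
     sum = 0
     FOR i = 1 TO n
       sum = sum + f(x0 + (i - 0.5)*h)
     NEXT
     est = est/2 + h*sum/2
     h = h/2
     n = 2*n
   NEXT
   myTrapF = est
   END FUNCTION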


Change the line that evaluates the integral from


   ans = Math.trap(FUNCTION f, x0, x1)


to


   ans = Math.trapF(FUNCTION f, x0, x1)


Run the program again, and the answer is 7.999599, much closer to the correct value of 8. Of course, we may want to change the number of iterations manually to get an idea of the accuracy. Checking the help files, we see that Math.trapF defaults to 10 steps. Change that to 11 by modifying the line that does the integration to read


   ans = Math.trapF(FUNCTION f, x0, x1, 11)


and the answer changes to 7.999899, showing that the answer is good to four digits.


Polynomial Integration Methods


If you've looked at numeric integration before, you might wonder why we've used trapezoidal integration up to this point. The famous Simpson's Rule gets results a lot faster by fitting a parabola to three successive points rather than using a straight line, like the trapezoid rule does. The reason is that the trapezoid rule is about as foolproof as a numeric integration method can be. Other than using a function that isn't defined at some point in the area we want to integrate, the only two problems it faces are the two we've already looked at, and a simple plot of the function helps avoid nonsense answers or helps show us why a function might not converge to the accuracy we want quickly enough.


When polynomial fits are applied, there are new problems to deal with. Discontinuities in the first derivative cause the same sorts of problems in Simpson's Rule that we already saw with the trapezoid rule when the function itself was discontinuous. In other words, the function has to be "sufficiently smooth" to use the polynomial methods of integration. For Simpson's Rule, that generally means a continuous first derivative; the ABS(SIN(x)) function from the last section is a good example of one that fails the test, since its slope jumps at every multiple of π where the arches meet. For Romberg integration, which can apply even higher orders of polynomials, the second derivative should be continuous for polynomials of order 3, the third derivative for polynomials of order 4, and so on.


But if the function is smooth enough, Romberg integration can be blazingly fast.


Let’s take a look at the first function we evaluated. Change the function being integrated back to


   FUNCTION f (x AS DOUBLE) AS DOUBLE
   f = x*x*x*x*LOG(x + SQR(x*x + 1))
   END FUNCTION


and the second limit of integration to


   x1 = 4


then run the program. The answer of 389.961823 pops up pretty fast. Now change the line that evaluates the integral to


   ans = Math.romb(FUNCTION f, x0, x1)


and try again. This time the answer is 389.961761, and it still appears just about as quickly.


Now let's try something different. Rather than evaluating the integral just once, let's evaluate it 250 times and record how long that takes. Replace the line that integrates the function with


   time& = System.ticks
   FOR i = 1 TO 250
     ans = Math.romb(FUNCTION f, x0, x1)
   NEXT
   PRINT "Elapsed time = "; System.ticks - time&


The program still seems to run pretty quickly, but there is a noticeable pause. Now change from Romberg integration to the trapezoid rule by changing romb to trap, and try again. The difference is dramatic—my times showed that Romberg integration was 16 times faster than trapezoid integration!


Some people might be tempted to use Romberg integration as the default method of integration, but remember that the function has to be sufficiently smooth. That’s why I recommend using the trapezoid rule when an integral only has to be evaluated once, or at most a few times. If you are writing a program that will be evaluating an integral many times, or evaluating the integral in a time-sensitive part of the program, it’s worth using Romberg integration—but only after making sure the function is smooth enough.


Back at the start of this section, we talked about Simpson's Rule. So where is it? It turns out that Simpson's Rule is a subset of Romberg integration. Taking a look at the help file, we see that Romberg integration has three optional parameters. The first two are the desired accuracy and maximum number of steps, just like for the trapezoid rule. The last gives the polynomial order, which defaults to 5. Setting the order to 2 tells Romberg integration to use a second-order polynomial, which is what Simpson's Rule uses.
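

For example, to try a Simpson's Rule evaluation of our function, we have to supply the first two optional parameters in order to reach the third. The 1e-6 and 16 used here are simply the trapezoid rule's defaults; I'm assuming similar values are reasonable for Math.romb.


   ans = Math.romb(FUNCTION f, x0, x1, 1e-6, 16, 2)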


It's worth playing with the order when using Romberg integration many times on a specific function. The default order of 5 is a good starting point, but lower orders may give an answer just as fast for some functions, and higher orders may work better for others. Don't automatically pump the order up, though. Eventually round-off errors start to accumulate and accuracy suffers, and if you specify an order that is too large you may get the dreaded "Runtime error: Desired accuracy was not achieved in the allowed number of steps."
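

If you want to experiment, one approach is to wrap the timing loop from earlier around a sweep of polynomial orders. This is only a sketch: the accuracy and step limit are the same assumed values as above, and polyOrder is just a loop variable, not something Math.romb knows about.


   ! Time 250 evaluations of the integral at each polynomial order.
   ! An order that is too high may stop the program with the runtime
   ! error mentioned above.
   FOR polyOrder = 2 TO 8
     time& = System.ticks
     FOR i = 1 TO 250
       ans = Math.romb(FUNCTION f, x0, x1, 1e-6, 16, polyOrder)
     NEXT
     PRINT "Order "; polyOrder; ": "; ans; " in "; System.ticks - time&; " ticks"
   NEXT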


Further Reading


Quick Introduction to Numeric Integration on Wikipedia

Numerical Recipes 3rd Edition: The Art of Scientific Computing