Brief Solutions

Here are solutions to the problems based on the work of the Macalester team of Danny Kaplan and Stan Wagon, with additional notes regarding better solutions when we know of them. In most cases there are better solutions than ours to the 10-digit problem (it is arguable, of course, but our approach was the right one, or close to it, for #1, #2, #8, and #9). One hundred digits are given in all cases but one; thanks to F. Bornemann and to J. Boersma's team for the extra digits in the cases where we did not have them. Mathematica code giving a complete solution is included in all cases except #2, which, while not difficult, requires the most programming. As far as our own solutions go, we eventually had at least 14 digits for all the problems, with only 14 or 15 occurring on #3, #5, #7, and #10. Using our work and that of others, all problems except #3 yield 100 digits with no trouble (when one uses the proper algorithm, which can be hard to find); Problem 3 is more difficult.

On the other hand, it is tricky to come up with a good measure of difficulty, as different people have different expertise. For our team, the last four problems to be solved were #8, #6, #10, and #5, in that order (Trefethen called #5 "exceptionally difficult"; three of the teams scoring 99 missed the tenth digit on #5). The other six we could handle without much research or code, but the last four required library work or heavy programming.

As for getting 500 digits in reasonable time: that can be done for #2, #4, and #8 by our methods, and for #5, #6, #7, and #10 by the methods of others (probably for #1 and #9 as well). But we must not get too caught up in the search for digits; the real point of this exercise is the search for algorithms. Note that all the problems can be solved using machine precision, except #2, for which it is probably impossible to avoid high precision.

1  Divide and Conquer

The Oscillatory method option of Mathematica's NIntegrate solves this, once the integral is transformed using Lambert's W function (known in Mathematica as ProductLog) to the integral of cos(t) W′(t) for t from 0 to ∞. This method evaluates the integral between zeros and extrapolates the partial sums to the limit. One can get more digits by summing an initial segment explicitly and then using the method just described on the tail. Our technique can be used to obtain 60 digits. Bornemann, using fancier techniques to accelerate convergence, obtained 100 digits. Boersma and Jansen used contour integration to transform the integral into two integrals that can be computed numerically with ease. The first 100 digits:
0.3233674316777787613993700879521704466510466257254696616810364434317903372106728944319303704641024514
Boersma and Jansen's transformation produces two integrals; the second takes some effort, but I am able to get 300 digits out of this approach.

2  A Sea of Mirrors

Program the geometry and start with the initial conditions to, say, 50 digits. Then Mathematica's automatic precision control will show how precision is lost (about 3 digits per reflection). Starting with 50 digits is enough to get the answer to 10 digits, and we can get 500 digits easily. The answer is:
0.9952629194433541608903118094267216210294669227341543498032088580729861796228306320991749818976188760.
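Our Mathematica program is not reproduced here, but the same computation can be sketched in Python using the stdlib decimal module for the high precision. The problem data (photon starting at (1/2, 1/10) heading due east at speed 1, mirrors of radius 1/3 centered at the integer lattice points, elapsed time 10) are restated from the original problem, not from this document.

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 70   # working precision; about 3 digits are lost per reflection

R2 = Decimal(1) / 9      # squared mirror radius (r = 1/3)
TINY = Decimal("1e-20")  # guard against re-hitting the mirror just left

def first_hit(px, py, dx, dy, limit):
    """Earliest t in (TINY, limit] at which the ray p + t*d meets a mirror, or None."""
    # integer mirror centers near the swept segment (a coarse float box is fine)
    x0, x1 = sorted((float(px), float(px + dx * limit)))
    y0, y1 = sorted((float(py), float(py + dy * limit)))
    best = None
    for i in range(math.floor(x0) - 1, math.ceil(x1) + 2):
        for j in range(math.floor(y0) - 1, math.ceil(y1) + 2):
            mx, my = px - i, py - j
            b = 2 * (dx * mx + dy * my)
            c0 = mx * mx + my * my - R2
            disc = b * b - 4 * c0
            if disc < 0:
                continue
            t = (-b - disc.sqrt()) / 2          # nearer root of the quadratic
            if TINY < t <= limit and (best is None or t < best[0]):
                best = (t, i, j)
    return best

px, py = Decimal("0.5"), Decimal("0.1")   # start at (1/2, 1/10)
dx, dy = Decimal(1), Decimal(0)           # heading due east at speed 1
remaining = Decimal(10)                   # time left = path length left

for _ in range(10000):                    # far more reflections than can occur
    hit = first_hit(px, py, dx, dy, remaining)
    if hit is None:
        break
    t, i, j = hit
    px, py = px + t * dx, py + t * dy
    remaining -= t
    # reflect the direction in the mirror's surface normal m = p - c
    mx, my = px - i, py - j
    k = 2 * (dx * mx + dy * my) / (mx * mx + my * my)
    dx, dy = dx - k * mx, dy - k * my

px, py = px + remaining * dx, py + remaining * dy
dist = (px * px + py * py).sqrt()
print(dist)   # ≈ 0.995262919443354
```

Raising the precision (getcontext().prec) and watching how many final digits stabilize reproduces the precision-loss observation made above.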

3  A Singular Problem

The value sought is the largest singular value of the matrix. Approximate the matrix by its upper-left 2048 × 2048 corner and use Mathematica's SingularValues function (which uses the QR algorithm) or Matlab's norm function. Confidence in the answer is obtained by extrapolating the sequence of values one gets from all upper-left submatrices whose dimension is a power of 2. We got 13 digits; Bornemann obtained 20. R. Strebel of ETH-Zurich obtained 40 digits, given below.
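In outline, here is a Python/NumPy sketch (not our Mathematica/Matlab code). The entries run 1, 1/2, 1/3, … along successive antidiagonals, so a_ij = 1/((i+j−1)(i+j−2)/2 + i). We truncate at dimensions 256, 512, 1024 (smaller than the 2048 used above, to keep it quick) and extrapolate, estimating the decay exponent from the data:

```python
import numpy as np

def sigma_max(n):
    """Largest singular value of the upper-left n x n truncation."""
    i = np.arange(1, n + 1)
    d = i[:, None] + i[None, :]                 # i + j
    A = 1.0 / ((d - 1) * (d - 2) // 2 + i[:, None])
    return np.linalg.norm(A, 2)                 # spectral norm = largest sigma

s = [sigma_max(n) for n in (256, 512, 1024)]
# estimate the geometric decay of the increments, then extrapolate the tail
r = (s[1] - s[0]) / (s[2] - s[1])
est = s[2] + (s[2] - s[1]) / (r - 1)
print(est)   # ≈ 1.2742241528
```

This is the same Richardson-style idea as the extrapolation over 13 powers of two described below, just with fewer levels.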
1.274224152821228188212340639725078099472
Here is the extrapolation method: we look at 13 powers of two and then extrapolate. There are ways (using higher precision) to learn that the extrapolated result is likely correct to 13 digits; in fact, it turns out to be correct when rounded to 15 digits.

4  How Low Can You Go?

It is not hard to see that the global minimum is at least as low as -3.24, and it then follows from the structure of the function (two of the six terms have easy-to-measure positive values; the sines are never less than –1 or, in one case, –0.84) that the true minimum lies inside the unit circle. We then used a root-finding method to find all the critical points and chose the correct one. The root-finding method was developed by Wagon in 1996 for the VisualDSolve project in differential equations; it uses data from a contour plot to find all solutions to a pair of equations. We can get 500 digits easily. The answer to 100 digits is
-0.3306868647475237280076113770898515657166482361476288217501293085496309199837888295035825488075283499.
Here is a solution that finds all the critical points using the FindAllCrossings function just alluded to. We find all critical points (there are 2720 in the unit square), keeping any for which the function value is under -3.2. We used a collection of 64 subsquares of the 2×2 square around the origin to save memory, and by increasing the resolution and watching the results we gained evidence for the completeness of the count.

We include the complete code for FindAllCrossings2D (from Wagon's book, Mathematica in Action, chap. 12), which finds all solutions to two equations in an interval. The resolution is controlled by the PlotPoints option, which sets the resolution of the contour plot used to get started. We did not attempt to prove that a certain resolution was enough for the problem at hand, but we did examine images of the crossing contours, and that convinced us that we had all the roots. We call the function with a high resolution setting (160, chosen so that the number of roots is stable; we used higher settings too, and it appears certain that we have found all the critical points). From the surviving critical points we get the global minimum, and it is then easy to zoom in to a more precise value.

Here is an alternative solution (code is for Mathematica 4.2) using a combinatorial search technique that we learned about while working on Problem 5. This is not the ideal way, but it gets the answer very quickly; the contour-based root-finding given above is better, but the root-finder takes a little programming. The SimulatedAnnealing setting for Method also works for this problem. Our methods described here do not provide a proof of correctness; the Wolfram Research team developed a method using interval arithmetic that does provide such a proof.
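As a language-neutral sanity check (not the contour-based root-finder described above), here is a brute-force Python/NumPy sketch: a fine grid over [-1,1]² (fine enough to resolve the sin(70 sin x) and sin(sin(80y)) ripples), a handful of well-separated candidate basins, and repeated zooming. The grid sizes and the 0.05 separation radius are ad hoc choices.

```python
import numpy as np

def f(x, y):
    return (np.exp(np.sin(50*x)) + np.sin(60*np.exp(y)) + np.sin(70*np.sin(x))
            + np.sin(np.sin(80*y)) - np.sin(10*(x + y)) + (x*x + y*y)/4)

# coarse grid over [-1,1]^2 (the minimum is known to lie inside the unit circle)
g = np.linspace(-1, 1, 1201)
X, Y = np.meshgrid(g, g)
F = f(X, Y)

# keep the best few well-separated grid points as candidate basins
order = np.argsort(F, axis=None)[:2000]
cand = np.column_stack((X.ravel()[order], Y.ravel()[order]))
starts = []
while len(cand) and len(starts) < 10:
    p = cand[0]
    starts.append(p)
    cand = cand[np.hypot(cand[:, 0] - p[0], cand[:, 1] - p[1]) > 0.05]

# refine each candidate by repeatedly zooming a local grid around the best point
best = np.inf
for x0, y0 in starts:
    h = 0.002
    for _ in range(12):
        gx = np.linspace(x0 - h, x0 + h, 41)
        gy = np.linspace(y0 - h, y0 + h, 41)
        XX, YY = np.meshgrid(gx, gy)
        FF = f(XX, YY)
        k = np.unravel_index(np.argmin(FF), FF.shape)
        x0, y0, h = XX[k], YY[k], h / 4
    best = min(best, f(x0, y0))
print(best)   # ≈ -3.3068686474752372
```

Like the methods above, this provides no proof; it simply makes it unlikely that a deeper basin was missed.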

5  Lost in Hyperspace

The best algorithm we found for this (which is far from the best algorithm that exists!) is the differential evolution method, a type of genetic algorithm included in Mathematica's NMinimize function. Of course, one must first find an efficient way to compute the error for a given cubic. Several observations help get that quickly, the most important being that the maximum error occurs on the upper half of the unit circle.

Our code (for version 4.2 of Mathematica) produces 15 digits (but with no proof that they are correct). Note that the problem did not specify that p(z) was to have real coefficients. The Eindhoven team's solution (link given earlier) provides a proof. We did find another method that reduced the problem from a 4-dimensional optimization to a 1-dimensional optimization of the results of a 3-dimensional optimization; that technique was good enough to get us 15 digits.

But now I see that experts (Laurie, Bornemann, Jansen & Boersma) use the method of Chebyshev approximation, which is efficient and provides a proof of correctness. Jansen and Boersma obtained 100 digits (and their method yields 500 with no problem), which are:

0.21433523459045963946152640018474939611340728778951670807393498597098767607405459816947988478883053305903475633.

We tried to get more digits. For the best cubic we found, the maximum error is 0.21433 52345 90459 64277. It seemed likely that 16 or 17 of these digits were correct; it turns out that 16 are (17 if one rounds).

An earlier dimension-reduction approach of ours is interesting and did get us 15 digits. First one observes that, when one is close, the error curve has local maxima at 1.40, 2.26, and π, where the first two are approximate and the last is exact. Moreover, for a cubic a + bz + cz² + dz³, the error at π is a − b + c − d. So we can specify e, the error at π, use it to eliminate d, and look at the resulting 3-dimensional family of cubics in a, b, and c. We can try to find the cubic in this family for which the overall max-error is least. Then it is simply a matter of finding the value of e for which this abc-minimum value is least, and FindMinimum is up to the task on the 3-dimensional problem (and then on the 1-dimensional one).

This approach can be viewed as a coordinate transformation of the 4-dimensional problem. Simply using FindMinimum on the raw 4-dimensional problem did not work well. Trying to lower the dimensions directly (fixing d, say, and optimizing a, b, c) does not work. A possible reason for this difference is that as we change e a little bit the overall shape of the curve does not change much; this is analogous to using Bézier polynomials as a design tool. This means that the location of the other two bumps does not change much, and the seed for that search is good. We store the locations found by the abc problem and use them for the next value of e, thus adding efficiency to the 3-dimensional search.

The solution by Boersma and Jansen of the Eindhoven team is noteworthy: they found a lemma of Vidensky in a 1968 book by Smirnov and Lebedev [Functions of a Complex Variable, Constructive Theory, pp. 450-452] that reduces the problem in quite a simple way to six simultaneous equations solvable by standard root-finding. Their code easily yields 500 digits.

6  To Be Fair, It Must Be Biased

The return probability has the form 1 − 1/S(ε), where S(ε) = Σ_{n≥0} p(n, ε) and p(n, ε) is the probability of return to the origin after 2n steps; in terms of binomial coefficients, p(n, ε) = Binomial(2n, n) Σ_{k=0}^{n} Binomial(n, k)² (1/16 − ε²)^k (1/16)^{n−k}. The sum can be estimated by a partial sum or, for more digits, by an extrapolation technique for the tail. Then standard root-finding techniques tell when S(ε) = 2, that is, when the return probability is 1/2. The answer is
0.06191395447399094284817521647321217699963877499836207606146725885993101029759615845907105645752087861.
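A Python sketch of this computation follows. It assumes the reconstruction p(n, ε) = Binomial(2n, n) Σ_k Binomial(n, k)² (1/16 − ε²)^k (1/16)^{n−k} (step probabilities 1/4 ± ε east/west, 1/4 north/south); the three-term recurrence used for the terms follows from the Legendre-polynomial recurrence and is not derived here.

```python
def visit_sum(eps):
    """Expected number of visits to the origin: sum of p(n, eps) over n >= 0."""
    x = 1 - 16 * eps * eps           # p(n) = C(2n,n) sum_k C(n,k)^2 (x/16)^k (1/16)^(n-k)
    q0, q1 = 1.0, (1 + x) / 8        # p(0) and p(1); then a three-term recurrence
    total = q0 + q1
    n = 2
    while abs(q1) > 1e-17 * total:
        q2 = ((2*n - 1) ** 2 * (1 + x) * q1 / (8 * n * n)
              - (2*n - 1) * (2*n - 3) * (1 - x) ** 2 * q0 / (64 * n * n))
        total += q2
        q0, q1 = q1, q2
        n += 1
    return total

# return probability = 1 - 1/visit_sum; solve visit_sum(eps) = 2 by bisection
lo, hi = 0.05, 0.08
for _ in range(60):
    mid = (lo + hi) / 2
    if visit_sum(mid) > 2:           # the sum decreases as the bias grows
        lo = mid
    else:
        hi = mid
eps = (lo + hi) / 2
print(eps)   # ≈ 0.06191395447
```

The bracketing interval [0.05, 0.08] was chosen by inspection of the answer above; a production version would locate it by scanning.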

There is also a closed form for this sum in terms of elliptic functions; several people observed this. Moreover, there is a well-known connection between the relevant elliptic integral and the very-fast-to-compute arithmetic-geometric mean. Thus, as observed by Bornemann, it is faster and more elegant to use the AGM form, which gives the same result and yields 500 digits in under a second.

7  Inverting an Almost-Zero Matrix

The conjugate gradient method solves this in about 2000 iterations and is easy to program; Matlab has it built in, in a form that could solve the problem. The answer is 0.72507834626840. Our Mathematica code used compilation for speed. Bornemann obtained 100 digits, given below.
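Our compiled Mathematica code is not reproduced here; this Python/NumPy sketch implements the same conjugate-gradient idea. The matrix (restated from the problem, not from this document) has the first 20000 primes on the diagonal and ones in the positions with |i − j| = 1, 2, 4, …, 16384.

```python
import numpy as np

n = 20000
# first n primes by a sieve (300000 comfortably exceeds the 20000th prime)
lim = 300000
sieve = np.ones(lim, dtype=bool)
sieve[:2] = False
for q in range(2, int(lim ** 0.5) + 1):
    if sieve[q]:
        sieve[q*q::q] = False
diag = np.flatnonzero(sieve)[:n].astype(float)

offsets = [2 ** k for k in range(15)]        # 1, 2, 4, ..., 16384

def matvec(v):
    """A v for the sparse matrix: primes on the diagonal, ones at |i-j| = 2^k."""
    y = diag * v
    for d in offsets:
        y[:-d] += v[d:]
        y[d:] += v[:-d]
    return y

# conjugate gradient for A x = e_1; the (1,1) entry of A^{-1} is then x[0]
b = np.zeros(n); b[0] = 1.0
x = np.zeros(n); r = b.copy(); p = r.copy(); rs = r @ r
for _ in range(5000):
    Ap = matvec(p)
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if rs_new < 1e-20:                       # residual norm about 1e-10
        break
    p = r + (rs_new / rs) * p
    rs = rs_new
print(x[0])   # ≈ 0.7250783462684
```

Only a matrix-vector product is needed, which is why the method suits a matrix that is "almost zero."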
0.7250783462684011674686877192511609688691805944795089578781647692077731899945962835735923927864782020.
D. Laurie reports that even Jacobi's iterative method is able to solve this.

8  It Gets Hot, but When?

The standard technique of separation of variables and Fourier series works. Many texts explain these techniques for the case in which the boundary is held at 0 and an initial temperature is given in the interior, and for the case in which the boundary is nonconstant and the steady-state solution is sought. It is not hard to combine the two to solve the problem at hand. The answer is:
0.4240113870336883637974336685932564512477620906642747621971124959133101769575636922970724422944770112.
The following is a very terse solution, using the fact that the steady-state solution at the center is exactly 5/4 (I discovered this numerically, but it is a special case of a known family of series (I. J. Zucker, 1984; see Problem 10) and also follows from the symmetry of the problem). Only three terms of the series were used, which additional computation shows is adequate; using more terms yields 500 digits easily. By using the explicit partial sum in the code and simplifying the equation a bit, the final expression is nicely simple (essentially a ninth-degree polynomial equation). F. Bornemann observes that the series converging to 5/4 is a special case of a more general one that can be expressed in terms of the modular λ elliptic function (see #10).
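The Fourier-series code is not reproduced here; as a crude independent check (good for only three or four digits), here is an explicit finite-difference sketch in Python. The grid size and time step are ad hoc choices, well inside the explicit stability limit.

```python
import numpy as np

# u_t = u_xx + u_yy on [-1,1]^2, u = 0 at t = 0, one side held at 5, others at 0
m = 101                          # grid points per side
h = 2.0 / (m - 1)
dt = h * h / 5                   # explicit scheme is stable for dt < h^2/4
u = np.zeros((m, m))
u[-1, :] = 5.0                   # the hot side
mid = (m - 1) // 2

t, prev = 0.0, 0.0
while True:
    lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
           - 4 * u[1:-1, 1:-1]) / (h * h)
    u[1:-1, 1:-1] += dt * lap
    t += dt
    c = u[mid, mid]
    if c >= 1.0:
        # linear interpolation in time for the crossing of u = 1 at the center
        t_cross = t - dt * (c - 1.0) / (c - prev)
        break
    prev = c
print(t_cross)   # ≈ 0.424
```

Since the discrete steady state at the center also tends to 5/4, the crossing is guaranteed to occur; the O(h²) spatial error limits this check to about three digits.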

9  Surprise

The integral has a closed form in terms of the Meijer G function. Since routines to compute that function are included in Mathematica, it is easy to use a standard optimization method to find the answer. We could get 50 digits, but beyond that we ran into a problem with the G computation. Bornemann obtained 100 digits. They are:
0.7859336743503714545652439863275455829623954590618668175812318070989103971494123651167706337659944953.
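Without the Meijer G closed form, the maximum can still be located by direct numerical quadrature. In this Python sketch the substitution u = 2 − x is used, and the infinitely oscillatory tail near u = 0 is summed half-period by half-period; the cutoff at 2000 half-periods and the quadrature node counts are ad hoc choices good for only a few digits of the integral.

```python
import numpy as np

x200, w200 = np.polynomial.legendre.leggauss(200)
x6, w6 = np.polynomial.legendre.leggauss(6)

def I(alpha):
    """I(alpha) = (2 + sin 10a) * integral of (2-u)^a sin(a/u) for u in (0, 2]."""
    g = lambda u: (2 - u) ** alpha * np.sin(alpha / u)
    a0 = alpha / np.pi          # sin(a/u) has no zeros for u > a/pi
    total = (2 - a0) / 2 * np.sum(w200 * g((a0 + 2) / 2 + (2 - a0) / 2 * x200))
    # oscillatory part: one interval per half-period, summed directly
    k = np.arange(1, 2001)
    lo, hi = alpha / ((k + 1) * np.pi), alpha / (k * np.pi)
    mid, half = (lo + hi) / 2, (hi - lo) / 2
    total += np.sum(half[:, None] * w6 * g(mid[:, None] + half[:, None] * x6))
    return (2 + np.sin(10 * alpha)) * total

# coarse scan over [0, 5], then ternary-search refinement around the best point
grid = np.arange(0.01, 5.0, 0.01)
vals = [I(a) for a in grid]
best = grid[int(np.argmax(vals))]
lo, hi = best - 0.01, best + 0.01
for _ in range(80):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if I(m1) < I(m2):
        lo = m1
    else:
        hi = m2
alpha_star = (lo + hi) / 2
print(alpha_star)   # ≈ 0.7859
```

The coarse scan is needed because the factor 2 + sin(10α) makes I(α) multimodal on [0, 5].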

For the solution, we first discover the closed form, and then optimize it.

10 Think Outside the Box

We found that a very similar problem can be solved easily and agrees with the given problem to far more than the accuracy required; since the answer is only about 4 · 10⁻⁷, this is enough. The variation is to consider the region with no right boundary and ask for the probability that a particle hits the left end before hitting the top or bottom. This turns out to have the simple and exact answer (2/π) arctan(1/sinh(5π)). Doubling this, and doing some very rough estimation of the discrepancy between our variation and the given problem, leads to a slightly better arctangent formula, which gives the answer to the problem posed to 14 digits. The answer to 100 digits is:
0.0000003837587979251226103407133186204839100793005594072509569030022799173436606852743276500842845647269910.

So, given the mathematical work we did, one can easily get 14 digits in this way, or 13 digits from an even simpler version. Because the argument is small, one gets the same digits even if the arctangent is ignored. We were lucky, since our method would fail if the rectangle were closer to a square!

Bornemann showed that the exact answer to the given question is the series (4/π) Σ_{k≥0} (−1)^k sech((2k+1)·5π)/(2k+1), which is very similar to the steady-state series in #8. One term of this series gives 14 digits. Moreover, Bornemann observed that this series equals a series of arctangents whose first term is our arctangent approximation, and it is well known (Borwein & Borwein, Pi and the AGM, §3.2, exer. 12) that these series have a closed form in terms of the modular λ elliptic function. This formula is fast, taking only a small fraction of a second to get 500 digits.
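As a quick check of the 100-digit value above, here is the classical series for the probability that Brownian motion from the center of a 10 × 1 rectangle exits through one of the short ends, p = (4/π) Σ_{k≥0} (−1)^k sech((2k+1)·5π)/(2k+1), quoted from standard sources since the displayed formulas in this section are not reproduced. Its first term is already the (8/π)e^(−5π)-sized estimate discussed above.

```python
import math

# p = (4/pi) * sum over k >= 0 of (-1)^k sech((2k+1)*5*pi) / (2k+1);
# the terms decay like exp(-10*pi*k), so ten of them is already overkill
p = (4 / math.pi) * sum((-1) ** k / ((2 * k + 1) * math.cosh((2 * k + 1) * 5 * math.pi))
                        for k in range(10))
print(p)   # ≈ 3.8375879792512e-07
```

The second term is already of size 10⁻²¹, which is why a single term gives 14 digits, exactly as noted above.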