Isn't this just a rehash of the late 19th/early 20th century intuitionists vs. formalists debate? Even though Gödel had shown that the ultimate goal of the formalist program is unattainable, the formalists still mostly won that debate by being able to produce more meaningful results. The radical intuitionists, with whom the author seems to sympathize, rejected not only the axiom of choice but even things like the law of the excluded middle (i.e. they did not accept that "A or not A" necessarily holds). That makes many proofs considerably harder to carry out, if not impossible.<p>Not surprisingly, the formalist program was also more fruitful for endeavours like computational logic and the theory of computation, which resonate with the formalist tenet that mathematics is just the manipulation of strings of abstract symbols according to specific rules.<p>This <a href="http://en.wikipedia.org/wiki/Brouwer-Hilbert_controversy" rel="nofollow">http://en.wikipedia.org/wiki/Brouwer-Hilbert_controversy</a> gives some of the context.
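As a concrete illustration (a sketch of mine, not from the article): Coq's core logic is intuitionistic, so the excluded middle is not provable there and has to be assumed as an axiom if you want classical reasoning. The names below are just for illustration.

    (* Excluded middle is not provable in Coq's intuitionistic core;
       here it is assumed as an axiom. *)
    Axiom excluded_middle : forall P : Prop, P \/ ~ P.

    (* With it, double-negation elimination follows; without it,
       this proof does not go through. *)
    Theorem double_negation_elim : forall P : Prop, ~ ~ P -> P.
    Proof.
      intros P H.
      destruct (excluded_middle P) as [HP | HnP].
      - exact HP.
      - exfalso. apply H. exact HnP.
    Qed.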
One can prove, ab initio, that 1+1=2 in less than ten lines in Coq.<p>Algorithms are _meaningless_ without accompanying proofs. There should be --- and is --- a continuum of formality. The 20th Century has been unique in recognizing the fallacies and contradictions of completely informal reasoning; why give up when we're finally ahead?
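For reference, here is a rough sketch of what such a Coq proof can look like, with the naturals and addition defined from scratch rather than pulled in from the standard library (the names are mine):

    Module Sketch.

    Inductive nat : Type :=
      | O : nat
      | S : nat -> nat.

    Fixpoint add (n m : nat) : nat :=
      match n with
      | O => m
      | S n' => S (add n' m)
      end.

    (* 1 + 1 = 2, spelled out as add (S O) (S O) = S (S O). *)
    Theorem one_plus_one : add (S O) (S O) = S (S O).
    Proof. reflexivity. Qed.

    End Sketch.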
It's hard to see how a computer would produce a proof of the Banach-Tarski "Paradox"
( <a href="http://news.ycombinator.com/item?id=411043" rel="nofollow">http://news.ycombinator.com/item?id=411043</a> )
without going through effectively the same steps as a human. It doesn't seem likely that it will be reduced to calculations, or to simple formula manipulations such as can be performed by Mathematica.<p>And in case you think that B-T is irrelevant, informal reasoning about infinities produced all sorts of results that caused people problems. Formal reasoning about infinities put calculus on a solid footing, so that it works wherever it's known to work. And I use infinities on a daily basis designing and implementing systems to assist with tasks similar to air traffic control and automated target tracking.<p>Computers can be used to explore and experiment with numbers, geometry and some structures, and will probably one day be able to assist with exploring more abstract concepts, but as things stand today, there's a great deal of mathematics that no one can see how to do with computers.<p>To say otherwise is to demonstrate ignorance of both fields.
I'd agree with the author that only logicians are interested in formal proofs.<p>But what the author describes as the Hilbert-Bourbaki method is simply what math <i>is</i>. Any mathematician will tell you that if you're not proving theorems then you're not really doing math.<p>Anyway, what good is a computer proof if it doesn't provide any insight? (Which is usually the case.) A good proof usually illuminates <i>why</i> some mathematical statement is true, and that is why computer proofs are usually looked upon with suspicion by many mathematicians.<p>After all, a computer simply churning out the answer (42?) doesn't advance the state of human knowledge at all.
The maniacal obsession with unassailable formal proof gives mathematics its credibility, above all else. There is a lesson here for all other endeavors that seek truth: Rigor uber alles. Take nothing for granted.
Here's the link to the Dec 2008 issue of the Notices of the American Mathematical Society that this article is in response to: <a href="http://www.ams.org/notices/200811/index.html" rel="nofollow">http://www.ams.org/notices/200811/index.html</a>
The question I see here is whether you are interested in Mathematics as an end, or in the Application of Mathematics as an end. Certainly, if all you need is an algorithm that works, don't write a formal proof. However, if your mathematical work is foundational for some future algorithm, you need to know that p(x) holds for every x, not just for a handful of x that you tested.<p>Most students do not need to be concerned with (and indeed, are not instructed in) formal proofs. But if we are going to push math forward, we need to lay a solid foundation of axioms and proofs.
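To make that concrete, here is a small Coq sketch of the difference (the property and lemma names are mine): checking p(x) for a few values is just computation, but only an inductive proof establishes it for every x.

    (* The property p(x): adding zero on the right changes nothing. *)
    Definition p (n : nat) : Prop := n + 0 = n.

    (* Spot checks: true for the values we happen to test. *)
    Example check_1 : p 1. Proof. unfold p. reflexivity. Qed.
    Example check_7 : p 7. Proof. unfold p. reflexivity. Qed.

    (* Only induction establishes it for every natural number. *)
    Theorem p_for_all : forall n : nat, p n.
    Proof.
      unfold p. induction n as [| n' IH].
      - reflexivity.
      - simpl. rewrite IH. reflexivity.
    Qed.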
It is not true that the best way to do geometry is by converting everything into algebra. Proving simple geometric facts, like the fact that the three altitudes of a triangle all intersect at a single point, is a pain using coordinates. Just because it's possible to convert problems in domain X into problems in domain Y doesn't mean that domain X is pointless.
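For a sense of what the coordinate version involves (a sketch of mine, not from the article): place the triangle at arbitrary coordinates and write down the altitude through each vertex.

    With $A=(a_1,a_2)$, $B=(b_1,b_2)$, $C=(c_1,c_2)$, the altitude from $A$
    is the line through $A$ perpendicular to $BC$:
    \[
      (x-a_1)(c_1-b_1) + (y-a_2)(c_2-b_2) = 0,
    \]
    and similarly for the altitudes from $B$ and $C$. One then has to solve
    two of these equations simultaneously and grind through the algebra to
    check that the intersection point also satisfies the third (with extra
    case analysis for horizontal or vertical sides if you work with slopes
    instead), whereas the synthetic proof is a few lines.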