Saturday, September 9, 2023

Why Mathematical Proof Is a Social Compact



An odd idea, surely. Mathematics is about mathematical rigor, and we do get better at it.

The concept of zero was invented and introduced in historical times, along with double-entry bookkeeping. Before that, we had other excellent protocols, including Mayan mathematics and the abacus. Yet the zero unleashed algebra and graphs on geometry.

Today we are missing the mathematics induced by empirical infinity, and the complex number system limits us to the second order. Just how do you think I was able to introduce the third and fourth Pythagorean and, by natural extension, all higher orders? This is all hugely important for tackling particle physics and the n-body problem, as I have shown through cloud cosmology.

As an aside, delta-epsilon proofs depend on mathematical infinity. This is an incredibly important consideration when addressing the physical world, where a hand wave produces a black hole rather than a quasar.


Why Mathematical Proof Is a Social Compact

Number theorist Andrew Granville on what mathematics really is — and why objectivity is never quite within reach.


Andrew Granville on the campus of the University of Montreal.


Alex Tran for Quanta Magazine


By Jordana Cepelewicz


August 31, 2023




In 2012, the mathematician Shinichi Mochizuki claimed he had solved the abc conjecture, a major open question in number theory about the relationship between addition and multiplication. There was just one problem: His proof, which was more than 500 pages long, was completely impenetrable. It relied on a snarl of new definitions, notation, and theories that nearly all mathematicians found impossible to make sense of. Years later, when two mathematicians translated large parts of the proof into more familiar terms, they pointed to what one called a “serious, unfixable gap” in its logic — only for Mochizuki to reject their argument on the basis that they’d simply failed to understand his work.
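
For reference, the conjecture itself is short to state. In its standard form, with rad(n) denoting the product of the distinct primes dividing n, it asserts that for every ε > 0, only finitely many triples of coprime positive integers with a + b = c satisfy

c > \operatorname{rad}(abc)^{1+\varepsilon}.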

The incident raises a fundamental question: What is a mathematical proof? We tend to think of it as a revelation of some eternal truth, but perhaps it is better understood as something of a social construct.

Andrew Granville, a mathematician at the University of Montreal, has been thinking about that a lot recently. After being contacted by a philosopher about some of his writing, “I got to thinking about how we arrive at our truths,” he said. “And once you start pushing at that door, you find it’s a vast subject.”

Granville enjoyed arithmetic from an early age, but he never considered a career in mathematics research because he didn’t know such a thing existed. “My father left school at 14, my mother at 15 or 16,” he said. “They were born in what was then the working-class area of London, and university was just beyond what they saw as possible. So we had no clue.”

After graduating from the University of Cambridge, where he studied math, he started adapting The Rachel Papers, a novel by Martin Amis, into a screenplay. While working on and seeking funding for the project, he wanted to avoid taking a desk job — he’d worked at an insurance company during a gap year between high school and college and didn’t want to return to it — “so I went to grad school,” he said. The film never got off the ground (the novel was later independently made into a movie), but Granville got a master’s degree in math and then moved to Canada to complete his doctorate. He never looked back.






Video: Andrew Granville muses on how artificial intelligence could profoundly change math.


Video: Emily Buder/Quanta Magazine; Photo: Alex Tran for Quanta Magazine




“It was an adventure, really,” he said. “I didn’t really go in expecting much. I didn’t really know what a Ph.D. was.”

In the decades since, he has authored more than 175 papers, mostly in number theory. He’s also become well known for writing about math for a popular audience: In 2019, he co-authored a graphic novel about prime numbers and related concepts with his older sister, Jennifer, a screenwriter. Last month, one of his papers on “how we arrive at our truths” was published in the Annals of Mathematics and Philosophy. And along with other mathematicians, computer scientists and philosophers, he is planning to publish a collection of articles in next year’s Bulletin of the American Mathematical Society about how machines might change mathematics.

Quanta spoke with Granville about the nature of mathematical proof — from how proofs work in practice to popular misconceptions about them, to how proof-writing might evolve in the age of artificial intelligence. The interview has been edited and condensed for clarity.

You recently published a paper on the nature of mathematical proof. Why did you decide that this was important to write about?

How mathematicians go about research isn’t generally portrayed well in popular media. People tend to see mathematics as this pure quest, where we just arrive at great truths by pure thought alone. But mathematics is about guesses — often wrong guesses. It’s an experimental process. We learn in stages.

For example, when the Riemann hypothesis first appeared in a paper in 1859, it was like magic: Here’s this amazing conjecture, out of nowhere. For 70 years, people talked about what a great thinker can do by pure thought alone. Then the mathematician Carl Siegel found Riemann’s scratch notes in the Göttingen archives. Riemann had actually done pages of calculations of zeros of the Riemann zeta function. Siegel’s famous words were, “So much for pure thought alone.”
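
For readers who want the object itself, the standard definitions are compact. The Riemann zeta function is

\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} \quad (\operatorname{Re} s > 1),

continued analytically to the rest of the complex plane, and the Riemann hypothesis asserts that every nontrivial zero \rho of \zeta satisfies \operatorname{Re} \rho = 1/2.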

So there is this tension in the way people write about mathematics — some philosophers and historians in particular. They seem to think that we’re some pure magical creature, some unicorn of science. But we’re not, typically. It’s rarely pure thought alone.



Granville flipping through a copy of Prime Suspects, a math-themed murder mystery graphic novel he co-wrote with his older sister.


Alex Tran for Quanta Magazine



How would you characterize what mathematicians do?

The culture of mathematics is all about proof. We sit around and think, and 95% of what we do is proof. A lot of the understanding we gain is from struggling with proofs and interpreting the issues that come up when we struggle with them.

We often think of a proof as a mathematical argument. Through a series of logical steps, it demonstrates that a given statement is true. But you write that this shouldn’t be mistaken for pure, objective truth. What do you mean by that?

The main point of a proof is to persuade the reader of the truth of an assertion. That means verification is key. The best verification system we have in mathematics is that lots of people look at a proof from different perspectives, and it fits well in a context that they know and believe. In some sense, we’re not saying we know it’s true. We’re saying we hope it’s correct, because lots of people have tried it from different perspectives. Proofs are accepted by these community standards.

Then there’s this notion of objectivity — of being sure that what is claimed is right, of feeling like you have an ultimate truth. But how can we know we’re being objective? It’s hard to take yourself out of the context in which you’ve made a statement — to have a perspective outside of the paradigm that has been put in place by society. This is just as true for scientific ideas as it is for anything else.

One can also ask what is objectively interesting or important in mathematics. But this is also clearly subjective. Why do we consider Shakespeare to be a good writer? Shakespeare wasn’t as popular in his own time as he is today. There are obviously social conventions around what’s interesting, what’s important. And that depends on the current paradigm.



Granville inherited this copy of a 19th-century mathematical text, the 1811 Cribrum Arithmeticum by Ladislaus Chernac, when a colleague left it to him in his will.


Alex Tran for Quanta Magazine



In math, what does that look like?

One of the most famous examples of a change in paradigm is calculus. When calculus was invented, it involved dividing something that’s going toward zero by something else that’s going toward zero — leading to zero divided by zero, which doesn’t have any meaning. Initially, Newton and Leibniz came up with objects called infinitesimals. It made their equations work, but by today’s standards it wasn’t sensible or rigorous.

We now have the epsilon-delta formulation, which was introduced at the end of the 19th century. This modern formulation is so stunningly, obviously good for getting these concepts right that when you look at the old formulations, you’re like, what were they thinking? But at the time, that was considered the only way you could do it. To be fair to Leibniz and Newton, they probably would have loved the modern way. They didn’t think to do it, because of the paradigms of their era. So it just took an awfully long time to get there.
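
In modern notation, the epsilon-delta definition Granville is describing reads

\lim_{x \to a} f(x) = L \quad \Longleftrightarrow \quad \forall \varepsilon > 0 \ \exists \delta > 0 : 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon,

so the troublesome "zero divided by zero" of early calculus becomes a statement about limits: the derivative is f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}, with no infinitesimals required.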



The problem is, we don’t know when we’re behaving like that. We’re entrapped in the society we’re in.

Andrew Granville

The problem is, we don’t know when we’re behaving like that. We’re entrapped in the society we’re in. We don’t have an outside perspective to say what assumptions we’re making. One of the dangers in mathematics is that you can conceive of something as not being important because it’s not easily expressed or discussed in the language you’ve chosen to use. It doesn’t mean you’re right.

I really like this quote by Descartes, where he essentially says: “I think I know everything there is to know about a triangle, but who’s to say I do? I mean, somebody in the future might come up with a radically different perspective, leading to a much better way of thinking about a triangle.” And I think he’s right. You see that in mathematics.

As you wrote in your paper, you can think of a proof as a social compact — a sort of mutual agreement between the author and their mathematical community. We’ve seen an extreme example of this not working, with Mochizuki’s claimed proof of the abc conjecture.

It’s extreme, because Mochizuki did not want to play the game in the way it’s played. He has made this choice to be obscure. When people make big breakthroughs, with really new and difficult ideas, I feel it’s incumbent on them to try and include other people by explaining their ideas in as accessible a way as possible. And he was more like, well, if you don’t want to read it the way I wrote it, that’s not my problem. He has the right to play the game he wants to play. But it’s nothing to do with community. It’s nothing to do with the ways that we make progress.



“We get on and try to prove what we can,” said Granville.


Alex Tran for Quanta Magazine



If proofs exist in a social context, how have they changed over time?

It all starts with Aristotle. He said that there needs to be some sort of deductive system — that you can only prove new things by basing them on things you already know and are certain of, going back to certain “primitive statements,” or axioms.

So then the question is: What are those basic things that you know to be true? For a very long time, people just said, well, a line is a line, a circle is a circle; there are a few things that are simple and obvious, and those should be the assumptions we start from.



Our future has to rely on the sense of community that we have relied on throughout the history of science: that we bounce things off one another.

Andrew Granville

That perspective has lasted forever. It’s still around today to a large extent. But the Euclidean axiomatic system that developed — “a line is a line” — had its problems. There were these paradoxes discovered by Bertrand Russell based on the notion of a set. Moreover, one could play word games with the mathematical language, creating problematic statements like “this statement is false” (if it’s true, then it’s false; if it’s false, then it’s true) that indicated there were problems with the axiomatic system.
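
Russell's paradox itself fits in one line: take the collection of all sets that are not members of themselves,

R = \{\, x : x \notin x \,\},

and then R \in R holds if and only if R \notin R, exactly the kind of contradiction that showed the naive axiomatic system needed repair.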

So Russell and Alfred Whitehead tried to create a new system of doing math that could avoid all these problems. But it was ludicrously complicated, and it was hard to believe that these were the right primitives to start from. Nobody was comfortable with it. Something like proving 2 + 2 = 4 took a vast amount of space from the starting point. What’s the point of such a system?

Then David Hilbert came along and had this amazing idea: that maybe we shouldn’t be telling anyone what’s the right thing to start with at all. Instead, anything that works — a starting point that’s simple, coherent and consistent — is worth exploring. You can’t deduce two things from your axioms that contradict each other, and you should be able to describe most of mathematics in terms of the selected axioms. But you shouldn’t a priori say what they are.

This, too, seems to fit into our earlier discussion of objective truth in math. So at the turn of the 20th century, mathematicians were realizing that there could be a plurality of axiomatic systems — that one given set of axioms shouldn’t be taken as a universal or self-evident truth?

Right. And I should say, Hilbert didn’t start off doing this for abstract reasons. He was very interested in different notions of geometry: non-Euclidean geometry. It was very controversial. People at the time were like, if you give me this definition of a line that goes around the corners of a box, why on earth should I listen to you? And Hilbert said that if he could make it coherent and consistent, you should listen, because this may be another geometry that we need to understand. And this change in viewpoint — that you can allow any axiomatic system — didn’t just apply to geometry; it applied to all of mathematics.

But of course, some things are more useful than others. So most of us work with the same 10 axioms, a system called ZFC.



Something like proving 2 + 2 = 4 took a vast amount of space from the starting point. What’s the point of such a system?

Andrew Granville

Which leads to the question of what can and can’t be deduced from it. There are statements, like the continuum hypothesis, which cannot be proved using ZFC. There must be an 11th axiom. And you can resolve it either way, because you can choose your axiomatic system. It’s pretty cool. We continue with this sort of plurality. It’s not clear what’s right, what’s wrong. According to Kurt Gödel, we still need to make choices based on taste, and we hopefully have good taste. We should do things that make sense. And we do.
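
Stated plainly, the continuum hypothesis says there is no set S with

\aleph_0 < |S| < 2^{\aleph_0},

that is, no infinity strictly between the size of the natural numbers and the size of the real numbers. Gödel showed it cannot be disproved from ZFC, and Cohen later showed it cannot be proved from ZFC, which is the sense in which you can resolve it either way by choosing an additional axiom.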

Speaking of Gödel, he plays a pretty big role here, too.

To discuss mathematics, you need a language, and a set of rules to follow in that language. In the 1930s, Gödel proved that no matter how you select your language, there are always statements in that language that are true but that can’t be proved from your starting axioms. It’s actually more complicated than that, but still, you have this philosophical dilemma immediately: What is a true statement if you can’t justify it? It’s crazy.
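
In schematic form, the first incompleteness theorem says: for any consistent, effectively axiomatized theory T that can encode basic arithmetic, there is a sentence G_T in the language of T with

T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T,

even though G_T is true in the standard model of arithmetic, which is the sense in which a statement can be true yet unprovable from the chosen axioms.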

So there’s a big mess. We are limited in what we can do.

Professional mathematicians largely ignore this. We focus on what’s doable. As Peter Sarnak likes to say, “We’re working people.” We get on and try to prove what we can.



“One of the dangers in mathematics is that you can conceive of something as not being important because it’s not easily expressed or discussed in the language you’ve chosen to use. It doesn’t mean you’re right,” said Granville.


Alex Tran for Quanta Magazine



Now, with the use of not just computers but even AI, how is the notion of proof changing?

We’ve moved to a different place, where computers can do some wild things. Now people say, oh, we’ve got this computer, it can do things people can’t. But can it? Can it actually do things people can’t? Back in the 1950s, Alan Turing said that a computer is designed to do what humans can do, just faster. Not much has changed.

For decades, mathematicians have been using computers — to make calculations that can help guide their understanding, for instance. What AI can do that’s new is to verify what we believe to be true. Some terrific developments have happened with proof verification. Like [the proof assistant] Lean, which has allowed mathematicians to verify many proofs, while also helping the authors better understand their own work, because they have to break down some of their ideas into simpler steps to feed into Lean for verification.
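
As a toy illustration of what a statement fed to Lean looks like (Lean 4 syntax; real formalizations are vastly more involved), the 2 + 2 = 4 example comes down to a one-liner:

-- Lean 4: the equality holds by direct computation on natural-number literals.
example : 2 + 2 = 4 := rfl

The same goal could also be closed with the decide tactic; the point is only that Lean checks each such step mechanically once a human has translated the mathematics into its language.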

But is this foolproof? Is a proof a proof just because Lean agrees it’s one? In some ways, it’s as good as the people who convert the proof into inputs for Lean. Which sounds very much like how we do traditional mathematics. So I’m not saying that I believe something like Lean is going to make a lot of errors. I’m just not sure it’s any more secure than most things done by humans.

I’m afraid I have a lot of skepticism about the role of computers. They can be a very valuable tool for getting things right — particularly for verifying mathematics that rests heavily on new definitions that are not easy to analyze at first sight. There’s no debate that it’s helpful to have new perspectives, new tools and new technology in our armory. But what I shy away from is the concept that we’re now going to have perfect logical machines that produce correct theorems.




You have to acknowledge that we can’t be sure things are correct with computers. Our future has to rely on the sense of community that we have relied on throughout the history of science: that we bounce things off one another. That we talk to people who look at the same thing from a completely different perspective. And so on.

Where do you see this going in the future, though, as these technologies get more sophisticated?

Perhaps it could assist in creating a proof. Maybe in five years’ time, I’ll be saying to an AI model like ChatGPT, “I’m pretty sure I’ve seen this somewhere. Would you check it out?” And it’ll come back with a similar statement that’s correct.

And then once it gets very, very good at that, perhaps you could go one step further and say, “I don’t know how to do this, but is there anybody who’s done something like this?” Perhaps eventually an AI model could find skilled ways to search the literature to bring tools to bear that have been used elsewhere — in a way that a mathematician might not foresee.

However, I don’t understand how ChatGPT can go beyond a certain level to do proofs in a way that outstrips us. ChatGPT and other machine learning programs are not thinking. They are using word associations based on many examples. So it seems unlikely that they will transcend their training data. But if that were to happen, what will mathematicians do? So much of what we do is proof. If you take proofs away from us, I’m not sure who we become.


Regardless, when we think about where we’re going to take computer assistance, we need to take into account all the lessons we’ve learned from human endeavor: the importance of using different languages, working together, carrying different perspectives. There’s a robustness, a health, in how different communities come together to work on and understand a proof. If we’re going to have computer assistance in mathematics, we need to enrich it in the same way.
