The construction of the type 'Map<string, Type>' is entirely standard in languages like Agda and Coq (and I bet Idris too). In these languages, Type is itself a type, and can be treated like any other type (like string or int). Nothing clunky about it. (If you're curious, Type is usually referred to as a "universe type" in type theory circles.)
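To make this concrete, here is a minimal Agda sketch (using only builtin modules; `Set` is Agda's name for `Type`) showing a type being computed from ordinary data, just like any other value:

```agda
open import Agda.Builtin.Bool
open import Agda.Builtin.Nat

-- Set is a first-class value: a function can return a type
TypeOf : Bool → Set
TypeOf true  = Nat
TypeOf false = Bool

-- and the result can be used anywhere a type is expected
zero' : TypeOf true   -- TypeOf true reduces to Nat
zero' = zero
```

This kind of "type-valued function" is exactly the dependently-typed analogue of a `Map<string, Type>`.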
Mechanized proofs ensure the correctness of the results they prove. That's useful. Indeed, the point of the article was that Voevodsky sought out proof assistants in order to ensure his math was correct.
As for your other questions: No, formal proofs are not intelligible to the average mathematician. They do not on their own grant insights, but the process of formalising a proof does often yield insights. (I speak from experience.) Not sure what you mean by "better"; they certainly don't replace ("informal") mathematical proofs.
Most mathematicians aren't interested in refactoring their mathematical "codebase", nor experimenting with axioms. They simply want to understand and discover more math. The reasons you state for your interest in formalisation don't appeal to most mathematicians.
Concerning analogies and borrowing techniques between fields, this is absolutely something humans are good at and which it is very hard for computers to do. Why do you think otherwise? To take a very simple example, most mathematical objects can be represented in different ways. A mathematician can fluently move between these representations, whereas computers cannot. This is largely the obstacle to the adoption of proof assistants among mathematicians.
I am a mathematician and would love to mechanize my proofs. The issue is that systems like Coq construct proofs in a radically different way than how we were trained. I've struggled with Coq even with somewhat more than a passing knowledge of the system; in graduate school I used Coq's specification language Gallina to write interpreters, which were then extracted to OCaml. Even so, I can barely turn out a basic proof in Coq.
What would help dramatically for me and others who work in applied math are proofs that cover the material in something like Principles of Mathematical Analysis by Walter Rudin. These are proofs that cover basic real analysis and topics like continuity, differentiation, and integration. It would also help to see basic linear algebra proofs, such as that all real symmetric matrices have a real eigendecomposition. Now, these may exist; it's been a couple of years since I've looked. If anyone has a link to them, I'd be extraordinarily grateful.
I am nothing more than an enthusiastic dabbler, but these have helped me.
There are two very different directions:
1. The sort of full-on proof automation with tactics: https://direct.mit.edu/books/oa-monograph/4021/Certified-Pro... Dr. Chlipala builds up understanding of the "crush" tactic, which more or less lets you specify an inductive hypothesis while the tactic fiddles around with identifying the base case and expanding enough to generate the n+1 step. Along the way, you get exposure to using Ltac, Coq's tactic language, to write your own tactics.
It's neat, from a programmer's point of view, because you can automate away the boring parts. There is a pretty good discussion of debugging tactics, which might provide a path to transforming machine proofs into something a bit more comprehensible. At the very least, you can recover the application of axioms and such that did the transformation.
The other direction, which has been much more enlightening for me (the amateur hobbyist), was Agda and Jesper Cockx's tutorial: https://github.com/jespercockx/agda-lecture-notes/blob/maste...
He builds up all of the notation from, well, nothing.
For example, you get refl, and then build up
    -- symmetry of equality
    sym : {A : Set} {x y : A} → x ≡ y → y ≡ x
    sym refl = refl

    -- transitivity of equality
    trans : {A : Set} {x y z : A} → x ≡ y → y ≡ z → x ≡ z
    trans refl refl = refl

    -- congruence of equality
    cong : {A B : Set} {x y : A} → (f : A → B) → x ≡ y → f x ≡ f y
    cong f refl = refl
along with the syntax that comes with the Agda standard library.
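Once you have refl and cong in hand, a classic first exercise (my own sketch, not from the lecture notes) is proving n + 0 ≡ n, which needs induction because the builtin _+_ recurses on its first argument:

```agda
open import Agda.Builtin.Equality
open import Agda.Builtin.Nat

cong : {A B : Set} {x y : A} → (f : A → B) → x ≡ y → f x ≡ f y
cong f refl = refl

-- zero + m reduces, but n + zero does not: induct on n
+-zero : (n : Nat) → n + zero ≡ n
+-zero zero    = refl
+-zero (suc n) = cong suc (+-zero n)
```

The base case holds by computation, and the step case is exactly cong applied to the constructor suc, which is the kind of pattern the notes prepare you for.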
Agda is different, but you can still extract to Haskell (and allegedly JavaScript for Node, but that was a little flaky for me and might go away in a future release). Less useful for your goals and much more programming-oriented, but worth mentioning as a reference: https://plfa.github.io is pretty good.
For the math that you mention, I would suggest looking at mathlib (https://github.com/leanprover-community/mathlib). I agree that the foundations of Coq are somewhat distanced from the foundations most mathematicians are trained in. Lean/mathlib might be a bit more familiar, not sure. That said, I don't see any obstacles to developing classical real analysis or linear algebra in Coq, once you've gotten used to writing proofs in it.
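As a taste of what mathlib-style analysis looks like, here is a Rudin-flavoured statement roughly as it might appear in Lean 4 with mathlib. (The lemma name `Continuous.add` and the blanket `import Mathlib` are from memory; treat this as a sketch, not guaranteed-to-compile code.)

```lean
import Mathlib

-- the sum of two continuous real functions is continuous
example {f g : ℝ → ℝ} (hf : Continuous f) (hg : Continuous g) :
    Continuous (fun x => f x + g x) :=
  hf.add hg
```

Much of the "distance" question is visible even here: the informal one-liner becomes a search for the right library lemma rather than a from-scratch epsilon-delta argument.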
Thanks for passing along the mathlib examples. They're not particularly intelligible to me yet, but they're something to work toward.
Generally speaking, I work in the intersection between optimization, differential equations, and control. To that end, there's a variety of foundational results in things like optimization theory that I'd like to see formalized such as optimality conditions, convergence guarantees, and convergence rates.
Two questions if you know. One, which theorem prover would you recommend for working toward these kinds of results? Two, is there appetite or a place to publish older, known theorems that have been reworked and validated? And, to be clear, two is not particularly important for me and my career, but it is for many of my colleagues and I'm not sure what to tell them about it.
You're welcome! And likewise, thanks for the interesting reply.
I'm afraid I have no knowledge of your field, and no idea whether there are good tools and libraries for formalising the things you want. Maybe ask or have a look around the Proof Assistants StackExchange[1]?
There are many CS conferences through which you can publish formalised mathematics. One that comes to mind is ITP[2], but there are lots which are announced on mailing lists like TYPES-announce, coq-club, agda... You could look through previous versions of ITP and check out a few of the papers on formalising mathematics to get a feel for what these publications look like.
Is the problem that human mathematicians have decided that classical logic + ZF(C) is the best framework for them–but it isn’t the best framework for computers?
Which raises an interesting philosophical question - do contemporary mathematicians mostly do things one way rather than another (i.e. classical logic instead of intuitionistic/paraconsistent/etc logic, ZF(C) set theory instead of alternative set theories or type theory or whatever, classical analysis instead of non-standard analysis, and so on) - because they’ve settled upon the inherently superior (easier, more productive) way of doing things - or is that just an accident of history? Could it be that in some alternative timeline or parallel universe out there, mathematics took some different forks in the road, and the mathematical mainstream ended up looking very different from how it does here?
I can't speak for kxyvr, but let me chime in as a mathematician who does formalise my theorems. There's no issue with representing various foundations (e.g. ZFC) on a computer -- for example, that's essentially what Lean/mathlib does, and it's working out great for them. A "problem" with ZFC, however, is that it's very low-level, meaning there's a large distance between the "informal" proofs which mathematicians carry out and communicate, and their corresponding formal proofs in ZFC. Accordingly, for a ZFC-based mathematician to start using proof assistants, they not only have to learn how to use a proof assistant, but also to translate (or expand/elaborate) their math into ZFC.
In other foundations, such as homotopy type theory (which is what the article is about), the distance between "informal" mathematical arguments and their formal counterparts is much shorter. This is why formalisation is widespread in the HoTT-community. Indeed, I believe Voevodsky worked on HoTT in order to have a foundation which was much closer to the mathematical practice in homotopy theory.
I find these "fork in the road" questions endlessly fascinating, and that's exactly what I was getting at with my comment about the motivation for formalizing mathematics. And it doesn't just apply to math, but the whole "tech tree" of human history. How many seemingly arbitrary choices led us down a path that unknowingly diverged from some alternative future?
Last time I commented about this [0], I made some guy irrationally angry as he told me I sounded like I was 19 years old. And in an earlier comment [1], another guy took extreme offense to my questioning of the statement "with enough marbles, the discrete distribution behaves like a continuous distribution."
It seems there is some high priesthood of mathematics and physics that doesn't like when you ask these sorts of questions.
> Last time I commented about this [0], I made some guy irrationally angry as he told me I sounded like I was 19 years old
If people make comments like that, flag them and email dang (hn@ycombinator.com) and he’ll do something about them
> It seems there is some high priesthood of mathematics and physics that doesn't like when you ask these sorts of questions
In my personal experience, most professional mathematicians and physicists are open to questioning the foundations of their discipline, and deal charitably with doubters (even those who are coming from a place of misunderstanding).
I suspect most of the “high priests” you speak of aren’t actually professionals at all, just Internet randoms who (at best) know only slightly more than you do, and who know very little about the norms of civil dialogue
> Most mathematicians aren't interested in refactoring their mathematical "codebase", nor experimenting with axioms. They simply want to understand and discover more math.
Experimenting with axioms is one way you discover more math. For example, set theory: there’s a lot of math assuming ZF(C), but even more if you start looking at alternatives to it (like NBG, NF(U), constructive set theories, non-well-founded set theories, paraconsistent set theories, etc)
You're not wrong, but most mathematicians aren't working on (or even interested in) foundations. Not saying what you mentioned isn't math (I think it is), but my point still stands.
Well, as a computer scientist, I'd urge them to reconsider. :) My intuition is that, given a sufficiently formalized system, computers can help them achieve their goal of understanding and discovering more math, much more rapidly and with compounding effects. Although perhaps there is a risk that any mathematics the computer might "discover" would become increasingly incomprehensible to human mathematicians, in which case the discoveries wouldn't be worth much.
I agree that humans are better at borrowing techniques between fields, but we can't do it quickly, and we can't systematically mine the entire "codebase" for these relationships (hence why "discoveries" in 2020 can combine ideas from 1990 and 2000). Besides, I suspect much of that gap between human and computer ability is due to a lack of a properly formalized representation of the ideas within each possibly related field. To use a programming term, it's a "code smell" if we haven't properly encoded the relationships between two fields. Sure, we can manually identify them, and then build on them once we do. But why did we need to bother in the first place? Wouldn't a formal system have made the relationships obvious, or at least more automatically discoverable?
Large language models have also changed my view of what computers are capable of, and in that context I've been most impressed with their ability to analogize and explain relationships between seemingly arbitrary sets of ideas. It makes sense they'd be good at this, since the premise of training them is to construct a web of weights that's indecipherable to humans but fundamentally enabling to the language model. Stephen Wolfram has been writing about how language models seem to encode a certain "truth" which has always existed but which we've never recognized. This idea resonates with me and I expect models have a lot to teach us about our own consciousness and how we model the world.
But on the other hand, if LLMs are so successful, by definition, at identifying relationships without being given formal representations of them, then why bother formalizing mathematics? After all, mathematics only exists because we can communicate its ideas by constructing the language of mathematics as a sort of amalgam of spoken language and abstractions we create within it. And while the exercise of formalizing the relationships between those abstractions might identify some imprecisions or mistaken assumptions in the existing hybrid of spoken language and formalized chunks of math, do we really need to bother? For an advanced LLM, shouldn't our existing definitions become naturally encoded into its weights during training, so that any emergent formalisms can be poked and prodded when we talk to it or ask it to reason about them? Or will it get stuck on the same imprecisions that we might identify when trying to manually formalize our existing systems of mathematics?
Asking working mathematicians to formalize their proofs in a machine-checkable system is like asking photographers to become electronics engineers so they can build their own digital cameras from scratch. It's not a reasonable request. It's so far outside of a typical mathematician's lane it's not even on the same continent.
They could do what every other scientist does, and pay people to build and run the tools they need to get scientifically valid results from their experiments.
Math isn’t science. Most mathematicians aren’t interested in applications to the real world. They don’t care about empiricism or validity. The fact that a lot of applications have come out of math is a side effect.
Imagine if someone discovered a way to solve real world problems by formulating them as chess positions. Would it be reasonable of us to demand that chess players stop competing and work on these formulations? No, and I don’t think many chess players would jump at the opportunity. They just want to play chess!
I don't quite understand what you are saying about LLMs. Language models are a sort of reverse engineering of the way humans learn. I imagine if you read basically the whole internet and meaningfully memorized it, you would be able to connect a bunch of dots too; computers are just fast. Math viewed as a tool doesn't require formalization, but math as an endeavor does. Sometimes exploring a theory is useful, and sometimes it's only interesting in itself.
All of these are open to contributions, and there are lots of useful basic things that haven't been done and which I think would make excellent semester projects for a cs/math undergrad (for example).
Oh My Git! looks really cool and useful, thanks for mentioning it! I find it incredible that it's funded by the Prototype Fund (https://prototypefund.de/project/oh-my-git/), and it makes me wonder which other countries have similar funds hackers can apply to.
The paper argues against those who think that "programming should strive to be more mathematical" through the development and adoption of formal methods. It points out that "more mathematical" does not imply formal methods, since proofs by mathematicians are informal and their correctness is established by social processes. The author fears that an over-emphasis on formal methods could stifle innovation (in particular, in programming language design).
As a mathematician who works with proof assistants, I largely agree with this thesis. However, I don't think there is any reason to have such fears about formal methods. I think informal proofs, as they exist in both CS and maths, are here to stay. And, on the contrary, I think investigations into formal methods can drive new theory and insight. For example, one could say that the formal system of homotopy type theory (HoTT) is a programming language created in order to reason about highly "coherent" mathematical structures, which it often does very well. In addition, being a formal system, HoTT is well-suited for formal methods -- but even so, many mathematicians still prefer to work informally in this language.
In summary, I think the article makes a valid point, but the motivating fears seem unfounded in retrospect.
Also "I am excited to welcome Linda Yaccarino as the new CEO of Twitter" (current front-page) is tagged 'javascript', which isn't particularly relevant.
The speed of KaTeX is great, but the lack of support for diagrams (a la tikz-cd) is what makes KaTeX unsuitable for general adoption by mathematicians (e.g., mathoverflow.net and all online mathematical wikis I know use MathJax). KaTeX has some rudimentary support for diagrams through the {CD} environment, but something more fully fledged akin to tikz-cd or xymatrix is needed. There's been some discussion on their GitHub (https://github.com/KaTeX/KaTeX/issues/219), but I wouldn't hold my breath.
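For reference, the {CD} support KaTeX does have uses the amscd-style syntax, which only handles grid-shaped diagrams with horizontal and vertical arrows -- no diagonals or curved arrows, which is exactly why tikz-cd is the ask. A commutative square, for example:

```latex
% a commutative square in the {CD} environment (amscd syntax)
\begin{CD}
A @>f>> B \\
@VgVV    @VVhV \\
C @>>k> D
\end{CD}
```

Here @>f>> is a right arrow labelled above, @VgVV a down arrow labelled on the left, and @>>k> a right arrow labelled below.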
As a researcher, this sounded interesting so I had a look. The first paper that caught my eye (https://paperlist.io/post/247381561) has a bunch of "generic" (lacking substance; could be generated) comments that I find a bit uncanny. Three of them have the same style of being all lower-case (even the name of R, the programming language being discussed). Doesn't make me want to engage.
... and now I also noticed that the paper summaries are AI-generated! That's an anti-feature for me, at present.
Yeah, after reading a few dozen comments on some papers I completely agree... the lack of substance is so clear and consistent it's highly suspicious.
The concept is interesting enough, but uh. If they're filling it with bot comments to make it look less empty to passers-by, they're optimizing for catching people not paying attention, while apparently targeting people interested in deep attention (who will immediately turn away from this).