I wanted to use this blog post to record the proof, using no more than the rationals (i.e. no embedding things into the real numbers), that the geometric series $\sum_{n\geq 0} r^n$ converges in $\mathbb{Q}$ for $0 < r < 1$ and rational. One can perform the usual manipulation of partial sums (possible in $\mathbb{Q}$) to get

$$\sum_{n=0}^{\infty} r^n = \frac{1}{1-r}$$

(assuming the RHS exists) and hence it suffices to prove that $r^n \to 0$. It is easy to prove (say with induction) that $0 < r^n < 1$ for all $n$.

Then the limit is zero when, for all rational $\varepsilon > 0$, we can find some $N$ such that $r^N < \varepsilon$. This is close to being *dual* to the statement of the Archimedean property, which is of the form: there is an $N$ with $N\varepsilon > 1$, for each positive rational $\varepsilon$. Initially I thought of trying to leverage the Archimedean property for the positive rational numbers, in a multiplicative sense (Archimedeanness makes sense for any ordered group), but I didn’t end up making this work (more on this below). Ultimately I found an argument in Kenneth Ross’ book *Elementary Analysis*, which I simplified further to the following.

We write $r = \frac{b}{a+b}$ for positive integers $a, b$, since $0 < r < 1$. One can prove (by induction) the estimate $\left(1 + \frac{a}{b}\right)^n \geq 1 + n\frac{a}{b}$ (Ross had this as a corollary of the binomial theorem, but there is a simple direct proof). Then we have $r^n = \frac{1}{(1+a/b)^n} \leq \frac{1}{1 + na/b} = \frac{b}{b + na} < \frac{b}{na}$. Given $\varepsilon = \frac{p}{q} > 0$, we can choose $N \geq \frac{bq}{ap}$, so that $r^N < \frac{b}{Na} \leq \frac{p}{q} = \varepsilon$, as needed.
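The estimates above can be checked with exact rational arithmetic. Here is a quick sketch in Python (the helper `N_for` and the sample values are mine, not from Ross or the post, and assume the reconstruction $r = b/(a+b)$):

```python
from fractions import Fraction

def N_for(a, b, p, q):
    """Return an N with r**N < p/q, where r = b/(a+b),
    using the bound r**n < b/(n*a)."""
    # choose N >= b*q/(a*p), so that b/(N*a) <= p/q
    return b * q // (a * p) + 1

a, b = 2, 5                        # sample values: r = 5/7
r = Fraction(b, a + b)

for n in range(1, 50):
    # Bernoulli-style estimate (1 + a/b)**n >= 1 + n*(a/b)
    assert (1 + Fraction(a, b))**n >= 1 + n * Fraction(a, b)
    assert r**n < Fraction(b, n * a)   # the key bound

eps = Fraction(1, 1000)
N = N_for(a, b, eps.numerator, eps.denominator)
assert r**N < eps                  # convergence witness for epsilon = 1/1000
```

The point is that everything stays inside $\mathbb{Q}$: `Fraction` never leaves the rationals, mirroring the proof.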

What I like about this argument is that it uses nothing other than the ordered field axioms on $\mathbb{Q}$, together with two very easy applications of induction. It’s a lovely proof to present to an undergrad class.

Returning to my failed first idea at a proof, I had reduced the problem to that of showing that for every rational $a > 0$, there is an $n$ such that $(1+a)^n \geq 2$ (challenge: can you leverage this fact to conclude the convergence as desired? It’s the base case of an induction proving the multiplicative Archimedean property). User @Rafi3AK on Twitter supplied an explicit estimate of the required $n$, using the binomial theorem, namely $n = \lceil 1/a \rceil$ (i.e. round $1/a$ up to the nearest integer).

]]>I grew up in a family with a strong emphasis on arts and crafts. Spinning, knitting, pottery, leadlighting, paper-making, printing, furniture restoration, garment construction, baking, drawing and so on. At the end of the day, there was something you could hold, touch, wear and so on. At one stage in high school I considered studying Design.

I think this idea that at the end of the day, one can actually *make* something, is one that pervades my mathematics. This ranges from my habit of physically printing stapled booklets of each paper I write, to wanting concrete formulas or constructions for certain abstract objects. At one point I had found explicit transition functions for the nontrivial String bundle, and I collaborated with someone more expert at coding than me to generate an animation of part of this. Another time I really wanted to get my hands on (what an apt metaphor!) what amounts to a map of higher (non-concrete) differentiable stacks, so I worked out the formula for my own satisfaction:

I love proving a good theorem, but if I can write it out in a really *visually* pleasing way, then it is much more satisfying. Such as the circle of ideas around the diagonal argument/Lawvere fixed point theorem.

I very much like designing nice-looking diagrams, and at one point was trying to get a string diagram calculus working for a certain hom-bicategory (with objects the weak 2-functors), for the purpose of building amazing-looking explicit calculations to verify a tricategorical universal property.

Sadly, I never finished this, and the reason for it—a second independent proof of a higher-categorical fact in the literature with many omitted details—is now moot, another proof having since been given by other people.

I have a t-shirt with a picture of the data generated in my computational project on the Parker loop (joint with Ben Nagy), and I’m itching to make more clothes with my maths on them. I love the fun I’ve had with David Butler thinking about measures of convexity of deltahedra built with Polydrons (it seems to be an open question how non-convex one can go!).

It just speaks to me when I can actually *make* something, or at least feel like I’ve made something when doing maths. Something I can point at and say “I made that”. I think there’s an opportunity in the market for really high-quality art prints of pieces of really visually beautiful category theory, for instance, or even just mathematics more broadly. I’ve experimented over the years with (admittedly naive, amateur, filter-heavy) photographs of mathematics, for the sake of striving for an aesthetic presentation.

I want the mathematics I create to “feel real”. Sometimes that feeling comes when I can hold the whole conceptual picture in my head at once, but it’s ephemeral. Actually making the end product, making it tangible—no matter how painful it can feel in the process—is a real point of closure.

Even the process of making little summary notes of subjects I studied at school and uni has produced objects that I have kept, and have a fondness for. They are the distillation of that learning, the physical artifact that represents my knowledge.

Even the choice to work in physical notebooks, and slowly build that collection, rather than taking notes digitally, gives me something I can see slowly grow, and can appreciate as a reflection of my changing ideas and development in research. Having nice notebooks gives an aesthetic that motivates me to fill them up.

Given that mathematics is generally taught as a playground of the mind (though there is of course a push in places for more manipulables in maths education, physical or digital, and for more visualisation), I do wonder to what extent students feel like they are missing an aspect like this. We don’t need a 3d printer in a classroom to have the students make something tangible, lasting and awesome. Somehow I’ve managed to avoid edutech, despite winning a graphics calculator at school back in the 90s—and never learning how to use it—and I’ve never been taught with manipulables past Polydrons at age 5 and MAB blocks at age 6 (both of which I still think are awesome). But I love actually making mathematical things.

]]>Joyal seems to have been citing his own work “in preparation” in the ’00s as the source of the model structure whose fibrant objects are the quasicategories, at latest in 2006, based on the citation of *Theory of quasi-categories I* in the (arXived) 2006 *Quasi-categories vs Segal spaces*. In the (published in) 2002 paper *Quasi-categories and Kan complexes*, Joyal cites *Theory of quasi-categories* (not *Theory of quasi-categories I*), but doesn’t say anything about the model structure yet. However, in the CRM notes from 2008 (partly based on a manuscript—the IMA lectures—from 2004, plus the then-current draft of the book *Theory of quasi-categories*; there is also a 2007 version of the notes with the model structure), Joyal says “The results presented here are the fruits of a long term research project which began around thirty years ago.”

Verity, in his (arXived 2006) paper *Weak Complicial Sets, A Simplicial Weak ω-Category Theory, Part I: Basic Homotopy Theory*, writes

we round out our presentation by localising our model structure and transporting it to the category of simplicial sets itself, in order to provide an independent construction of a model category structure on that latter category whose fibrant objects are Joyal’s quasi-categories [10].

where [10] is Joyal’s 2002 paper, so that the model structure was known to experts at least by 2006, even if not announced in 2002.

So I’m tempted to guess that the whole 1980s origin of the Joyal model structure (i.e. in a letter to Grothendieck, as stated on Wikipedia at the time of writing) for quasi-categories might be an urban myth.

I couldn’t find a mention of a model structure for quasicategories in the 2004 slides from IMA for the talk of Joyal/May/Porter, except for the closing sentence:

Baby comparison should give that the hammock localizations of all models for weak categories have equivalent hammock localizations. Model category theory shows how.

Tim Porter’s 2004 IMA notes likewise don’t seem to mention the model structure. So perhaps the date for the model structure can be pinned down to between (June) 2004 and (July) 2006, at least as far as going by Joyal’s public statements. One point in favour of this is that Tim’s notes include the open question

In general, what is the precise relationship between quasi categories (a weakening of categories) and Segal categories (also a weakening of categories)? (This question is vague, of course, and would lead to many interpretations)

which is what Joyal and Tierney’s 2006 paper pins down, in terms of a Quillen equivalence of model categories. If the question in 2004 had been merely one of trying to match up existing model structures (the Segal category one existed in 1998), I doubt Tim would have called it a vague question!

PS: I don’t know how long Lurie took to write the 589-page version 1 of *Higher Topos Theory*, put on the arXiv at the start of August 2006, but he refers to Joyal’s model structure there, citing *Theory of quasi-categories I*.

When the Front finally packed up shop, there were about 900 links to it. Stackexchange, the company, has ways of mass-editing urls without causing chaos (i.e. bumping all edited questions), but this has to be done algorithmically, of course…and the arXiv Front identifiers were *not always identical to the arXiv ones*, and hence the paper part of the url was not the same. Woah woah, I hear you say: what do you mean? That the Front was rolling its own article IDs? Yep.

The reason for this is that the arXiv didn’t launch into the world fully formed: it started out with physics, and there were sorta-parallel, not-quite-independent arXiv-like repos for various subjects in maths. If you go back in the ‘what’s new’ postings to 1994–1996, you can see things like the “q-alg archive” appear (now math.QA), in this case due to people not knowing where to put things like quantum knot invariants, which had been ending up in hep-th. There were names for topics like dg-ga and so on. By mid-2007, all the arXiv identifiers across all subjects were unified, but before that you had area-specific prefixes (eg math/0102003, cond-mat/0102003 or hep-th/0102003), and before *that*, you had an even more granular system just for mathematics, similar to how physics was split up. Pre-1998 you had alg-geom, dg-ga, funct-an, and q-alg, and also math-ph. There was a parallel system at one point, allowing for eg math.DG/0307245 and math/0307245 to point to the same paper. There were also more actual independent preprint repos, like the Hopf archive, the K-Theory archive, and the Banach archive. These slowly got absorbed into the arXiv itself. The upshot is that the arXiv Front had a slightly more systematic referencing system, as far as I can tell, while still recognising the actual arXiv identifiers. It would assign an ID that was just a number to a paper, since it was intended to cover only mathematics, at least to start, and so the hassle of having parallel identifiers in different topics wouldn’t raise its head.

However, the fun part is that when the issue was raised last year on meta.MO on 25th August, after nearly 18 months of broken links, the different types of IDs were known and pointed out, but there was no extant documentation on how the Front created its own IDs! The problem was compounded by situations where people on MO would write “…and see also this paper.” with no additional information, the only context being that it was presumably relevant to the question. Sometimes an answer from 2009 (before current social norms were firmed up) would be “This is answered in this paper”, and *that’s it*. The only thing we knew for sure was the year and month of the paper, and maybe the subject it was in (but not, eg, the arXiv subject area). If the Stackexchange gurus went ahead and did a blanket search-and-replace swapping the arXiv Front domain for arXiv.org, the situation would be even worse, since we wouldn’t have the original link to work with, and the new link might point to something it shouldn’t.

Martin Sleziak, an indefatigable MO/meta.MO editor, wrote an epic answer full of targeted search queries looking for papers in the various date ranges and with what should be all the different ID formats, reporting the numbers of each and classifying them into categories depending on how automated the editing might need to be. He also found some of the needed translations, and eventually I found an old help page on the Wayback Machine that spelled out the actual encoding, in glorious late-90s web design:

Until March 2000, the Front renumbered articles in the old mathematical archives alg-geom, funct-an, dg-ga, and q-alg as math archive articles. To avoid duplicate numbers, the system added 50 to each funct-an number, 100 for dg-ga, and 140 for q-alg. Since this system was never adopted at the arXiv, it has for now been scrapped. If you use cite or link to any math articles math.XX/yymmnnn, where the year yy is 97 or prior and the number nnn is less than 200, you should convert back to the original numbers as stamped on the articles themselves.
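Going by that help page, the renumbering can be inverted mechanically. Here is a sketch in Python (the function name is mine, and the exact boundaries between the four old archives are my inference from the stated offsets, so treat it as illustrative rather than definitive):

```python
# Offsets the Front added to old math archive numbers, per the help page:
# alg-geom +0, funct-an +50, dg-ga +100, q-alg +140; only nnn < 200 affected.
OFFSETS = [("q-alg", 140), ("dg-ga", 100), ("funct-an", 50), ("alg-geom", 0)]

def front_to_arxiv(front_id):
    """Convert a Front-style ID like 'math/9701055' back to the original
    archive ID (here 'funct-an/9701005'). IDs from 1998 onwards, non-math
    IDs, and numbers >= 200 are returned unchanged."""
    archive, num = front_id.split("/")
    yy, nnn = int(num[:2]), int(num[4:])
    if archive != "math" or not (90 <= yy <= 97) or nnn >= 200:
        return front_id
    for old_archive, offset in OFFSETS:
        if nnn > offset:
            return f"{old_archive}/{num[:4]}{nnn - offset:03d}"
    return front_id

print(front_to_arxiv("math/9701055"))  # funct-an/9701005
print(front_to_arxiv("math/9701145"))  # q-alg/9701005
```

Anything like this would of course have needed checking against the papers themselves, as the help page says the original numbers are “stamped on the articles”.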

That was five months into the project of slowly editing questions and answers the old-fashioned way, by hand, replacing broken URLs we knew how to deal with, but which were still in the pre-2007 era of ID weirdness. The edits couldn’t be done too many at a time, and someone did complain I was editing too much, because it pushed new questions further down the front page, and off it entirely quicker than usual.

Further, since leaving a link directly to a pdf, say, and merely saying “this paper”, means the person reading the question needs to open up a pdf to know what the paper is (not helpful on mobile!), I took it upon myself to include actual bibliographic information when fixing the link from the Front to the arXiv proper. Knowing you are being referred to a 2002 arXiv paper *of Perelman* means you can recognise it instantly. Even better, earlier on in the project, when I was fresh and keen, I tried to include a journal reference and even a doi link (sometimes people would also link to unstable publisher URLs, and there are still problems with these, especially those pointing to opaque springerlink URLs which no longer exist!). This is one way of future-proofing the system, and making it more information-rich for both human and machine readers. I have a suspicion that the arXiv is now implementing a doi system for its articles, for the day when arxiv.org may not be the address we visit when looking for papers.

Another problem is that people also supplied links in comments, and comments cannot be edited except by a mod. So our solution was to give a reply comment pointing out the fact the Front link was broken, and supply the working arXiv.org link. When particularly motivated, I included the paper title and sometimes even more bibliographic info. Asaf Karagila whacked a few of these with his mod powers, editing them directly, but leaving the mods to fix all of these is not an option.

After slow work by user ‘Glorfindel’ (a mod on the big meta.SE, who wrote a script to do slow edits every couple of days), by Martin, and by myself, on the 29th March this year I edited the last outstanding pre-April 2007 Front link—every broken custom arXiv Front url was now working in questions and answers, and every comment with such a link had a reply pointing out what it should be pointing to. For good measure, I went ahead over the next day or so and edited the rest of the few links to papers in 2007, so that any replacement code can deal with a clean date division where it needs to be active. Between 20th September 2021 and 30th March, it turns out I fixed a bit over 200 broken links, and responded to about 100 comments with new, working links. The graph showing all removals of “front.math.ucdavis.edu” back through the lifetime of MO on the SE2.0 platform is quite dramatic:

Now that all the manual edits are done, the Stackexchange Community Mods (these are SE employees, not just elected users) are looking at the situation again and how the 2008–2019 links can be automatically edited by a script. Watch this space…

So what is the takeaway, if any? Don’t leave links to papers on MathOverflow without some minimum identifying information! The problem is similar for links to papers on people’s personal websites, which have evaporated after a decade, and, as noted above, for publisher URLs instead of doi links. Without a title and at least one author, someone has to spend time tracking this stuff down. If the MO user who posted the link has moved on, sometimes there is very little that can be done. By spending the time even just copying the title of the paper, an MO user is helping potentially many people downstream, and certainly saving the time of someone like me, who enjoys such a detective task but would prefer not to need to do it!

]]>I then used this to show that a weak equivalence of topological categories (ff+eso in the numerable pretopology) geometrically realises to a homotopy equivalence (assuming some mild condition on the codomain).

What is new now is that of course I know a lot more about bicategorical localisation, and I can now say this result implies that the classifying space functor extends along the Yoneda embedding to define a “classifying space” 2-functor from a suitable 2-category of topological stacks of categories to the 2-category of spaces, maps and homotopy classes of homotopies. This improves on contemporaneous work of Ebert, who defines a homotopy type for certain topological stacks of *groupoids*, but has to battle size issues, and so only defines it on a small subcategory of stacks. This work was done in a better way independently by Noohi, who associated a *weak* homotopy type to a (large class of) topological stacks. My extension is also somewhat orthogonal to Noohi’s generalisation of Ebert’s work, since Noohi works with topological stacks under the open cover topology, and on all spaces, whereas my setup works with the numerable topology, and compactly-generated Hausdorff spaces. It does, however, allow for the full 2-category including non-invertible 2-arrows, which is not covered by the usual familiar setup.

]]>

Abstract: The notion of string structure on a space goes back to work in the 1980s, particularly of Killingback, starting as an analogue of a spin structure on the loop space. In the decades since, increasingly refined versions of string structures have been defined. Ultimately, one wants to have a full-fledged String 2-bundle with connection, a structure from higher geometry, which combines differential geometry and category-theoretic structures. A half-way step, due to Waldorf, is known as a “geometric string structure”. Giving examples of such structures, despite existence being known, has been an outstanding problem for some time. In this talk, I will describe joint work with Raymond Vozzo on our framework for working with the structure that obstructs the existence of a geometric string structure, which is a 2-gerbe with connection, as well as give a general construction of geometric string structures on reductive homogeneous spaces.

If we go to Brylinski and McLaughlin, they have in Theorem 5.2 cited Grothendieck as the source of the equivalence between the simplicial construction and the central extension. The problem is, they cite chapter VIII of tome 1 of SGA 7, *Compléments sur les biextensions. Propriétés générales des biextensions des schémas en groupes*. This chapter is 95 pages long. Skimming through that chapter, Grothendieck seems to cite the *previous* chapter as the source of the relevant theory. I might be wrong, but perhaps the citation went a bit astray? I would love to be proven wrong, and a precise numbered citation given.

Now we need to go and start reading chapter VII of SGA 7.1, *Biextensions de faisceaux de groupes*. It’s 90 pages long. And, as those who know the material will be aware, it’s a scanned, typewritten document, in French—and doesn’t use simplicial terminology anywhere. So there’s no text searching, even for relevant key words. It’s not a huge deal, but Grothendieck is dealing with algebraic geometry, and sheaves of groups, instead of Lie groups. Worse, it’s in the complete generality of arbitrary *normal* extensions of groups, not *central* extensions, so the material is made a lot more subtle, and is full of irrelevant and complicating detail.

The closest I’ve come to finding what Brylinski and McLaughlin might be wanting to cite is Proposition 1.3.5 of chapter VII:

The notion of equivalence that Brylinski and McLaughlin state in their Theorem 5.2 is not that difficult, except for the implication that the definition of simplicial bundle implies that the putative group structure on the extension group is in fact associative. The relevant condition is that a certain section is equal to a canonical trivialisation (in what would have been Grothendieck’s notation). The closest I can see in the proof of the above statement is the following excerpt:

The “condition d’associativité (1.1.4.1)” is the usual commutative square expressing the associativity of a group object in a category. So it’s not very clear, since the statement of the Proposition is about objects not really like what Brylinski and McLaughlin define.

I’m fully aware that Brylinski and McLaughlin may have in fact been going for a citation in the sense of the classic Australian film *The Castle*:

It may also be that, in trying to skim-read 185 pages of mathematical French, I just missed the relevant section of SGA 7.1, chapter VIII (or possibly VII). In which case, I’d be happy to be pointed to chapter-and-verse.

]]>Unfortunately the original proof starts by well-ordering the universe of sets, which really makes me itch: category theory is generally rather constructive in nature, or at least can be reformulated to have some kind of constructive content. A purely category-theoretic statement that needs the axiom of Global Choice seems non-optimal to me. And, thankfully, this is the case. Martti came up with a way to remove Choice (even normal AC) from the construction, and then I figured out how to do the whole thing in Algebraic Set Theory with the assumption of Excluded Middle (i.e. the category with class structure is *Boolean*).

Then we turned our attention to a theorem of Freyd that gives a precise characterisation of which categories are concretizable, albeit starting from Vinárek’s cleaner and shorter proof. It shows that a necessary condition for concreteness, due to Isbell, is also sufficient. Isbell gave examples of non-concretizable categories, but they were artificial, specifically constructed to violate his condition. Like Kučera’s theorem, even Vinárek’s improved proof used Global Choice to pick a well-ordering of the universe. We were able to re-do the theorem in ZF, and more generally in (Boolean) algebraic set theory, with the additional assumption that the universal object has a well-ordered stratification by small objects (roughly, it has a small map to the object of ordinals). This last assumption is validated in any membership-based set theory where some cumulative hierarchy exhausts the class of all sets, say the von Neumann hierarchy for ZF, or some weaker set theories where a hierarchy, or a rank function, is posited axiomatically (see eg this paper).

The paper is in preparation, and would have been done by now, had I not been distracted by other projects. Martti is being very patient with me! For now, let me point to the slides for my talk, which you can see here.

]]>Unfortunately, it has come to my attention that certain misunderstandings concerning IUT continue to persist in certain parts of the world. Perhaps the most famous misunderstanding concerns an asserted identification of “redundant copies”. This misunderstanding involves well-known, essentially elementary mathematics at the beginning graduate level concerning the general nonsense surrounding “gluings”. For instance, if one “applies” this misunderstanding to the well-known gluing construction of the projective line, then one concludes that the two copies of the affine line that appear in this gluing are

“redundant”, hence may be identified. This identification leads immediately to a contradiction, i.e., to a “proof” that the projective line cannot exist! More details may be found in the Introduction to [EssLgc] and the references given there.

referencing this document, which is, to say the least, not a mathematical justification of any part of IUT.

Anyway, in case there are people who think Mochizuki can do no wrong, I wrote out a complete and elementary construction, with proof, of the projective line, using no “redundant” copies of the affine line. It uses the standard definition of pushout (=a gluing construction) from Mac Lane’s classic textbook *Categories for the Working Mathematician*, and the usual definition of the projective line. It is not terribly interesting, but someone had to do it.
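For reference, the gluing in question can be drawn as the familiar pushout square (my rendering of the standard construction, not an excerpt from the note): the two copies of $\mathbb{A}^1$ are glued along $\mathbb{G}_m$, included in one copy directly and in the other via inversion.

```latex
% P^1 as the pushout of two affine lines along G_m, glued via t <-> t^{-1}
\begin{tikzcd}
\mathbb{G}_m \arrow[r, hook] \arrow[d, "t \mapsto t^{-1}"'] & \mathbb{A}^1 \arrow[d] \\
\mathbb{A}^1 \arrow[r] & \mathbb{P}^1
\end{tikzcd}
```

No copy of $\mathbb{A}^1$ here is “redundant”: the universal property of the pushout needs both legs.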

I maintain, with good reason, that the type of reasoning in the note is what the so-called ‘RCS’ is doing. It is standard category theory and standard mathematics. There is no linguistic trickery or confusion or deeply detrimental disruptions here. Mochizuki is using non-standard definitions of standard terminology, and then complaining that other people’s definitions (which are the standard ones) lead to contradictions. They really don’t, if one doesn’t insist on trying to ignore the differences, and I don’t understand why he persists in it.

This says nothing about IUT itself, but given the claim of a

remarkably close structural resemblance between the gluing that appears in the standard construction of the projective line and the gluing constituted by the Θ-link of inter-universal Teichmüller theory,

not to mention

from the point of view of arithmetic geometry, the discussion of the projective line as a gluing of ring schemes along a multiplicative group scheme given in Example 2.4.7 yields a remarkably elementary qualitative model/analogue of the essential logical structure surrounding the gluing given by the Θ-link in inter-universal Teichmüller theory.

and the fact the claims of a contradiction in the construction of the projective line à la ‘RCS’ are erroneous, I think that one must take a really hard look at the analogous claims around the contradiction arising from the Θ-link, and adjust one’s priors accordingly.

]]>There is an open problem in computational HoTT/synthetic homotopy theory, namely to get a computer to evaluate the proof in HoTT that there is a natural number $n$ such that $\pi_4(S^3) \simeq \mathbb{Z}/n\mathbb{Z}$, and spit out the answer $n = 2$. This number is known as *Brunerie’s number*, as the proof that there exists such an $n$ was given by Brunerie, who then gave an on-paper proof in HoTT that it is indeed 2.

The only way to *compute* Brunerie’s number is to have an implementation of some version of HoTT in which the univalence axiom will actually compute, and so far this means using a proof assistant based on *cubical type theory*. The inefficiencies in existing software and/or lack of computing power mean that, despite formal computations of Brunerie’s number having been started running, none has ever finished.

This means that it would be nice to have a slightly easier problem of the same nature on which to test out ideas in computational synthetic homotopy theory. Guillaume Brunerie, Anders Mörtberg and Axel Ljungström have supplied just such a problem in *Synthetic Cohomology Theory in Cubical Agda*, and even implemented it in cubical Agda: namely, to compute the degree of the cup-square map

$$H^2(\mathbb{CP}^2) \to H^4(\mathbb{CP}^2),\quad \alpha \mapsto \alpha \smile \alpha.$$

Namely, if $\beta$ denotes this homomorphism, and $x$ is a generator of $H^2(\mathbb{CP}^2) \simeq \mathbb{Z}$, then $\beta(x)$ is $\pm 1$ times a generator of $H^4(\mathbb{CP}^2) \simeq \mathbb{Z}$. This is proved using on-paper HoTT in Chapter 6 of Brunerie’s thesis, using Gysin sequences, and is rather complicated. Brunerie, Mörtberg and Ljungström say that if one could simply *compute* this degree—what they call “a second Brunerie number”—by running the cubical Agda code, then the actual proof that it is $\pm 1$ is not needed.

However, the computation still is not efficient enough, or the software design clever enough, for it to actually finish. But the authors write

This computation should be more feasible than the original Brunerie number. As our definition of ⌣ produces very simple terms, most of the work has to occur in the two isomorphisms, and we are optimistic that future optimizations will allow us to perform this computation.

Hence we might actually see this number be calculated. It’s mildly amusing to me that we cannot compute the number 2 (the original Brunerie number) with a TB of RAM and 90 hours of computation, so HoTT-ists propose calculating 1 instead, and we still cannot do it (but might in the future!).

]]>