A study in basepoints: guest post by Kirti Joshi

[The text below the divider is a response by Kirti Joshi in response to some comments at MathOverflow regarding his recent preprint, “Untilts of fundamental groups: construction of labeled isomorphs of fundamental groups — Arithmetic Holomorphic Structures“. Kirti reached out to me regarding making a response, and I suggested that a blog posting would be better than an answer at MathOverflow, since it is not in the format of an answer to the question, to which he agreed. Regular readers of the blog will know that I follow developments around IUT with close interest, but I am not an expert in that area. My long-stated hope is that some interesting mathematics comes out of the whole affair, regardless of specifics about the correctness of Mochizuki’s proof or otherwise. –tHG]


I wish to clarify my work in the context of the discussion here. For this purpose suppose that X is a geometrically connected, smooth quasi-projective variety over a p-adic field, which I will take to be {\mathbb{Q}}_p for simplicity. In Mochizuki’s context this X will additionally be required to be a hyperbolic curve.

  1. First of all let me say this clearly: one cannot fix a basepoint for the tempered fundamental group of X in Mochizuki’s Theory [IUT1–IUT4]. The central role that (arbitrary) basepoints play in Mochizuki’s theory is discussed in (print version) [IUT1, Page 24], and notably the key operations of the theory, namely the log-link and theta-link, change or require arbitrary basepoints on either side of these operations [IUT2, Page 324] (and similar discussion in [IUT3]).
  2. This means one cannot naturally identify the tempered fundamental groups arising from distinct basepoints. [The groups arising from different basepoints are of course abstractly (and non-canonically) isomorphic. Mochizuki requires arbitrary basepoints but does not explicitly track them, and this makes his approach extremely complicated.]
  3. In the context of tempered fundamental groups, a basepoint for the tempered fundamental group of X is a morphism of Berkovich spaces \mathcal{M}(K)\to X^{an}_{{\mathbb{Q}}_p}, where K is an algebraically closed complete valued field containing an isometrically embedded {\mathbb{Q}}_p. [Such fields are perfectoid.] As I have detailed in my paper, arbitrary basepoints require arbitrary perfectoid fields K containing an isometrically embedded {\mathbb{Q}}_p. [For experts on Scholze’s Theory of Diamonds, let me say that the datum (X^{an}_{{\mathbb{Q}}_p}, \mathcal{M}(K)\to X^{an}_{{\mathbb{Q}}_p}) required to define the tempered fundamental group with basepoint \mathcal{M}(K)\to X^{an}_{{\mathbb{Q}}_p} is related (by Huber’s work) to a similar datum for the diamond (X^{ad})^\diamond associated to the adic space for X/{\mathbb{Q}}_p.]
  4. In my approach I track basepoints explicitly (because of (1) above) and I demonstrate how basepoints are affected by the key operations of the theory. [This is claimed in Mochizuki’s papers, but I think his proofs of this are quite difficult to discern (for me).]
  5. Because basepoints have to be tracked, and tempered fundamental groups arising from distinct basepoints cannot all be naturally identified, assertions which involve arbitrarily identifying fundamental groups arising from distinct basepoints cannot be used to arrive at any conclusion about Mochizuki’s Theory.
  6. In arithmetic geometry one typically works with isomorphism classes of Riemann surfaces i.e. with moduli of Riemann surfaces. Teichmuller space requires a different notion of equivalence and it is possible for distinct points of the classical Teichmuller space to have isomorphic moduli. This is also what happens in my p-adic theory.
  7. There is no linguistic trickery in my paper. I have developed my approach independently of Mochizuki’s group theoretic approach and my approach is geometric and completely parallels classical Teichmuller Theory. Nevertheless in its group theoretic aspect, my theory proceeds exactly as is described in [IUT1–IUT3] and arrives at all the principal landmarks with added precision because I bring to bear on the issues the formidable machinery of modern p-adic Hodge Theory due to Fargues-Fontaine, Kedlaya, Scholze and others. This precision allows me to give clear, transparent and geometric proofs of many of the principal assertions claimed in [IUT1–IUT3] without using Mochizuki’s machinery. [Notably my view is that Mochizuki’s Corollary 3.12 should be viewed as consequent to the existence of Arithmetic Teichmuller Spaces (at all primes) as detailed in my papers. One version of Corollary 3.12 is detailed (with proof) in my Constructions II paper. The cited version works with the standard complete Fargues-Fontaine curve {\mathscr{X}}_{{\mathbb{C}_p^\flat},{\mathbb{Q}}_p}, but there is a full version better adapted for Diophantine applications which works with the adic Fargues-Fontaine curve {\mathscr{Y}}_{{\mathbb{C}_p^\flat},{\mathbb{Q}}_p} and its finite étale covers and which exhibits the tensor packet structure of [IUT3, Section 3] which will appear in Constructions III (the two proofs are similar).] No claims are presently being made about the main result of [IUT4]. That is a work in progress.
  8. Apart from its intrinsic value, my original and independent work provides new evidence regarding Mochizuki’s work and I urge the mathematical community to reexamine his work using the emerging mathematical evidence.
  9. I am happy to talk to any mathematician who is interested in my work and I have written my papers as transparently as possible. [If there are any concrete mathematical objections to my papers, I will be happy to address them.]
  10. I thank David M. Roberts for a number of suggestions which have helped me improve this text.

Russell’s paradox: the original letter to Frege

Russell’s Paradox is famous for having put a nail in the coffin of naive set theory, even if not directly, and it’s the type of thing that many people have been introduced to in varying ways: from the parable of the barber who only shaves those who don’t shave themselves, to the more “official” set of all sets that don’t contain themselves.

A version of Russell’s paradox, taken from Logicomix.

But the original letter presents the result in two ways, and only the second is close to the version usually presented today. The first is in a much more Fregean style, not in the form of naive set theory. A translation of Russell’s letter (which was in German) is available in the excellent book From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, but that is not exactly light reading. So here is a transcribed version of the one-page letter.

Note that Russell’s Paradox was not the first of the “antinomies” to be discovered: the Burali-Forti Paradox predates it by five years. And Ernst Zermelo also independently discovered Russell’s result a few years earlier, and communicated it privately to Hilbert, Husserl and possibly others. But it’s become the icon for the transition point between naive and axiomatic set theory, which was published by Zermelo in 1908.

The core of a Lie category is an open Lie subgroupoid

Ehresmann’s first paper introducing what we today call Lie groupoids and topological groupoids, as well as categories internal to \mathbf{Top}, is Catégories topologiques et catégories différentiables, dating to 1959 and appearing in a rather obscure place. It is also available in his Œuvres Complètes, Part I, which are thankfully now online and available for free. Ehresmann in that paper also talks about what might be called Lie categories, namely categories internal to \mathbf{Mfld}, with the added caveat that the source and target maps have locally constant rank—but I think it is also implicitly assumed that they are submersions. What I wanted to record was a write-up of a proof of the first result of that paper, which is that given a Lie category X, the maximal subgroupoid, the core \mathrm{Core}(X), of the underlying category, forms a Lie groupoid whose manifold of arrows is open in the manifold of arrows of X.

This is analogous to, and a vast generalisation of, the example where one has the Lie category with a single object, \mathbb{R}^n, and the manifold \mathrm{End}(\mathbb{R}^n) of linear maps for arrows; it is well known that in this instance GL(n) \subset \mathrm{End}(\mathbb{R}^n) is an open submanifold, given by the condition \det(A)\neq 0. It is not difficult to show that in fact the same is true for the Lie category \mathbf{Mat} where the objects are natural numbers, and \mathbf{Mat}(n,m) := \mathbf{Vect}_{\mathbb{R}}(\mathbb{R}^n,\mathbb{R}^m), the vector space of linear maps. Here the manifold of objects is zero-dimensional, and we have different dimensions among the connected components of the arrow manifold. So you can imagine that Ehresmann’s result generalises to the case where the manifold of objects is positive-dimensional, and where one does not have a simple criterion like the determinant to measure invertibility.

When I first found this result, just over a decade ago, it was utterly mysterious. My experience with the nitty-gritty of differential geometry was at that time relatively limited. Not to mention a) it was in French and b) it pre-dated standard modern treatments of differential geometry and c) it was by Ehresmann, who has a distinctive style. It seemed interesting, and perhaps a little bit important, but I didn’t know what to do with it. I asked on Math.Stackexchange about the analogous setup for monoids internal to schemes, and then on MathOverflow for the general case of categories in schemes, with some nice answers, but nothing I could use at that point.

Most of Ehresmann’s proof of the headline result, from his 1959 paper Catégories topologiques et catégories différentiables

Several times since then I’ve looked at it casually and tried to satisfy myself, and finally it clicked. So this blog post will explain the proof in my own words and notation, and with all the details that I felt were missing.


New paper — Rigid models for 2-gerbes I: Chern–Simons geometry

This is to finally announce the release of a slow-burn project joint with Raymond Vozzo … at least, Part I of it.

  • DMR, Raymond F. Vozzo, Rigid models for 2-gerbes I: Chern–Simons geometry, arXiv:2209.05521, 63+5 pages.

Here’s the abstract:

Motivated by the problem of constructing explicit geometric string structures, we give a rigid model for bundle 2-gerbes, and define connective structures thereon. This model is designed to make explicit calculations easier, for instance in applications to physics. To compare to the existing definition, we give a functorial construction of a bundle 2-gerbe as in the literature from our rigid model, including with connections. As an example we prove that the Chern–Simons bundle 2-gerbe from the literature, with its connective structure, can be rigidified—it arises, up to isomorphism in the strongest possible sense, from a rigid bundle 2-gerbe with connective structure via this construction. Further, our rigid version of 2-gerbe trivialisation (with connections) gives rise to trivialisations (with connections) of bundle 2-gerbes in the usual sense, and as such can be used to describe geometric string structures.

The entire point of this project is the drive to make calculations simpler. We started out gritting our teeth and working with bundle 2-gerbes as defined by Danny Stevenson (based on earlier work by Carey, Murray and Wang), but there was a combinatorial growth in the complexity of what was going on, even though it was still only in relatively small degrees. Ultimately, through a process of refining our approach we landed on what is in the paper. The length of the paper is due to two things: keeping the details of calculations (it’s done in e.g. analytic number theory papers, why not here? I dislike the trend in some areas of pure maths to hide the details to make the paper look slick and conceptual, when there’s real work to be done), and the big appendix. Oh, that appendix. It was a useful exercise, I think, to actually work through the (functorial) construction of a bundle 2-gerbe as in the literature from one of our rigid bundle 2-gerbes. Relying on a cohomological classification result here feels a bit too weak for my liking (and, additionally, requires building up the classification theory; here we can build things by hand).

The introduction is intended to give an overview of the main ideas, so I will point you there, but perhaps this is the best place to outline the plan for the rest of the project. The main result of Part II will be to construct a universal, diffeological, Chern–Simons-style rigid bundle 2-gerbe, rigidifying the usual universal 2-gerbe on a suitable K(\mathbb{Z},4). Moreover, we build explicit classifying spans (that is, spans of maps the left leg of which is shrinkable) for the basic gerbes on arbitrary suitable G, taking the universal ‘basic gerbe’ to arise from a diffeological crossed module, much as the basic gerbe is intimately linked to the crossed module underlying the string 2-group of G. This permits us to justify focussing on such a rigid model, in that we will then know every bundle 2-gerbe should be stably isomorphic to a bundle 2-gerbe with a rigidification in our sense. Finally, Part III will give the intended first main application of this project, namely explicit geometric string structures for a wide class of examples. There are some spinoff applications, but they are further down the line.

Unifying ε-N and ε-δ arguments

I’m teaching an intro analysis topic at the moment, and so of course there’s the whole ordeal of introducing ε-δ arguments. However, when we say such a thing, we usually also have in mind the type of proofs used for convergence of sequences. These are not strictly ε-δ arguments (which concern continuity of a function), but rather ε-N arguments: they involve finding some large N past which the sequence is close to a limit, or else satisfies some Cauchy-type condition: hence either | a_n - L| < \varepsilon for all n \geq N, or | a_n - a_m| < \varepsilon for all n, m \geq N.

However, it is possible to present a convergence proof as a continuity proof, using \delta. This is not a massive secret, but it’s cute, so I thought I’d write it up.

Let us fix some data: a sequence (a_n) in \mathbb{R}. Such a thing is a function a\colon \mathbb{N}\to \mathbb{R}. We say that (a_n) converges to L\in \mathbb{R} if:

\forall \varepsilon > 0,\ \exists N\in \mathbb{N},\ \forall n> N,\ |a_n - L| < \varepsilon

Another way to think about this is: if we know that the function a extends to a function a'\colon \mathbb{N}_\infty = \mathbb{N} \cup \{\infty\}\to \mathbb{R} satisfying a special property, then we have convergence. Conversely, if we have convergence, we can clearly define such an extension by a'(\infty) = L. So what is this special property? It’s nothing other than continuity of a', where we have to put a particular topology on \mathbb{N}_\infty! From the point of view of pure point-set topology, we specify the neighbourhoods of \infty to be the cofinite subsets of \mathbb{N}_\infty containing \infty—that is, the subsets only missing finitely many elements of \mathbb{N}\subset \mathbb{N}_\infty—and otherwise for every n\in \mathbb{N}, \{n\} itself is a neighbourhood. Thus \mathbb{N} is a discrete subset, and only the point \infty “has nontrivial topology”, as it were. This at least means continuity of a' makes sense.

But the ε-δ definition of continuity is a metric definition, namely it’s treating \mathbb{R} as a metric space. So how do we make \mathbb{N}_\infty a metric space, so that the topology just defined is the metric topology? Here’s where we see the link we seek. Recall the fundamental sequence \frac{1}{n+1} \to 0 in the reals, whose convergence characterises Archimedean ordered fields. We can think of the set \mathcal{N}_0 := \{\frac{1}{n+1}\in \mathbb{R}\mid n\in \mathbb{N}\}\cup \{0\} = \mathcal{N}\cup \{0\} as inheriting the subspace topology from the reals, and in fact this gives something homeomorphic to \mathbb{N}_\infty. And, since \mathbb{R} is a metric space, we can give the subset a metric inducing this topology: we have d(\frac{1}{n},\frac{1}{m}) = |\frac{1}{n} - \frac{1}{m}| and d(0,\frac{1}{n}) = \frac{1}{n}.

This metric is slightly awkward, but it can at least inspire a slightly nicer metric d' on \mathbb{N}_\infty, namely d'(n,m) = \max(\frac{1}{n},\frac{1}{m}) for distinct n,m\in\mathbb{N} and d'(\infty,n) = \frac{1}{n} (note that giving distinct finite points distance 1 would violate the triangle inequality, since both are close to \infty). There is a bijective short map \mathbb{N}_\infty\to \mathcal{N}_0 (n\mapsto \frac{1}{n},\ \infty \mapsto 0) which is not invertible as a short map (nor even as a Lipschitz map), but which is still a homeomorphism. As far as mere continuity goes, either version of the space is fine, but we will use \mathbb{N}_\infty with the metric d' and the resulting topology. So here we see where our very patient \delta is going to come in. A function \mathbb{N}_\infty\to \mathbb{R} (where both of these are considered as metric spaces) is continuous precisely if
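To make this concrete, here is a small Python sketch of the two metric spaces (a sanity check, not part of the argument). I code \infty as None, and, so that the triangle inequality holds, take the distance between distinct finite points n, m to be \max(\frac{1}{n},\frac{1}{m}), which induces the discrete-plus-cofinite topology described above. The script checks on samples that the bijection is short while its inverse fails to be Lipschitz:

```python
from fractions import Fraction

def d(x, y):
    """Metric on N_0 = {1/n : n >= 1} ∪ {0}, inherited from the reals."""
    return abs(x - y)

def f(p):
    """The bijection N_∞ → N_0: f(n) = 1/n, f(∞) = 0; ∞ is coded as None."""
    return Fraction(0) if p is None else Fraction(1, p)

def d_prime(p, q):
    """Metric d' on N_∞: d'(p, q) = max(f(p), f(q)) for p != q."""
    return Fraction(0) if p == q else max(f(p), f(q))

points = [None] + list(range(1, 60))

# f is short: d(f(p), f(q)) <= d'(p, q), since |f(p) - f(q)| <= max(f(p), f(q))
assert all(d(f(p), f(q)) <= d_prime(p, q) for p in points for q in points)

# ...but its inverse is not Lipschitz: d'(n, n+1) / d(1/n, 1/(n+1)) = n + 1,
# which is unbounded as n grows
ratios = [d_prime(n, n + 1) / d(f(n), f(n + 1)) for n in range(1, 10)]
assert ratios == [Fraction(n + 1) for n in range(1, 10)]
```

Since the domain \mathcal{N}_0 is compact, the inverse homeomorphism is automatically uniformly continuous, so failing to be Lipschitz is the most one can say here.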

\forall \varepsilon > 0,\ \exists  \frac{1}{N},\ \forall \frac{1}{n}< \frac{1}{N},\ |a(n) - a(\infty)| < \varepsilon

So we could in principle dispense with the N, and only consider positive \delta, namely

\forall \varepsilon > 0,\ \exists \delta > 0,\ \forall \frac{1}{n}< \delta,\ |a_n - L| < \varepsilon

where we have set L:=a(\infty), and a_n = a(n), as usual. This is what happens if we know (a_n) has a limit. If we were to ask whether it has a limit, we should ask instead whether there is any continuous extension of \mathbb{N}\to \mathbb{R} along the inclusion \mathbb{N}\hookrightarrow \mathbb{N}_\infty. The usual proof of uniqueness of limits of sequences in metric spaces can be adapted to show that if such a continuous extension exists, then there is exactly one of them.
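As a concrete sanity check that the two formulations cut out the same tails, here is a small computation for the sample sequence a_n = \frac{1}{n+1} with limit L = 0; the choice N = 1/\varepsilon is specific to this sequence, and the finite ranges merely sample the (infinite) tails:

```python
from fractions import Fraction

def a(n):
    return Fraction(1, n + 1)  # sample sequence, with limit L = 0

L = Fraction(0)

for eps in [Fraction(1, 7), Fraction(1, 10), Fraction(1, 100)]:
    N = int(1 / eps)          # a tail index that works for this particular sequence
    delta = Fraction(1, N)    # the corresponding delta for the metric on N_∞

    # epsilon-N form: for all n > N, |a_n - L| < eps (sampled)
    assert all(abs(a(n) - L) < eps for n in range(N + 1, N + 500))

    # epsilon-delta form: for all n with 1/n < delta, |a_n - L| < eps (sampled)
    assert all(abs(a(n) - L) < eps
               for n in range(1, 5000) if Fraction(1, n) < delta)
```

The two conditions agree because 1/n < \delta = 1/N holds exactly when n > N.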

A similar description can be made for the Cauchy property, except in this instance one really does need to use the metric space (\mathcal{N},d) (not the space \mathcal{N}_0!), so that the function \mathbb{N}\to \mathcal{N} is a Cauchy sequence. Notice here that we do not have the limit point, since Cauchyness doesn’t refer to any actual limiting value, even assuming one exists. If we include \{0\}, hence consider the metric space \mathcal{N}_0, then we are dealing with a Cauchy sequence known to be convergent (in the reals, this is all Cauchy sequences, but one can consider all this in the rationals, for instance, where only some Cauchy sequences have a limit).

Thus there are four different things happening here. And this is where I put on my category-theorist’s hat: we have 1) a generic sequence, 2) a generic convergent sequence, 3) a generic Cauchy sequence, and 4) a generic convergent Cauchy sequence. There are maps from the generic sequence to the generic Cauchy sequence, from the generic sequence to the generic convergent sequence, from the generic convergent sequence to the generic convergent Cauchy sequence, and from the generic Cauchy sequence to the generic convergent Cauchy sequence. Since an X-sequence (say in \mathbb{R}, but it works in any metric space) is given by a continuous function from the generic X-sequence, precomposing with the maps just described forgets properties of the sequence (for instance, take a convergent sequence together with its limit, and then forget what the limit is, or take a Cauchy sequence and forget this fact). I’ve been a bit sloppy as to what category all this is happening in; it should at least be metric spaces and continuous maps, but one could see if the screws could be tightened, and something like this could work with e.g. uniformly continuous maps. I leave this as an exercise for the reader.

Convergence of an infinite sum in the rationals

I’m teaching an intro to analysis course this semester, and we are starting with the usual axiomatic treatment of numbers. I placed a small emphasis on the rationals as an Archimedean field, and we can actually start doing analysis before we even get to talking about the real numbers. Moreover, since everything here is so close to the metal, we can prove results using no more than induction.

I wanted to use this blog post to record the proof using no more than the rationals (i.e. no embedding things into the real numbers), that the geometric series \sum_{n=0}^\infty x^n converges in \mathbb{Q} for 0 < x < 1 and rational. One can perform the usual manipulation of partial sums (possible in \mathbb{Q}) to get

\sum_{n=0}^\infty x^n = \dfrac{1}{1-x} - \dfrac{1}{1-x}\lim_{n\to \infty} x^n

(assuming the RHS exists) and hence it suffices to prove that \lim_{n\to \infty} x^n = 0. It is easy to prove (say with induction) that x^{n+k} < x^n for all k \geq 1.

Then the limit is zero when for all N\in \mathbb{N}, we can find some n\in \mathbb{N} such that x^n < \dfrac{1}{N} (since the powers of x are decreasing, the whole tail then lies below \dfrac{1}{N}). This is close to being dual to the statement of the Archimedean property, which is of the form \exists N,\ \dfrac{1}{N} < y, for each positive rational y. Initially I thought of trying to leverage the Archimedean property for the positive rational numbers, in a multiplicative sense (Archimedeanness makes sense for any ordered group), but I didn’t end up making this work (more on this below). Ultimately I found an argument in Kenneth Ross’ book Elementary Analysis, which I simplified further to the following.

We write x = \dfrac{1}{1+a/b} for positive integers a, b, since x < 1. One can prove (by induction) the estimate (1+a/b)^n > na/b (Ross had this as a corollary of the binomial theorem, but there is a simple direct proof). Then we have x^n =\dfrac{1}{(1+a/b)^n} < \dfrac{1}{na/b} = \dfrac{b}{a}\cdot\dfrac{1}{n}. Given N\in \mathbb{N}, we can choose n = bN, so that x^n < \dfrac{b}{abN} \leq \dfrac{1}{N}, as needed.
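For what it’s worth, the whole argument can be checked with exact rational arithmetic via Python’s fractions module; the sample value x = 2/3 (so a = 1, b = 2) is my choice for illustration, and any rational strictly between 0 and 1 would do:

```python
from fractions import Fraction

x = Fraction(2, 3)                 # any rational with 0 < x < 1
t = 1 / x - 1                      # write x = 1/(1 + a/b), so a/b = 1/x - 1
a_, b_ = t.numerator, t.denominator

# The induction estimate (1 + a/b)^n > n*a/b, equivalently x^n < (b/a)*(1/n),
# checked on an initial segment
assert all(x ** n < Fraction(b_, a_) / n for n in range(1, 300))

# Given N, the choice n = b*N gives x^n < 1/(a*N) <= 1/N
for N in range(1, 60):
    n = b_ * N
    assert x ** n < Fraction(1, N)
```

Exact rationals matter here: with floating point, x ** n underflows to 0 for large n, which would silently trivialise the inequalities.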

What I like about this argument is that it uses nothing other than the ordered field axioms on \mathbb{Q}, together with two very easy applications of induction. It’s a lovely proof to present to an undergrad class.

Returning to my failed first idea for a proof, I had reduced the problem to that of showing that for every rational x > 1, there is an n such that x^n > 2 (challenge: can you leverage this fact to conclude the convergence as desired? It’s the base case of an induction proving the multiplicative Archimedean property). User @Rafi3AK on Twitter supplied an explicit estimate of the required n, using the binomial theorem, namely n = \lceil 1/(x-1)\rceil (i.e. round up to the ceiling).
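That estimate is easy to sanity-check numerically; the sample values of x below are my own choices. Note the boundary case x = 2, where the estimate gives n = 1 and x^n = 2 exactly, so the safe statement to test is x^n \geq 2:

```python
from fractions import Fraction
from math import ceil

# Check n = ceil(1/(x - 1)) for sample rationals x > 1. Writing t = x - 1,
# the binomial theorem gives x^n = (1 + t)^n >= 1 + n*t, and n*t >= 1 by the
# choice of n, so x^n >= 2 (with equality possible at x = 2).
for x in [Fraction(3, 2), Fraction(6, 5), Fraction(101, 100), Fraction(2)]:
    n = ceil(1 / (x - 1))   # math.ceil works on Fraction, returning an int
    assert x ** n >= 2
```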

Mathematics as art, and as craft

I don’t think I’m too shy about the fact that I have a somewhat non-standard approach to mathematics, but I had a recent realisation about my own mindset that I found interesting.

I grew up in a family with a strong emphasis on arts and crafts. Spinning, knitting, pottery, leadlighting, paper-making, printing, furniture restoration, garment construction, baking, drawing and so on. At the end of the day, there was something you could hold, touch, wear and so on. At one stage in high school I considered studying Design.

I think this idea that at the end of the day, one can actually make something, is one that pervades my mathematics. This ranges from my habit of physically printing stapled booklets of each paper I write, to wanting concrete formulas or constructions for certain abstract objects. At one point I had found explicit transition functions for the nontrivial String bundle on S^5, and I collaborated with someone more expert at coding than me to generate an animation of part of this. Another time I really wanted to get my hands on (what an apt metaphor!) what amounts to a map of higher (non-concrete) differentiable stacks, so I worked out the formula for my own satisfaction:

An explicit formula for a functor from the 5-sphere to the stack of SU(2)-bundles with connection

I love proving a good theorem, but if I can write it out in a really visually pleasing way, then it is much more satisfying. Such as the circle of ideas around the diagonal argument/Lawvere fixed point theorem.

The Yanofsky variation of Lawvere’s fixed point theorem, in a magmoidal category with diagonals. That’s the proof.

I very much like designing nice-looking diagrams, and at one point I was trying to get a string diagram calculus working for the hom-bicategory of \mathbf{2Cat} (with objects the weak 2-functors), for the purpose of building amazing-looking explicit calculations to verify a tricategorical universal property.

String diagram data for a transformation between weak 2-functors

Sadly, I never finished this, and the reason—a second independent proof of a higher-categorical fact in the literature with many omitted details—is now moot, with another proof by other people.

I have a t-shirt with a picture of the data generated in my computational project on the Parker loop (joint with Ben Nagy), and I’m itching to make more clothes with my maths on them. I love the fun I’ve had with David Butler thinking about measures of convexity of deltahedra, built with Polydrons (it seems to be an open question how non-convex one can go!)

It just speaks to me when I can actually make something, or at least feel like I’ve made something when doing maths. Something I can point at and say “I made that”. I think there’s an opportunity in the market for really high-quality art prints of pieces of really visually beautiful category theory, for instance, or even just mathematics more broadly. I’ve experimented over the years with (admittedly naive, amateur, filter-heavy) photographs of mathematics, for the sake of striving for an aesthetic presentation.

Proof of an elementary result around t-th roots of integers, with the lions on Adelaide’s CML Building in the background

I want the mathematics I create to “feel real”. Sometimes that feeling comes when I can hold the whole conceptual picture in my head at once, but it’s ephemeral. Actually making the end product, making it tangible—no matter how painful it can feel in the process—is a real point of closure.

Even the process of making little summary notes of subjects I studied at school and uni has produced objects that I have kept, and have a fondness for. They are the distillation of that learning, the physical artifact that represents my knowledge.

Notes from Year 12 Physics and Chemistry

Even the choice to work in physical notebooks, and slowly build that collection, rather than digital note-taking gives me something I can see slowly grow, and I can appreciate as being a reflection of my changing ideas and development in research. Having nice notebooks gives an aesthetic that motivates me to fill them up.

Stack of (mostly) Moleskine research notebooks

Given that mathematics is generally taught as a playground of the mind (though there is of course a push in places for more manipulables in maths education, physical or digital, and for more visualisation), I do wonder to what extent students feel like they are missing an aspect like this. We don’t need a 3d printer in a classroom to have the students make something tangible, lasting and awesome. Somehow I’ve managed to avoid edutech, despite winning a graphics calculator at school back in the 90s—and never learning how to use it—and I’ve never been taught with manipulables past Polydrons at age 5 and MAB blocks at age 6 (both of which I still think are awesome). But I love actually making mathematical things.

When was the Joyal model structure on sSet born?

Back in 2012, I was under the impression that the Joyal model structure was described in a letter to Grothendieck in the early/mid 1980s. There is a letter from Joyal to Grothendieck describing a model structure, but it was the model structure on simplicial sheaves. Based on my MO answer, the nLab was edited to include this claim. But Dmitri Pavlov started asking questions on the nForum and under my answer, and now I have to retract my statement! Now he has asked an MO question of his own, looking for a definitive answer. Here is my attempt to track things down, written before Pavlov’s MO question landed.

Joyal seems to be citing his own work “in preparation” in the ’00s as the source of the model structure whose fibrant objects are the quasicategories, at the latest in 2006, based on the citation of Theory of quasi-categories I in the (arXived) 2006 Quasi-categories vs Segal spaces. In the (published in) 2002 paper Quasi-categories and Kan complexes Joyal cites Theory of quasi-categories (not Theory of quasi-categories I), but doesn’t say anything about the model structure yet. However, in the CRM notes from 2008 (partly based on a manuscript—the IMA lectures—from 2004, plus the then draft of the book Theory of quasi-categories; there is also a 2007 version of the notes with the model structure) Joyal says “The results presented here are the fruits of a long term research project which began around thirty years ago.”

Verity, in his (arXived 2006) paper Weak Complicial Sets A Simplicial Weak ω-Category Theory Part I: Basic Homotopy Theory writes

we round out our presentation by localising our model structure and transporting it to the category of simplicial sets itself, in order to provide an independent construction of a model category structure on that latter category whose fibrant objects are Joyal’s quasi-categories [10].

where [10] is Joyal’s 2002 paper, so that the model structure was known to experts at least by 2006, even if not announced in 2002.

So I’m tempted to guess that the whole 1980s origin of the Joyal model structure (i.e. in a letter to Grothendieck, as stated on Wikipedia at the time of writing) for quasi-categories might be an urban myth.

I couldn’t find a mention of a model structure for quasicategories in the 2004 slides from IMA for the talk of Joyal/May/Porter, except for the closing sentence:

Baby camparison should give that the hammock localizations of all models for weak categories have equivalent hammock localizations. Model category theory shows how.

Tim Porter’s 2004 IMA notes likewise don’t seem to mention the model structure. So perhaps the date for the model structure can be pinned down to between (June) 2004 and (July) 2006, at least as far as going by Joyal’s public statements. One point in favour of this is that Tim’s notes include the open question

In general, what is the precise relationship between quasi categories (a weakening of categories) and Segal categories (also a weakening of categories)? (This question is vague, of course, and would lead to many interpretations)

which is what Joyal and Tierney’s 2006 paper pins down, in terms of a Quillen equivalence of model categories. If the question in 2004 had been merely one of trying to match up existing model structures (the Segal category one existed in 1998), I doubt Tim would have called it a vague question!

PS: I don’t know how long Lurie took to write the 589-page version 1 of Higher Topos Theory, put on the arXiv at the start of August 2006, but he refers to Joyal’s model structure there, citing Theory of quasi-categories I.

It’s a messy job, but someone had to do it: fixing all the links

Back in the olden days, there used to be a site called the Front for the Mathematics arXiv. It lasted from the 90s until a few years ago, and had a nicer website than the arXiv itself started out with. It also served search results in a much nicer way than the arXiv, even as the latter improved over time. As a result, some people had a habit of using it to look up papers, and, as it happened, supply links to said papers on MathOverflow.

When the Front finally packed up shop, there were about 900 links to it. Stackexchange, the company, has ways of mass-editing urls without causing chaos (i.e. bumping all edited questions), but this has to be done algorithmically, of course…and the arXiv Front identifiers were not always identical to the arXiv ones, and hence the paper part of the url was not the same. Woah woah, I hear you say: what do you mean? That the Front was rolling its own article IDs? Yep.

The reason for this is that the arXiv didn’t launch into the world fully formed: it started out with physics, and there were sorta-parallel, not-quite-independent arXiv-like repos for various subjects in maths. If you go back in the ‘what’s new’ postings to 1994–1996, you can see things like the “q-alg archive” appear (now math.QA), in this case due to people not knowing where to put things like quantum knot invariants, which had been ending up in hep-th. By mid-2007, all the arXiv identifiers across all subjects were unified; before that, you had area-specific prefixes (eg math/0102003, cond-mat/0102003 or hep-th/0102003); and before that again, you had an even more granular system just for mathematics, similar to how physics was split up. Pre-1998 you had alg-geom, dg-ga, funct-an, and q-alg, as well as math-ph. There was also a parallel system at one point, allowing eg math.DG/0307245 and math/0307245 to point to the same paper. And there were more genuinely independent preprint repos, like the Hopf archive, the K-Theory archive, and the Banach archive, which slowly got absorbed into the arXiv itself. The upshot is that the arXiv Front had a slightly more systematic referencing system, as far as I can tell, while still recognising the actual arXiv identifiers. It would assign an ID that was just a number, since the Front was intended to cover only mathematics, at least to start, and so the hassle of having parallel identifiers in different topics wouldn’t raise its head.

However, the fun part is that when the issue was raised on meta.MO on 25th August last year, after nearly 18 months of broken links, the different types of IDs were known and pointed out, but there was no extant documentation on how the Front created its own IDs! This problem was compounded by situations where people on MO would write “…and see also this paper.” with no additional information, the only context being that it was presumably relevant to the question. Sometimes an answer from 2009 (before current social norms were firmed up) would be “This is answered in this paper“, and that’s it. The only thing we knew for sure was the year and month of the paper, and maybe the subject it was in (but not eg the arXiv subject area). If the Stackexchange gurus had gone ahead with a blanket search-and-replace swapping the arXiv Front domain for arXiv.org, the situation would have been even worse, since we wouldn’t have had the original link to work with, and the new link might point to something it shouldn’t.

Martin Sleziak, an indefatigable MO/meta.MO editor, wrote an epic answer full of targeted search queries looking for papers in the various date ranges, covering what should be all the different ID formats, reporting the numbers of each, and classifying them into categories depending on how automated the editing might need to be. He also found some of the needed translations, and eventually I found an old help page on the Wayback Machine that spelled out the actual encoding, in glorious late-90s web design:

Until March 2000, the Front renumbered articles in the old mathematical archives alg-geom, funct-an, dg-ga, and q-alg as math archive articles. To avoid duplicate numbers, the system added 50 to each funct-an number, 100 for dg-ga, and 140 for q-alg. Since this system was never adopted at the arXiv, it has for now been scrapped. If you cite or link to any math articles math.XX/yymmnnn, where the year yy is 97 or prior and the number nnn is less than 200, you should convert back to the original numbers as stamped on the articles themselves.
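Going by that recovered help page, the renumbering is straightforward to invert. Here is a minimal sketch of how such a conversion could work; the function name `front_to_arxiv` is mine, as is the assumption that each of the old archives had fewer than 50 papers in a given month, so that the offset ranges don’t overlap:

```python
# Inverting the Front's pre-2000 renumbering, per the old help page:
# funct-an numbers had 50 added, dg-ga 100, q-alg 140; alg-geom was
# left as-is. Hypothetical assumption: each archive had fewer than
# 50 papers per month, so the ranges 1-50, 51-100, 101-140, 141+
# identify the archive unambiguously.
OFFSETS = [("q-alg", 140), ("dg-ga", 100), ("funct-an", 50), ("alg-geom", 0)]

def front_to_arxiv(front_id: str) -> str:
    """Convert a Front-style ID math/yymmnnn (yy <= 97, nnn < 200)
    back to the original old-archive ID."""
    _, num = front_id.split("/")
    yymm, nnn = num[:4], int(num[4:])
    # The largest offset strictly below the number tells us the archive.
    for archive, offset in OFFSETS:
        if nnn > offset:
            return f"{archive}/{yymm}{nnn - offset:03d}"
    return front_id  # shouldn't happen for valid input

# For example, the Front's math/9701150 would have been q-alg/9701010.
```

This is exactly why a blanket domain swap alone wasn’t safe for the pre-2007 links: arxiv.org has never recognised these renumbered IDs, so a mechanically rewritten URL would point at the wrong paper, or at nothing.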

That was five months into the project of slowly editing questions and answers the old-fashioned way, by hand, replacing broken URLs we knew how to deal with, but which were still in the pre-2007 era of ID weirdness. These couldn’t be done too many at a time, and someone did complain I was editing too much, because it pushed new questions further down the front page, and off it entirely quicker than usual.

Further, since leaving a direct link to a pdf, say, and merely saying “this paper” means the person reading the question needs to open up a pdf to know what the paper is (not helpful on mobile!), I took it on myself to include actual bibliographic information when fixing a link from the Front to the arXiv proper. Knowing you are being referred to a 2002 arXiv paper of Perelman means you can recognise it instantly. Even better, earlier on in the project, when I was fresh and keen, I tried to include a journal reference and even a doi link (sometimes people would also link to unstable publisher urls, and there are still problems with these, especially those pointing to opaque springerlink URLs which no longer exist!). This is one way of future-proofing the system, and making it more information-rich for both human and machine readers. I have a suspicion that the arXiv is now implementing a doi system for its articles, for the day when arxiv.org may not be the address we visit when looking for papers.

Another problem is that people also supplied links in comments, and comments cannot be edited except by a mod. So our solution was to give a reply comment pointing out that the Front link was broken and supplying the working arXiv.org link. When particularly motivated, I included the paper title and sometimes even more bibliographic info. Asaf Karagila whacked a few of these with his mod powers, editing them directly, but leaving the mods to fix all of these is not an option.

After slow work by user ‘Glorfindel‘ (a mod on the big meta.SE, who wrote a script to make a few edits every couple of days), by Martin, and by myself, on 29th March this year I edited the last outstanding pre-April 2007 Front link: every broken custom arXiv Front url was now working in questions and answers, and every comment with such a link had a reply pointing out what it should be pointing to. For good measure, over the next day or so I edited the rest of the few links to papers from 2007, so that any replacement code can deal with a clean date division where it needs to be active. Between 20th September 2021 and 30th March, it turns out I fixed a bit over 200 broken links, and responded to about 100 comments with new, working links. The graph showing all removals of “front.math.ucdavis.edu” over the lifetime of MO on the SE2.0 platform is quite dramatic:

Plot of edits removing links to the arXiv Front, per month, from 2013 to present (courtesy of Martin Sleziak)

Now that all the manual edits are done, the Stackexchange Community Mods (these are SE employees, not just elected users) are looking at the situation again and how the 2008–2019 links can be automatically edited by a script. Watch this space…

So what is the takeaway, if any? Don’t leave links to papers on MathOverflow without some minimum of identifying information! The problem is similar for links to papers on people’s personal websites, which have evaporated after a decade, and, as noted above, for publisher urls used instead of doi links. Without a title and at least one author, someone has to spend the time tracking this stuff down. If the MO user who posted the link has moved on, sometimes there is very little that can be done. By spending the time even just copying over the title of the paper, an MO user is helping potentially many people downstream, and certainly saving the time of someone like me, who enjoys such a detective task but would prefer not to need to do it!