New paper — Rigid models for 2-gerbes I: Chern–Simons geometry

This is to finally announce the release of a slow-burn project joint with Raymond Vozzo… at least, Part I of it.

  • DMR, Raymond F. Vozzo, Rigid models for 2-gerbes I: Chern–Simons geometry, arXiv:2209.05521, 63+5 pages.

Here’s the abstract:

Motivated by the problem of constructing explicit geometric string structures, we give a rigid model for bundle 2-gerbes, and define connective structures thereon. This model is designed to make explicit calculations easier, for instance in applications to physics. To compare to the existing definition, we give a functorial construction of a bundle 2-gerbe as in the literature from our rigid model, including with connections. As an example we prove that the Chern–Simons bundle 2-gerbe from the literature, with its connective structure, can be rigidified—it arises, up to isomorphism in the strongest possible sense, from a rigid bundle 2-gerbe with connective structure via this construction. Further, our rigid version of 2-gerbe trivialisation (with connections) gives rise to trivialisations (with connections) of bundle 2-gerbes in the usual sense, and as such can be used to describe geometric string structures.

The entire point of this project is the drive to make calculations simpler. We started out gritting our teeth and working with bundle 2-gerbes as defined by Danny Stevenson (based on earlier work by Carey, Murray and Wang), but there was a combinatorial growth in the complexity of what was going on, even though it was still only in relatively small degrees. Ultimately, through a process of refining our approach, we landed on what is in the paper. The length of the paper is due to two things: keeping the detail of the calculations (it’s done in eg analytic number theory papers, why not here? I dislike the trend in some areas of pure maths of hiding the details to make the paper look slick and conceptual, when there’s real work to be done), and the big appendix. Oh, that appendix. It was a useful exercise, I think, to actually work through the (functorial) construction of a bundle 2-gerbe as in the literature from one of our rigid bundle 2-gerbes. Relying on a cohomological classification result here feels a bit too weak for my liking (and, additionally, requires building up the classification theory; here we can build things by hand).

The introduction is intended to give an overview of the main ideas, so I will point you there, but perhaps this is the best place to outline the plan for the rest of the project. The main result of Part II will be to construct a universal, diffeological, Chern–Simons-style rigid bundle 2-gerbe, rigidifying the usual universal 2-gerbe on a suitable K(\mathbb{Z},4). Moreover, we build explicit classifying spans (that is, a span of maps the left leg of which is shrinkable) for the basic gerbes on arbitrary suitable G, taking the universal ‘basic gerbe’ to arise from a diffeological crossed module, much as the basic gerbe is intimately linked to the crossed module underlying the string 2-group of G. This permits us to justify focussing on such a rigid model, in that we will then know every bundle 2-gerbe should be stably isomorphic to a bundle 2-gerbe with a rigidification in our sense. Finally, Part III will give the intended first main application of this project, namely explicit geometric string structures for a wide class of examples. There are some spinoff applications, but they are further down the line.

Unifying ε-N and ε-δ arguments

I’m teaching an intro analysis topic at the moment, and so of course there’s the whole ordeal of introducing ε-δ arguments. However, when we say such a thing, we usually also have in mind the type of proof used for convergence of sequences. These are not strictly ε-δ arguments (that form belongs to continuity of a function), but instead involve finding some large N past which the sequence is close to a limit, or else some Cauchy-type condition: hence either | a_n - L| < \varepsilon for all n \geq N, or | a_n - a_m| < \varepsilon for all n, m \geq N.

However, it is possible to present a convergence proof as a continuity proof, using \delta. This is not a massive secret, but it’s cute, so I thought I’d write it up.

Let us fix some data: a sequence (a_n) in \mathbb{R}. Such a thing is a function a\colon \mathbb{N}\to \mathbb{R}. We say that (a_n) converges to L\in \mathbb{R} if:

\forall \varepsilon > 0,\ \exists N\in \mathbb{N},\ \forall n> N,\ |a_n - L| < \varepsilon

Another way to think about this is: if we know that the function a extends to a function a'\colon \mathbb{N}_\infty = \mathbb{N} \cup \{\infty\}\to \mathbb{R}, satisfying a special property, then we have convergence. Conversely, if we have convergence, we can clearly define such an extension where a'(\infty) = L. So what is this special property? It’s nothing other than continuity of a', where we have to put a particular topology on \mathbb{N}_\infty! From the point of view of pure point-set topology, we specify the neighbourhoods of \infty to be the cofinite subsets of \mathbb{N}_\infty containing \infty—that is, the subsets only missing finitely many elements of \mathbb{N}\subset \mathbb{N}_\infty—and otherwise, for every n\in \mathbb{N}, \{n\} itself is a neighbourhood. Thus \mathbb{N} is a discrete subset, and only the point \infty “has nontrivial topology”, as it were. This at least means continuity of a' makes sense.

But the ε-δ definition of continuity is a metric definition, namely it’s treating \mathbb{R} as a metric space. So how do we make \mathbb{N}_\infty a metric space, so that the topology just defined is the metric topology? Here’s where we see the link we seek. Recall the fundamental sequence \frac{1}{n+1} \to 0 in the reals, whose convergence characterises Archimedean ordered fields. We can think of the set \mathcal{N}_0 := \{\frac{1}{n+1}\in \mathbb{R}\mid n\in \mathbb{N}\}\cup \{0\} = \mathcal{N}\cup \{0\} as inheriting the subspace topology from the reals, and in fact this gives something homeomorphic to \mathbb{N}_\infty. And, since \mathbb{R} is a metric space, we can give this subset the metric inducing its topology: we have d(\frac{1}{n},\frac{1}{m}) = |\frac{1}{n} - \frac{1}{m}| and d(0,\frac{1}{n}) = \frac{1}{n}.

This metric is slightly awkward, but it at least inspires a slightly nicer metric d' on \mathbb{N}_\infty, namely d'(n,m) = 1 for distinct n, m\in \mathbb{N}, and d'(\infty,n) = \frac{1}{n}. There is a bijective short map \mathbb{N}_\infty\to \mathcal{N}_0 (given by n\mapsto \frac{1}{n} and \infty \mapsto 0), which is not invertible as a short map (nor even as a Lipschitz map, or a uniformly continuous function), but which is still a homeomorphism. As far as mere continuity goes, either version of the space is fine, but we will use \mathbb{N}_\infty with the metric d' and the resulting topology. So here we see where our very patient \delta is going to come in. A function \mathbb{N}_\infty\to \mathbb{R} (where both of these are considered as metric spaces) is continuous precisely if

\forall \varepsilon > 0,\ \exists N\in \mathbb{N},\ \forall n \text{ with } \frac{1}{n}< \frac{1}{N},\ |a(n) - a(\infty)| < \varepsilon

So we could in principle dispense with the N, and only consider positive \delta, namely

\forall \varepsilon > 0,\ \exists \delta > 0,\ \forall n \text{ with } \frac{1}{n}< \delta,\ |a_n - L| < \varepsilon

where we have set L:=a(\infty), and a_n = a(n), as usual. This is what happens if we know (a_n) has a limit. If we were to ask whether it has a limit, we should ask instead whether there is any continuous extension of a\colon \mathbb{N}\to \mathbb{R} along the inclusion \mathbb{N}\hookrightarrow \mathbb{N}_\infty. The usual proof of uniqueness of limits of sequences in a metric space can be adapted to show that if such a continuous extension exists, then there is exactly one of them.
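For the record, the dictionary between the two quantifier patterns is simple: since \frac{1}{n} < \delta is the same as n > \frac{1}{\delta}, given \delta > 0 we may take N = \lceil 1/\delta \rceil, and conversely, given N we may take \delta = \frac{1}{N}, so that

n > N \implies \frac{1}{n} < \frac{1}{N} = \delta

and each form of the definition implies the other.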

A similar description can be made for the Cauchy property, except in this instance one really does need to use the metric space (\mathcal{N},d) (not the space \mathcal{N}_0!), so that the canonical function \mathbb{N}\to \mathcal{N} is a Cauchy sequence. Notice here that we do not have the limit point, since Cauchyness doesn’t refer to any actual limiting value, even assuming one exists. If we include \{0\}, hence consider the metric space \mathcal{N}_0, then we are dealing with a Cauchy sequence known to be convergent (in the reals, this is all Cauchy sequences, but one can consider all this in the rationals, for instance, where only some Cauchy sequences have a limit).
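Spelled out in the same δ-style as above, the Cauchy condition on (a_n) becomes

\forall \varepsilon > 0,\ \exists \delta > 0,\ \forall n, m \text{ with } \frac{1}{n} < \delta \text{ and } \frac{1}{m} < \delta,\ |a_n - a_m| < \varepsilon

with the same translation between N and \delta as before.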

Thus there are four different things happening here. And this is where I put on my category-theorist’s hat: we have 1) a generic sequence (the space \mathbb{N}), 2) a generic convergent sequence (\mathbb{N}_\infty), 3) a generic Cauchy sequence (\mathcal{N}), and 4) a generic convergent Cauchy sequence (\mathcal{N}_0). There are maps from the generic sequence to the generic Cauchy sequence, from the generic sequence to the generic convergent sequence, from the generic convergent sequence to the generic convergent Cauchy sequence, and from the generic Cauchy sequence to the generic convergent Cauchy sequence. Since an X-sequence (say in \mathbb{R}, but it works in any metric space) is given by a continuous function from the generic X-sequence, precomposing with the maps just described forgets properties of the sequence (for instance, take a convergent sequence together with its limit, and then forget what the limit is, or take a Cauchy sequence and forget this fact). I’ve been a bit sloppy as to what category all this is happening in; it should at least be metric spaces and continuous maps, but one could see if the screws could be tightened, and something like this made to work in eg metric spaces and uniformly continuous maps. I leave this as an exercise for the reader.

Convergence of an infinite sum in the rationals

I’m teaching an intro to analysis course this semester, and we are starting with the usual axiomatic treatment of numbers. I made a small emphasis on the rationals as an Archimedean field, and we can actually start doing analysis before we even get to talking about the real numbers. Moreover, since everything here is so close to the metal, we can prove results with nothing more than induction.

I wanted to use this blog post to record the proof, using no more than the rationals (i.e. without embedding things into the real numbers), that the geometric series \sum_{n=0}^\infty x^n converges in \mathbb{Q} for rational 0 < x < 1. One can perform the usual manipulation of partial sums (possible in \mathbb{Q}) to get

\sum_{n=0}^\infty x^n = \dfrac{1}{1-x} - \dfrac{1}{1-x}\lim_{n\to \infty} x^n

(assuming the RHS exists) and hence it suffices to prove that \lim_{n\to \infty} x^n = 0. It is easy to prove (say with induction) that x^{n+k} < x^n for all k \geq 1.
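For completeness, the manipulation of partial sums is the telescoping identity (1-x)\sum_{n=0}^{N} x^n = 1 - x^{N+1} (another easy induction), which rearranges to

\sum_{n=0}^{N} x^n = \dfrac{1}{1-x} - \dfrac{1}{1-x}\,x^{N+1}

and letting N\to \infty gives the formula above.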

Then the limit is zero when for all N\in \mathbb{N}, we can find some n\in \mathbb{N} such that x^n < \dfrac{1}{N} (by the monotonicity just noted, all later powers are then also below \dfrac{1}{N}). This is close to being dual to the statement of the Archimedean property, which is of the form \exists N,\ \dfrac{1}{N} < y, for each positive rational y. Initially I thought of trying to leverage the Archimedean property for the positive rational numbers, in a multiplicative sense (Archimedeanness makes sense for any ordered group), but I didn’t end up making this work (more on this below). Ultimately I found an argument in Kenneth Ross’s book Elementary Analysis, which I simplified further to the following.

We write x = \dfrac{1}{1+a/b} for positive integers a, b, since x < 1. One can prove (by induction) the estimate (1+a/b)^n > na/b (Ross had this as a corollary of the binomial theorem, but there is a simple direct proof). Then we have x^n =\dfrac{1}{(1+a/b)^n} < \dfrac{1}{na/b} = \dfrac{b}{a}\cdot\dfrac{1}{n}. Given N\in \mathbb{N}, we can choose n = bN, so that x^n < \dfrac{b}{abN} \leq \dfrac{1}{N}, as needed.
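Since everything stays inside \mathbb{Q}, the estimate is easy to sanity-check on a computer in exact rational arithmetic, with no floating point (and no reals) involved. Here is a minimal sketch in Python; the function name and the sample values are mine, purely for illustration:

from fractions import Fraction

def power_below(a: int, b: int, N: int) -> bool:
    # For x = 1/(1 + a/b), check that n = b*N gives x^n < 1/N,
    # exactly as in the argument above.
    x = 1 / (1 + Fraction(a, b))
    n = b * N
    return x**n < Fraction(1, N)

# Spot checks with x = 1/2, 9/10 and 7/10 respectively.
for (a, b, N) in [(1, 1, 10), (1, 9, 100), (3, 7, 50)]:
    assert power_below(a, b, N)

Fraction keeps the whole computation in \mathbb{Q}, so the comparison at the end is a genuine statement about rationals.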

What I like about this argument is that it uses nothing other than the ordered field axioms on \mathbb{Q}, together with two very easy applications of induction. It’s a lovely proof to present to an undergrad class.

Returning to my failed first idea at a proof, I had reduced the problem to that of showing that for every rational x > 1, there is an n such that x^n > 2 (challenge: can you leverage this fact to conclude the convergence as desired? It’s the base case of an induction proving the multiplicative Archimedean property). User @Rafi3AK on Twitter supplied an explicit estimate of the required n, using the binomial theorem, namely n = \lceil 1/(x-1)\rceil (i.e. round up to the ceiling).

Mathematics as art, and as craft

I don’t think I’m too shy about the fact that I have a somewhat non-standard approach to mathematics, but I had a recent realisation about my own mindset that I found interesting.

I grew up in a family with a strong emphasis on arts and crafts. Spinning, knitting, pottery, leadlighting, paper-making, printing, furniture restoration, garment construction, baking, drawing and so on. At the end of the day, there was something you could hold, touch, wear and so on. At one stage in high school I considered studying Design.

I think this idea that, at the end of the day, one can actually make something is one that pervades my mathematics. This ranges from my habit of physically printing stapled booklets of each paper I write, to wanting concrete formulas or constructions for certain abstract objects. At one point I had found explicit transition functions for the nontrivial String bundle on S^5, and I collaborated with someone more expert at coding than me to generate an animation of part of this. Another time I really wanted to get my hands on (what an apt metaphor!) what amounts to a map of higher (non-concrete) differentiable stacks, so worked out the formula for my own satisfaction:

An explicit formula for a functor from the 5-sphere to the stack of SU(2)-bundles with connection

I love proving a good theorem, but if I can write it out in a really visually pleasing way, then it is much more satisfying. Such as the circle of ideas around the diagonal argument/Lawvere fixed point theorem.

The Yanofsky variation of Lawvere’s fixed point theorem, in a magmoidal category with diagonals. That’s the proof.

I very much like designing nice-looking diagrams, and at one point was trying to get a string diagram calculus working for the hom-bicategory of \mathbf{2Cat} (with objects the weak 2-functors), for the purpose of building amazing-looking explicit calculations to verify a tricategorical universal property.

String diagram data for a transformation between weak 2-functors

Sadly, I never finished this, and the reason for it—a second independent proof of a higher-categorical fact in the literature with many omitted details—is now moot, with another proof by other people.

I have a t-shirt with a picture of the data generated in my computational project on the Parker loop (joint with Ben Nagy), and I’m itching to make more clothes with my maths on them. I love the fun I’ve had with David Butler thinking about measures of convexity of deltahedra, built with Polydrons (it seems to be an open question how non-convex one can go!).

It just speaks to me when I can actually make something, or at least feel like I’ve made something when doing maths. Something I can point at and say “I made that”. I think there’s an opportunity in the market for really high-quality art prints of visually beautiful pieces of category theory, for instance, or even just mathematics more broadly. I’ve experimented over the years with (admittedly naive, amateur, filter-heavy) photographs of mathematics, for the sake of striving for an aesthetic presentation.

Proof of an elementary result around t-th roots of integers, with the lions on Adelaide’s CML Building in the background

I want the mathematics I create to “feel real”. Sometimes that feeling comes when I can hold the whole conceptual picture in my head at once, but it’s ephemeral. Actually making the end product, making it tangible—no matter how painful it can feel in the process—is a real point of closure.

Even the process of making little summary notes of subjects I studied at school and uni has produced objects that I have kept, and have a fondness for. They are the distillation of that learning, the physical artifact that represents my knowledge.

Notes from Year 12 Physics and Chemistry

Even the choice to work in physical notebooks, slowly building that collection rather than taking notes digitally, gives me something I can see grow, and which I can appreciate as a reflection of my changing ideas and development in research. Having nice notebooks gives an aesthetic that motivates me to fill them up.

Stack of (mostly) Moleskine research notebooks

Given that mathematics is generally taught as a playground of the mind (though there is of course a push in places for more manipulables in maths education, physical or digital, and more visualisation), I do wonder to what extent students feel like they are missing an aspect like this. We don’t need a 3d printer in a classroom for the students to make something tangible, lasting and awesome. Somehow I’ve managed to avoid edutech, despite winning a graphics calculator at school back in the 90s—and never learning how to use it—and I’ve never been taught with manipulables past Polydrons at age 5 and MAB blocks at age 6 (both of which I still think are awesome). But I love actually making mathematical things.

When was the Joyal model structure on sSet born?

Back in 2012, I was under the impression that the Joyal model structure was described in a letter to Grothendieck in the early/mid 1980s. There is a letter from Joyal to Grothendieck describing a model structure, but it was the model structure on simplicial sheaves. Based on my MO answer, the nLab was edited to include this claim. But Dmitri Pavlov started asking questions on the nForum and under my answer, and now I have to retract my statement! Now he has asked an MO question of his own, looking for a definitive answer. Here is my attempt to track things down, written before Pavlov’s MO question landed.

In the ’00s, Joyal cites his own work “in preparation” as the source of the model structure whose fibrant objects are the quasicategories, at latest in 2006, with the citation of Theory of quasi-categories I in the (arXived) 2006 Quasi-categories vs Segal spaces. In the paper Quasi-categories and Kan complexes (published in 2002), Joyal cites Theory of quasi-categories (not Theory of quasi-categories I), but doesn’t say anything about the model structure yet. However, in the CRM notes from 2008 (partly based on a manuscript—the IMA lectures—from 2004, plus the then draft of the book Theory of quasi-categories; there is also a 2007 version of the notes with the model structure) Joyal says “The results presented here are the fruits of a long term research project which began around thirty years ago.”

Verity, in his (arXived 2006) paper Weak Complicial Sets, A Simplicial Weak ω-Category Theory Part I: Basic Homotopy Theory, writes

we round out our presentation by localising our model structure and transporting it to the category of simplicial sets itself, in order to provide an independent construction of a model category structure on that latter category whose fibrant objects are Joyal’s quasi-categories [10].

where [10] is Joyal’s 2002 paper, so that the model structure was known to experts at least by 2006, even if not announced in 2002.

So I’m tempted to guess that the whole story of a 1980s origin of the Joyal model structure for quasi-categories (i.e. in a letter to Grothendieck, as stated on Wikipedia at the time of writing) might be an urban myth.

I couldn’t find a mention of a model structure for quasicategories in the 2004 slides from IMA for the talk of Joyal/May/Porter, except for the closing sentence:

Baby comparison should give that the hammock localizations of all models for weak categories have equivalent hammock localizations. Model category theory shows how.

Tim Porter’s 2004 IMA notes likewise don’t seem to mention the model structure. So perhaps the date for the model structure can be pinned down to between (June) 2004 and (July) 2006, at least going by Joyal’s public statements. One point in favour of this is that Tim’s notes include the open question

In general, what is the precise relationship between quasi categories (a weakening of categories) and Segal categories (also a weakening of categories)? (This question is vague, of course, and would lead to many interpretations)

which is what Joyal and Tierney’s 2006 paper pins down, in terms of a Quillen equivalence of model categories. If the question in 2004 had been merely one of trying to match up existing model structures (the Segal category one existed in 1998), I doubt Tim would have called it a vague question!

PS: I don’t know how long Lurie took to write the 589-page version 1 of Higher Topos Theory, put on the arXiv at the start of August 2006, but he refers to Joyal’s model structure there, citing Theory of quasi-categories I.

It’s a messy job, but someone had to do it: fixing all the links

Back in the olden days, there used to be a site called the Front for the Mathematics arXiv. It lasted from the 90s until a few years ago, and had a nicer website than the arXiv itself started out with. It also served search results in a much nicer way than the arXiv, even as the latter improved over time. As a result, some people had a habit of using it to look up papers and, as it happened, to supply links to said papers on MathOverflow.

When the Front finally packed up shop, there were about 900 links to it. Stack Exchange, the company, has ways of mass-editing urls without causing chaos (i.e. without bumping all edited questions), but this has to be done algorithmically, of course… and the arXiv Front identifiers were not always identical to the arXiv ones, and hence the paper part of the url was not the same. Woah woah, I hear you say: what do you mean? That the Front was rolling its own article IDs? Yep.

The reason for this is that the arXiv didn’t launch into the world fully formed: it started out with physics, and there were sorta-parallel, not-quite-independent arXiv-like repos for various subjects in maths. If you go back in the ‘what’s new’ postings to 1994–1996, you can see things like the “q-alg archive” appear (now math.QA), in this case due to people not knowing where to put things like quantum knot invariants, and their ending up in hep-th. There were names for topics like dg-ga and so on. By mid-2007, all the arXiv identifiers across all subjects were unified, but before that you had area-specific prefixes (eg math/0102003, cond-mat/0102003 or hep-th/0102003), and earlier still an even more granular system just for mathematics, similar to how physics was split up. Pre-1998 you had alg-geom, dg-ga, funct-an, and q-alg, and also math-ph. There was a parallel system at one point, allowing for eg math.DG/0307245 and math/0307245 to point to the same paper. There were also more actual independent preprint repos, like the Hopf archive, the K-Theory archive, and the Banach archive. These slowly got absorbed into the arXiv itself. The upshot is, the arXiv Front had a slightly more systematic referencing system, as far as I can tell, while still recognising the actual arXiv identifiers. It would assign an ID that was just a number to a paper, since it was intended to only cover mathematics, at least to start, and so the hassle of having parallel identifiers in different topics wouldn’t raise its head.

However, the fun part is that when the issue was raised last year on meta.MO on 25th August, after nearly 18 months of broken links, the different types of IDs were known and pointed out, but there was no extant documentation on how the Front created its own IDs! The problem was compounded by situations where people on MO would write “…and see also this paper.” with no additional information, the only context being that it was presumably relevant to the question. Sometimes an answer from 2009 (before current social norms were firmed up) would be “This is answered in this paper“, and that’s it. The only thing we knew for sure was the year and month of the paper, and maybe the subject it was in (but not eg the arXiv subject area). If the Stack Exchange gurus went ahead and did a blanket search-and-replace for the arXiv Front domain, replacing it with arXiv.org, the situation would be even worse, since we wouldn’t have the original link to work with, and the new link might point to something it shouldn’t.

Martin Sleziak, an indefatigable MO/meta.MO editor, wrote an epic answer full of targeted search queries looking for papers in the various date ranges and with what should be all the different ID formats, reporting the numbers of each, and classifying them into categories depending on how automated the editing might need to be. He also found some of the needed translation, and eventually I found an old help page on the Wayback Machine that spelled out the actual encoding, in glorious late-90s web design:

Until March 2000, the Front renumbered articles in the old mathematical archives alg-geom, funct-an, dg-ga, and q-alg as math archive articles. To avoid duplicate numbers, the system added 50 to each funct-an number, 100 for dg-ga, and 140 for q-alg. Since this system was never adopted at the arXiv, it has for now been scrapped. If you use cite or link to any math articles math.XX/yymmnnn, where the year yy is 97 or prior and the number nnn is less than 200, you should convert back to the original numbers as stamped on the articles themselves.
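In other words, undoing the renumbering is just a matter of subtracting the right offset from the number part. Here is a sketch of the reverse translation in Python; this is my reconstruction from the quoted rule (in particular, the bucketing of the number ranges, and alg-geom getting offset 0, are assumptions on my part), not the Front’s actual code:

# Reverse the Front's renumbering of old maths archive IDs, per the quoted
# help page: offsets of 50 (funct-an), 100 (dg-ga) and 140 (q-alg) were
# added to the original numbers, with (I assume) alg-geom left at offset 0.
OFFSETS = [("q-alg", 140), ("dg-ga", 100), ("funct-an", 50), ("alg-geom", 0)]

def front_to_arxiv(front_id: str) -> str:
    """Translate e.g. 'math/9703141' back to 'q-alg/9703001'."""
    archive, number = front_id.split("/")
    yy, nnn = int(number[:2]), int(number[4:])
    # The rule only applies to math/yymmnnn with yy 97 or prior
    # (the arXiv era starts in 1991) and nnn < 200.
    if archive != "math" or not (91 <= yy <= 97) or nnn >= 200:
        return front_id  # not a renumbered old-archive ID; leave it alone
    for old_archive, offset in OFFSETS:
        if nnn > offset:
            return f"{old_archive}/{number[:4]}{nnn - offset:03d}"
    return front_id

The real clean-up also had to cope with all the later ID formats, and with the human variation in how links were pasted, which is part of why so much of it was done by hand.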

That discovery came five months into the project of slowly editing questions and answers the old-fashioned way, by hand, replacing broken URLs we knew how to deal with, but which were still in the pre-2007 era of ID weirdness. The edits couldn’t be done too many at a time, and someone did complain I was editing too much, because it pushed new questions further down the front page, and off it entirely quicker than usual.

Further, since leaving a link directly to a pdf, say, and merely saying “this paper” means the person reading the question needs to open up a pdf to know what the paper is (not helpful on mobile!), I took it on myself to include actual bibliographic information when fixing the links from the Front to the arXiv proper. Knowing you are being referred to a 2002 arXiv paper of Perelman means you can recognise it instantly. Even better, earlier on in the project, when I was fresh and keen, I tried to include a journal reference and even a doi link (sometimes people would also link to unstable publisher urls, and there are still problems with these, especially those pointing to opaque springerlink URLs which no longer exist!). This is one way of future-proofing the system, and making it more information-rich for both human and machine readers. I have a suspicion that the arXiv is now implementing a doi system for its articles, for the day when arxiv.org may not be the address we visit when looking for papers.

Another problem is that people also supplied links in comments, and comments cannot be edited except by a mod. So our solution was to give a reply comment pointing out that the Front link was broken, and supplying the working arXiv.org link. When particularly motivated, I included the paper title and sometimes even more bibliographic info. Asaf Karagila whacked a few of these with his mod powers, editing them directly, but leaving the mods to fix all of these is not an option.

After slow work by user ‘Glorfindel‘ (a mod on the big meta.SE, who wrote a script to do a few edits every couple of days), Martin, and myself, on the 29th of March this year I edited the last outstanding link to a pre-April 2007 Front url—every broken custom arXiv Front url was now working in questions and answers, and every comment with such a link had a reply pointing out what it should be pointing to. For good measure, I went ahead over the next day or so and edited the rest of the few links to papers in 2007, so that any replacement code can deal with a clean date division where it needs to be active. Between 20th September 2021 and 30th March, it turns out I fixed a bit over 200 broken links, and responded to about 100 comments with new, working links. The graph showing all removals of “front.math.ucdavis.edu” back through the lifetime of MO on the SE2.0 platform is quite dramatic:

Plot of edits removing links to the arXiv Front, per month, from 2013 to present (courtesy of Martin Sleziak)

Now that all the manual edits are done, the Stack Exchange Community Mods (these are SE employees, not just elected users) are looking at the situation again, and at how the 2008–2019 links can be automatically edited by a script. Watch this space…

So what is the takeaway, if any? Don’t leave links to papers on MathOverflow without some minimum identifying information! The problem is similar for links to papers on people’s personal websites, which have evaporated after a decade, and, as noted above, for publisher urls instead of doi links. Without a title and at least one author, someone has to spend the time tracking this stuff down. If the MO user who posted the link has moved on, sometimes there is very little that can be done. By spending the time even just copying the title of the paper, an MO user is helping potentially many people downstream, and certainly saving the time of someone like me, who enjoys such a detective task but would prefer not to need to do it!

Homotopy equivalence of topological categories

Way back in the dark ages when I was doing my PhD, but couldn’t settle on a topic, I was looking at trying to understand the homotopy theory of topological groupoids and categories. I had no idea what had been done, or how to do this, so I started working some things out in a pedestrian way (around 2008–09). One thing I now understand I was lacking was the concept of cohesion, i.e. the way Lie groupoids and topological groupoids differ from plain spaces and plain groupoids, representing homotopy types in their own ways. But I did manage to prove a version of Quillen’s Theorem A about when a functor geometrically realises to a homotopy equivalence, starting from topological categories (i.e. categories internal to \mathbf{Top}, or rather \mathbf{CGH}). I abandoned this paper very close to being finished, as I started working on what would eventually become my thesis, and also because I was going around in circles a bit, and wasn’t sure it was worth releasing. Maybe the result isn’t that stellar, but I think it hasn’t been done in this way before (and it is much easier to understand than comparable results in the literature). The paper (with the above title) is now on the arXiv as arXiv:2204.02778. Here are the theorems I proved back in the ’00s:

Theorem A and Theorem A’ in arxiv:2204.02778

I then used this to show that a weak equivalence of topological categories (ff+eso in the numerable pretopology) geometrically realises to a homotopy equivalence (assuming some mild condition on the codomain).

What is new now is that of course I know a lot more about bicategorical localisation, and I can now say that this result implies that the classifying space functor B\colon \mathbf{Cat}(\mathbf{CGH})\to \mathbf{CGH} extends along the Yoneda embedding to define a “classifying space” 2-functor from a suitable 2-category of topological stacks of categories to the 2-category of spaces, maps and homotopy classes of homotopies. This improves on contemporaneous work of Ebert, who defines a homotopy type for certain topological stacks of groupoids, but has to battle size issues, and so only defines it on a small subcategory of stacks. This was done in a better way independently by Noohi, who associated a weak homotopy type to a (large class of) topological stacks. My extension of B is also a somewhat orthogonal generalisation of Noohi’s work, since Noohi works with topological stacks under the open cover topology, and on all spaces, whereas my setup works with the numerable topology, and compactly-generated Hausdorff spaces. It does, however, allow for the full 2-category including non-invertible 2-arrows, which is not covered by the usual familiar (\infty,1)-setup.

Geometric string structures on homogeneous spaces

This post is just to provide a link to the slides for a talk of the above title, given as a Zoom talk in the UNAM categories seminar.

Abstract: The notion of string structure on a space X goes back to work in the 1980s, particularly of Killingback, starting as an analogue of a spin structure on the loop space LX. In the decades since, increasingly refined versions of string structures have been defined. Ultimately, one wants to have a full-fledged String 2-bundle with connection, a structure from higher geometry, which combines differential geometry and category-theoretic structures. A half-way step, due to Waldorf, is known as a “geometric string structure”. Giving examples of such structures, despite existence being known, has been an outstanding problem for some time. In this talk, I will describe joint work with Raymond Vozzo on our framework for working with the structure that obstructs the existence of a geometric string structure, which is a 2-gerbe with connection, as well as give a general construction of geometric string structures on reductive homogeneous spaces.

Tracking an errant citation

In my work, I have often used the definition of a “simplicial line bundle”, or what is essentially equivalent, a simplicial U(1)-bundle, over a simplicial manifold. If this simplicial manifold is the nerve of \mathbf{B}G, the one-object groupoid associated to a Lie group G, then such an object is equivalent to a central extension of G by U(1). The construction is used by Murray and Stevenson, for instance, in their paper Higgs fields, bundle gerbes and string structures. They cite Brylinski and McLaughlin’s paper Geometry of degree-4 characteristic classes and of line bundles on loop spaces, I as the source of this construction (and, implicitly, of the equivalence between the simplicial construction and the central extension).
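To recall the shape of the data, in the form I usually use it: a simplicial U(1)-bundle over the nerve of \mathbf{B}G amounts to a U(1)-bundle E\to G together with an isomorphism d_0^*E\otimes d_2^*E \simeq d_1^*E of bundles over G\times G (fibrewise, maps E_{g_2}\otimes E_{g_1}\to E_{g_1g_2}), such that the two resulting isomorphisms of bundles over G\times G\times G agree. This coherence condition is precisely what makes the induced multiplication on the total space of E associative, so that E becomes a group, centrally extending G by U(1).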

If we go to Brylinski and McLaughlin, they have in Theorem 5.2 cited Grothendieck as the source of the equivalence between the simplicial construction and the central extension. The problem is, they cite chapter VIII of tome 1 of SGA 7, Compléments sur les biextensions. Propriétés générales des biextensions des schémas en groupes. This chapter is 95 pages long. Skimming through that chapter, Grothendieck seems to cite the previous chapter as the source of the relevant theory. I might be wrong, but perhaps the citation went a bit astray? I would love to be proven wrong, and a precise numbered citation given.

Now we need to go and start reading chapter VII of SGA 7.1, Biextensions de faisceaux de groupes. It’s 90 pages long. And, as those who know the material will be aware, it’s a scanned, typewritten document, in French—and doesn’t use simplicial terminology anywhere. So there’s no text searching, even for relevant key words. It’s not a huge deal, but Grothendieck is dealing with algebraic geometry, and sheaves of groups, instead of Lie groups. Worse, it’s in the complete generality of arbitrary normal extensions of groups, not central extensions, so the material is a lot more subtle and full of irrelevant, complicating detail.

The closest I’ve come to finding what Brylinski and McLaughlin might be wanting to cite is Proposition 1.3.5 of chapter VII:

Proposition 1.3.5 of chapter VII of SGA 7, tome 1

The notion of equivalence that Brylinski and McLaughlin state in their Theorem 5.2 is not that difficult, except for the claim that the definition of simplicial U(1)-bundle implies that the putative group structure on the extension group is in fact associative. The relevant condition is that some section over G\times G\times G is equal to the canonical trivialisation of what would have been denoted \delta\delta(E) by Grothendieck. The closest I can see in the proof of the above statement is the following excerpt:

Proof of Proposition 1.3.5 concluded

The “condition d’associativité (1.1.4.1)” is the usual commutative square expressing the associativity of a group object in a category. So it’s not very clear: the statement of the Proposition is about objects not really like those Brylinski and McLaughlin define.

I’m fully aware that Brylinski and McLaughlin may in fact have been going for a citation in the sense of the classic Australian film The Castle:

“What section of the constitution […]?”

It may also be that in trying to skim-read 185 pages of mathematical French I just missed the relevant section of SGA 7.1, chapter VIII (or possibly VII). In which case, I’d be happy to be pointed to chapter and verse.

How to be concrete when you don’t have a choice

I forgot to write a post about this a year ago, but I gave a talk at AustMS2020 detailing some work that Martti Karvonen and I did, originating in a discussion on the category theory Zulip chat server. The main gist of the project is removing the need for Global Choice from a pair of theorems in pure category theory dealing with concrete and non-concrete categories. The most famous example (shown by Freyd) of a non-concrete locally small category is the homotopy category of \mathbf{Top}, which is the category whose objects are topological spaces, but whose morphisms are homotopy classes of continuous maps. It is a quotient of the concrete category of topological spaces, by a congruence on its hom-sets. The first theorem that we improved, due to Kučera, is that this is as bad as it gets: every locally small category is the quotient of a concrete category by a congruence like this. The proof constructs a concrete category, but it’s not very nice or workable, so it’s really just an existence result.

Unfortunately the original proof starts by well-ordering the universe of sets, which really makes me itch: category theory is generally rather constructive in nature, or at least can be reformulated to have some kind of constructive content. A purely category-theoretic statement that needs the axiom of Global Choice seems non-optimal to me. And, thankfully, this is the case. Martti came up with a way to remove Choice (even normal AC) from the construction, and then I figured out how to do the whole thing in Algebraic Set Theory with the assumption of Excluded Middle (i.e. the category with class structure is Boolean).

Then we turned our attention to a theorem of Freyd that gives a precise characterisation of which categories are concretizable, albeit we started from Vinárek’s cleaner and shorter proof. It shows that a necessary condition for concreteness, due to Isbell, is also sufficient. Isbell gave examples of non-concretizable categories, but they were artificial, specifically constructed to violate his condition. Like Kučera’s theorem, even Vinárek’s improved proof used Global Choice to pick a well-ordering of the universe. We were able to re-do the theorem in ZF, and more generally in (Boolean) algebraic set theory, with the additional assumption that the universal object has a well-ordered stratification by small objects (roughly, it has a small map to the object of ordinals). This last assumption is validated in any membership-based set theory where some cumulative hierarchy exhausts the class of all sets, say the von Neumann hierarchy for ZF, or some weaker set theories where a hierarchy, or a rank function, is posited axiomatically (see eg this paper).

The paper is in preparation, and would have been done by now, had I not been distracted by other projects. Martti is being very patient with me! For now, let me point to the slides for my talk, which you can see here.