A new “Brunerie number”: might it be easier to calculate?

Agda code:

-- Second "Brunerie number"
open HⁿℂP² using (g)
brunerie2 : ℤ
brunerie2 = g 1
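-- (Gloss, as I understand it: g is the composite
--  ℤ ≃ H²(ℂP²) → H⁴(ℂP²) ≃ ℤ  induced by x ↦ x ⌣ x,
--  so brunerie2 should evaluate to ±1; see the discussion below.)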

There is an open problem in computational HoTT/synthetic homotopy theory, namely to get a computer to evaluate the proof in HoTT that there is a natural number n, such that \pi_4(S^3)\simeq \mathbb{Z}/n\mathbb{Z}, and spit out the answer n=2. This number n is known as Brunerie’s number, as the proof that there exists such an n was given by Brunerie, who then gave an on-paper proof in HoTT that it is indeed 2.

The only way to compute Brunerie’s number is to have an implementation of some version of HoTT in which the univalence axiom will actually compute, and so far this means using a proof assistant that uses cubical type theory. The inefficiencies in existing software and/or the lack of computing power mean that, although people have set the formal computation of Brunerie’s number running, it has never finished.

This means that it would be nice to have a slightly easier problem of the same nature on which to test out ideas in computational synthetic homotopy theory. Guillaume Brunerie, Anders Mörtberg and Axel Ljungström have supplied just such a problem in Synthetic Cohomology Theory in Cubical Agda, and have even implemented it in cubical Agda: namely, to determine the degree of the cup-square map

\mathbb{Z} \simeq H^2(\mathbb{CP}^2)\stackrel{x\mapsto x\smile x}{\longrightarrow} H^4(\mathbb{CP}^2)\simeq \mathbb{Z}

Namely, if g denotes this homomorphism \mathbb{Z}\to\mathbb{Z}, and e is a generator of H^2(\mathbb{CP}^2), then g(e)=\pm 1. This is proved in on-paper HoTT in Chapter 6 of Brunerie’s thesis, using Gysin sequences, and is rather complicated. Brunerie, Mörtberg and Ljungström say that if one could simply compute g(e) (what they call “a second Brunerie number”) by running the cubical Agda code, then the actual proof that it is \pm 1 would not be needed.

However, the computation is still not efficient enough, or the software design clever enough, for it to actually finish. But the authors write

This computation should be more feasible than the original Brunerie number. As our definition of ⌣ produces very simple terms, most of the work has to occur in the two isomorphisms, and we are optimistic that future optimizations will allow us to perform this computation.

Hence we might actually see this number calculated. It’s mildly amusing to me that we cannot compute the number 2 (the original Brunerie number) with a TB of RAM and 90 hours of computation, so HoTT-ists propose calculating 1 instead, and we still cannot do it (but might in the future!).

Substructural fixed-point theorems and the diagonal argument: theme and variations

This is just to give a pre-release sneak peek of a pre-preprint I want to put on the arXiv very shortly. It’s a slightly odd beast, since I think it might be of some interest to logicians/philosophers, computer science types and possibly also category theorists, but I can’t tell. It’s a further analysis of Lawvere’s diagonal argument/fixed point theorem, reducing the assumptions beyond what is probably sensible, and giving a few different versions of the fixed point theorem more general than Lawvere’s. Here’s the abstract:

This note generalises Lawvere’s diagonal argument and fixed-point theorem for cartesian categories in several ways. Firstly, by replacing the categorical product with a general, possibly incoherent, magmoidal product with sufficient diagonal arrows. This means that the diagonal argument and fixed-point theorem can be interpreted in some substructural type theories, and semantically in categories with a product functor satisfying no coherence axioms, for instance relevance categories. The second way is by showing that both of Lawvere’s theorems as stated for cartesian categories only concern the well-pointed quotient, and giving a version of the fixed-point theorem in the internal logic of an arbitrary regular category. Lastly, one can give a uniform version of the fixed-point theorem if the magmoidal category has the appropriate endomorphism object, and has a comonad (i.e. an (S4) necessity modality) allowing for potentially ‘discontinuous’ dependence on the initial data.
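For reference, here is one standard formulation of the result being generalised (my paraphrase, not a quotation from the preprint): Lawvere’s fixed-point theorem says that in a cartesian closed category, if \varphi\colon A \to Y^A is point-surjective, i.e. every point 1 \to Y^A is equal to \varphi\circ a for some point a\colon 1\to A, then every endomorphism t\colon Y\to Y has a fixed point, that is, a point y\colon 1\to Y with

t\circ y = y.

The diagonal argument is the contrapositive: if some t\colon Y\to Y has no fixed point (e.g. negation on truth values), then no map A\to Y^A can be point-surjective, which is Cantor’s theorem in categorical clothing.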

Perhaps it is just a bit of a curio, and really waiting for a killer app, but it’s been sitting on my plate for a while, and I think some level of feedback is warranted, since I really can’t tell how it will land with people inside and outside my sphere of expertise. Grab your copy here.

Thuses

(This is a guest post by David Benjamin Lim)

I would like to make an announcement concerning a new math blog, Thuses.com, started by Stanford math graduate students Slava Naprienko, Bogdan Zavyalov and myself. You may already be aware of MathOverflow and how it has become extremely valuable to the mathematical community. Unfortunately, on MathOverflow you cannot share with others the little things in math – often not enough for a paper – that you find cool.

We feel that the current format in which mathematical ideas are disseminated throughout the community is suboptimal: the arXiv does not support comments and hence is not conducive to discussing mathematical ideas. On the other hand, if you want to publish a “folklore” result, there is not really a place to do this apart from a personal blog or website. It would be nice if there were a one-stop place to share math, which is what motivated us to start Thuses.

The things you post on Thuses don’t even need to be new or folklore results. They can be expository in nature, e.g. a new take on a paper or a hairy calculation that’s “known” but that no mathematician has ever bothered to write down. Or even a list of challenge problems, such as Peter Scholze’s post on Kevin Buzzard’s blog here. Note, however, that posts on introductory grad-level math, e.g. “Introduction to Scheme Theory”, are not welcome on Thuses.

For an example of a post that is really awesome, take a look at this post of Sean Cotner and Bogdan. It is exactly the kind of thing that we want to see on Thuses, and answers one of Richard Taylor’s questions during the Stanford Number Theory Learning Seminar.

Come to Thuses and spread the word of math! For more ways to support us, please look at our support page here.

When not to say “universe”

(This post offers no new insights into IUT; it is just an exploration of some of the terminology that gets thrown around. But I had reason to get these ideas down, which I will share later.)

Inter-Universal Teichmüller Theory (IUT) is somewhat infamous, to say the least. I want to write about some purely cosmetic thoughts I had recently in a discussion around the use of the term “universe” in IUT. It was curious: while talking, I had in mind the technical term universe, i.e. Grothendieck universe, whereas the person I was talking to had got their details from a source much closer to the horse’s mouth, and their mental image was closer to what the term apparently really means, and it was not a Grothendieck universe.

However, there seem to be two distinct uses of the term “universe” by Mochizuki. One is the technical sense, given in IUT4, Section 3, and explicitly said to be related to size issues and to needing to “go up a universe”, as we say, since certain large constructions turn out not to stay inside a given Grothendieck universe. I shall always say “Grothendieck universe” for this sense. But Mochizuki then immediately goes on to say

In the following discussion, it should be understood that every set-theoretic formula that appears is “absolute” in the sense that its validity for a collection of sets contained in some universe V relative to the model of set theory determined by V is equivalent, for any universe W such that V \in W, to its validity for the same collection of sets relative to the model of set theory determined by W

In plain terms, this means that the definitions in that whole section regarding set-theoretical constructions are not affected by going up a universe. This point is repeated in Remark 3.1.4 of IUT4, which concludes

This is the sense in which we apply the term “inter-universal”. That is to say, “inter-universal geometry” allows one to relate the “geometries” that occur in distinct universes.

But what this means is that a specific geometric object to start with, which lives (I guess!) in the base universe, does not change in the slightest when one thinks of it as living in the next universe up. Talk of computing something in an “alien” universe, for instance as is apparently made possible by the use of “species”, as in

calculate in terms that make sense in one universe the operations performed in an alien universe!

is odd, since Grothendieck universes are cumulative (see the definition recalled below): given a pair of Grothendieck universes V,\ W (in standard ZFC set theory) with, say, W bigger than V, the larger one contains everything contained in the smaller. In this sense, “alienness” is not mutual: from the point of view of the larger Grothendieck universe W, everything in V just happens to be comparatively rather small, but still lives in W. One could still say that things in W are “alien” to things in V, but then the same would be true of proper classes and sets, and no one says proper classes are “alien” to sets. I also note that nowhere is the term “alien” defined, except in a dictionary sense:

Here, the intended sense of the descriptive “alien” is that of its original latin root, i.e., a sense of abstract, tautological “otherness”

no mathematical meaning is attached to it that I’ve seen.
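To spell out the cumulativity claim above, here is a standard definition (not necessarily verbatim the one given in IUT4): a Grothendieck universe is a set U that is transitive (x \in y \in U implies x \in U), contains the pair \{x,y\} and the power set \mathcal{P}(x) for any of its elements x, y, and is closed under unions of families indexed by its elements (some authors also require that U contain the natural numbers). Cumulativity is then immediate:

V \in W \ \Longrightarrow\ V \subseteq W

by the transitivity of W, so every set in V already lives in W as well.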

It is also mildly ironic that despite claims that

one must continue to extend the universe, i.e., to modify the model of set theory, relative to which one works. Here, we recall in passing that such “extensions of universe” are possible on account of an existence axiom concerning universes, which is apparently attributed to the “Grothendieck school”

the work of the Grothendieck school, which I take to mean the collected volumes EGA I-IV and SGA I-VII, has recently been proved not to need universes by Colin McLarty (all the more so because McLarty’s preprint originally landed before the IUT preprints did, though it only appeared in print last year). McLarty makes the point rather forcefully that Grothendieck was aware of size issues, and that his eponymous universes would ensure nothing ran afoul of Russell’s Paradox, but also that they were merely a technical fix, and not essential. The same point was also emphasised by Brian Conrad (among others: 1, 2, 3) when the issue of Grothendieck universes and the proof of Fermat’s Last Theorem came up.

So, possible technical size issues regarding definable categories (which is what “species” seem to be) in the back end of IUT4 aside, what else gets slapped with the label “universe”? Well, there’s this quote from the expository document “The Mathematics of Mutually Alien Copies: from Gaussian Integrals to Inter-universal Teichmüller Theory” (“Alien Copies”):

That is to say, the notion of a “universe”, as well as the use of multiple universes within the discussion of a single set-up in arithmetic geometry, already occurs in the mathematics of the 1960’s, i.e., in the mathematics of Galois categories and étale topoi associated to schemes. On the other hand, in this mathematics of the Grothendieck school, typically one only considers relationships between universes — i.e., between labelling apparatuses for sets — that are induced by morphisms of schemes, i.e., in essence by ring homomorphisms. The most typical example of this sort of situation is the functor between Galois categories of étale coverings induced by a morphism of connected schemes.

So here we see again the invocation of the work of Grothendieck and collaborators (this time, no scare quotes), except that “universe” here is not used in the technical sense, but in a more colloquial sense: a (Grothendieck) topos (or, more specifically, a Galois category, a special case) is a category with rich enough structure that one can work entirely inside it as if it were the whole mathematical universe. This is nicely encapsulated by a cartoon that Ingo Blechschmidt uses to illustrate some of his work:

The “relationships between universes” mentioned above are geometric morphisms between Grothendieck toposes; these are extremely familiar to working algebraic geometers, and are not seen as somehow dealing with links between “alien universes”. Mochizuki does go on to warn the reader that there are in IUT

much more complicated relationships between the universes […] that are adopted in the Galois categories that occur in the domains and codomains of these links [defined in IUT]

so one should expect some more general kind of morphism between Galois categories. Notice here that he says outright that the “links” (which I assume mean the Theta-links and log-links) are relationships between Galois categories. Even allowing for this to perhaps be a slight oversimplification, we are being invited to consider familiar categories, albeit related in a non-standard way, and

it is precisely this sort of situation that is referred to by the term “inter-universal”.

So now we are being explicitly told that the “inter-universal” aspect is not because of needing to pass up to a larger Grothendieck universe due to size issues, and that having absolutely definable categories means structure is preserved under this passage; rather, it is about non-standard morphisms between toposes.

And here, one should remember that Grothendieck universes (with the definition explicitly given by Mochizuki) are things defined using the language of ZFC, elementhood and so on, and of course the assumption of their existence is a proper extension of the strength of ZFC. Further, the more Grothendieck universes one assumes exist (one, two, three, 57, countably many, a proper class…), the stronger one’s theory. But a Galois category is something definable in ZFC (and even in weaker systems), and such categories exist in bucketloads with no strong assumptions whatsoever. Moreover, they are not defined in terms of ZFC, or the language of set theory, but in terms of ordinary mathematical objects.

So it seems like there is some collision here between metaphor and precisely-defined mathematics. The technical definition rears its head briefly in “Alien Copies”, namely when, in the context of IUT4, “a given universe, or model of set theory” is mentioned. But we again return to

Recall from the discussion of §2.10 that the notion of a “universe”, as well as the use of multiple universes within the discussion of a single set-up in arithmetic geometry, already occurs in the mathematics of the 1960’s, i.e., in the mathematics of Galois categories and étale topoi associated to schemes [cf. [SGA1], [SGA4]].

It’s entirely possible that “universe” (with quotes) and universe (without quotes) really are meant to refer to different concepts. But using such a tiny distinction to name highly different concepts seems to me not exactly the clearest choice.

So what is my take? Personally, without passing any judgement whatsoever on the actual mathematical content, not to mention the big claim and raison d’être of IUT, I think using the word “universe” in the colloquial sense should have been avoided. It makes for great publicity and evokes a sense of amazement, but ultimately it refers to a mathematical setup that is far from novel, and it clashes terminologically with Grothendieck universes, which are a technical tool apparently invoked in a different way, though it is not clear exactly how they are used or whether they are really needed. Immediately after discussing species and different Grothendieck universes in the introduction to IUT4, Mochizuki says

At a more concrete level, this “inter-universal” contact between constructions in distant models of conventional scheme theory in the log-theta-lattice is realized…

but this latter “inter-universal contact” is nothing other than the generalised kind of morphism that the “links” between different Galois categories (or their appropriate generalisations) are supposed to represent.

I pretty much think that of the two just-so stories presented above about why the term “inter-universal” is used, it is the second one that is closer to the truth, the one about the “links” being generalised morphisms between more-or-less familiar types of categories. Thus I think from the point of view of trying to communicate this idea, so central as to give the name to the entire topic, something better needs to be said. I think “universe” is too grand a term, and already used in a precise mathematical sense in the community (and in IUT4, too!). In the discussion alluded to at the start, I came up with the metaphorical term of “simulation” to explain in a non-technical way what I think Mochizuki’s “universes” are (i.e. not the precisely-defined Grothendieck universes).

Calling a Galois category a simulation of a particular mathematical object (e.g. a number field, via its Galois group, hence the Galois category of continuous actions) seems to me to capture the mathematical intuition better. Having independent simulations of isomorphic objects conveys to me that I have a lab and I am looking into little worlds formed out of or around those objects. This is reasonably close to how one might think of a Grothendieck topos more generally, and in fact coincidentally similar to Blechschmidt’s cartoon.

One can go “into” a simulation (in the style of The Matrix, for instance), and see what it looks like from the inside. And maybe one could jump over to a different simulation and see what that process does to anything one tries to carry across from the first simulation. This gels better with the idea that one is comparing independent copies of something, rather than the universe-enlargement idea. Ultimately, one has as many of these little experimental “worlds” as are needed, and they are each “controlled” by a small amount of data. One should note the distinction between ‘big’ and ‘little’ toposes (or ‘gros’ and ‘petit’): the former act like “mathematical universes” (in a sense I don’t want to make precise, but Lawvere’s cohesive toposes are one take on it), the latter act like “generalised spaces”. And it is things like little toposes that I mean when I say “simulation”.

Well, so what? one might say. Well, indeed. Perhaps if there had been less conflation of independent ideas, and more clarity around the key concepts, we could have arrived at our current situation sooner. I do not find that these conflicting origin stories boost my confidence in the conceptual underpinnings of the key mechanisms of the theory; quite the reverse. One can explain what I think is supposed to be the key idea (passing between categories arising from arithmetic objects) without so much metaphor. Given that the main audience for IUT should have been arithmetic geometers, writing in terms they are familiar with seems to me to be a given. Mixing metaphors of alien universes with actual (apparent) use of Grothendieck universes is unneeded. If one wants to tell a story for non-experts, then the metaphors should not outright abuse established terminology, nor outstrip the actual mathematically-precise explanations in volume.

Anyway, I’d be interested to know the impressions people (especially non-experts, and even more, non-mathematicians) have of my very much post-hoc shift in metaphor for this aspect. I think experts got what was meant, and ignored the metaphor. But a lot of pixels have been spilled in the popular media about this whole affair, and I think a better metaphor is needed, if one wishes to discuss it.

Correction to the definition of the String crossed module, part 4: the coherence equation

In the last post I showed that the paper BCSS contains a contradictory definition, namely of a particular 1-form \beta_p, and that flipping the sign out the front fixes it up. Now I need to make sure that the main theorem is still valid, since it implicitly depends on that sign through the conclusion of Proposition 3.1. My claim in the first post in this series was that the issue is hiding in equation (5) of BCSS, whereas Urs Schreiber cautioned me that the equation should be checked against the L_\infty-algebra literature, rather than just aiming for internal consistency; in particular against the paper “Strongly homotopy Lie algebras” by Lada–Markl (arXiv), which is the cited source for the definition of and conventions for L_\infty-algebras. The definition that needs to be used is that of ‘weak map’, except the description is slightly indirect, and there are several layers of interacting sign conventions that I felt I was going to come unstuck on. In this post I will describe my workaround for arriving at what I hope is a definitive answer.

Continue reading “Correction to the definition of the String crossed module, part 4: the coherence equation”

Correction to the definition of the String crossed module, part 3: arriving at the contradiction, and the fix

Continuing on from the previous post, I need to finish off the calculation showing that the definition of the 1-form

\beta_p = -2\int_0^{2\pi} \langle \Theta,p(\theta)^{-1}\partial_\theta p(\theta)\rangle\,d\theta \in \Omega^1(P\Omega G)

in BCSS leads to a contradiction, for G a suitable compact connected Lie group, taken to be SU(2) for concreteness. Recall that I had calculated the difference between a pair of connections on the U(1)-bundle \widehat{\Omega SU(2)}\to \Omega SU(2), one arising from the construction of Murray and Stevenson, and the other being the pullback of this along the lift of Ad_p\colon \Omega SU(2) \to \Omega SU(2) for p a path in SU(2) starting at the identity matrix. This lift is defined in BCSS using \beta_p. In actual fact, we did the calculation on the trivial bundle over P\Omega SU(2) that is the pullback of the one I care about, and arrived at the formula

ev_{2\pi}^*(\widehat{Ad}_p^*\mu - \mu - \beta_p) = 2\int_{[0,2\pi]}ev^*(d\beta_p)

where ev_{2\pi}\colon P\Omega SU(2)\to \Omega SU(2) evaluates a path at the endpoint. I can show that for a careful choice of p, path f\colon [0,2\pi]\to \Omega SU(2) and tangent vector at f that is vertical for ev_{2\pi}, the right hand side fails to vanish. The left hand side is identically zero for vertical tangent vectors, and this will be the contradiction.
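For completeness, the vanishing of the left-hand side is just the standard fact that pulling back a form kills anything the map kills: as I read the formula, the left-hand side is the pullback along ev_{2\pi} of a 1-form \alpha, so for any tangent vector X with (ev_{2\pi})_*X = 0 we get

(ev_{2\pi}^*\alpha)(X) = \alpha\big((ev_{2\pi})_*X\big) = \alpha(0) = 0.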

Continue reading “Correction to the definition of the String crossed module, part 3: arriving at the contradiction, and the fix”

Correction to the definition of the String crossed module, part 2: triangulation

After the previous post was shared on Twitter, and I tagged Urs Schreiber, one of the authors of the paper in question (‘BCSS’), he rightly pointed out that I had merely found a discrepancy in the coherence equations between BCSS and HDA VI. The true source of equation (5) of BCSS (which I claimed last time was in error) should be, he says, the theory of weak maps of L_\infty-algebras, going back to Lada and Markl’s Strongly homotopy Lie algebras (or free arXiv version). I totally agree with this point, but I felt daunted by having to unwind the various definitions that unravel to equation (5). So I wanted to explore the issue a bit more. I know that the sign in front of the 1-form

\beta_p = -2\int_0^{2\pi} \langle \Theta,p(\theta)^{-1}\partial_\theta p(\theta)\rangle\,d\theta

in the proof of Proposition 3.1 cannot be correct, since it leads to a contradiction in a geometric calculation independent of all the L_\infty-algebra material. I should note that this geometric calculation is nothing exotic; a correspondent let me know I muddied the water in the previous post by discussing it in the context of a larger project with more novel constructions. As a result, I will give the calculation later below the fold.

What I did not realise when I wrote the first post is that the sign error in \beta_p is linked not only to an overall sign in equation (5) of BCSS, but also to a plus sign implicit in the definition of the comparison functor from 2-term L_\infty-algebras to (semistrict) Lie 2-algebras, which are possibly-weak Lie algebra objects in the 2-category of groupoids internal to \mathbf{Vect}. This functor is constructed in HDA VI, in Theorem 4.3.6 (=Theorem 36 in the arXiv version), and here is the relevant part:

Part of the proof of Theorem 4.3.6 of HDA VI, constructing a functor from 2-term L_\infty-algebras to Lie 2-algebras

This was invisible to me until I checked a different article generalising HDA VI, namely Roytenberg’s On weak Lie 2-algebras (or free arXiv version), where a generalisation of the comparison functor to weaker objects is given. There, a minus sign is included on what turns out to be the troublesome piece of data arising from the crossed module of Lie algebras, the piece connected to the sign on \beta_p.

Discussion in Roytenberg about the relation between weak 2-term L_\infty-algebra maps and morphisms between weak Lie 2-algebras

So now one can in fact see the sign on \beta_p as being a combination of two signs: one from the comparison functor, and one from the coherence equation (5); it is just that in BCSS the first of these is a + and the second is a -.

Continue reading “Correction to the definition of the String crossed module, part 2: triangulation”

Diophantine fruit

I guess people may be familiar with the sorts of memes with equations involving fruit that are generally relatively trivial, while claiming something like “95% of people can’t solve this!” Sometimes this leads to amusing parodies. However, someone went the whole other way and created a real, solvable instance of the problem, using only natural numbers, whose smallest solution is … very large.

I like the way this type of problem can actually open up a dialogue about how mathematics is really open-ended, creative and unfinished. Framing a fruit equation problem as a filter that makes people either feel stupid for not understanding it (it is basically algebra, after all), or smug and superior for getting it, is extremely unhelpful. But explaining that what can seem like an easy problem actually requires a lot of work, and would stump professional mathematicians (cf. Fermat’s Last Theorem), is a good conversation starter. It can also touch on things like how geometry is brought to bear on algebraic problems (and vice versa!), how the solution can use methods not present in the problem, and so on.

So, in honour of this, I thought I’d take what seems to be, at the time of writing, the simplest currently unsolved Diophantine equation (that is: a multivariable polynomial, and looking only for whole-number solutions), and turn it into a fruit equation. We can think of it as trying to count fruit:

Here ‘simplest’ is according to the notion of “size” defined in this MathOverflow question; basically, it is a measure of how large all the powers and coefficients of a multivariable polynomial are. There are only finitely many polynomials of a given size. The polynomial from the picture is -x^3 + y^2 + z^2 - xyz + 5, and has size 29. Every polynomial of size 28 or smaller has either been solved or shown to have no solutions. The idea is to see (experimentally) where the rough threshold is between equations that can be solved in a more-or-less elementary way, and those where really serious techniques or obstructions kick in, of the sort outlined here.
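To make the notion of “size” concrete, here is a small Python sketch. The formula (summing |coefficient| times 2 to the degree over the monomials) is my reading of the MathOverflow definition, but it does reproduce the size-29 figure quoted above; the brute-force search is just for illustration.

from itertools import product

# -x^3 + y^2 + z^2 - xyz + 5, stored as (coefficient, (deg_x, deg_y, deg_z))
MONOMIALS = [(-1, (3, 0, 0)), (1, (0, 2, 0)), (1, (0, 0, 2)), (-1, (1, 1, 1)), (5, (0, 0, 0))]

def size(monomials):
    """Sum over monomials of |coefficient| * 2**(total degree)."""
    return sum(abs(c) * 2 ** sum(degs) for c, degs in monomials)

def f(x, y, z):
    return -x**3 + y**2 + z**2 - x * y * z + 5

def search(bound):
    """Brute-force hunt for integer solutions in the box |x|, |y|, |z| <= bound."""
    return [(x, y, z) for x, y, z in product(range(-bound, bound + 1), repeat=3)
            if f(x, y, z) == 0]

print(size(MONOMIALS))  # 8 + 4 + 4 + 8 + 5 = 29
print(search(30))       # expect []: any solution this small would already be known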

I’m happy for people to share the above image as widely as they can. If nothing else, maybe someone will actually solve the equation above, and contribute a piece of new knowledge!

EDIT: check out the neat new preprint Fruit Diophantine Equation (arXiv:2108.02640) on this problem

Correction to the definition of the String crossed module

(See also part 2)

I have been very slowly working, with my colleague Raymond Vozzo, on a project that involves a very explicit calculation for bundle gerbes, and in particular with the gerbe underlying the crossed module of Lie groups introduced by Baez, Crans, Schreiber and Stevenson (BCSS), which presents the String 2-group \mathrm{String}_G of a suitable compact simply-connected Lie group G. However, there was a particular step where we got thoroughly stuck, involving a certain calculation that compares two explicitly described connections on (a pullback of) the universal central extension U(1) \to \widehat{\Omega G} \to \Omega G of Fréchet Lie groups.

Aside from needing to overcome a sign convention around the Stokes theorem for integration along a fibre of a family of manifolds with boundary, there was still an issue where I got a formula that implied a certain non-zero 2-form was equal to its own negative. The origin of this negative sign is, ultimately, the minus sign in the definition of a certain family of 1-forms \beta_p on \Omega G appearing in the proof of Proposition 3.1 in BCSS. In my own notation it is

\beta_p = -2\int_0^{2\pi} \langle \Theta,p(\theta)^{-1}\partial_\theta p(\theta)\rangle\,d\theta

where p\colon [0,2\pi]\to G is a smooth path starting at the identity element, \Theta is the left-invariant Maurer–Cartan form on \Omega G and \langle \cdot , \cdot\rangle is a suitably normalised version of the (pointwise-evaluated) Killing form on \mathfrak{g}. So I really couldn’t figure out what was going wrong, since I was fairly confident in our computations. This computation fed into a much larger one, and if I ignored this problem and pressed on, trusting my instincts, things seemed to go smoothly and reproduce something like what we’d expect. I also had another (slow-burn) project involving this same comparison of connections, where the sign issue was also arising. The theory there pointed in one direction, the computation in another.

Continue reading “Correction to the definition of the String crossed module”

The strange adventure of the Universal Coefficient Theorem in the night

If you trawl the internet for more exotic Universal Coefficient Theorems, then you’ll come across comments to the effect that there isn’t a UCT for cohomology with local coefficients in general (for instance, here, for group cohomology, which is ordinary cohomology of a K(G,1), or the Groupprops wiki page, which only gives the trivial-coefficients version). I had reason to want a UCT for group cohomology with coefficients in a nontrivial module, to try to clear some hypotheses using formal properties of group homology (its better behaviour with respect to filtered colimits, in particular). The only reference I could find for a suitable UCT was an exercise in Spanier’s venerable book:

Page 283 of Spanier’s book Algebraic Topology

I found a reference in a 2018 paper that said “there is a version [of the local coefficient UCT] in [Spanier], p. 283, though its application is limited”. Up until this point, I had not actually seen the statement written out anywhere! In particular, it’s not clear what Spanier’s assumptions are (it might be that he is assuming R is a PID throughout this section, but I couldn’t see it on a quick search), and something must break for this to be of “limited” application.
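For orientation, the trivial-coefficient statement that such a local-coefficient theorem would generalise is the familiar one (my paraphrase): for R a PID, C a chain complex of free R-modules and M an R-module, there is a natural short exact sequence

0 \to \mathrm{Ext}^1_R(H_{n-1}(C), M) \to H^n(\mathrm{Hom}_R(C,M)) \to \mathrm{Hom}_R(H_n(C), M) \to 0

which splits, though not naturally. The question is what, if anything, survives of this when the coefficients carry a nontrivial action, say of a group G in the group cohomology setting.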

Continue reading “The strange adventure of the Universal Coefficient Theorem in the night”