That’s his three-dimensional interpretation of the first few iterations of this design of mine:

What’s fun about Colton’s version is that each new layer of squares is printed a bit taller than the previous layer. I had really only imagined these as two-dimensional objects, so for me it’s really fun to have three-dimensional models of them to hold and play with! Colton’s idea of adding some depth really adds another … er … dimension to the overall effect:

His work also gives a nice way to illustrate some of the features of these fractals. For example, visually proving that the “inside” and “outside” in my fractals converge to the same shape can be done by printing the same model at different scales. Here are three copies of the same fractal at different scales, each printed with the same number of iterations:

Not only do these nest tightly inside each other, but the *thickness* is also scaled down by the same ratio, so that the upper surfaces of each layer are completely flush.

Colton has been doing this work partly because designing fractals is a great way to learn 3D printing, and he’s now getting some impressively accurate prints. But, I also like some of his earlier rough drafts. For example, in his first attempt with this fractal based on triangles:

there were small gaps between the triangles, which Colton hadn’t intended. But, this gave the piece a sort of rough, edgy look that I like, and it casts shadows like castle battlements:

Colton is still doing nice new work, and we’ll eventually post some more pictures here. But I couldn’t wait to post a preview of some of his stuff!

(Designs and photos © 2018 Colton Baumler and Derek Wise)

]]>This workshop was a lot of fun! I learned a lot, had the chance to talk to people I’ve known for a long time, and to meet others I hadn’t managed to connect with before. I was especially excited to find out about some lines of work in progress that build on my work with Catherine Meusburger on Hopf algebra gauge theory.

In fact, our work on this seems to have been an impetus for the workshop, and it was really gratifying to see how other people are beginning to apply our theory, and also work out some interesting examples of it for particular Hopf algebras! I’m anticipating some interesting work coming out in the near future.

Here’s the conference photo; I’m farthest right, and my coauthor, Catherine, is the 11th head from the left, peeking out from the second row:

I gave an introductory talk on the subject of Hopf algebra gauge theory, and you can download the slides from my talk, or even watch the video. Catherine’s talk followed mine, and she showed how Kitaev models are related to Hopf algebra gauge theory in the same way that Turaev-Viro TQFTs are related to Reshetikhin-Turaev TQFTs. Video of her talk is also available. Of course, for more detail on Hopf algebra gauge theory, you can also check out our paper: **Hopf algebra gauge theory on a ribbon graph**.

I can also recommend watching other talks from the conference, available from the webpage linked to above. This was just the kind of conference I like best, since it brought people from multiple research communities together, in this case including mathematicians and physicists of various sorts as well as mathematical computer scientists. Kitaev models have been a hot topic the past few years, and one reason I think they’re fun is precisely that people from several areas—quantum computation, Hopf algebras, category theory, quantum gravity, quantum foundations, topological quantum field theory, condensed matter physics, and more—are working together. Of course, this probably also helps explain the rather long conference title.

]]>

generates a bunch of copies of the Koch snowflake at different scales:

Similarly, I’ve shown (2) how letting squares reproduce like this:

generates a bunch of copies of a fractal related to the Koch snowflake, but with 8-fold symmetry:

So what about letting *pentagons* reproduce? For pentagons, an analog of the replication rules above is this:

Each of the 10 cute little pentagon children here is a factor of φ² = φ + 1 ≈ 2.618 smaller than its parent, where φ = (1 + √5)/2 is the golden ratio.
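As a sanity check on the golden-ratio arithmetic, here’s a tiny Python sketch (mine, not from the post). It assumes the children are scaled down by φ², which is how I read the construction; the checks just confirm the standard identities φ² = φ + 1 = 2 + 2cos 72°:

```python
import math

# Golden ratio.
phi = (1 + math.sqrt(5)) / 2

# Assumed scaling factor: each child pentagon is phi**2 times smaller
# than its parent (my reading of the construction, not stated as code
# anywhere in the post).
factor = phi ** 2

# phi**2 = phi + 1 is the defining identity of the golden ratio,
# and 72 degrees is the angle of rotational symmetry of a pentagon.
assert abs(factor - (phi + 1)) < 1e-12
assert abs(factor - (2 + 2 * math.cos(math.radians(72)))) < 1e-12
print(round(factor, 6))
```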

However, something interesting happens here that didn’t happen with the triangle and square rules. While triangles and squares overlap with their *ancestors*, pentagons overlap with both their ancestors and their *cousins*. The trouble is that certain siblings already share faces (I know, the accidental metaphors here are getting troublesome too!), and so siblings’ children have to fight over territory:

In this three-generation pentagon family portrait, you can see that each second generation pentagon has two children that overlap with a cousin.

As we carry this process further, we get additional collisions between second cousins, third cousins, and so on. At five generations of pentagons, we start seeing some interestingly complex behavior develop from these collisions:

There’s a lot of fascinating structure here, and much of it is directly analogous to the 6-fold and 8-fold cases above, but there are also some differences, stemming from the “cousin rivalry” that goes on in pentagon society.

Let’s zoom in to see some collisions near where the two ‘wreaths’ meet on the right side of the picture:

I find the complicated behavior at the collisions quite pretty, but the ordering issues (i.e. which members of a given generation to draw first when they overlap) annoy me somewhat, since they break the otherwise perfect decagonal symmetry of the picture.

If I were doing this for purely artistic purposes, I’d try resolving the drawing order issues to restore as much symmetry as possible. Of course, I could also cheat and restore symmetry completely by not filling in the pentagons, so that you can’t tell which ones I drew first:

It’s cool seeing all the layers at once in this way, and it shows just how complex the overlaps can start getting after a few generations.

Anyway, because of these collisions, we don’t seem to get a fractal tiling of the plane—at least, not like we got in the previous cases, where the plane simply keeps getting further subdivided into regions that converge to tiles of the same shape at different scales.

Actually, though, we still *might* get a fractal tiling of the plane, if the total area of overlap of *n*th generation pentagons shrinks to zero as *n* goes to infinity! That would be cool. But, I don’t know yet.

In any case, the picture generated by pentagons is in many ways *very* similar to the pictures generated by triangles and squares. Most importantly, all of the similar-looking decagonal flower-shaped regions we see in this picture, including the outer perimeter, the inner light-blue region, and tons of smaller ones:

really *are* converging to the same shape, my proposal for the 10-fold rotationally symmetric analog of the Koch snowflake:

How do we *know* that all of these shapes are converging to the same fractal, up to rescaling? We can get a nice visual proof by starting with *two* pentagons, one rotated and scaled down from the other, and then setting our replication algorithm loose on both of them:

**Proof:**

We see that the area between the two fractal curves in the middle shrinks closer to zero with each generation.

**Puzzle for Golden Ratio Fans:** *What is the exact value of the scaling factor relating the two initial pentagons?*

Next up in this infinite series of articles: *hexagons*! …

I’m joking! But, it’s fairly clear we can keep ascending this ladder to get analogs of the Koch snowflake generated by *n*-gons, with (2*n*)-fold rotational symmetry. More nice features might be sacrificed as we go up; in the case generated by hexagons, we’d have collisions not only between cousins, but already between siblings.

In the Sierpinski triangle, each triangle yields three new, scaled-down triangles, attached at the midpoints of sides of the original, like this:

These triangles are usually thought of as “holes” cut out of a big triangle, but all I care about here is the pattern of the triangles. As I explained last time, the Koch snowflake can be built in a similar way, where each triangle produces six new ones, like this:

You might say this bends the usual rules for making fractals since some of the triangles overlap with their ancestors. But, it makes me happy because it lets me think of the Sierpinski triangle and the Koch snowflake as essentially the same kind of thing, just with different self-replication rules.

What other fractals can we build in this way? The Sierpinski carpet is very similar to the Sierpinski triangle, where we now start with squares and a rule for how a square produces 8 smaller ones:

This made me wonder if I could generalize my construction of the Koch snowflake using triangles to generate some other fractal using squares. In other words, is there some Koch snowflake-like fractal that is analogous to the ordinary Koch snowflake in the same way that the Sierpinski carpet is analogous to the Sierpinski triangle?

There is! Taken to the 5th generation, it looks like this:

The outline of this fractal is an analog of the Koch snowflake, but with 8-fold symmetry, rather than 6-fold. Compare the original Koch snowflake with this one:

Just as I explained last time for the Koch snowflake (left), the blue picture above actually provides a proof that the plane can be tiled with copies of tiles like the one on the right, with various sizes—though in this case, you can’t do it with just two sizes of tiles; it takes infinitely many different sizes! In fact, this tiling of the plane is also given in Aidan Burns’ paper I referenced in the previous post.

But, my construction is built entirely out of self-replicating squares. *What’s the rule for how squares replicate?*

Before I tell you, I’ll give two hints:

First, each square produces 8 new squares, just like in the Sierpinski carpet. (So, we could actually make a cool animation of this fractal morphing into the Sierpinski carpet!)

Second, you can more easily see some of the bigger squares in the picture if you make the colors of the layers more distinct. While I like the subtle effect of making each successive generation a slightly darker shade of blue, playing with the color scheme on this picture is fun. And I learned something interesting when my 7-year-old (who is more fond of bold color statements) designed this scheme:

The colors here are not all chosen independently; the only choice is the color of each *generation* of squares. And this lets us see bits of earlier-generation squares peeking through in places I hadn’t noticed with my more conservative color choices.

For example, surrounding the big light blue flower in the middle, there are 8 small light blue flowers, and 16 even smaller ones (which just look like octagons here, since I’ve only gone to the 5th generation); these are really all part of the same light-blue square that’s hiding behind everything else.

The same thing happens with the pink squares, and so on. If you stare at this picture, you can start seeing the outlines of the squares.

So what’s the rule? Here it is:

The 8 small squares are all the same size, and the side of the big square is two sides plus a diagonal of the small squares, so the squares are scaled down by a factor of 2 + √2.
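The arithmetic here is quick to check in Python (my sketch, not the code used to generate the pictures): in units of a small square’s side, the big square’s side is two sides plus one diagonal, and a square’s diagonal is √2 sides.

```python
import math

# Big square's side in units of the small square's side:
# two sides plus one diagonal, where a unit square's diagonal is sqrt(2).
factor = 2 + math.sqrt(2)

# Same number written using the 45-degree angle of the diagonal.
assert abs(factor - (2 + 2 * math.cos(math.radians(45)))) < 1e-12
print(round(factor, 6))
```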

Up next: Triangles and squares were fun. What fun can we have with *pentagons*?

(All images in this post copyright 2017, Derek Wise)

]]>The Koch snowflake is usually constructed starting with an equilateral triangle by replacing the middle third of each side with an equilateral triangular protrusion, doing this again to the resulting polygon, and so on. The first seven steps are shown in this animation:

and the Koch snowflake is the “limit” of this process as the number of steps goes to infinity.

In the alternative construction we use only self-replicating triangles. We again start with a triangle:

But now, rather than modifying this triangle, we let it “reproduce,” yielding six new triangles, three at the corners of the original, and three sticking out from the sides of the original. I’ll make the six offspring a bit darker than the original so that you can see them all:

Notice that three of the children hide the corners of their parent triangle, so it looks like we now have a hexagon in the middle, but really we’ve got one big triangle behind six smaller ones. Now we do the same thing again, letting each of the six smaller triangles reproduce in the same way:

The 36 small triangles are the “grandchildren” of the original triangle; if each of these has six children of its own, we get:

Repeating this again:

And again:

At this stage, it starts getting hard to see the new triangles, so I’ll stop and rely on your imagination of this process continuing indefinitely. We can now see some interesting features emerging. Here are some of the main ones:

- The outer perimeter of all of these triangles, taken to the infinite limit, is the Koch snowflake.
- The lightest blue region, in the middle, is also converging to a smaller Koch snowflake, rotated from the outer one by 30°.
- Between the outer perimeter and the Koch snowflake in the middle are six more, even smaller, Koch snowflakes.
- The regions in the middle of *these* snowflakes are also Koch snowflakes …

and so on: we have Koch snowflakes repeating at smaller and smaller scales.
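Though the pictures are the point here, the bookkeeping behind them is easy to sketch. This little Python snippet (my own, not the code used to draw the figures) just counts triangles per generation under the six-children rule:

```python
# Each triangle has six children, so generation n contains 6**n triangles.
def generation_size(n):
    return 6 ** n

def total_through(n):
    """Total triangles drawn through generation n: 1 + 6 + 36 + ..."""
    return sum(6 ** k for k in range(n + 1))

assert generation_size(2) == 36          # the 36 "grandchildren" above
assert total_through(2) == 1 + 6 + 36    # 43 triangles drawn so far
print(generation_size(2), total_through(5))
```

So by the fifth generation, the drawing already contains 9331 triangles, which is why the newest triangles become hard to see.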

All this self similarity shows in particular that Koch snowflakes can be assembled out of Koch snowflakes. This is nothing new; it’s related to Aidan Burns’ nice demonstration that Koch snowflakes can be used to tile the plane, but only if we use snowflakes of at least two different sizes:

Aidan Burns, *Fractal tilings*. *Mathematical Gazette* **78** No. 482 (1994) 193–196.

These tilings are already visible in the above construction using triangles, but we can make them even more evident by just playing with the color scheme.

First, if we draw the previous picture again but make all of the triangles the same color, we just get a region whose perimeter is the usual Koch snowflake:

On the other hand, if we make the original triangle white, but all of its descendants the same color of blue, we get this:

I hope you see how this forms part of a wallpaper pattern that could be extended in all directions, adding more blue snowflakes that bound larger white snowflake-shaped gaps. This gives the tiling of the plane by Koch snowflakes of two different sizes.

Taking this further, if we make the original triangle *and* all of its children white, but all of their further descendants the same color of blue, we get this:

The pattern seen here can be extended in a hopefully obvious way to tile the whole plane with Koch snowflakes of three different sizes.

Going further, if we make the first *three* generations white, but all blue after that, we get:

and so on.

The previous four pictures are generated with exactly the same code—we’re drawing exactly the same triangles, and only changing the color scheme. If we keep repeating this process, we get a tiling with arbitrarily small Koch snowflakes!

But we can also go the other way, continuing *up* in scale to get a tiling that includes arbitrarily *large* Koch snowflakes. To do this, we just need to view the above series of pictures in a different way!

The way I described it, the scale is the same in all of the previous four pictures. Making successive generations white, one generation at a time, makes it look as if we’re cutting out a snowflake from the middle of a big snowflake, leaving six similar snowflakes behind, and then iterating this:

On the other hand, we can alternatively view these pictures as a process of *zooming out*: each picture is built from six copies of the previous one, and we can imagine zooming out so that each picture becomes just one small piece of the next.

If you’re careful about how you do this, you get a tiling of the whole plane, with arbitrarily large Koch-snowflake shaped tiles! I say you have to be careful because it won’t cover the whole plane if, for example, each picture becomes the *top middle* part of the next picture. But, if you zoom out in a “spiral,” rotating at each step, you’ll get a tessellation of the plane.

Someone should make an animation that shows how this works. Maybe I’ll get a student to do it.

There are some other fun variations on this theme—including a similar construction that leads to the *other* “fractal tiling” described by Aidan Burns—which I should explain in another post.

In case anyone wants it, here is a 1-page visual explanation of the construction described in this post: snowflake.pdf

(All images in this post, except for the first, copyright 2017 Derek Wise.)

]]>Last time I suggested using a deck of 12 cards like this:

But instead, we used four solid colors, three cards of each. So, our “star” permutes the colors red, white, black, and silver:

You can get any permutation of these colors in our Star by *exactly one* symmetry taking outer vertices to outer vertices. The “exactly one” in this isomorphism is what makes the set of outer vertices a 4!-*torsor* rather than just a 4!-set.

Here’s what it looks like when you put three pieces together, from both sides:

]]>I went to some great talks at the JMM, but a hands-on, interactive workshop was a nice change of pace in the schedule. Having seen some of George’s artwork before, I couldn’t resist. In the workshop, he taught us to build his sculpture/puzzle which he calls the 12 Card Star. Here’s what mine looked like:

He supplied the necessary materials: 13 decks of cards, all pre-cut (presumably with a band saw), like this:

We each took 13 cards from these decks—the 13th, he said, was “just in case something terrible happens.”

He showed us how to put three cards together:

Then he gave us a clue for assembling the star: *the symmetry group is the same as that of a …*

*Wait!* Let’s not give that away just yet. Better, let’s have some fun figuring out the symmetry group.

Let’s start by counting how many symmetries there are. There are twelve cards in the star, all identically situated in relation to their neighbors, so that’s already 12 symmetries: given any card, I can move it to the position of my favorite card, which is the one I’ll mark here with a blue line along its fold:

But my favorite card also has one symmetry: I can rotate it 180°, flipping that blue line from end to end around its midpoint, and this gives a symmetry of the whole star. (Actually, this symmetry is slightly spoiled since I drew the five of hearts: that heart right in the middle of the card isn’t symmetric under a 180° rotation, but never mind that. This would be better if I had drawn a better card, say the *two* of hearts, or the five of *diamonds*.)

So there are 12 × 2 = 24 symmetries in total, and we’re looking for a group of order 24. Since 24 = 4!, the most obvious guess is the group of permutations of a 4-element set. Is it that? If so, then it would be nice to have a concrete isomorphism.

By a *concrete* isomorphism, I mean a specific 4-element set such that a permutation of that set corresponds uniquely to a symmetry of the 12-card star. Where do we get such a 4-element set? Well, since there are conveniently *four* card suits, let’s get a specific isomorphism between the symmetry group of Hart’s star and the group of permutations of the set of suits: {♠, ♥, ♦, ♣}.

At the workshop, each participant got all identical cards, as you can see in the picture of mine. But if I could choose, I’d use a deck with three 2’s of each suit:

From this deck, there is an essentially unique way to build a 12-Card Star so that the isomorphism between the symmetry group and the group of permutations of suits becomes obvious! The proof is “constructive,” in that to really convince yourself you might need to *construct* such a 12-card star. You can cut cards using the template on George’s website. He’s also got some instructions there. But here I’ll tell you some stuff about the case with the deck of twelve 2’s.

First notice that there are places where four cards come together, like this:

In fact, there are *six* such places—call these the six **4-fold rotation points**—and it’s no coincidence that six is also the number of *cyclic orderings* of card suits:

Now, out of the deck of twelve 2’s, I claim you can build a 12-card star so that each of these cyclic orderings appears at one 4-fold rotation point, and that you can do it in an essentially unique way.

This should be enough information to build such a 12-card star. If you do, then you can check the isomorphism. Think up a permutation of the set of suits, like this one:

and check that you can rotate your 12-card star in such a way that all of the suit symbols on all of the cards in the 12-card star are permuted in that way. The rest follows by counting.
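The counting behind this argument is easy to verify mechanically. Here’s a quick Python sketch (my own, with suit names as stand-ins): there are 4! = 24 permutations of the suits, and 3! = 6 cyclic orderings—one per 4-fold rotation point.

```python
from itertools import permutations

suits = ('spades', 'hearts', 'diamonds', 'clubs')

# 24 = 4! permutations of the four suits: the conjectured group order.
all_perms = list(permutations(suits))
assert len(all_perms) == 24

# Cyclic orderings: identify orderings that differ by a rotation
# by pinning one chosen suit in the first position.
cyclic_orderings = {(suits[0],) + rest for rest in permutations(suits[1:])}
assert len(cyclic_orderings) == 6   # one per 4-fold rotation point
print(len(all_perms), len(cyclic_orderings))
```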

Sometime I should get hold of the right cards to actually build one like this.

Of course, there are other ways to figure out the symmetry group. What George Hart actually told us during the workshop was not that the symmetry group was the permutation group on 4 elements, but rather that the symmetry group was the same as that of the *cube*. One way to see this is by figuring out what the “convex hull” of the 12-card star is. The convex hull of an object in Euclidean space is just the smallest convex shape that the object can fit in. Here it is:

This convex polyhedron has eight hexagonal faces and six square faces. You might recognize it as a truncated octahedron, which is so named because you can get it by starting with an octahedron and cutting off its corners:

The truncated octahedron has the same symmetry group as the octahedron, which is the same as the symmetry group of the cube, since the cube and octahedron are dual.

Thanks to Chris Aguilar for the Vectorized Playing Cards, which I used in one of the pictures here.

]]>First, consider the Poincaré group, the group of symmetries of Minkowski spacetime. Once we pick an origin of Minkowski spacetime, making it into a vector space V, the Poincaré group becomes a semidirect product

V ⋊ H

where H is the Lorentz group, and the action on V can be written

(v, h) · x = hx + v
In fact, demanding that this be a group action is enough to determine the multiplication in the Poincaré group. So, this is one way to think about the meaning of the multiplication law in the semidirect product.

In fact, there’s nothing so special about Minkowski spacetime in this construction. More generally, suppose I’ve got a vector space V and a group G of symmetries of V. Then V acts on itself by translations, and we want to form a group that consists of these translations as well as elements of G. It should act on V by this formula:

(v, g) · x = g(x) + v

Just demanding that this give a group action is enough to determine the multiplication in this group, which we call V ⋊ G. I won’t bother writing the formula down, but you can if you like.
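To see concretely how the action forces the multiplication, here’s a small numerical sketch in Python (my own example, not from the post), using 2D rotations as the group of symmetries: the rule (v₁, g₁)(v₂, g₂) = (v₁ + g₁v₂, g₁g₂) is exactly what makes the action law hold.

```python
import math

def rot(theta):
    """2x2 rotation matrix as nested tuples."""
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def mat_vec(g, x):
    return tuple(sum(g[i][j] * x[j] for j in range(2)) for i in range(2))

def mat_mul(g, h):
    return tuple(tuple(sum(g[i][k] * h[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def vec_add(v, w):
    return tuple(a + b for a, b in zip(v, w))

# An element of V ⋊ G is a pair (v, g), acting on x by x ↦ g·x + v.
def act(elem, x):
    v, g = elem
    return vec_add(mat_vec(g, x), v)

# Multiplication forced by (v1,g1)·((v2,g2)·x) = ((v1,g1)(v2,g2))·x.
def mul(e1, e2):
    (v1, g1), (v2, g2) = e1, e2
    return (vec_add(v1, mat_vec(g1, v2)), mat_mul(g1, g2))

e1 = ((1.0, 0.0), rot(math.pi / 2))
e2 = ((0.0, 2.0), rot(math.pi / 3))
x = (0.5, -0.7)
lhs = act(e1, act(e2, x))
rhs = act(mul(e1, e2), x)
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
print("semidirect product law checks out")
```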

In fact, there’s nothing so special about V being a *vector space* in this construction. All I really need is an abelian group A with a group G of symmetries. This gives us a group A ⋊ G, whose underlying set is A × G, and whose multiplication is determined by demanding that

(a, g) · x = a + g(x)

is an action.

In fact, there’s nothing so special about A being *abelian*. Suppose I’ve got a group H with a group G of symmetries. This gives us a group H ⋊ G, built on the set H × G, and with multiplication determined by demanding that

(h, g) · x = h (g ▷ x)

give an action on H. Here g ▷ x denotes the action of g on x, and h (g ▷ x) is the product of h and g ▷ x.

For example, if H is a group and G = Aut(H) is the group of *all* automorphisms of H, then the group H ⋊ Aut(H) is called the holomorph of H.
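As a toy example (my own, not from the post), we can build the holomorph of ℤ/5 explicitly: its elements are pairs (a, u) with u invertible mod 5, acting on ℤ/5 by x ↦ ux + a, with multiplication forced by the action.

```python
from math import gcd
from itertools import product

N = 5
# Automorphisms of Z_N are x ↦ u*x for u invertible mod N.
units = [u for u in range(1, N) if gcd(u, N) == 1]

# An element of the holomorph: (a, u), acting on Z_N by x ↦ u*x + a (mod N).
def act(elem, x):
    a, u = elem
    return (u * x + a) % N

# Multiplication forced by requiring act to be a group action.
def mul(e1, e2):
    (a1, u1), (a2, u2) = e1, e2
    return ((a1 + u1 * a2) % N, (u1 * u2) % N)

elements = list(product(range(N), units))
assert len(elements) == 20   # |Hol(Z_5)| = 5 * 4

# Spot-check the action property act(e1, act(e2, x)) == act(mul(e1, e2), x).
for e1, e2 in product(elements[:6], elements[:6]):
    for x in range(N):
        assert act(e1, act(e2, x)) == act(mul(e1, e2), x)
print(len(elements))
```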

What I’m doing here is defining H ⋊ G as a *concrete* group: it’s not just some *abstract* group as might be defined in an algebra textbook, but rather a specific group of transformations of something, in this case transformations of H. And, if you like Klein Geometry, then whenever you see a concrete group, you start wondering what kind of geometric structure gets preserved by that group.

So: what’s the geometric meaning of the concrete group H ⋊ G? This really involves thinking of H in two different ways: as a *group* and as a right *torsor* of itself. The action of G preserves the group structure by assumption: it acts by group automorphisms. On the other hand, the action of H on itself by left multiplication is by automorphisms of H as a right H-space. Thus, H ⋊ G preserves a kind of geometry on H that combines the group and torsor structures. We can think of these as a generalization of the “rotations” and “translations” in the Poincaré group.

But I promised to talk about the Heisenberg double of a Hopf algebra.

In fact, there’s nothing so special about *groups* in the above construction. Suppose A is a Hopf algebra, or even just an algebra, and there’s some other Hopf algebra H that acts on A as *algebra* automorphisms. In Hopf algebraists’ lingo, we say A is an “H-module algebra”. In categorists’ lingo, we say A is an algebra in the category of H-modules.

Besides the Hopf algebra action of H, A also acts on itself by left multiplication. This doesn’t preserve the algebra structure, but it does preserve the *coalgebra* structure: A is an A-module coalgebra.

So, just like in the group case, we can form the semidirect product, sometimes also called a “smash product” in the Hopf algebra setting, A ⋊ H, and again the multiplication law in this is completely determined by its action on A. We think of this as a “concrete quantum group” acting as two different kinds of “quantum symmetries” on A—a “point-fixing” one preserving the algebra structure and a “translational” one preserving the coalgebra structure.

The **Heisenberg double** is a particularly beautiful example of this. Any Hopf algebra H is an H*-module algebra, where H* is the Hopf algebra dual to H. The action of H* on H is the “left coregular action” defined as the dual of right multiplication:

α ⇀ h = h₍₁₎ α(h₍₂₎)

for all α ∈ H* and all h ∈ H.

One could use different conventions for defining the Heisenberg double, of course, but not as many as you might think. Here’s an amazing fact:

So, while I often see these two conventions confused, this is the one case where you don’t need to remember the difference.

But wait a minute—what’s that “equals” sign up there? I can hear my category theorist friends snickering. Surely, they say, I must mean one is *isomorphic* to the other.

But no. I mean *equals*.

I defined the Heisenberg double as the algebra structure on H ⊗ H* determined by its action on H, its “defining representation.” But every natural construction with Hopf algebras has a dual. I could have instead defined an algebra as the algebra structure on H ⊗ H* determined by its action on H*. Namely, H* acts on itself by *right* multiplication, and H acts on H* by the *right* coregular action. These are just the duals of the two left actions used to define the Heisenberg double.

That’s really all I wanted to say here. But just in case you want the actual formulas for the Heisenberg double and its defining representations, here they are in Sweedler notation:

Define a map ρ: H ⊗ H* → End(H) by ρ(h ⊗ α)(x) = h (α ⇀ x),

where the h out front acts by the left regular action of H on itself. Then a quick calculation shows this is a representation iff we define the multiplication in H ⊗ H* by

This defines the algebra that I called the Heisenberg double, and this is exactly the usual formula for a semidirect product of Hopf algebras. The map above is its “defining representation.”

On the other hand, the dual of this defining representation is a right representation on H* given by

Since everything in Hopf algebra theory has an equivalent dual statement, we could just as well have started with this action and defined the algebra structure on H ⊗ H* by demanding that *this* give a representation. A quick calculation shows this is a representation iff we define the multiplication by


This is the usual formula for . But this must be equal to the other formula for the multiplication, so we can define multiplication in the Heisenberg double by either one:

]]>First, recall how the group algebra works. If G is a group, its group algebra is simply the vector space spanned by elements of G, and with multiplication extended linearly from G to this vector space. It is an associative algebra and has a unit, the identity element of the group.

If G is a groupoid, we can similarly form the groupoid algebra. This may seem strange at first: you might expect to get only an algebr*oid* of some sort. In particular, whereas for the *group* algebra we get the multiplication by linearly extending the group multiplication, a *groupoid* has only a partially defined “multiplication”—if the source of *g* differs from the target of *h*, then the composite *gh* is undefined.

However, upon “linearizing”, instead of saying *gh* is undefined, we can simply say it’s *zero* unless *s(g)=t(h)*. This is essentially all there is to the groupoid algebra. The **groupoid algebra** of a groupoid G is the vector space with basis the morphisms of G, with multiplication given on this basis by composition whenever this is defined, zero when undefined, and extended linearly from there.

It’s easy to see that this gives an associative algebra: the multiplication is linear since we define it on a basis and extend linearly, and it’s associative since composition in the groupoid is. It is a *unital* algebra if and only if the groupoid has finitely many objects, and in this case the unit is the sum of all of the identity morphisms.
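Here’s a minimal Python sketch of this definition (my own toy example, not from the post): a groupoid with two objects and one isomorphism between them. Basis elements multiply by composition when source and target match, and give zero otherwise; the unit is the sum of identities.

```python
from collections import defaultdict

# A toy groupoid: objects 'a', 'b'; identities, an iso f: a -> b, and its inverse.
# morphisms[name] = (source, target)
morphisms = {
    'id_a': ('a', 'a'), 'id_b': ('b', 'b'),
    'f': ('a', 'b'), 'f_inv': ('b', 'a'),
}
compose = {  # compose[(g, h)] = g∘h, defined when target(h) == source(g)
    ('id_a', 'id_a'): 'id_a', ('id_b', 'id_b'): 'id_b',
    ('id_b', 'f'): 'f', ('f', 'id_a'): 'f',
    ('id_a', 'f_inv'): 'f_inv', ('f_inv', 'id_b'): 'f_inv',
    ('f_inv', 'f'): 'id_a', ('f', 'f_inv'): 'id_b',
}

# Elements of the groupoid algebra: dicts {morphism: coefficient}.
def multiply(x, y):
    out = defaultdict(float)
    for g, cg in x.items():
        for h, ch in y.items():
            if morphisms[g][0] == morphisms[h][1]:   # s(g) == t(h): defined
                out[compose[(g, h)]] += cg * ch
            # otherwise the product of these basis elements is zero
    return dict(out)

unit = {'id_a': 1.0, 'id_b': 1.0}   # sum of the identity morphisms
x = {'f': 2.0, 'id_a': 1.0}
assert multiply(unit, x) == multiply(x, unit) == x
assert multiply({'f': 1.0}, {'f': 1.0}) == {}   # s(f) != t(f), so f·f = 0
print(multiply({'f': 1.0}, {'f_inv': 1.0}))
```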

Mainly to avoid saying “groupoids with finitely many morphisms”, I’ll just stick to finite groupoids from now on, where the sets of objects *and* morphisms are both finite.

If we have a groupoid homomorphism, then we get an algebra homomorphism between the corresponding groupoid algebras, also by linear extension. So we get a functor

from the category of finite groupoids to the category of unital algebras.

But in fact, this extends *canonically* to a functor

from the category of finite groupoids to the category of weak Hopf algebras.

To see how this works, notice first that there’s a canonical functor

from the category of sets to the category of coalgebras: Every set is a comonoid in a unique way, so we just linearly extend that comonoid structure to a coalgebra.

In case that’s not clear to you, here’s what I mean in detail. Given a set S, there is a unique map Δ: S → S × S that is coassociative, namely the diagonal map Δ(x) = (x, x). This is easy to prove, so do it if you never have. Also, there is a unique map ε from S to the one-element set, up to the choice of which one-element set to use. Linearly extending Δ and ε, they become a coalgebra structure on the vector space with basis S. Moreover, any function between sets is a homomorphism of comonoids, and its linear extension to the free vector spaces on these sets is thus a homomorphism of coalgebras. This gives us our functor from sets to coalgebras.
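A tiny Python sketch of this linearization (my own illustration, not from the post): comultiplication duplicates basis elements, the counit sends each basis element to 1, and the counit axiom (ε ⊗ id)Δ = id holds by construction.

```python
# A vector in the free vector space on a set: {basis element: coefficient}.
def delta(v):
    """Comultiplication: x ↦ x ⊗ x on basis elements, extended linearly."""
    return {(x, x): c for x, c in v.items()}

def counit(v):
    """Counit: x ↦ 1 on basis elements, extended linearly."""
    return sum(v.values())

def eps_tensor_id(t):
    """(ε ⊗ id) applied to a tensor {(x, y): coefficient}."""
    out = {}
    for (x, y), c in t.items():
        out[y] = out.get(y, 0) + c   # ε(x) = 1 for basis elements
    return out

v = {'a': 2.0, 'b': -1.0}
assert eps_tensor_id(delta(v)) == v   # counit axiom on a sample vector
print(counit(v))
```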

So, given a finite groupoid, the vector space spanned by its morphisms becomes both an algebra and a coalgebra. An obvious question is: do the algebra and coalgebra structures obey some sort of compatibility relations? The answer, as I already gave away at the beginning, is that they form a **weak Hopf algebra**. The antipode is just the linear extension of the inversion map g ↦ g⁻¹.

(More generally, for those who care, the category algebra of a finite category (or any category with finitely many objects) is a weak bialgebra, and we actually get a functor

from finite categories to weak bialgebras. If the category happens to be a groupoid, its category algebra is a weak Hopf algebra; if it happens to be a monoid, it is a bialgebra; and if it happens to be a group, it is a Hopf algebra.)

This is nice, but have we squashed out all of the lovely “oid”-iness from our groupoid when we form the groupoid algebra? In other words, having built a weak Hopf algebra on the vector space spanned by morphisms, is there any remnant of the original distinction between objects and morphisms?

As I indicated last time, the key is in these two “loop” diagrams:

The left loop says to comultiply the identity, multiply the first part of this with an element, and apply the counit. Let’s do this for a groupoid algebra, where the identity is $1 = \sum_x 1_x$, with the sum running over all objects $x$. Since comultiplication duplicates basis elements, we get

$$\Delta(1) = \sum_x 1_x \otimes 1_x.$$

Applying the left loop to a basis morphism $g$, we then get

$$g \mapsto \sum_x \varepsilon(1_x\, g)\, 1_x = 1_{t(g)},$$

using the definition of multiplication and the counit in the groupoid algebra: $1_x\, g$ vanishes unless $x = t(g)$, and $\varepsilon$ sends every basis morphism to $1$. Similarly, the loop going in the other direction gives $g \mapsto 1_{s(g)}$, as anticipated last time.

So, we can see that the image of either of the two “loop” diagrams is the subspace spanned by the identity morphisms. This is a commutative subalgebra of the groupoid algebra, and both loop maps are idempotent. So, they give “projections” onto the “algebra of objects”.
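Continuing with a toy example, here’s the computation carried out in Python for the pair groupoid on two objects (my own encoding, with a morphism written as a pair `(source, target)`); both loop maps are computed honestly from $\Delta(1) = \sum_x 1_x \otimes 1_x$ and land on identity morphisms:

```python
OBJECTS = (0, 1)
MORPHISMS = [(a, b) for a in OBJECTS for b in OBJECTS]

def mult(g, h):
    """Compose 'h then g' when source(g) == target(h); None plays the role of 0."""
    (a, b), (c, d) = g, h
    return (c, b) if a == d else None

def eps_t(g):
    """Left loop: g -> sum_x eps(1_x g) 1_x, using Delta(1) = sum_x 1_x (x) 1_x.
    eps kills 0 and sends any surviving basis morphism to 1."""
    return {(x, x): 1 for x in OBJECTS if mult((x, x), g) is not None}

def eps_s(g):
    """Right loop: g -> sum_x eps(g 1_x) 1_x."""
    return {(x, x): 1 for x in OBJECTS if mult(g, (x, x)) is not None}

for a, b in MORPHISMS:
    assert eps_t((a, b)) == {(b, b): 1}  # eps_t(g) = 1_{target(g)}
    assert eps_s((a, b)) == {(a, a): 1}  # eps_s(g) = 1_{source(g)}
```

Only one term of the sum over objects survives in each case, which is the “$1_x g$ vanishes unless $x = t(g)$” step above.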

In fact, something like this happens in the case of a more general weak Hopf algebra. The maps described by the “loop” diagrams are again idempotent, and we can think of them as analogs of the source and target maps. But there are some differences, too. For instance, their images need not be the same in general, though they are isomorphic. The images also don’t need to be commutative. This starts hinting at what Hopf algebroids are like.

But I’ll get into that later.

]]>

First of all, as you might guess, the “-oid” in “Hopf algebroid” is supposed to be like the “-oid” in “groupoid”. Groupoids are a modern way to study symmetry, and they do things that ordinary groups don’t do very well. If you’re not already convinced groupoids are cool—or if you don’t know what they are—then one good place to start is with Alan Weinstein’s explanation of them here:

- Alan Weinstein, Groupoids: Unifying Internal and External Symmetry.

There are two equivalent definitions of groupoid, an algebraic definition and a categorical definition. I’ll use mainly categorical language. So for me, a **groupoid** is a small category in which all morphisms are invertible. A group is then just a groupoid with exactly one object.
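If you’d like something executable to poke at, here’s a toy groupoid in Python (entirely my own illustration, not anything from this post): the “pair groupoid” on a three-element set, where the pair `(a, b)` stands for the unique morphism from `a` to `b`, and every morphism is invertible.

```python
OBJECTS = ['x', 'y', 'z']
MORPHISMS = [(a, b) for a in OBJECTS for b in OBJECTS]

def compose(g, h):
    """'h then g', defined when source(g) == target(h); None otherwise."""
    (a, b), (c, d) = g, h
    return (c, b) if a == d else None

def identity(obj):
    return (obj, obj)

# Every morphism (a, b) has the inverse (b, a), so this small category
# is a groupoid.
for a, b in MORPHISMS:
    assert compose((a, b), (b, a)) == identity(b)
    assert compose((b, a), (a, b)) == identity(a)
```

Restricting to a single object recovers the usual picture of a group: one object, and a set of mutually composable, invertible morphisms.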

Once you’ve given up the attachment to the special case of groups and learned to love groupoids, it seems obvious you should also give up the attachment to Hopf algebras and learn to love Hopf algebroids. That’s one thing I’ve been doing lately.

My main goal in these posts will be to explain what Hopf algebroids are, and how they’re analogous to groupoids. I’ll build up to this slowly, though, without even giving the definition of Hopf algebroid at first. Of course, if you’re eager to see it, you can always cheat and look up the definition here:

- Gabriella Böhm, Hopf Algebroids.

but I’ll warn you that the relationship to groupoids might not be obvious at first. At least, it wasn’t to me. In fact, going from Hopf algebras to Hopf algebroids took considerable work, and some time to settle on the correct definition. But the definition of Hopf algebroid given here in Böhm’s paper seems to be the one left standing after the dust settled. This review article also includes a brief summary of the development of the subject.

To work up to Hopf algebroids, I’ll start with something simpler: **weak Hopf algebras**. These are a rather mild generalization of Hopf algebras, and the definition doesn’t look immediately “oid”-ish. But in fact, they are a nice compromise between Hopf algebras and Hopf algebroids. In particular, as we’ll see, just as a group has a Hopf algebra structure on its group algebra, a groupoid has a weak Hopf algebra structure on its groupoid algebra.
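As a tiny sanity check of the group half of that statement, here’s a sketch (my own, not from the post) of the group algebra of $\mathbb{Z}/3$: on basis elements, $\Delta(g) = g \otimes g$, $\varepsilon(g) = 1$, and the antipode is inversion, so the Hopf antipode axiom reduces to $g^{-1} g = e$.

```python
# Group algebra of Z/3: basis elements 0, 1, 2; multiplication is
# addition mod 3, and the antipode S is group inversion.
n = 3

def mult(g, h):
    return (g + h) % n

def antipode(g):
    return (-g) % n

# Antipode axiom m o (S (x) id) o Delta = unit o counit, on basis elements:
# since Delta(g) = g (x) g and counit(g) = 1, this says S(g) * g = e.
for g in range(n):
    assert mult(antipode(g), g) == 0
    assert mult(g, antipode(g)) == 0
```

For a groupoid with more than one object, the same computation lands on $1_{s(g)}$ or $1_{t(g)}$ instead of a single identity element, which is exactly the weakening discussed below.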

Better yet, any weak Hopf algebra can be turned into a Hopf algebroid, and Hopf algebroids built in this way are rich enough to see many of the features of general Hopf algebroids. So, I think this gives a pretty good way to understand Hopf algebroids, which might otherwise seem obscure at first. The strategy will be to start with weak Hopf algebras and consider what “groupoid-like” structure is already present. In fact, to emphasize how well they parallel ordinary groupoids, weak Hopf algebras are sometimes called **quantum groupoids**:

- Dmitri Nikshych and Leonid Vainerman, Finite Quantum Groupoids and Their Applications

So, here we go…

**What is a weak Hopf algebra?** This is quick to define using string diagrams. First, let’s define a *weak bialgebra*. Just like a bialgebra, a weak bialgebra is both an associative algebra with unit:

and a coassociative coalgebra with counit:

(If the meaning of these diagrams isn’t clear, you can learn about string diagrams in several places on the web, like here or here.)

Compatibility of multiplication and comultiplication is also just like in an ordinary bialgebra or Hopf algebra:

So the only place where the axioms of a weak bialgebra are “weak” is in the compatibility between unit and comultiplication and between counit and multiplication. If we define these combinations:

then the remaining axioms of a weak bialgebra can be drawn like this:

The two middle pictures in these equations have not quite been defined yet, but I hope it’s clear what they mean. For example, the diagram in the middle on the top row means either of these:

since these are the same by associativity.

Just as a Hopf algebra is a bialgebra with an antipode, a **weak Hopf algebra** is a weak bialgebra with an antipode. The antipode is a linear map which I’ll draw like this:

and it satisfies these axioms:

Like in a Hopf algebra, having an antipode isn’t additional structure, but just a property: a weak bialgebra either has an antipode or it doesn’t, and if it does, the antipode is unique. The antipode also has most of the properties you would expect from Hopf algebra theory.

One thing to notice is that the equations defining a weak Hopf algebra are completely *self-dual*. This is easy to see from the diagrammatic definition given here, where duality corresponds to a rotation of 180 degrees: rotate all of the defining diagrams and you get the same diagrams back. Luckily, even the letter $S$ labeling the antipode is self-dual.

There’s plenty to say about weak Hopf algebras themselves, but here I want to concentrate on how they are related to groupoids, and ultimately how they give examples of Hopf algebroids.

To see the “groupoidiness” of weak Hopf algebras, it helps to start at the bottom: the antipode axioms. In particular, look at this one:

The left side instructs us to duplicate an element, apply the antipode to the copy on the right, and then multiply the two copies together. If we do this to an element of a group, where the antipode is the inversion map, we get the identity. If we do it to a morphism in a groupoid, we get the identity on the *target* of that morphism. So, in the *groupoid* world, the left side of this equation is the same as applying the target map, and then turning this back into a morphism by using the map that sends any object to its identity morphism. That is:

$$g \mapsto 1_{t(g)}$$

where $t$ is the map sending each morphism to its target, and $1_x$ denotes the identity morphism on the object $x$.
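Here’s a quick check of this in Python, using the pair groupoid on two objects as a stand-in (the encoding of a morphism as a pair `(source, target)` is my own choice): duplicate, invert the right copy, and multiply.

```python
OBJECTS = (0, 1)
# In the pair groupoid, (a, b) is the unique morphism from a to b.
MORPHISMS = [(a, b) for a in OBJECTS for b in OBJECTS]

def mult(g, h):
    """Compose 'h then g' when source(g) == target(h); None plays the role of 0."""
    (a, b), (c, d) = g, h
    return (c, b) if a == d else None

def inv(g):
    """The antipode on basis morphisms: inversion."""
    a, b = g
    return (b, a)

for g in MORPHISMS:
    a, b = g
    # Duplicate g, invert the right copy, multiply: g * inv(g) = 1_{target(g)}.
    assert mult(g, inv(g)) == (b, b)
    # The dual axiom: inv(g) * g = 1_{source(g)}.
    assert mult(inv(g), g) == (a, a)
```
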

Likewise, consider the dual of the previous axiom:

In the groupoid world, the left hand side gives the map

$$g \mapsto 1_{s(g)}$$

where $s$ denotes the map sending each morphism to its source.

So… if weak Hopf algebras really are like groupoids, then these two loop diagrams:

must essentially be graphical representations of the target and source maps.

Of course, I only said *if* weak Hopf algebras are like groupoids, and I haven’t yet explained any precise sense in which they are. But we’re getting there. Next time, I’ll explain more, including how groupoid algebras give weak Hopf algebras.

Meanwhile, if you want some fun with string diagrams, think of other things that are true for groupoids, and see if you can show weak Hopf algebra analogs of them using just diagrams. For example, you can check that the diagrammatic analog of $s(s(g)) = s(g)$ (“the source of the source is the source”) follows from the weak Hopf algebra axioms. Some others hold only after a trivial rephrasing: the most obvious diagrammatic translation of an equation may fail, while the diagram drawn from an equivalent form of the same equation holds in any weak Hopf algebra.
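Before translating these into diagrams, it’s easy to sanity-check the groupoid-level identities themselves in Python (my own toy encoding of the pair groupoid, with a morphism written as `(source, target)`; the diagrammatic proofs are the actual exercise):

```python
MORPHISMS = [(a, b) for a in (0, 1) for b in (0, 1)]

def source_proj(g):
    """The 'source loop' on basis morphisms: g -> 1_{source(g)}."""
    a, b = g
    return (a, a)

def target_proj(g):
    """The 'target loop' on basis morphisms: g -> 1_{target(g)}."""
    a, b = g
    return (b, b)

for g in MORPHISMS:
    # "The source of the source is the source": s o s = s.
    assert source_proj(source_proj(g)) == source_proj(g)
    # Mixed identities: t(s(g)) = s(g) and s(t(g)) = t(g),
    # since s(g) and t(g) are identity morphisms.
    assert target_proj(source_proj(g)) == source_proj(g)
    assert source_proj(target_proj(g)) == target_proj(g)
```
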

]]>