Hopf algebra gauge theory and Kitaev models at Perimeter Institute

7 August 2017

I just got back from a trip to the Perimeter Institute, where I spoke at a conference on Hopf Algebras in Kitaev’s Quantum Double Models.

This workshop was a lot of fun!  I learned a lot, had the chance to talk to people I’ve known for a long time, and meet others I hadn’t managed to connect with before. I was especially excited to find out about some lines of work in progress that build on my work with Catherine Meusburger on Hopf algebra gauge theory.

In fact, our work on this seems to have been an impetus for the workshop, and it was really gratifying to see how other people are beginning to apply our theory, and also work out some interesting examples of it for particular Hopf algebras!  I’m anticipating some interesting work coming out in the near future.

Here’s the conference photo; I’m farthest right, and my coauthor, Catherine, is the 11th head from the left, peeking out from the second row:

Hopf-algebras-in-Kitaev-Models-conference-photo

I gave an introductory talk on the subject of Hopf algebra gauge theory, and you can download the slides from my talk, or even watch the video.  Catherine’s talk followed mine, and she showed how Kitaev models are related to Hopf algebra gauge theory in the same way that Turaev-Viro TQFTs are related to Reshetikhin-Turaev TQFTs.  Video of her talk is also available.  Of course, for more detail on Hopf algebra gauge theory, you can also check out our paper: Hopf algebra gauge theory on a ribbon graph.

I can also recommend watching other talks from the conference, available from the webpage linked to above.  This was just the kind of conference I like best, since it brought people from multiple research communities together, in this case including mathematicians and physicists of various sorts as well as mathematical computer scientists. Kitaev models have been a hot topic the past few years, and one reason I think they’re fun is precisely that people from several areas—quantum computation, Hopf algebras, category theory, quantum gravity, quantum foundations, topological quantum field theory, condensed matter physics, and more—are working together.  Of course, this probably also helps explain the rather long conference title.


A decagonal snowflake from pentagons

25 June 2017

So far in these posts about fractals, I’ve shown (1) how letting triangles reproduce like this:

tri-rule

generates a bunch of copies of the Koch snowflake at different scales:


koch-outlined

Similarly, I’ve shown (2) how letting squares reproduce like this:

quad-rule

generates a bunch of copies of a fractal related to the Koch snowflake, but with 8-fold symmetry:


octagon-snowflake
So what about letting pentagons reproduce?   For pentagons, an analog of the replication rules above is this:

penta-rule

Each of the 10 cute little pentagon children here is a factor of \frac{1}{2+\varphi} smaller than its parent, where \varphi is the golden ratio.
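
If you want to experiment with this rule yourself, here’s a minimal sketch of one replication step in Python, using complex numbers for plane geometry.  The precise placement of the ten children (five sharing the parent’s corners, five flipped outward across the edge midpoints) is just my reading of the picture above, so treat those details as assumptions rather than a specification:

    import cmath, math

    PHI = (1 + 5 ** 0.5) / 2  # golden ratio
    S = 1 / (2 + PHI)         # the child/parent scaling factor from above

    def children(center, r, theta):
        """The 10 children of the pentagon with the given center,
        circumradius r, and rotation theta."""
        kids = []
        apothem = math.cos(math.pi / 5)  # apothem/circumradius ratio
        for k in range(5):
            u = cmath.exp(1j * (theta + 2 * math.pi * k / 5))                # corner direction
            v = cmath.exp(1j * (theta + 2 * math.pi * k / 5 + math.pi / 5))  # outward edge normal
            kids.append((center + r * (1 - S) * u, r * S, theta))  # corner child
            kids.append((center + r * apothem * (1 + S) * v, r * S, theta + math.pi / 5))  # edge child
        return kids

    # one generation, starting from a unit pentagon at the origin:
    generation = [(0j, 1.0, math.pi / 2)]
    generation = [kid for p in generation for kid in children(*p)]

Iterating the last line grows the family tree one generation at a time; drawing is then just a matter of rendering each pentagon’s vertices in order (and choosing a drawing order, which is where the trouble discussed below comes in).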

However, something interesting happens here that didn’t happen with the triangle and square rules. While triangles and squares overlap with their ancestors, pentagons overlap with both their ancestors and their cousins.  The trouble is that certain siblings already share faces (I know, the accidental metaphors here are getting troublesome too!), and so siblings’ children have to fight over territory:

penta-3gens

In this three-generation pentagon family portrait, you can see that each second generation pentagon has two children that overlap with a cousin.

As we carry this process further, we get additional collisions between second cousins, third cousins, and so on.  At five generations of pentagons, we start seeing some interestingly complex behavior develop from these collisions:

penta1

There’s a lot of fascinating structure here, and much of it is directly analogous to the 6-fold and 8-fold cases above, but there are also some differences, stemming from the “cousin rivalry” that goes on in pentagon society.

Let’s zoom in to see some collisions near where the two ‘wreaths’ meet on the right side of the picture:

penta1zoom

I find the complicated behavior at the collisions quite pretty, but the ordering issues (i.e. which members of a given generation to draw first when they overlap) annoy me somewhat, since they break the otherwise perfect decagonal symmetry of the picture.

If I were doing this for purely artistic purposes, I’d try resolving the drawing order issues to restore as much symmetry as possible. Of course, I could also cheat and restore symmetry completely by not filling in the pentagons, so that you can’t tell which ones I drew first:

penta1zoomed-outline

It’s cool seeing all the layers at once in this way, and it shows just how complex the overlaps can start getting after a few generations.

Anyway, because of these collisions, we don’t seem to get a fractal tiling of the plane—at least, not like we got in the previous cases, where the plane simply keeps getting further subdivided into regions that converge to tiles of the same shape at different scales.

Actually, though, we still might get a fractal tiling of the plane, if the total area of overlap of nth generation pentagons shrinks to zero as n goes to infinity!  That would be cool.  But, I don’t know yet.

In any case, the picture generated by pentagons is in many ways very similar to the pictures generated by triangles and squares. Most importantly, all of the similar-looking decagonal flower-shaped regions we see in this picture, including the outer perimeter, the inner light-blue region, and tons of smaller ones:

penta1

really are converging to the same shape,  my proposal for the 10-fold rotationally symmetric analog of the Koch snowflake:

snowflake-deca

How do we know that all of these shapes are converging to the same fractal, up to rescaling?  We can get a nice visual proof by starting with two pentagons, one rotated and scaled down from the other, and then setting our replication algorithm loose on both of them:

Proof:

pentagon-proof

We see that the area between the two fractal curves in the middle shrinks toward zero with each generation.

Puzzle for Golden Ratio Fans: What is the exact value of the scaling factor relating the two initial pentagons?

Next up in this infinite series of articles: hexagons!  …

I’m joking!  But, it’s fairly clear we can keep ascending this ladder to get analogs of the Koch snowflake generated by n-gons, with (2n)-fold rotational symmetry.  More nice features might be sacrificed as we go up; in the case generated by hexagons, we’d have collisions not only between cousins, but already between siblings.

More fractal fun

23 June 2017

In the previous article, I explained how the famous Koch snowflake can be built in a different way, using “self-replicating” triangles. This was a revelation for me, because I had always thought of the Koch snowflake as fundamentally different from other kinds of fractals like the Sierpinski triangle, and now I think of them as being basically the same.

In the Sierpinski triangle, each triangle yields three new, scaled-down triangles, attached at the midpoints of sides of the original, like this:

steps-sierpinski

These triangles are usually thought of as “holes” cut out of a big triangle, but all I care about here is the pattern of the triangles.  As I explained last time, the Koch snowflake can be built in a similar way, where each triangle produces six new ones, like this:

steps-koch

You might say this bends the usual rules for making fractals since some of the triangles overlap with their ancestors.  But, it makes me happy because it lets me think of the Sierpinski triangle and the Koch snowflake as essentially the same kind of thing, just with different self-replication rules.

What other fractals can we build in this way?  The Sierpinski carpet is very similar to the Sierpinski triangle, where we now start with squares and a rule for how a square produces 8 smaller ones:

carpet

This made me wonder if I could generalize my construction of the Koch snowflake using triangles to generate some other fractal using squares.  In other words, is there some Koch snowflake-like fractal that is analogous to the ordinary Koch snowflake in the same way that the Sierpinski carpet is analogous to the Sierpinski triangle?

There is!  Taken to the 5th generation, it looks like this:

octagon-snowflake

The outline of this fractal is an analog of the Koch snowflake, but with 8-fold symmetry, rather than 6-fold.  Compare the original Koch snowflake with this one:

Just as I explained last time for the Koch snowflake (left), the blue picture above actually provides a proof that the plane can be tiled with copies of tiles like the one on the right, with various sizes—though in this case, you can’t do it with just two sizes of tiles; it takes infinitely many different sizes!  In fact, this tiling of the plane is also given in Aidan Burns’ paper I referenced in the previous post.

But, my construction is built entirely out of self-replicating squares.  What’s the rule for how squares replicate?

Before I tell you, I’ll give two hints:

First, each square produces 8 new squares, just like in the Sierpinski carpet.  (So, we could actually make a cool animation of this fractal morphing into the Sierpinski carpet!)

Second, you can more easily see some of the bigger squares in the picture if you make the colors of the layers more distinct.  While I like the subtle effect of making each successive generation a slightly darker shade of blue, playing with the color scheme on this picture is fun.  And I learned something interesting when my 7-year-old (who is more fond of bold color statements) designed this scheme:

octahedral-c

The colors here are not all chosen independently; the only choice is the color of each generation of squares.  And this lets us see bits of earlier-generation squares peeking through in places I hadn’t noticed with my more conservative color choices.

For example, surrounding the big light blue flower in the middle, there are 8 small light blue flowers, and 16 even smaller ones (which just look like octagons here, since I’ve only gone to the 5th generation); these are really all part of the same light-blue square that’s hiding behind everything else.

The same thing happens with the pink squares, and so on.  If you stare at this picture, you can start seeing the outlines of the squares.

So what’s the rule?  Here it is:

Read the rest of this entry »

Fun with the Koch snowflake

22 June 2017

I was playing with fractals recently and discovered a fun way to construct the Koch snowflake. It may not be new to the world, given how much is known about fractals, but it was new to me. In any case, it’s really cool!  It lets you see a lot more interesting self-similarity in this fractal, and effortlessly leads to the known tilings of the plane by Koch snowflakes.

The Koch snowflake is usually constructed by starting with an equilateral triangle, replacing the middle third of each side with an equilateral triangular protrusion, doing the same to the resulting polygon, and so on.  The first seven steps are shown in this animation:
von_koch_curve
and the Koch snowflake is the “limit” of this process as the number of steps goes to infinity.

In the alternative construction we use only self-replicating triangles.  We again start with a triangle:

koch1

But now, rather than modifying this triangle, we let it “reproduce,” yielding  six new triangles, three at the corners of the original, and three sticking out from the sides of the original.  I’ll make the six offspring a bit darker than the original so that you can see them all:

koch2

Notice that three of the children hide the corners of their parent triangle, so it looks like we now have a hexagon in the middle, but really we’ve got one big triangle behind six smaller ones.  Now we do the same thing again, letting each of the six smaller triangles reproduce in the same way:

koch3

The 36 small triangles are the “grandchildren” of the original triangle; if each of these has six children of its own, we get:

koch4

Repeating this again:

koch5

And again:

koch6

At this stage, it starts getting hard to see the new triangles, so I’ll stop and rely on your imagination of this process continuing indefinitely.  We can now see some interesting features emerging.  Here are some of the main ones:

  1. The outer perimeter of all of these triangles, taken to the infinite limit, is the Koch snowflake.
  2. The lightest blue region, in the middle, is also converging to a smaller Koch snowflake, rotated from the outer one by \pi/6.
  3. Between the outer perimeter and the Koch snowflake in the middle are six more, yet smaller, Koch snowflakes.
  4. The regions in the middle of these snowflakes are also Koch snowflakes …

and so on: we have Koch snowflakes repeating at smaller and smaller scales.

All this self-similarity shows in particular that Koch snowflakes can be assembled out of Koch snowflakes.  This is nothing new; it’s related to Aidan Burns’ nice demonstration that Koch snowflakes can be used to tile the plane, but only if we use snowflakes of at least two different sizes:

Aidan Burns, Fractal tilings. Mathematical Gazette 78 No. 482 (1994) 193–196.

These tilings are already visible in the above construction using triangles, but we can make them even more evident by just playing with the color scheme.

First, if we draw the previous picture again but make all of the triangles the same color, we just get a region whose perimeter is the usual Koch snowflake:

koch10

On the other hand, if we make the original triangle white, but all of its descendants the same color of blue, we get this:

koch7

I hope you see how this forms part of a wallpaper pattern that could be extended in all directions, adding more blue snowflakes that bound larger white snowflake-shaped gaps. This gives the tiling of the plane by Koch snowflakes of two different sizes.

Taking this further, if we make the original triangle and all of its children white, but all of their further descendants the same color of blue, we get this:

koch8

The pattern seen here can be extended in a hopefully obvious way to tile the whole plane with Koch snowflakes of three different sizes.

Going further, if we make the first three generations white, but all blue after that, we get:

koch9

and so on.

The previous four pictures are generated with exactly the same code—we’re drawing exactly the same triangles, and only changing the color scheme.  If we keep repeating this process, we get a tiling with arbitrarily small Koch snowflakes!
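
If you’d like to play with this yourself, here’s a minimal sketch in Python of how such code can be organized (a sketch of the idea, not the actual code behind these pictures): generate all the triangles once, tagged by generation, and let a single color function make every stylistic decision.  Triangles are encoded by center, circumradius, and rotation, using complex numbers for plane geometry, and the placement of the six children follows the rule described in this post:

    import cmath, math

    def children(center, r, theta):
        """Six children of an equilateral triangle, each 1/3 the size:
        three at the corners, three sticking out from the sides."""
        kids = []
        for k in range(3):
            u = cmath.exp(1j * (theta + 2 * math.pi * k / 3))                # corner direction
            v = cmath.exp(1j * (theta + 2 * math.pi * k / 3 + math.pi / 3))  # outward side normal
            kids.append((center + (2 / 3) * r * u, r / 3, theta))                # corner child
            kids.append((center + (2 / 3) * r * v, r / 3, theta + math.pi / 3))  # side child
        return kids

    def generations(n):
        """All triangles through generation n, each tagged with its generation."""
        gen, out = [(0j, 1.0, math.pi / 2)], []
        for g in range(n + 1):
            out += [(g, t) for t in gen]
            gen = [kid for t in gen for kid in children(*t)]
        return out

    def color(g, cutoff):
        """The only thing that differs between the last four pictures:
        generations before the cutoff are white, the rest blue."""
        return "white" if g < cutoff else "blue"

Drawing the same generations(5) with cutoff 0, 1, 2, and so on reproduces the sequence of pictures above.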

But we can also go the other way, continuing up in scale to get a tiling that includes arbitrarily large Koch snowflakes.  To do this, we just need to view the above series of pictures in a different way!

The way I described it, the scale is the same in all of the previous four pictures.   Making successive generations white, one generation at a time, makes it look as if we’re cutting out a snowflake from the middle of a big snowflake, leaving six similar snowflakes behind, and then iterating this:

koch-animation1

On the other hand, we can alternatively view these pictures as a process of zooming out: each picture is built from six copies of the previous one, and we can imagine zooming out so that each picture becomes just one small piece of the next.


If you’re careful about how you do this, you get a tiling of the whole plane, with arbitrarily large Koch-snowflake shaped tiles!  I say you have to be careful because it won’t cover the whole plane if, for example, each picture becomes the top middle part of the next picture.  But, if you zoom out in a “spiral,” rotating by \frac{\pi}{3} at each step, you’ll get a tessellation of the plane.

Someone should make an animation that shows how this works.  Maybe I’ll get a student to do it.

There are some other fun variations on this theme—including a similar construction that leads to the other “fractal tiling” described by Aidan Burns—which I should explain in another post.

In case anyone wants it, here is a 1-page visual explanation of the construction described in this post: snowflake.pdf

(All images in this post, except for the first, copyright 2017 Derek Wise.)

4!-torsor a la George Hart

20 April 2015

As a project with a certain 4-year-old relative of mine, we constructed the proof I described before that the outer vertices of George Hart’s 12-Card Star form a 4!-torsor.  (I guess I didn’t say it that way before, but it’s true!)  Here’s our proof:

IMG_2215

Last time I suggested using a deck of 12 cards like this:

deckoftwos

But instead, we used four solid colors, three cards of each.  So, our “star” permutes the colors red, white, black, and silver:
deckoftwosBut instead, we used four solid colors, three cards of each.  So, our “star” permutes the colors red, white, black, and silver:

IMG_2090

Every permutation of these colors is realized by exactly one symmetry of our Star taking outer vertices to outer vertices.  The “exactly one” in this isomorphism is what makes the set of outer vertices a 4!-torsor rather than just a 4!-set.

Here’s what it looks like when you put three pieces together, from both sides:

IMG_2210

IMG_2212

George Hart’s “12-Card Star” and suit permutations.

7 March 2015

When I was at the JMM in San Antonio in January, I was happy to catch a workshop by George Hart:

georgehart

I went to some great talks at the JMM, but a hands-on, interactive workshop was a nice change of pace in the schedule. Having seen some of George’s artwork before, I couldn’t resist. In the workshop, he taught us to build his sculpture/puzzle which he calls the 12 Card Star. Here’s what mine looked like:

12cardpic

He supplied the necessary materials: 13 decks of cards, all pre-cut (presumably with a band saw), like this:

wholedeck

We each took 13 cards from these decks—the 13th, he said, was “just in case something terrible happens.”

He showed us how to put three cards together:

3cards

Then he gave us a clue for assembling the star: the symmetry group is the same as that of a …

Wait! Let’s not give that away just yet. Better, let’s have some fun figuring out the symmetry group.

Let’s start by counting how many symmetries there are. There are twelve cards in the star, all identically situated in relation to their neighbors, so that’s already 12 symmetries: given any card, I can move it to the position of my favorite card, which is the one I’ll mark here with a blue line along its fold:

markedcard

But my favorite card also has one symmetry: I can rotate it 180^\circ, flipping that blue line from end to end around its midpoint, and this gives a symmetry of the whole star. (Actually, this symmetry is slightly spoiled since I drew the five of hearts: that heart right in the middle of the card isn’t symmetric under a 180^\circ rotation, but never mind that. This would be better if I had drawn a better card, say the two of hearts, or the five of diamonds.)

So there are 12\times 2 = 24 symmetries in total, and we’re looking for a group of order 24. Since 24 = 4!, the most obvious guess is the group of permutations of a 4-element set. Is it that? If so, then it would be nice to have a concrete isomorphism.

By a concrete isomorphism, I mean a specific 4-element set such that a permutation of that set corresponds uniquely to a symmetry of the 12-card star. Where do we get such a 4-element set? Well, since there are conveniently four card suits, let’s get a specific isomorphism between the symmetry group of Hart’s star and the group of permutations of the set

suitset

At the workshop, each participant got all identical cards, as you can see in the picture of mine. But if I could choose, I’d use a deck with three 2’s of each suit:

deckoftwos

From this deck, there is an essentially unique way to build a 12-Card Star so that the isomorphism between the symmetry group and the group of permutations of suits becomes obvious! The proof is `constructive,’ in that to really convince yourself you might need to construct such a 12-card star. You can cut cards using the template on George’s website. He’s also got some instructions there. But here I’ll tell you some stuff about the case with the deck of twelve 2’s.

First notice that there are places where four cards come together, like this:

fourfoldpoint

In fact, there are six such places—call these the six 4-fold rotation points—and it’s no coincidence that six is also the number of cyclic orderings of card suits:

cyclicsuitorders

Now, out of the deck of twelve 2’s, I claim you can build a 12-card star so that each of these cyclic orderings appears at one 4-fold rotation point, and that you can do it in an essentially unique way.

This should be enough information to build such a 12-card star. If you do, then you can check the isomorphism. Think up a permutation of the set of suits, like this one:

suitperm

and check that you can rotate your 12-card star in such a way that all of the suit symbols on all of the cards in the 12-card star are permuted in that way.  The rest follows by counting.

Sometime I should get hold of the right cards to actually build one like this.

Of course, there are other ways to figure out the symmetry group. What George Hart actually told us during the workshop was not that the symmetry group was the permutation group on 4 elements, but rather that the symmetry group was the same as that of the cube. One way to see this is by figuring out what the `convex hull’ of the 12-card star is. The convex hull of an object in Euclidean space is just the smallest convex shape that the object can fit in. Here it is:

12cardconvexhull

This convex polyhedron has eight hexagonal faces and six square faces. You might recognize it as a truncated octahedron, which is so named because you can get it by starting with an octahedron and cutting off its corners:

TruncatedOctahedron

The truncated octahedron has the same symmetry group as the octahedron, which is the same as the symmetry group of the cube, since the cube and octahedron are dual.
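
For fun, here’s a quick computational sanity check of that claim, sketched in Python: the rotations of the cube permute its four long diagonals, so we can generate the rotation group from a quarter turn about a face axis (a 4-cycle on the diagonals) and a third of a turn about a diagonal (a 3-cycle on the other three), and count what we get:

    from itertools import permutations

    # a permutation of {0, 1, 2, 3} is a tuple p, sending i to p[i]
    a = (1, 2, 3, 0)  # quarter turn about a face axis: 4-cycle on the diagonals
    b = (1, 2, 0, 3)  # third of a turn about diagonal 3: 3-cycle on the others

    def compose(p, q):
        """(p . q)(i) = p[q[i]]"""
        return tuple(p[q[i]] for i in range(4))

    group = {(0, 1, 2, 3)}  # start from the identity...
    while True:             # ...and close up under the two generators
        new = {compose(g, h) for g in (a, b) for h in group} - group
        if not new:
            break
        group |= new

    print(len(group))                            # 24
    print(group == set(permutations(range(4))))  # True: all of 4!

So the 24 rotations of the cube realize every permutation of the four diagonals exactly once, which is another way of seeing where the 4! comes from.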


Thanks to Chris Aguilar for the Vectorized Playing Cards, which I used in one of the pictures here.

From the Poincaré group to Heisenberg doubles

25 September 2014

There’s a nice geometric way to understand the Heisenberg double of a Hopf algebra, using what one might call its “defining representation(s).” In fact, it’s based on the nice geometric way to understand any semidirect product of groups, so I’ll start with that.

First, consider the Poincaré group, the group of symmetries of Minkowski spacetime.  Once we pick an origin of Minkowski spacetime, making it into a vector space \mathbb{R}^{3,1}, the Poincaré group becomes a semidirect product

\mathrm{ISO}(3,1)\cong\mathbb{R}^{3,1} \rtimes \mathrm{SO}(3,1)

and the action on \mathbb{R}^{3,1} can be written

(v,g)\cdot x = v + g x

In fact, demanding that this be a group action is enough to determine the multiplication in the Poincaré group.  So, this is one way to think about the meaning of the multiplication law in the semidirect product.

In fact, there’s nothing so special about Minkowski spacetime in this construction.  More generally, suppose I’ve got a vector space V and a group G of symmetries of V.   Then V acts on itself by translations, and we want to form a group that consists of these translations as well as elements of G.  It should act on V by this formula:

(v,g)\cdot x = v + g x

Just demanding that this give a group action is enough to determine the multiplication in this group, which we call V \rtimes G.   I won’t bother writing the formula down, but you can if you like.
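
(In case you do want it spelled out: demanding that

(v,g)\cdot\big((w,h)\cdot x\big) = \big((v,g)(w,h)\big)\cdot x

hold for all x, and computing the left side as v + g(w + hx) = (v + gw) + (gh)x, forces the multiplication to be

(v,g)(w,h) = (v + gw, gh)

which is the usual semidirect product formula.)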

In fact, there’s nothing so special about V being a vector space in this construction.  All I really need is an abelian group H with a group G of symmetries.  This gives us a group H\rtimes G, whose underlying set is H \times G, and whose multiplication is determined by demanding that

(h,g) \cdot h' = h + g h'

is an action.

In fact, there’s nothing so special about H being abelian.  Suppose I’ve got a group H with a group G of symmetries.  This gives us a group H\rtimes G, built on the set H \times G, and with multiplication  determined by demanding that

(h,g)\cdot x = h (g x)

give an action on H.  Here gx denotes the action of g\in G on x\in H, and h(gx) is the product of h and gx.

For example, if H is a group and G=\mathrm{Aut}(H) is the group of all automorphisms of H, then the group H\rtimes \mathrm{Aut}(H) is called the holomorph of H.

What I’m doing here is defining H \rtimes G as a concrete group: it’s not just some abstract group as might be defined in an algebra textbook, but rather a specific group of transformations of something, in this case transformations of H.  And, if you like Klein Geometry, then whenever you see a concrete group, you start wondering what kind of geometric structure gets preserved by that group.

So: what’s the geometric meaning of the concrete group H \rtimes G?  This really involves thinking of H in two different ways: as a group and as a right torsor of itself.  The action of G preserves the group structure by assumption: it acts by group automorphisms.  On the other hand, the action of H by left multiplication is by automorphisms of H as a right H space.  Thus,  H \rtimes G preserves a kind of geometry on H that combines the group and torsor structures.  We can think of these as a generalization of the “rotations” and “translations” in the Poincaré group.

But I promised to talk about the Heisenberg double of a Hopf algebra.

In fact, there’s nothing so special about groups in the above construction.  Suppose H is a Hopf algebra, or even just an algebra, and there’s some other Hopf algebra G that acts on H as algebra automorphisms.  In Hopf algebraists’ lingo, we say H is a “G module algebra”.  In categorists’ lingo, we say H is an algebra in the category of G modules.

Besides the Hopf algebra action, H also acts on itself by left multiplication.  This doesn’t preserve the algebra structure, but it does preserve the coalgebra structure: H is an H module coalgebra.

So, just like in the group case, we can form the semidirect product, sometimes also called a “smash product” in the Hopf algebra setting, H \rtimes G, and again the multiplication law in this is completely determined by its action on H. We think of this as a “concrete quantum group” acting as two different kinds of “quantum symmetries” on H—a “point-fixing” one preserving the algebra structure and a “translational” one preserving the coalgebra structure.

The Heisenberg double is a particularly beautiful example of this.   Any Hopf algebra H is an H^* module algebra, where H^* is the Hopf algebra dual to H.  The action of H^* on H is the “left coregular action” \rightharpoonup defined as the dual of right multiplication:

\beta(\alpha\rightharpoonup h) = (\beta\alpha)(h)

for all h\in H and all \alpha,\beta \in H^*.

One could use different conventions for defining the Heisenberg double, of course, but not as many as you might think.  Here’s an amazing fact:

H \rtimes H^* = H \ltimes H^*

So, while I often see \rtimes and \ltimes confused, this is the one case where you don’t need to remember the difference.

But wait a minute—what’s that “equals” sign up there?   I can hear my category theorist friends snickering.  Surely, they say, I must mean H \rtimes H^* is isomorphic to H \ltimes H^*.

But no.  I mean equals.

I defined H \rtimes H^* as the algebra structure on H \otimes H^* determined by its action on H, its “defining representation.”   But every natural construction with Hopf algebras has a dual.  I could have instead defined an algebra H \ltimes H^* as the algebra structure on H \otimes H^* determined by its action on H^*.  Namely, H^* acts on itself by right multiplication, and H acts on H^* by the right coregular action.  These are just the duals of the two left actions used to define H \rtimes H^*.

That’s really all I wanted to say here.  But just in case you want the actual formulas for the Heisenberg double and its defining representations, here they are in Sweedler notation:

Read the rest of this entry »

Hopf algebroids and (quantum) groupoids (Part 2)

8 September 2014

Last time I defined weak Hopf algebras, and claimed that they have groupoid-like structure. Today I’ll make that claim more precise by defining the groupoid algebra of a finite groupoid and showing that it has a natural weak Hopf algebra structure. In fact, we’ll get a functor from finite groupoids to weak Hopf algebras.

First, recall how the group algebra works. If G is a group, its group algebra is simply the vector space spanned by elements of G, and with multiplication extended linearly from G to this vector space. It is an associative algebra and has a unit, the identity element of the group.

If G is a groupoid, we can similarly form the groupoid algebra. This may seem strange at first: you might expect to get only an algebroid of some sort. In particular, whereas for the group algebra we get the multiplication by linearly extending the group multiplication, a groupoid has only a partially defined “multiplication”—if the source of g differs from the target of h, then the composite gh is undefined.

However, upon “linearizing”, instead of saying gh is undefined, we can simply say it’s zero unless s(g)=t(h). This is essentially all there is to the groupoid algebra.  The groupoid algebra \mathbb{C}[G] of a groupoid G is the vector space with basis the morphisms of G, with multiplication given on this basis by composition whenever this is defined, zero when undefined, and extended linearly from there.

It’s easy to see that this gives an associative algebra: the multiplication is bilinear since we define it on a basis and extend linearly, and it’s associative since composition in the groupoid is. It is a unital algebra if and only if the groupoid has finitely many objects, and in this case the unit is the sum of all of the identity morphisms.
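
To make this concrete, here’s a minimal sketch in Python, using the pair groupoid on a finite set as the example (chosen purely for illustration; any finite groupoid with its composition table would do).  An algebra element is stored as a dictionary from basis morphisms to coefficients:

    # Pair groupoid on X: a basis morphism (y, x) goes from x to y,
    # and (z, y)(y, x) = (z, x).  An element of the groupoid algebra
    # is a dict {basis morphism: coefficient}.
    X = ["a", "b", "c"]

    def mult(u, v):
        """Bilinear extension of composition: composites that are undefined
        (source of the left factor != target of the right) contribute zero."""
        out = {}
        for (y1, x1), a in u.items():
            for (y2, x2), b in v.items():
                if x1 == y2:  # composable?
                    out[(y1, x2)] = out.get((y1, x2), 0) + a * b
        return out

    unit = {(x, x): 1 for x in X}  # the unit: sum of all identity morphisms

    # a single basis morphism g: a -> b, and a check of unitality
    g = {("b", "a"): 1}
    assert mult(unit, g) == g and mult(g, unit) == g

    # The coalgebra structure and antipode (described below) are the linear
    # extensions of Δ(g) = g ⊗ g, ε(g) = 1, and S((y, x)) = (x, y).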

Mainly to avoid saying “groupoids with finitely many morphisms”, I’ll just stick to finite groupoids from now on, where the sets of objects and morphisms are both finite.

If we have a groupoid homomorphism, then we get an algebra homomorphism between the corresponding groupoid algebras, also by linear extension. So we get a functor

\mathbb{C}[\cdot]\colon\mathbf{FinGpd} \to \mathbf{Alg}

from the category of finite groupoids to the category of unital algebras.

But in fact, this extends canonically to a functor

\mathbb{C}[\cdot]\colon\mathbf{FinGpd} \to \mathbf{WHopf}

from the category of finite groupoids to the category of weak Hopf algebras.

To see how this works, notice first that there’s a canonical functor

\mathbb{C}[\cdot]\colon\mathbf{Set} \to \mathbf{Coalg}

from the category of sets to the category of coalgebras:  Every set is a comonoid in a unique way, so we just linearly extend that comonoid structure to a coalgebra.

In case that’s not clear to you, here’s what I mean in detail.  Given a set X, there is a unique map \Delta\colon X \to X \times X that is coassociative, namely the diagonal map \Delta(x) = (x,x). This is easy to prove, so do it if you never have.  Also, there is a unique map to the one-element set \epsilon\colon X \to \{0\}, up to the choice of which one-element set to use.  Linearly extending \Delta and \epsilon gives a coalgebra structure on the vector space with basis X. Moreover, any function between sets is a homomorphism of comonoids, and its linear extension to the free vector spaces on these sets is thus a homomorphism of coalgebras.  This gives us our functor from sets to coalgebras.

So, given a finite groupoid, the vector space spanned by its morphisms becomes both an algebra and a coalgebra.  An obvious question is: do the algebra and coalgebra structure obey some sort of compatibility relations?  The answer, as I already gave away at the beginning, is that they form a weak Hopf algebra.  The antipode is just the linear extension of the inversion map g \mapsto g^{-1}.

(More generally, for those who care, the category algebra \mathbb{C}[C] of a finite category C (or any category with finitely many objects) is a weak bialgebra, and we actually get a functor

\mathbb{C}[\cdot] \colon \mathbf{FinCat} \to \mathbf{WBialg}

from finite categories to weak bialgebras.  If C happens to be a groupoid, \mathbb{C}[C] is a weak Hopf algebra; if it happens to be a monoid, \mathbb{C}[C] is a bialgebra; and if it happens to be a group, \mathbb{C}[C] is a Hopf algebra. )

This is nice, but have we squashed out all of the lovely “oid”-iness from our groupoid when we form the groupoid algebra? In other words, having built a weak Hopf algebra on the vector space spanned by morphisms, is there any remnant of the original distinction between objects and morphisms? 

As I indicated last time, the key is in these two “loop” diagrams:

 WHA-target-source

The left loop says to comultiply the identity, multiply the first part of this with an element g and apply the counit. Let’s do this for a groupoid algebra, where 1 = \sum_x 1_x, where the sum runs over all objects x.  Since comultiplication duplicates basis elements, we get

\Delta(1) = \sum_x 1_x \otimes 1_x

We then get:

g\mapsto \sum_x \epsilon(1_x\cdot g)\, 1_x = 1_{t(g)}

using the definition of multiplication and the counit in the groupoid algebra.  Similarly, the loop going in the other direction gives g \mapsto 1_{s(g)}, as anticipated last time.
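
Continuing the little pair-groupoid sketch from above, we can compute this loop map quite literally from the multiplication, the counit (the sum of coefficients), and the decomposition 1 = \sum_x 1_x:

    def loop_target(u):
        """The left 'loop': u -> sum over x of ε(1_x · u) 1_x."""
        out = {}
        for x in X:
            coeff = sum(mult({(x, x): 1}, u).values())  # ε(1_x · u)
            if coeff:
                out[(x, x)] = out.get((x, x), 0) + coeff
        return out

    # on a basis morphism g: a -> b, this lands on the identity at the target:
    assert loop_target({("b", "a"): 1}) == {("b", "b"): 1}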

So, we can see that the image of either of the two “loop” diagrams is the subspace spanned by the identity morphisms.  This is a commutative subalgebra of the groupoid algebra, and these maps are both idempotent algebra homomorphisms.  So, they give “projections” onto the “algebra of objects”.   

In fact, something like this happens in the case of a more general weak Hopf algebra.  The maps described by the “loop” diagrams are again idempotent homomorphisms, and we can think of them as analogs of the source and target maps.  But there are some differences, too.  For instance, their images need not be the same in general, though they are isomorphic.  The images also don’t need to be commutative.  This starts hinting at what Hopf algebroids are like.

But I’ll get into that later.


Hopf algebroids and (quantum) groupoids (Part 1)

1 September 2014

I’ve been thinking a lot about weak Hopf algebras and Hopf algebroids, especially in relation to work I’m doing with Catherine Meusburger on applications of them to gauge theory.  I don’t want to talk here yet about what we’re working on, but I do feel like explaining some basic ideas.  This is all known material, but it’s fun stuff and deserves to be better known.

First of all, as you might guess, the “-oid” in “Hopf algebroid” is supposed to be like the “-oid” in “groupoid”.  Groupoids are a modern way to study symmetry, and they do things that ordinary groups don’t do very well.  If you’re not already convinced groupoids are cool—or if you don’t know what they are—then one good place to start is with Alan Weinstein’s explanation of them here:

There are two equivalent definitions of groupoid, an algebraic definition and a categorical definition.  I’ll use mainly categorical language.  So for me, a groupoid is a small category in which all morphisms are invertible.  A group is then just a groupoid with exactly one object.

Once you’ve given up the attachment to the special case of groups and learned to love groupoids, it seems obvious you should also give up the attachment to Hopf algebras and learn to love Hopf algebroids.  That’s one thing I’ve been doing lately.

My main goal in these posts will be to explain what Hopf algebroids are, and how they’re analogous to groupoids.  I’ll build up to this slowly, though, without even giving the definition of Hopf algebroid at first.  Of course, if you’re eager to see it, you can always cheat and look up the definition here:

but I’ll warn you that the relationship to groupoids might not be obvious at first.  At least, it wasn’t to me.  In fact, going from Hopf algebras to Hopf algebroids took considerable work, and some time to settle on the correct definition. But the definition of Hopf algebroid given in Böhm’s paper seems to be the one left standing after the dust settled.  This review article also includes a brief summary of the development of the subject.

To work up to Hopf algebroids, I’ll start with something simpler: weak Hopf algebras. These are a rather mild generalization of Hopf algebras, and the definition doesn’t look immediately “oid”-ish. But in fact, they are a nice compromise between Hopf algebras and Hopf algebroids.  In particular, as we’ll see, just as a group has a Hopf algebra structure on its group algebra, a groupoid has a weak Hopf algebra structure on its groupoid algebra.

Better yet, any weak Hopf algebra can be turned into a Hopf algebroid, and Hopf algebroids built in this way are rich enough to see many of the features of general Hopf algebroids. So, I think this gives a pretty good way to understand Hopf algebroids, which might otherwise seem obscure at first. The strategy will be to start with weak Hopf algebras and consider what “groupoid-like” structure is already present. In fact, to emphasize how well they parallel ordinary groupoids, weak Hopf algebras are sometimes called quantum groupoids:

So, here we go…

What is a weak Hopf algebra?  This is quick to define using string diagrams.  First, let’s define a weak bialgebra.  Just like a bialgebra, a weak bialgebra is both an associative algebra with unit:

WHA-alg

and a coassociative coalgebra with counit:

wha-coalg

(If the meaning of these diagrams isn’t clear, you can learn about string diagrams in several places on the web, like here or here.)

Compatibility of multiplication and comultiplication is also just like in an ordinary bialgebra or Hopf algebra:

WHA-compat

So the only place where the axioms of a weak bialgebra are “weak” is in the compatibility between unit and comultiplication and between counit and multiplication.  If we define these combinations:

WHA-adj

then the remaining axioms of a weak bialgebra can be drawn like this:

WHA-unit

The two middle pictures in these equations have not quite been defined yet, but I hope it’s clear what they mean. For example, the diagram in the middle on the top row means either of these:

Screen shot 2014-08-22 at 4.29.47 PM

since these are the same by associativity.

Just as a Hopf algebra is a bialgebra with an antipode, a weak Hopf algebra is a weak bialgebra with an antipode.  The antipode is a linear map S which I’ll draw like this:

WHA-antipode

and it satisfies these axioms:

WHA-s1-3

Like in a Hopf algebra, having an antipode isn’t additional structure on a weak Hopf algebra, but just a property: a weak bialgebra either has an antipode or it doesn’t, and if it does, the antipode is unique.  The antipode also has most of the properties you would expect from Hopf algebra theory.

One thing to notice is that the equations defining a weak Hopf algebra are completely self-dual.  This is easy to see from the diagrammatic definition given here, where duality corresponds to a rotation of 180 degrees: rotate all of the defining diagrams and you get the same diagrams back.  Luckily, even the letter S is self-dual.

There’s plenty to say about weak Hopf algebras themselves, but here I want to concentrate on how they are related to groupoids, and ultimately how they give examples of Hopf algebroids.

To see the “groupoidiness” of weak Hopf algebras, it helps to start at the bottom: the antipode axioms.  In particular, look at this one:

WHA-s2

The left side instructs us to duplicate an element, apply the antipode to the copy on the right, and then multiply the two copies together.  If we do this to an element of a group, where the antipode is the inversion map, we get the identity.  If we do it to a morphism in a groupoid, we get the identity on the target of that morphism. So, in the groupoid world, the left side of this equation is the same as applying the target map, and then turning this back into a morphism by using the map that sends any object to its identity morphism.  That is:

g \mapsto 1_{t(g)}

where t is the map sending each morphism to its target, and 1_x denotes the identity morphism on the object x.  
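
Spelled out on a basis morphism g of a groupoid algebra, where comultiplication duplicates basis elements and the antipode inverts them, the whole composite reads

m(\mathrm{id}\otimes S)\Delta(g) = m(g\otimes g^{-1}) = g\,g^{-1} = 1_{t(g)}

where m denotes multiplication and \Delta comultiplication.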

Likewise, consider the dual of the previous axiom:

WHA-s3

In the groupoid world, the left hand side gives the map

g \mapsto 1_{s(g)}

where s denotes the map sending each morphism to its source. 

So… if weak Hopf algebras really are like groupoids, then these two loop diagrams:

WHA-target-source

must essentially be graphical representations of the target and source maps.

Of course, I only said if weak Hopf algebras are like groupoids, and I haven’t yet explained any precise sense in which they are.   But we’re getting there.  Next time, I’ll explain more, including how groupoid algebras give weak Hopf algebras.

Meanwhile, if you want some fun with string diagrams, think of other things that are true for groupoids, and see if you can show weak Hopf algebra analogs of them using just diagrams.  For example, you can check that the diagrammatic analog of 1_{s(1_{s(g)})}=1_{s(g)} (“the source of the source is the source”) follows from the weak Hopf algebra axioms.  Some others hold only after a trivial rephrasing: while the obvious diagrammatic translation of 1_{t(S(g))} = 1_{s(g)} does not hold, if you draw it instead starting from the equation 1_{t(S(g))} = S(1_{s(g)}), then you get an equation that holds in any weak Hopf algebra.

Blog post on Observer Space by Jeffrey Morton

17 October 2012

DW: This is a very nice blog post by Jeffrey Morton about observer space! He wrote this based on my ILQGS talk and my papers with Steffen Gielen. (In fact, Jeff has written a lot of other nice summaries of papers and talks, as well as stuff about his own research, on his blog, Theoretical Atlas — check it out!)

Theoretical Atlas

This entry is a by-special-request blog, which Derek Wise invited me to write for the blog associated with the International Loop Quantum Gravity Seminar, and it will appear over there as well.  The ILQGS is a long-running regular seminar which runs as a teleconference, with people joining in from various countries, on various topics which are more or less closely related to Loop Quantum Gravity and the interests of people who work on it.  The custom is that when someone gives a talk, someone else writes up a description of the talk for the ILQGS blog, and Derek invited me to write up a description of his talk.  The audio file of the talk itself is available in .aiff and .wav formats, and the slides are here.

The talk that Derek gave was based on a project of his and Steffen Gielen’s, which has taken written form in a…

View original post 2,961 more words