Archive for the ‘Geometry’ Category

Discrete robot motion planning in a continuous domain

28 February 2019

A robot has a discrete configuration space if it has a discrete set of possible actions at each step, right?

Wrong! In fact, even in a perfectly idealized world where there is zero error in motion, a robot with only discrete actions can effectively reach a continuum of different configurations. This can make motion planning for such robots very interesting.

Here’s a robot I built when I was about 15 years old:

my robot

I don’t have this robot anymore.  I gave it to the local jawas in the California desert before moving to Germany.   But now, a weekend’s worth of programming let me bring it back to life in a virtual world.  Plus, I’ve made it more intelligent than it ever was in its previous life.  Bwahhh-ha-ha!

Let me explain what’s interesting about this robot, and why I coded it into a virtual world.

The original robot could not do much.  It could rotate and move forward or backward only in fixed increments, and I could pre-program a sequence of moves ahead of time, which it would then execute.  I know—not very impressive as robots go these days.

But the limitations of this robot are also what make it interesting.  Since it only had discrete motions, and rather coarse ones at that, getting it to move from one location to another without running into things was an interesting challenge!

An ideal discrete robot

Let’s consider an idealized version of this robot which can do only two things:

  1. Move forward or backward by exactly 1 meter.
  2. Rotate in place in increments of exactly 45 degrees.

The robot I built was very much like this, although it moved some shorter distance (1 m is just a convenient number), and of course it executed neither of the two possible moves with perfect consistency or accuracy.  Still, it was rather good at doing what it was designed to do.

In fact, we can idealize this robot just a bit more.  Let’s ignore the need for the robot to rotate and just assume it can move instantly in any of eight directions, each separated by 45 degrees from the last.  Thus, if we point the robot Northward, at each stage it can move in any of the eight basic compass directions: N, S, E, W, NW, NE, SE, and SW.
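
In coordinates, with North along the positive y-axis (just a choice of convention on my part), these eight moves displace the robot by (0, \pm 1), (\pm 1, 0), and (\pm\frac{\sqrt{2}}{2}, \pm\frac{\sqrt{2}}{2}), each a vector of length exactly 1 meter.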

In a wide open space with nothing to run into, if you give our idealized robot a pen (like the Logo turtle) and send it out on a random walk, you’ll see something like this:

robot-random-walk

There’s something important to observe here: the locations where the robot stopped between moves, denoted by dots, can end up being very close together, even though the robot always moves in steps of one meter.

In fact, the robot can get essentially anywhere. The positions the robot can get to form a dense subset of the plane: given enough moves, it can stop as close as you wish to any destination point you pick.  I say it can “stop” close to the destination because I don’t want to count it as reaching its destination if it just rolls over the destination in transit without actually stopping there.  That would be cheating!

For example, suppose the robot is at (0, 0) and you want to program it to get to (0, 0.5).  Just telling the robot to move straight toward (0, 0.5) won’t work: it will drive through the point (0, 0.5), but it won’t stop there; it will keep going all the way to (0, 1).  In fact, you’ll never be able to reach (0, 0.5) exactly, but if you’re satisfied with getting within 0.1 meters of the destination, then there’s an easy solution with only 3 steps:

halfstep-within-0.1

The red x marks the goal at (0, 0.5), and the final resting spot of the robot, marked by a red dot, is within 0.1 of this.  More precisely, it’s easy to see that the robot comes to rest at the point (0,\sqrt{2}-1), or approximately (0, 0.4142), which is within our acceptable error of 0.1 m.
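
Just to check that arithmetic, here’s a tiny Python snippet.  Here I’ve taken the three moves to be NE, NW, and then S; any ordering of those three steps lands at the same spot, whether or not it matches the order in the picture.

```python
import math

s = math.sqrt(2) / 2                 # x- and y-component of a diagonal step

# A 3-move sequence: NE, then NW, then S.
moves = [(s, s), (-s, s), (0.0, -1.0)]

x, y = 0.0, 0.0
for dx, dy in moves:
    x, y = x + dx, y + dy

goal = (0.0, 0.5)
print((x, y))                                  # (0.0, 0.414...), i.e. (0, sqrt(2) - 1)
print(math.hypot(x - goal[0], y - goal[1]))    # about 0.086, within the 0.1 m tolerance
```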

If we’re not satisfied with getting within 0.1 m, and we’d rather get within, say, 0.05 m (5 cm) of the goal, we can still do it with a bit more thought.  Here’s a way to do it with 20 moves, not counting rotations:

halfstep-within-0.05

Of course, solutions are not at all unique: Here are two more solutions to the same problem:

However, each of these also consists of 20 moves.  In fact, these are all minimal solutions: every solution to this problem, at the specified precision of 5 cm, takes at least 20 moves, not counting rotations.

Of course, you could also demand even more precision.  If you want to get from (0, 0) to (0, 0.5) to within a distance of 0.01, i.e. within 1cm, given our step size of 1m, you’ll need at least 119 moves:

halfstep-within-0.01

Yikes!

Finding a plan to get from one location to another is an example of “motion planning” in robotics, and doing this kind of motion planning is one of the ways I had fun with the original robot when I was a teenager.  I learned to do some of this intuitively just by getting a feel for how to use combinations of moves.  Of course, that kind of intuition never let me do anything so complicated as finding the 119 moves above!

Motion planning on a hypercube lattice

The virtual reincarnation of my old robot, despite the disadvantage of lacking any corporeal existence, has one big advantage over its former self: it can solve the motion planning problem itself.

To  do this, it uses a classic AI tool—A* search.  This is an algorithm for finding minimal paths through weighted graphs, and is commonly used for route finding in map programs, solving puzzles, and planning complex sequences of actions.

In the robot’s case, although it occupies a 2d world which we identify with a subset of \mathbb{R}^2, the A* search is carried out instead on the 4-dimensional lattice \mathbb{Z}^4, whose four coordinates track the robot’s net number of steps along each of the four axes it can travel in physical space: the N-S, E-W, NE-SW, and SE-NW directions.

In other words, the robot really thinks of itself as living on a hypercube lattice.  You may be used to seeing pictures of hypercubes in one of their 3d projections, like this:

800px-Hypercube

However, if you project a hypercube down into two dimensions in such a way that all of the edges have the same length in the projection, you can get this:

1024px-4-cube_graph

This picture of a hypercube is something I could draw with my robot!  All of the angles are multiples of 45 degrees, and all of the edges have the same length.  (Plus, I can draw it in one pass: every vertex has even degree, so there is an Euler circuit.)

Of course, there’s a deep reason for this: the real configuration space of the robot, still ignoring the robot’s orientation, is a projection of a hypercube lattice into \mathbb{R}^2.  The vertices of this lattice are the states the robot can occupy at rest, while the edges represent the moves, or allowed transitions between states.

The entire hypercube lattice is like an ordinary cubical lattice:

Cubic_honeycomb

but one dimension higher, and instead of 8 cubes meeting at each vertex, it has 16 hypercubes meeting at each vertex.  This is rather complicated to draw, and I won’t show a picture of the hypercube lattice.  However, for your amazement, here is a picture I drew of its projection into two dimensions:

Screenshot from 2019-02-28 11-35-51

There’s nothing wrong with this picture—the projection of the lattice is dense in \mathbb{R}^2, as my robot already demonstrated, remember! :-)

Explicitly, the basic projection \mathbb{Z}^4 \to \mathbb{R}^2 can be written

(m, n, a, b) \mapsto (x,y)

where

\begin{array}{rcl}  x &=& m + \frac{1}{2}(a+b)\sqrt{2} \\  y &=& n + \frac{1}{2}(a-b)\sqrt{2}\end{array}

The eight possible moves—or in other words, the edges in the hypercube lattice—correspond to incrementing or decrementing one of the coordinates m, n, a, b.  You can check that these project exactly to the eight allowed robot motions.

I combine this projection into \mathbb{R}^2 with a translation and rotation, so that the robot’s current position in the lattice is always (0,0,0,0).
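
Here’s a minimal sketch of this projection and the eight moves in Python.  It isn’t my actual code (the names are just for illustration), but it captures the bookkeeping:

```python
import math

S = math.sqrt(2) / 2

def project(state):
    """Project a lattice point (m, n, a, b) in Z^4 to (x, y) in R^2."""
    m, n, a, b = state
    return (m + (a + b) * S, n + (a - b) * S)

# The eight moves: increment or decrement exactly one of the four coordinates.
MOVES = [tuple(step if j == i else 0 for j in range(4))
         for i in range(4) for step in (1, -1)]

# Sanity check: each move displaces the projected position by exactly 1 meter.
for move in MOVES:
    x, y = project(move)
    assert abs(math.hypot(x, y) - 1.0) < 1e-12
```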

A* Search implementation

To solve the motion planning problem, the virtual robot uses A* search on the hypercube lattice.  This searches through the tree of paths starting at the robot’s current position (0,0,0,0) in the lattice \mathbb{Z}^4 until it finds a minimal-length route to some lattice point whose projection into \mathbb{R}^2 is less than the specified tolerance away from the destination.  At each stage, the allowed moves are those edges in the hypercube lattice whose projections neither leave the “room” containing the robot nor intersect any obstacles.

A* search also relies on a “heuristic distance”—a way of conservatively estimating the distance remaining to the goal.  This is because the tree of paths is traversed in order of the total estimated length of each route.  The advantage of this over a breadth-first search, which would in principle also find a minimal solution, is that far fewer paths need to be explored.

An effective heuristic distance in the case of the robot, at least for relatively small problems, comes from again using the projection from \mathbb{Z}^4 to \mathbb{R}^2 and checking the Euclidean distance to the specified goal.
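
Here’s a rough, self-contained sketch of how such a search might look in Python.  It isn’t my actual implementation (obstacle checking is left as a stub, and the names are made up), but it follows the scheme just described: A* over \mathbb{Z}^4, with the projected Euclidean distance as the heuristic.

```python
import heapq
import math

S = math.sqrt(2) / 2

def project(state):
    """Project (m, n, a, b) in Z^4 to (x, y) in R^2."""
    m, n, a, b = state
    return (m + (a + b) * S, n + (a - b) * S)

# The eight moves: change exactly one of the four lattice coordinates by +/-1.
MOVES = [tuple(step if j == i else 0 for j in range(4))
         for i in range(4) for step in (1, -1)]

def heuristic(state, goal, tol):
    """Euclidean distance to the goal region.  Each move covers exactly 1 m,
    so this never overestimates the number of moves still needed."""
    x, y = project(state)
    return max(0.0, math.hypot(x - goal[0], y - goal[1]) - tol)

def plan(goal, tol, allowed=lambda state: True):
    """A* over Z^4 from (0,0,0,0) to any lattice point projecting to within
    tol meters of goal.  The 'allowed' test is where walls and obstacles go."""
    start = (0, 0, 0, 0)
    best = {start: 0}
    frontier = [(heuristic(start, goal, tol), 0, start, [])]
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if heuristic(state, goal, tol) == 0.0:
            return path                          # a minimal list of lattice moves
        for move in MOVES:
            nxt = tuple(c + d for c, d in zip(state, move))
            if not allowed(nxt):
                continue
            if cost + 1 < best.get(nxt, math.inf):
                best[nxt] = cost + 1
                f = cost + 1 + heuristic(nxt, goal, tol)
                heapq.heappush(frontier, (f, cost + 1, nxt, path + [move]))
    return None

# Example: get within 5 cm of the point half a meter ahead.
print(len(plan((0.0, 0.5), 0.05)))               # 20, as in the example above
```

Since each move covers exactly one meter in the plane, the projected Euclidean distance never overestimates the number of moves remaining, which is what makes it a safe heuristic for A*.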

The big point here is that although the robot’s shortest path between two points in a room might not look to us much like a straight line:

robot-stright-line

it’s still a minimal path in a 4-dimensional lattice, projected down to the plane.

Real robots

Obviously, my virtual robot is an idealization.  For the real robot it is based on, two kinds of uncertainty come into play: uncertainty in the current configuration, and uncertainty in the moves from one configuration to another.  The planning problem must interact with the “robot localization” problem.

First, there is uncertainty in the robot’s current position and orientation.  For example, even assuming the robot can move with perfect accuracy, optimal planning is very sensitive to errors in angular placement (“attitude”).  If you want to get from (0,0) to (1, 2), and the robot’s placement is very accurate, it will move like a knight:

knight-0degrees

But rotate its starting position just 2 degrees to the left, and you’ll need 16 moves to get to the same point with an accuracy of ±0.05 m:

knight-2degrees-left

Rotate it 2 degrees to the right, and you’ll need 20 moves to get the same accuracy:

knight-2degrees-right

Small perturbations in the position of the robot obviously have a similar effect.

More importantly, the moves themselves are not completely accurate.  On the real robot, I had a potentiometer that adjusted the relative speed of the two side wheels to calibrate the forward motion.  Imperfect calibration means not driving in a completely straight line.  Similarly, during a rotation, there was surely some tiny change in position as well.

When we account for both types of uncertainty, there is a probability distribution for the initial configuration of the robot, including position and orientation.  There is also a probability distribution for the result of each possible rotation or translation move.  Using Bayesian probability, these two distributions are convolved each time we make a move, in order to get the probability distribution for the updated configuration.
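
As a toy illustration of that convolution step, here’s a one-dimensional sketch with made-up numbers, tracking only the robot’s x-coordinate on a coarse 10 cm grid:

```python
import numpy as np

# Belief about the robot's x-position, on a 10 cm grid starting at x = 0.0 m:
# the robot is probably near x = 0.1 m.
belief = np.array([0.1, 0.8, 0.1])
belief_x0 = 0.0

# Motion model for "drive forward 1 m": the actual displacement is
# 0.9, 1.0, or 1.1 m with these probabilities.
motion = np.array([0.2, 0.6, 0.2])
motion_d0 = 0.9

# The updated belief is the convolution of the two distributions,
# living on a grid that starts at belief_x0 + motion_d0.
new_belief = np.convolve(belief, motion)
xs = belief_x0 + motion_d0 + 0.1 * np.arange(len(new_belief))

for x, p in zip(xs, new_belief):
    print(f"{x:.1f} m: {p:.2f}")
# The peak is near x = 1.1 m, but the distribution is more spread out than
# before the move: uncertainty accumulates with every move we make.
```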

The accumulation of errors makes accurate motion planning impossible unless we give the robot the ability to “sense” its environment, use sensory data to update its configuration distribution, and adjust the plan accordingly.  But that’s a discussion for another time.

3d Fractal models

29 March 2018

My student Colton Baumler has been printing 3d versions of some of the fractal designs I posted about a few months ago:

IMG_9818

That’s his three-dimensional interpretation of the first few iterations of this design of mine:


octagon-snowflake

What’s fun about Colton’s version is that each new layer of squares is printed a bit taller than the previous layer.  I had really only imagined these as two-dimensional objects, so for me it’s really fun to have 3-dimensional models of them to hold and play with!  Colton’s idea of adding some depth really adds another … er … dimension to the overall effect:

IMG_8738

His work also gives a nice way to illustrate some of the features of these fractals.  For example, visually proving that the “inside” and “outside” in my fractals converge to the same shape can be done by printing the same model at different scales.  Here are three copies of the same fractal at different scales, each printed with the same number of iterations:

IMG_9830

Not only do these nest tightly inside each other, the thickness is also scaled down by the same ratio, so that the upper surfaces of each layer are completely flush.

Colton has been doing this work partly because designing fractals is a great way to learn 3d printing, and he’s now getting some impressively accurate prints.  But, I also like some of his earlier rough drafts.  For example, in his first attempt with this fractal based on triangles:


koch6

there were small gaps between the triangles, which Colton hadn’t intended.  But, this gave the piece a sort of rough, edgy look that I like, and it casts shadows like castle battlements:

IMG_9839

Colton is still doing nice new work, and we’ll eventually post some more pictures here. But I couldn’t wait to post a preview of some of his stuff!

IMG_9850

(Designs and photos © 2018 Colton Baumler and Derek Wise)

A decagonal snowflake from pentagons

25 June 2017

So far in these posts about fractals, I’ve shown (1) how letting triangles reproduce like this:

tri-rule

generates a bunch of copies of the Koch snowflake at different scales:


koch-outlined

Similarly, I’ve shown (2) how letting squares reproduce like this:

quad-rule

generates a bunch of copies of a fractal related to the Koch snowflake, but with 8-fold symmetry:


octagon-snowflake

So what about letting pentagons reproduce?  For pentagons, an analog of the replication rules above is this:

penta-rule

Each of the 10 cute little pentagon children here is a factor of \frac{1}{2+\varphi} smaller than its parent, where \varphi is the golden ratio.
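
In case a numerical value helps: writing \varphi = \frac{1+\sqrt{5}}{2}, this scale factor works out to

\frac{1}{2+\varphi} = \frac{2}{5+\sqrt{5}} = \frac{5-\sqrt{5}}{10} \approx 0.2764.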

However, something interesting happens here that didn’t happen with the triangle and square rules. While triangles and squares overlap with their ancestors, pentagons overlap with both their ancestors and their cousins.  The trouble is that certain siblings already share faces (I know, the accidental metaphors here are getting troublesome too!), and so siblings’ children have to fight over territory:

penta-3gens

In this three-generation pentagon family portrait, you can see that each second generation pentagon has two children that overlap with a cousin.

As we carry this process further, we get additional collisions between second cousins, third cousins, and so on.  At five generations of pentagons, we start seeing some interestingly complex behavior develop from these collisions:

penta1

There’s a lot of fascinating structure here, and much of it is directly analogous to the 6-fold and 8-fold cases above, but there are also some differences, stemming from the “cousin rivalry” that goes on in pentagon society.

Let’s zoom in to see some collisions near where the two ‘wreaths’ meet on the right side of the picture:

penta1zoom

I find the complicated behavior at the collisions quite pretty, but the ordering issues (i.e. which members of a given generation to draw first when they overlap) annoy me somewhat, since they break the otherwise perfect decagonal symmetry of the picture.

If I were doing this for purely artistic purposes, I’d try resolving the drawing order issues to restore as much symmetry as possible. Of course, I could also cheat and restore symmetry completely by not filling in the pentagons, so that you can’t tell which ones I drew first:

penta1zoomed-outline

It’s cool seeing all the layers at once in this way, and it shows just how complex the overlaps can start getting after a few generations.

Anyway, because of these collisions, we don’t seem to get a fractal tiling of the plane—at least, not like we got in the previous cases, where the plane simply keeps getting further subdivided into regions that converge to tiles of the same shape at different scales.

Actually, though, we still might get a fractal tiling of the plane, if the total area of overlap of nth generation pentagons shrinks to zero as n goes to infinity!  That would be cool.  But, I don’t know yet.

In any case, the picture generated by pentagons is in many ways very similar to the pictures generated by triangles and squares.  Most importantly, all of the similar-looking decagonal flower-shaped regions we see in this picture, including the outer perimeter, the inner light-blue region, and tons of smaller ones:

penta1

really are converging to the same shape,  my proposal for the 10-fold rotationally symmetric analog of the Koch snowflake:

snowflake-deca

How do we know that all of these shapes are converging to the same fractal, up to rescaling?  We can get a nice visual proof by starting with two pentagons, one rotated and scaled down from the other, and then setting our replication algorithm loose on both of them:

Proof:

pentagon-proof

We see that the area between the two fractal curves in the middle shrinks toward zero with each generation.

Puzzle for Golden Ratio Fans: What is the exact value of the scaling factor relating the two initial pentagons?

Next up in this infinite series of articles: hexagons!  …

I’m joking!  But, it’s fairly clear we can keep ascending this ladder to get analogs of the Koch snowflake generated by n-gons, with (2n)-fold rotational symmetry.  More nice features might be sacrificed as we go up; in the case generated by hexagons, we’d have collisions not only between cousins, but already between siblings.

More fractal fun

23 June 2017

In the previous article, I explained how the famous Koch snowflake can be built in a different way, using “self-replicating” triangles. This was a revelation for me, because I had always thought of the Koch snowflake as fundamentally different from other kinds of fractals like the Sierpinski triangle, and now I think of them as being basically the same.

In the Sierpinski triangle, each triangle yields three new, scaled-down triangles, attached at the midpoints of sides of the original, like this:

steps-sierpinski

These triangles are usually thought of as “holes” cut out of a big triangle, but all I care about here is the pattern of the triangles.  As I explained last time, the Koch snowflake can be built in a similar way, where each triangle produces six new ones, like this:

steps-koch

You might say this bends the usual rules for making fractals since some of the triangles overlap with their ancestors.  But, it makes me happy because it lets me think of the Sierpinski triangle and the Koch snowflake as essentially the same kind of thing, just with different self-replication rules.
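
As an aside, here’s one way to generate the Sierpinski pattern of triangles in Python.  Rather than the local “attach at the midpoints” rule in the picture above, this little sketch uses the three standard maps that scale the plane by 1/2 toward the corners of the outer triangle; iterating them produces the same family of triangles, just grouped differently from generation to generation.  The coordinates are one convenient choice, nothing canonical.

```python
import numpy as np

# Corners of the big outer triangle.
corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

# Generation 1: the triangle whose vertices are the midpoints of the outer sides.
first = np.array([(corners[i] + corners[(i + 1) % 3]) / 2 for i in range(3)])

def next_generation(triangles):
    """Each triangle yields three new ones, half the size, one per outer corner."""
    return [(tri + c) / 2 for tri in triangles for c in corners]

generations = [[first]]
for _ in range(4):
    generations.append(next_generation(generations[-1]))

print([len(g) for g in generations])   # [1, 3, 9, 27, 81]
```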

What other fractals can we build in this way?  The Sierpinski carpet is very similar to the Sierpinski triangle, where we now start with squares and a rule for how a square produces 8 smaller ones:

carpet

This made me wonder if I could generalize my construction of the Koch snowflake using triangles to generate some other fractal using squares.  In other words, is there some Koch snowflake-like fractal that is analogous to the ordinary Koch snowflake in the same way that the Sierpinski carpet is analogous to the Sierpinski triangle?

There is!  Taken to the 5th generation, it looks like this:

octagon-snowflake

The outline of this fractal is an analog of the Koch snowflake, but with 8-fold symmetry, rather than 6-fold.  Compare the original Koch snowflake with this one:

Just as I explained last time for the Koch snowflake (left), the blue picture above actually provides a proof that the plane can be tiled with copies of tiles like the one on the right, with various sizes—though in this case, you can’t do it with just two sizes of tiles; it takes infinitely many different sizes!  In fact, this tiling of the plane is also given in Aidan Burns’ paper I referenced in the previous post.

But, my construction is built entirely out of self-replicating squares.  What’s the rule for how squares replicate?

Before I tell you, I’ll give two hints:

First, each square produces 8 new squares, just like in the Sierpinski carpet.  (So, we could actually make a cool animation of this fractal morphing into the Sierpinski carpet!)

Second, you can more easily see some of the bigger squares in the picture if you make the colors of the layers more distinct.  While I like the subtle effect of making each successive generation a slightly darker shade of blue, playing with the color scheme on this picture is fun.  And I learned something interesting when my 7-year-old (who is more fond of bold color statements) designed this scheme:

octahedral-c

The colors here are not all chosen independently; the only choice is the color of each generation of squares.  And this lets us see bits of earlier-generation squares peeking through in places I hadn’t noticed with my more conservative color choices.

For example, surrounding the big light blue flower in the middle, there are 8 small light blue flowers, and 16 even smaller ones (which just look like octagons here, since I’ve only gone to the 5th generation); these are really all part of the same light-blue square that’s hiding behind everything else.

The same thing happens with the pink squares, and so on.  If you stare at this picture, you can start seeing the outlines of the squares.

So what’s the rule?  Here it is:

(more…)

4!-torsor a la George Hart

20 April 2015

As a project with a certain 4-year-old relative of mine, we constructed the proof I described before that the outer vertices of George Hart’s 12-Card Star form a 4!-torsor.  (I guess I didn’t say it that way before, but it’s true!)  Here’s our proof:

IMG_2215

Last time I suggested using a deck of 12 cards like this:

deckoftwos

But instead, we used four solid colors, three cards of each.  So, our “star” permutes the colors red, white, black, and silver:

IMG_2090

You can get any permutation of these colors from exactly one symmetry of our Star taking outer vertices to outer vertices.  The “exactly one” in this correspondence is what makes the set of outer vertices a 4!-torsor rather than just a 4!-set.

Here’s what it looks like when you put three pieces together, from both sides:

IMG_2210 IMG_2212