A robot has a discrete configuration space if it has a discrete set of possible actions at each step, right?
Wrong! In fact, even in a perfectly idealized world where there is zero error in motion, a robot with only discrete actions can effectively reach a continuum of different configurations. This can make motion planning for such robots very interesting.
Here’s a robot I built when I was about 15 years old:
I don’t have this robot anymore. I gave it to the local jawas in the California desert before moving to Germany. But now, a weekend’s worth of programming let me bring it back to life in a virtual world. Plus, I’ve made it more intelligent than it ever was in its previous life. Bwahhh-ha-ha!
Let me explain what’s interesting about this robot, and why I coded it into a virtual world.
The original robot could not do much. It could rotate and move forward or backward only in fixed increments, and I could pre-program a sequence of moves ahead of time, which it would then execute. I know—not very impressive as robots go these days.
But the limitations of this robot are also what make it interesting. Since it only had discrete motions, and rather coarse ones at that, getting it to move from one location to another without running into things was an interesting challenge!
An ideal discrete robot
Let’s consider an idealized version of this robot which can do only two things:
- Move forward or backward by exactly 1 meter.
- Rotate in place in increments of exactly 45 degrees.
The robot I built was very much like this, although it moved some shorter distance (1 m is just a convenient number), and of course executed neither of the two possible moves with perfect consistency or accuracy. Still, it was rather good at doing what it was designed to do.
In fact, we can idealize this robot just a bit more. Let’s ignore the need for the robot to rotate and just assume it can move instantly in any of eight directions, each separated by 45 degrees from the last. Thus, if we point the robot Northward, at each stage it can move in any of eight basic compass directions: N, S, E, W, NW, NE, SE, and SW.
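To make the move set concrete, here is a minimal Python sketch of the idealized robot (the names `MOVES` and `step` are my own and don’t come from the original robot): each of the eight compass moves is a displacement of exactly 1 meter.

```python
import math

# The eight compass moves of the idealized robot.  Diagonal moves have
# components of length 1/sqrt(2), so every move vector has length exactly 1 m.
D = 1 / math.sqrt(2)
MOVES = {
    "N":  (0.0, 1.0),  "S":  (0.0, -1.0),
    "E":  (1.0, 0.0),  "W":  (-1.0, 0.0),
    "NE": (D, D),      "SW": (-D, -D),
    "NW": (-D, D),     "SE": (D, -D),
}

def step(position, direction):
    """Return the new position after one move in the given compass direction."""
    x, y = position
    dx, dy = MOVES[direction]
    return (x + dx, y + dy)
```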
In a wide open space with nothing to run into, if you give our idealized robot a pen (like the Logo turtle) and send it out on a random walk, you’ll see something like this:
There’s something important to observe here: the locations where the robot stopped between moves, denoted by dots, can end up being very close together, even though the robot always moves in steps of one meter.
In fact, the robot can get essentially anywhere. The positions the robot can get to form a dense subset of the plane: Given enough moves, it can stop as close as you wish to any destination point you pick. I say it can “stop” close to the destination because I don’t want to count it as reaching its destination if it just rolls over the destination in transit without actually stopping there. That would be cheating!
For example, suppose the robot is at (0, 0) and you want to program it to get to (0, 0.5). Just telling the robot to move straight toward (0, 0.5) won’t work: it will drive through the point (0, 0.5), but it won’t stop there; it will keep going all the way to (0, 1). In fact, you’ll never be able to reach (0, 0.5) exactly, but if you’re satisfied with getting within 0.1 meters of the destination, then there’s an easy solution with only 3 steps:
The red x marks the goal at (0, 0.5), and the final resting spot of the robot, marked by a red dot •, is within 0.1 of this. More precisely, it’s easy to see that the robot comes to rest at the point $(0, \sqrt{2} - 1)$, or approximately (0, 0.4142), which is within our acceptable error of 0.1 m.
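One 3-move sequence consistent with that landing point is NE, then NW, then S (I’m only guessing the exact order shown in the figure; other orderings land at the same spot). A quick self-contained check in Python:

```python
import math

# One possible 3-move plan from (0, 0): NE, then NW, then S.
d = 1 / math.sqrt(2)
moves = [(d, d), (-d, d), (0.0, -1.0)]   # NE, NW, S as 1 m displacement vectors

x, y = 0.0, 0.0
for dx, dy in moves:
    x, y = x + dx, y + dy

goal = (0.0, 0.5)
print((x, y))                    # roughly (0.0, 0.4142), i.e. (0, sqrt(2) - 1)
print(math.dist((x, y), goal))   # roughly 0.0858, within the 0.1 m tolerance
```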
If we’re not satisfied with getting within 0.1 m, and we’d rather get within, say, 0.05 m (5 cm) of the goal, we can still do it with a bit more thought. Here’s a way to do it with 20 moves, not counting rotations:
Of course, solutions are not at all unique: Here are two more solutions to the same problem:
However, each of these also consists of 20 moves. In fact, these are all minimal solutions: every solution to this problem, at the specified precision of 5 cm, has at least 20 moves, not counting rotations.
Of course, you could also demand even more precision. If you want to get from (0, 0) to (0, 0.5) to within a distance of 0.01 m, i.e. within 1 cm, given our step size of 1 m, you’ll need at least 119 moves:
Yikes!
Finding a plan to get from one location to another is an example of “motion planning” in robotics, and doing this kind of motion planning is one of the ways I had fun with the original robot when I was a teenager. I learned to do some of this intuitively just by getting a feel for how to use combinations of moves. Of course, that kind of intuition never let me do anything so complicated as finding the 119 moves above!
Motion planning on a hypercube lattice
The virtual reincarnation of my old robot, despite the disadvantage of lacking any corporeal existence, has one big advantage over its former self: it can solve the motion planning problem itself.
To do this, it uses a classic AI tool—A* search. This is an algorithm for finding minimal paths through weighted graphs, and is commonly used for route finding in map programs, solving puzzles, and planning complex sequences of actions.
In the robot’s case, although it occupies a 2d world which we identify with a subset of $\mathbb{R}^2$, the A* search is carried out instead on the 4-dimensional lattice

$$\mathbb{Z}^4 \subseteq \mathbb{R}^4,$$

whose four coordinates represent the four directions it can travel in the physical space: the N-S, E-W, NE-SW, and SE-NW directions.
In other words, the robot really thinks of itself as living on a hypercube lattice. You may be used to seeing pictures of hypercubes in one of their 3d projections, like this:
However, if you project a hypercube down into two dimensions in such a way that all of the edges have the same length in the projection, you can get this:
This picture of a hypercube is something I could draw with my robot! All of the angles are multiples of 45 degrees, and all of the edges have the same length. (Plus, I can draw it in one pass: all of the vertices have even degree, so there exists an Euler circuit.)
Of course, there’s a deep reason for this: the real configuration space of the robot, still ignoring the robot’s orientation, is a projection of a hypercube lattice into $\mathbb{R}^2$. The vertices of this lattice are the states the robot can occupy at rest, while the edges represent the moves, or allowed transitions between states.
The entire hypercube lattice is like an ordinary cubical lattice:
but one dimension higher, and instead of 8 cubes meeting at each vertex, it has 16 hypercubes meeting at each vertex. This is rather complicated to draw, and I won’t show a picture of the hypercube lattice. However, for your amazement, here is a picture I drew of its projection into two dimensions:
There’s nothing wrong with this picture—the lattice’s projection is dense in $\mathbb{R}^2$, as my robot shows, remember! :-)
Explicitly, the basic projection $\pi\colon \mathbb{Z}^4 \to \mathbb{R}^2$ can be written

$$\pi(m, n, a, b) = (m, n) + a\,u + b\,v,$$

where

$$u = \tfrac{1}{\sqrt{2}}\,(1, 1) \quad\text{and}\quad v = \tfrac{1}{\sqrt{2}}\,(-1, 1).$$
The eight possible moves—or in other words, the edges in the hypercube lattice—correspond to incrementing or decrementing one of the coordinates $m, n, a, b$. You can check that these project exactly to the eight allowed robot motions.
I combine this projection into $\mathbb{R}^2$ with a translation and rotation, so that the robot’s current position in the lattice is always (0,0,0,0).
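In code, the basic projection might look like the sketch below (the function name `project` and the assignment of which coordinate counts which direction are my own conventions; only the coordinate names $m, n, a, b$ appear above).

```python
import math

INV_SQRT2 = 1 / math.sqrt(2)

def project(m, n, a, b):
    """Project a lattice point (m, n, a, b) in Z^4 down to the plane.

    Convention assumed here: m counts E-W steps, n counts N-S steps,
    a counts NE-SW steps, and b counts NW-SE steps.
    """
    return (m + (a - b) * INV_SQRT2, n + (a + b) * INV_SQRT2)

# Incrementing or decrementing a single coordinate gives the eight moves, e.g.:
# project(1, 0, 0, 0)  -> (1.0, 0.0)              one step East
# project(0, 0, 1, 0)  -> (0.707..., 0.707...)    one step Northeast
# project(0, 0, 0, -1) -> (0.707..., -0.707...)   one step Southeast
```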
A* search implementation
To solve the motion planning problem, the virtual robot uses A* search on the hypercube lattice. This searches through the tree of paths starting at the robot’s current position (0,0,0,0) in the lattice until it finds a minimal length route to some point in the lattice whose projection into $\mathbb{R}^2$ is less than the specified tolerance away from the destination in $\mathbb{R}^2$. At each stage, the allowed moves are those edges in the hypercube lattice whose projections do not either leave the “room” the robot is contained in or intersect any obstacles.
A* search also relies on a “heuristic distance”—a way of conservatively estimating the distance remaining to the goal. This is because the tree of paths is traversed in order of total estimated distance for a given route. The advantage of this over a breadth-first search, which would in principle also find a minimal solution, is that far fewer paths need to be explored.
An effective heuristic distance in the case of the robot, at least for relatively small problems, comes from again using the projection from $\mathbb{Z}^4$ to $\mathbb{R}^2$ and checking the Euclidean distance to the specified goal.
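Here is a minimal sketch of what such a planner might look like, ignoring walls and obstacles entirely (the function names are mine, and I subtract the tolerance from the Euclidean estimate so the heuristic never overestimates the number of 1 m moves remaining).

```python
import heapq
import math

INV_SQRT2 = 1 / math.sqrt(2)

def project(m, n, a, b):
    """Same projection convention as above: Z^4 down to the plane."""
    return (m + (a - b) * INV_SQRT2, n + (a + b) * INV_SQRT2)

def plan(goal, tol):
    """A* search from (0, 0, 0, 0) in Z^4 for a minimal sequence of lattice
    steps whose projection ends within tol of goal in the plane.  A real
    planner would also prune steps that leave the room or hit an obstacle."""
    start = (0, 0, 0, 0)

    def heuristic(node):
        # Each move displaces the robot by exactly 1 m in the plane, so the
        # planar distance minus the tolerance never overestimates the number
        # of moves still needed; this keeps the heuristic admissible.
        return max(0.0, math.dist(project(*node), goal) - tol)

    frontier = [(heuristic(start), 0, start, [])]   # entries: (f, g, node, path)
    best_cost = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if math.dist(project(*node), goal) <= tol:
            return path          # each step is (coordinate index, +1 or -1)
        for i in range(4):          # one of the four lattice coordinates...
            for delta in (1, -1):   # ...incremented or decremented
                nxt = node[:i] + (node[i] + delta,) + node[i + 1:]
                if g + 1 < best_cost.get(nxt, math.inf):
                    best_cost[nxt] = g + 1
                    heapq.heappush(
                        frontier,
                        (g + 1 + heuristic(nxt), g + 1, nxt, path + [(i, delta)]),
                    )
    return None

# Example: from (0, 0), reach (0, 0.5) to within 0.1 m -- a 3-move plan, as above.
print(len(plan((0.0, 0.5), 0.1)))
# Tightening the tolerance to 0.05 m should reproduce the 20-move solutions
# (at the cost of a much longer search).
```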
The big point here is that although the robot’s shortest path between two points in a room might not look to us much like a straight line:
it’s still a minimal path in a 4-dimensional lattice, projected down to the plane.
Real robots
Obviously, my virtual robot is an idealization. For the real robot it is based on, two kinds of uncertainty come into play: uncertainty in the current configuration, and uncertainty in the moves from one configuration to another. The planning problem must interact with the “robot localization” problem.
First, there is uncertainty in the robot’s current position and orientation. For example, even assuming the robot can move with perfect accuracy, optimal planning is very sensitive to errors in angular placement (“attitude”). If you want to get from (0,0) to (1, 2), and the robot’s placement is very accurate, it will move like a knight:
But rotate its starting orientation just 2 degrees to the left, and you’ll need 16 moves to get to the same point with an accuracy of ±0.05 m:
Rotate it 2 degrees to the right, and you’ll need 20 moves to get the same accuracy:
Small perturbations in the position of the robot obviously have a similar effect.
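One way to experiment with this kind of sensitivity in the virtual setting (my own suggestion, not necessarily how the examples above were produced) is to note that rotating the robot’s starting orientation by an angle $t$ is equivalent, for planning purposes, to rotating the goal by $-t$ into the robot’s frame and planning as before.

```python
import math

def goal_in_robot_frame(goal, t):
    """Rotate a goal point by -t radians about the robot's start position,
    which is equivalent to the robot's starting orientation being off by +t."""
    gx, gy = goal
    c, s = math.cos(-t), math.sin(-t)
    return (gx * c - gy * s, gx * s + gy * c)

# The 2-degree errors discussed above, for the goal (1, 2):
left = goal_in_robot_frame((1.0, 2.0), math.radians(+2.0))
right = goal_in_robot_frame((1.0, 2.0), math.radians(-2.0))
# These rotated goals can then be fed to the plan() sketch from earlier,
# e.g. comparing len(plan(left, 0.05)) with len(plan(right, 0.05)).
```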
More importantly, the moves themselves are not completely accurate. On the real robot, I had a potentiometer that adjusted the relative speed of the two side wheels to calibrate the forward motion. Imperfect calibration means not driving in a completely straight line. Similarly, during a rotation, there was surely some tiny change in position as well.
When we account for both types of uncertainty, there is a probability distribution for the initial configuration of the robot, including position and orientation. There is also a probability distribution for the result of each possible rotation or translation move. Using Bayesian probability, these two distributions are convolved each time we make a move, in order to get the probability distribution for the updated configuration.
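For illustration, if both distributions are modeled as Gaussians (a convenient assumption on my part, not something the original robot did), the convolution is especially simple: the means add and the covariances add. A rough sketch of a single move update:

```python
import numpy as np

# Toy belief about the configuration (x in m, y in m, heading in radians),
# represented as a Gaussian: a mean vector and a covariance matrix.
mean = np.array([0.0, 0.0, 0.0])
cov = np.diag([0.01**2, 0.01**2, np.radians(1.0)**2])

# Intended move: 1 m forward along the current mean heading, with some noise
# in the distance travelled and a little unintended rotation.  (For simplicity
# this ignores the coupling between heading error and position error.)
theta = mean[2]
move_mean = np.array([np.cos(theta), np.sin(theta), 0.0])
move_cov = np.diag([0.02**2, 0.02**2, np.radians(0.5)**2])

# Convolving two independent Gaussian distributions: means add, covariances add.
mean = mean + move_mean
cov = cov + move_cov

# Repeating this for every move only grows the covariance, which is why the
# plan eventually has to be corrected with sensor measurements.
```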
The accumulation of errors makes accurate motion planning impossible unless we give the robot the ability to “sense” its environment, use sensory data to update its configuration distribution, and adjust the plan accordingly. But that’s a discussion for another time.