Robot Arm Tutorials

Calculating Jacobian matrices for a 6-axis robot arm

What are Jacobian matrices good for?

I want to know how fast the Sixi robot has to move each joint (how much work in each muscle) to move the end effector (finger tip) at my desired velocity (direction * speed). For any given pose, the Jacobian matrix describes the relationship between the joint velocities and the end effector velocity. The inverse Jacobian matrix does the reverse, and that’s what I want.
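
In symbols (my paraphrase, not from the original video): with $\dot{q}$ for the six joint velocities and $v$ for the end effector velocity (three linear components stacked on three angular components), the standard relationship is

$$v = J(q)\,\dot{q} \qquad\Rightarrow\qquad \dot{q} = J(q)^{-1}\,v$$

where $J(q)$ is the 6×6 Jacobian at pose $q$.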

The Jacobian could be derived symbolically: a matrix of equations that works for any pose of the robot. That is a phenomenal amount of math and, frankly, I’m not that smart. Instead I’m going to calculate the instantaneous approximate Jacobian at any given robot pose, and then recalculate it as often as I need. It may be off in the 5th or 6th decimal place, but it’s still good enough for my needs.

Special thanks to Queensland University of Technology. Their online explanation taught me this method. I strongly recommend you watch their series, which will help this post (a kind of cheat sheet) make more sense.

What tools do I have to find the Jacobian matrix?

  • I have the D-H parameters for my robot;
  • I have my Forward Kinematics (FK) calculations; and
  • I have my Inverse Kinematics (IK) calculations.

I have a convenience method that takes 6 joint angles and the robot description and returns the matrix of the end effector.

/**
 * @param jointAngles 6 joint angles
 * @param robot the D-H description and the FK/IK solver
 * @return the matrix of the end effector
 */
private Matrix4d computeMatrix(double [] jointAngles,Sixi2 robot) {
     robot.setRobotPose(jointAngles);  // recursively calculates all the matrices down to the finger tip.
     return new Matrix4d(robot.getLiveMatrix());
}

The method for approximating the Jacobian

Essentially I’m writing a method that returns the 6×6 Jacobian matrix for a given robot pose.

/**
 * Use Forward Kinematics to approximate the Jacobian matrix for Sixi.
 * See also https://robotacademy.net.au/masterclass/velocity-kinematics-in-3d/?lesson=346
 */ 
public double [][] approximateJacobian(Sixi2 robot,double [] jointAnglesA) { 
     double [][] jacobian = new double[6][6]; 
     //....
     return jacobian;
}

jointAnglesA describes the current robot pose (in the app these angles live in a keyframe object), and the robot contains the D-H parameters and the FK/IK calculators.

Each column of the Jacobian has 6 parameters: rows 0-2 describe the translation of the hand and rows 3-5 describe the rotation of the hand. Each column describes the change caused by a single joint: the first column is the change in the end effector isolated to a movement in J0 alone. (Note that in the Java below, jacobian[i] holds the column for joint i, so the array is stored as the transpose of the textbook layout; the later multiplication accounts for this.)

So I have my current robot pose T. One at a time, I will change each joint by a very small amount (0.5 degrees) and calculate the new pose Tnew. (Tnew-T)/change gives me a matrix dT showing the rate of change. The translation component of the Jacobian can be extracted directly from dT.

    double ANGLE_STEP_SIZE_DEGREES=0.5;  // degrees
    double [] jointAnglesB = new double[6];

    // use anglesA to get the hand matrix
    Matrix4d T = computeMatrix(jointAnglesA,robot);

    int i,j;
    for(i=0;i<6;++i) {  // for each axis
        for(j=0;j<6;++j) {
            jointAnglesB[j]=jointAnglesA[j];
        }
        // use anglesB to get the hand matrix after a tiiiiny adjustment on one axis.
        jointAnglesB[i] += ANGLE_STEP_SIZE_DEGREES;
        Matrix4d Tnew = computeMatrix(jointAnglesB,robot);

        // Use the finite difference between the two matrices to approximate
        // the rate of change (the derivative, aka the velocity) in one
        // column of the Jacobian at this pose.
        Matrix4d dT = new Matrix4d();
        dT.sub(Tnew,T);
        dT.mul(1.0/Math.toRadians(ANGLE_STEP_SIZE_DEGREES));
        jacobian[i][0]=dT.m03;
        jacobian[i][1]=dT.m13;
        jacobian[i][2]=dT.m23;

We’re halfway there! Now the rotation part is more complex. We need to look at just the rotation part of each matrix.
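
The standard identity behind this step (my note, not spelled out in the video): for a rotating frame, the matrix $\dot{R}\,R^\top$ is the skew-symmetric form of the angular velocity,

$$\dot{R}\,R^\top = [\omega]_\times = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}$$

so after building dT3 * T3ᵀ below, the angular velocity components can be read straight out of the off-diagonal entries.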

        Matrix3d T3 = new Matrix3d(
                T.m00,T.m01,T.m02,
                T.m10,T.m11,T.m12,
                T.m20,T.m21,T.m22);
        Matrix3d dT3 = new Matrix3d(
                dT.m00,dT.m01,dT.m02,
                dT.m10,dT.m11,dT.m12,
                dT.m20,dT.m21,dT.m22);
        T3.transpose();  // inverse of a rotation matrix is its transpose
        Matrix3d skewSymmetric = new Matrix3d();
        skewSymmetric.mul(dT3,T3);  // dT3 * T3^T ~ [w]x
        // [  0 -Wz  Wy]
        // [ Wz   0 -Wx]
        // [-Wy  Wx   0]
        jacobian[i][3]=skewSymmetric.m21;  // Wx
        jacobian[i][4]=skewSymmetric.m02;  // Wy
        jacobian[i][5]=skewSymmetric.m10;  // Wz
    }
    return jacobian;
}

Testing the Jacobian (finding Joint Velocity over Time)

So remember, the whole point is to be able to say “I want to move the end effector with velocity V; how fast does each joint have to move?” (The code below calls the input force, but it is really a desired end effector velocity.) I can apply this iteratively over some period of time and watch how the end effector moves.

public void angularVelocityOverTime() {
    System.out.println("angularVelocityOverTime()");
    Sixi2 robot = new Sixi2();

    BufferedWriter out=null;
    try {
        out = new BufferedWriter(new FileWriter(new File("c:/Users/Admin/Desktop/avot.csv")));
        out.write("Px\tPy\tPz\tJ0\tJ1\tJ2\tJ3\tJ4\tJ5\n");
        
        DHKeyframe keyframe = (DHKeyframe)robot.createKeyframe();
        DHIKSolver solver = robot.getSolverIK();
        double [] force = {0,3,0,0,0,0};  // desired end effector velocity along +Y

        // move the hand to some position...
        Matrix4d m = robot.getLiveMatrix();
        m.m13=-20;
        m.m23-=5;
        // get the joint angles for that hand position
        solver.solve(robot, m, keyframe);
        robot.setRobotPose(keyframe);
        
        float TIME_STEP=0.030f;
        int j, safety=0;
        // until hand moves far enough along Y or something has gone wrong
        while(m.m13<20 && safety<10000) {
            safety++;
            m = robot.getLiveMatrix();
            solver.solve(robot, m, keyframe);  // get angles
            // if this pose is in range and does not pass exactly through a singularity
            if(solver.solutionFlag == DHIKSolver.ONE_SOLUTION) {
                double [][] jacobian = approximateJacobian(robot, keyframe.fkValues);
                // Java does not come by default with a 6x6 matrix class.
                double [][] inverseJacobian = MatrixHelper.invert(jacobian);
                
                out.write(m.m03+"\t"+m.m13+"\t"+m.m23+"\t");  // position now
                double [] jvot = new double[6];
                for(j=0;j<6;++j) {
                    // jvot = (inverse jacobian) * (desired velocity);
                    // remember the jacobian array is stored transposed.
                    for(int k=0;k<6;++k) {
                        jvot[j]+=inverseJacobian[k][j]*force[k];
                    }
                    // each jvot is now an angular velocity in radians/s
                    out.write(Math.toDegrees(jvot[j])+"\t");
                    // rotate each joint, aka P += V*t
                    keyframe.fkValues[j] += Math.toDegrees(jvot[j])*TIME_STEP;
                }
                out.write("\n");
                robot.setRobotPose(keyframe);
            } else {
                // Maybe we're exactly in a singularity.  Cheat a little.
                m.m03+=force[0]*TIME_STEP;
                m.m13+=force[1]*TIME_STEP;
                m.m23+=force[2]*TIME_STEP;
            }
        }
        
    }
    catch(Exception e) {
        e.printStackTrace();
    }
    finally {
        try {
            if(out!=null) out.flush();
            if(out!=null) out.close();
        } catch (IOException e1) {
            e1.printStackTrace();
        }
    }
}

Viewing the results

The output of this method is conveniently formatted to work with common spreadsheet programs, and then graphed.

Position and Joint velocity over Time

I assume the small drift in Z is due to numerical error over many iterations.

Now what?

Since velocity is the integral of acceleration (v=a*t for constant acceleration), and force is just mass times acceleration, I should be able to teach the arm all about forces (see the relation after this list):

  • Please push this way (squeeze my lemon)
  • Please stop if you meet a big opposite force. (aka compliant robotics aka safe working around humans)
  • You are being pushed. Please move that way. (push to teach)
  • Are any of the joints turning too fast? Warn me, please.
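
For the curious, the textbook relation underneath all of these (not implemented in this post) maps a force $F$ at the end effector to the torques $\tau$ at the joints using the transpose of the same Jacobian:

$$\tau = J(q)^\top F$$

so once the approximate Jacobian is trustworthy, static force estimates come almost for free.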

Final thoughts

All the code in this post is in the open source Robot Overlord app on GitHub. The graph above is saved to the Sixi 2 GitHub repository.

Please let me know if this tutorial helps. I really appreciate the motivation! If you want to support my work, there’s always my Patreon or the many fine robots on this site. If you have follow-up questions or want me to explain more parts, contact me.

Calculating the workspace of a robot arm

After people ask “what good is a robot arm like Sixi?” the next question is always “how strong is it and how far can it reach?” To answer the reach question visually, I wanted to calculate the boundaries of the Sixi robot’s workspace.

What is a robot workspace?

The workspace (sometimes called the work envelope) is the set of all points the robot’s end effector can reach. The boundary of that set is what I want to draw.

What tools do we have to calculate the workspace?

Here’s a paper I found about an analytical method that starts with the Jacobian of a robot and goes from there: https://pdfs.semanticscholar.org/e5f8/d98ce96b1dfcce05966bed52a85a215cf0a9.pdf

Here’s another locked behind a paywall: https://www.semanticscholar.org/paper/Calculation-of-the-Boundaries-and-Barriers-of-the-a-Peidro-Reinoso/58ebab14786596a892eccf07f9a8750d40fa0a78

Looking closer at the abstract of each paper I find, there doesn’t seem to be a consensus on the best way to find the workspace boundary. Some even call for Monte Carlo methods (i.e., some fuzzy guesswork). A large part of the problem seems to be that the boundary is a concave hull, possibly even with unreachable interior pockets (like a donut hole). These are much harder to compute than a convex hull.

I have Robot Overlord and its IK/FK solutions. I could move the arm through the entire workspace on the planes I care about, plot points into a giant table of data, then feed that to something like MATLAB and ask it to generate the best-fitting outer perimeter. MATLAB has a boundary() method that should work pretty well.

While waiting for MATLAB to install, I generated the XZ and XY plots.

The XY workspace plot

For the XY plot I made the arm stretch out as far as possible, turned it around the base, then reach in as close as possible and turn the other way. That meant turning the anchor, the shoulder, and the elbow. This was all done with Forward Kinematics, which can easily calculate the position of the robot’s hand. I swept through the range, moving 1 degree at a time, and dumped the hand positions into a CSV file, which I then graphed in OpenOffice Calc. The result looks like a Pacman.
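
A minimal sketch of that sweep, reusing the setRobotPose()/getLiveMatrix() calls from the Jacobian post. The joint ranges here are placeholders and I’ve only shown two of the three joints; the real code lives in MiscTests.java:

import java.io.BufferedWriter;
import java.io.FileWriter;
import javax.vecmath.Matrix4d;

// Sketch: sweep joints with FK and dump hand positions for graphing.
public static void plotXYSketch() throws Exception {
    Sixi2 robot = new Sixi2();
    double [] angles = new double[6];
    try(BufferedWriter out = new BufferedWriter(new FileWriter("plotXY.csv"))) {
        out.write("X\tY\tZ\n");
        for(double anchor = -180; anchor <= 180; anchor += 1) {          // turn around the base
            for(double shoulder = -90; shoulder <= 90; shoulder += 1) {  // reach out and tuck in
                angles[0] = anchor;
                angles[1] = shoulder;
                robot.setRobotPose(angles);            // FK down to the finger tip
                Matrix4d m = robot.getLiveMatrix();    // hand pose
                out.write(m.m03 + "\t" + m.m13 + "\t" + m.m23 + "\n");
            }
        }
    }
}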

Plot of the Sixi XY workspace. Dimensions in centimeters.

The XZ workspace plot

For the XZ workspace plot I repeated the process by turning the shoulder, the elbow, and the wrist.

Plot of the Sixi XZ workspace. Dimensions in centimeters.

Conclusion

MATLAB was crazy slow and not needed for the plot I wanted. I guess it would be good if I were drawing a 3D envelope? But I’m not, so it’s overkill.

You can find the code to generate the plots in https://github.com/MarginallyClever/Robot-Overlord-App/blob/master/src/test/java/com/marginallyclever/robotOverlord/MiscTests.java. There is a plotXZ() and a plotXY() that generate the CSV files needed for each graph.

Intuitive robot programming

While making demos of what the Sixi arm can do, I discovered that I’m a terrible driver. More than a few times I wanted to drive the hand +X and instead rotated the wrist, or got the direction wrong. It might not seem like much now, but if the arm were holding a cup of liquid or working in narrow confines, that would be very bad for the end user!

A good fix would be a way to preview a move before committing to it, plus some kind of undo feature. Now, moving the joystick moves the blue “ghost” arm while the yellow “live” arm stays still. When you like the pose you’ve reached, press X on the PlayStation controller to commit the move. If you want to undo (put the ghost back on the live arm), press the circle button. Lastly, if you want to drive the arm back to the starting position, the triangle button moves the ghost to the starting position and then X commits the move.

Let me know in the comments how driving the simulation works for you. I will be dedicating the next few weeks to getting ready for the Vancouver Mini Maker Faire, September 14, 2019 at Science World.

Record & Playback 4

I have been building a robot arm. You may have seen it on my Instagram. I also have an open source Java app called Robot Overlord, which can simulate the arm in 3D. Robot Overlord can drive the simulation AND the real arm from the joystick. All the moves in this video were done with the PlayStation controller:

In a previous post on Hackaday.io, I briefly covered a system I wrote into Robot Overlord that would capture all the joystick data and play it back on command. Technically, that worked. Qualified success.

However! Driving this way is wildly inefficient. New instructions are sent to the arm 30 times a second. The arm can’t see where it is going, so it can’t plan ahead to reach high speeds. It’s like a very fast game of Marco Polo. Also, if you’re a novice driver like me, it’s really easy to go the wrong way. It would be great if I could move the simulated arm to a destination pose BUT only update the real robot when I’m sure I like my destination pose. The arm would then move in a straight line from start pose to end pose at top speed.

First I needed a way to save any pose to a file on disk and then bring it back. Then I could save and load single poses. Then I could play those poses back to the real robot, same as I did with the joystick model. Then I could repeat tests, which helps me confirm things work correctly.
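
A minimal sketch of the save/load step, assuming the DHKeyframe from the earlier posts with its public fkValues array (the real Robot Overlord serialization may differ):

import java.io.*;

// Sketch: save six joint angles to disk and bring them back.
public static void savePose(DHKeyframe keyframe, File file) throws IOException {
    try(PrintWriter out = new PrintWriter(new FileWriter(file))) {
        for(double angle : keyframe.fkValues) out.println(angle);
    }
}

public static void loadPose(DHKeyframe keyframe, File file) throws IOException {
    try(BufferedReader in = new BufferedReader(new FileReader(file))) {
        for(int i = 0; i < keyframe.fkValues.length; ++i) {
            keyframe.fkValues[i] = Double.parseDouble(in.readLine());
        }
    }
}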

If I have a start and an end pose, then I can interpolate between them: I can split that move into as many sub-poses as needed. I can already send poses to the robot. So the job is to find the happy trade-off between too many poses (no time to accelerate) and too few (less accurate movement).
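
Here is a minimal joint-space version of that idea. Interpolating in joint space does not move the hand in a perfectly straight line (that takes interpolating the poses themselves), but it shows the sub-pose splitting:

// Sketch: split the move from poseA to poseB into evenly spaced sub-poses.
// poseA and poseB are arrays of six joint angles, like keyframe.fkValues.
public static void interpolatePoses(Sixi2 robot, double [] poseA, double [] poseB, int steps) {
    double [] pose = new double[6];
    for(int s = 0; s <= steps; ++s) {
        double t = (double)s / (double)steps;   // 0..1 along the move
        for(int j = 0; j < 6; ++j) {
            pose[j] = poseA[j]*(1.0-t) + poseB[j]*t;  // linear blend per joint
        }
        robot.setRobotPose(pose);   // or send the sub-pose to the real arm
    }
}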

Looking through my daily notes, I see I started on the new system some time before 2019-8-13, because that was when the weirdness started: I found cases where a recording saved to disk and loaded back was out of sync. Not identically 1:1. Discombobulated. When I tried to play back a recording, the hand of the robot (J5) was always turned 90 degrees from the original recording. As I began to dig into why, I opened a whole can of worms. Bigguns.

Worm: The robot sim in Robot Overlord was wrong.

When Jin Han and I built the computer model of the robot arm in Fusion360 (the design was started in November 2018), we started it facing the wrong direction.

Arm designed pointing at -Z

When I say it was built facing the wrong direction, I mean that I imagined both Fusion360 and Robot Overlord would have the hand pointing at +X with up being +Z. In fact, in Fusion360 the hand points at -Z and up is +Y, and in Robot Overlord I reassembled the arm with the hand facing -Y and up as +Z. Copying the model over was stupid hard, and I didn’t realize that was partly because I was doing it the wrong way, turned 90 degrees on two axes. It would have been easier if it had been upside down and backwards!

My method for fixing it was to load one joint at a time starting at the base, get it turned to face upwards, then add another link, and so on. Once all the bones were in their relative positions, I built D-H parameters to match.

Worm: The D-H model of the arm was wrong.

The Sixi was the first robot arm I ever coded that used Denavit–Hartenberg parameters. One of the reasons I used D-H parameters is that they’re well documented and supported by other people into robotics. I can easily use D-H to calculate Forward Kinematics (FK), where I know the angle of every joint in the arm and I want to get the pose of the hand. (A pose is a position in space plus an orientation. One common way to describe this combo is with a 4×4 matrix.) I could also use YouTube videos that explained how to calculate Inverse Kinematics for a robot arm with D-H parameters.
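
For reference, the standard D-H transform from one link frame to the next, built from the four parameters $(\theta_i, d_i, a_i, \alpha_i)$:

$${}^{i-1}T_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Multiplying these together from the base to the wrist gives the FK pose of the hand.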

Especially tricky is the spherical wrist. The videos I found on spherical wrists were incomplete, and it wasn’t until I stumbled on these notes from York University in Canada that I found the missing piece.

Worm: Inverse Kinematics calculations were wrong.

Of course my code didn’t quite match the stuff I’d been taught, because my model was facing -Y instead of +Z: a 90 degree turn. Every time the tutorials said to use atan2(y,x), I had to write atan2(-x,y).

Not knowing yet that I’d done all this stuff wrong, I had to diagnose the problem. I built a JUnit test in src.test.java.com.marginallyclever.robotOverlord.MiscTests.java:TestFK2IK(). This test sweeps the arm through the set of all joint angles keyframe0. Every instance of keyframe0 creates some possible pose m0. Some m0 can be solved with Inverse Kinematics to make some other keyframe1. keyframe1 can create a pose m1. m1 should ALWAYS match m0. I got reams of data, all of which told me yes, there’s definitely something wrong. It took about a week of nail-biting research until I figured out and unscrambled each of those worms.
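
The shape of that test, as a sketch. The method names match the earlier posts; the real TestFK2IK() sweeps the full range of all six joints rather than one sample pose:

import static org.junit.Assert.assertTrue;
import org.junit.Test;
import javax.vecmath.Matrix4d;

public class FK2IKSketch {
    @Test
    public void testFK2IK() {
        Sixi2 robot = new Sixi2();
        DHIKSolver solver = robot.getSolverIK();
        DHKeyframe keyframe0 = (DHKeyframe)robot.createKeyframe();
        DHKeyframe keyframe1 = (DHKeyframe)robot.createKeyframe();

        // one sample set of joint angles; the real test sweeps them all.
        double [] angles = { 10, 20, 30, 40, 50, 60 };
        System.arraycopy(angles, 0, keyframe0.fkValues, 0, 6);

        robot.setRobotPose(keyframe0);                        // FK: angles -> pose
        Matrix4d m0 = new Matrix4d(robot.getLiveMatrix());

        solver.solve(robot, m0, keyframe1);                   // IK: pose -> angles
        robot.setRobotPose(keyframe1);                        // FK again
        Matrix4d m1 = new Matrix4d(robot.getLiveMatrix());

        assertTrue(m1.epsilonEquals(m0, 1e-6));               // m1 should ALWAYS match m0
    }
}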

So what does all that mean? It means I can now build meaningful recordings, and now I can start to search for the right happy trade-off.