Realtime ANN training on Manipulator


I want to try training an artificial neural network (ANN) to perform an inverse kinematic transformation (IKT) for a given
manipulator robot. The manipulator is simply three links (arm segments) joined by three ‘motors’. The IKT means that I input
the location of the end point of the manipulator (aka EEF - end effector) and the ANN gives me the 3 motor angles
that are required to get the EEF to the desired location. The way I do it is by setting the angles (rotations of nodes) and
then calling EEF.getWorldTranslation() to get the EEF location.
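For reference, the forward mapping the ANN has to invert can be written down outside the engine. Here is a minimal sketch of planar 3-joint forward kinematics in plain Java — the planar assumption and the unit link lengths are mine, not from the post; in the scene graph this is what EEF.getWorldTranslation() gives you:

```java
// Planar 3-joint forward kinematics: joint angles in -> EEF position out.
// Link lengths are hypothetical; the real values come from your manipulator model.
public class ForwardKinematics {
    static final double L1 = 1.0, L2 = 1.0, L3 = 1.0;

    /** Returns {x, y} of the end effector for joint angles t1..t3 (radians). */
    public static double[] eefPosition(double t1, double t2, double t3) {
        // Each link's absolute angle is the sum of the joint angles before it.
        double a1 = t1, a2 = t1 + t2, a3 = t1 + t2 + t3;
        double x = L1 * Math.cos(a1) + L2 * Math.cos(a2) + L3 * Math.cos(a3);
        double y = L1 * Math.sin(a1) + L2 * Math.sin(a2) + L3 * Math.sin(a3);
        return new double[] { x, y };
    }
}
```

The ANN's job is to learn the inverse of eefPosition (which is one-to-many for a 3-joint arm, so expect the network to settle on one of the valid angle sets).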

The first phase is building a training data set: I input a random set of motor angles and record
the resulting EEF location. This is done a few hundred/thousand times to get a decent training set.
The next phase is training the ANN: a set of EEF locations is fed into the ANN, which outputs
sets of angles; these are compared against the recorded training angles to train the ANN.
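The data-collection phase might look like the sketch below in plain Java, with a stand-in forward-kinematics function replacing the scene-graph call (the angle range, sample count, and unit link lengths are placeholders, not from the post):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Collect (EEF position -> joint angles) training pairs by sampling random angles.
public class TrainingData {
    /** One training sample: input = EEF position, target = joint angles. */
    public static class Sample {
        public final double[] position; // what the ANN will receive
        public final double[] angles;   // what the ANN should produce
        Sample(double[] p, double[] a) { position = p; angles = a; }
    }

    // Stand-in for "set node rotations, then read EEF.getWorldTranslation()":
    // a planar 3-link arm with unit link lengths (an assumption for this sketch).
    static double[] eefPosition(double t1, double t2, double t3) {
        double a1 = t1, a2 = t1 + t2, a3 = t1 + t2 + t3;
        return new double[] {
            Math.cos(a1) + Math.cos(a2) + Math.cos(a3),
            Math.sin(a1) + Math.sin(a2) + Math.sin(a3)
        };
    }

    public static List<Sample> generate(int count, long seed) {
        Random rng = new Random(seed); // seeded so runs are reproducible
        List<Sample> data = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            // Random angles in [-pi, pi); real joint limits would narrow this.
            double t1 = (rng.nextDouble() * 2 - 1) * Math.PI;
            double t2 = (rng.nextDouble() * 2 - 1) * Math.PI;
            double t3 = (rng.nextDouble() * 2 - 1) * Math.PI;
            data.add(new Sample(eefPosition(t1, t2, t3),
                                new double[] { t1, t2, t3 }));
        }
        return data;
    }
}
```

In the real app the eefPosition call would be replaced by setting the motor Nodes' rotations and reading back the EEF Spatial's world translation.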

  • My question can be summarized as follows:
    I need to iteratively set node rotations and record the resulting chain translations. Can I depend on the
    simpleUpdate loop to do this? In other words: if I set Node rotations in a given simpleUpdate call, can
    I be 100% sure that the result will be visible in the next simpleUpdate call, regardless of how long the
    processing inside the simpleUpdate loop took?

In other words: is it possible that manipulating a Spatial inside the simpleUpdate loop does not
show in the next iteration for some reason?

Hmm… after experimenting I see that the updates happen immediately, just not graphically.
So the way I understand it is that whatever I do to the scene graph happens immediately, but it only
renders graphically at the end of the loop iteration, right?

Things update when you update them, but they only render when rendering happens.
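A toy analogy of why this works, in plain Java (these are not jME's actual classes): the scene graph is just data that changes the moment you set it, and a render pass merely reads that data later. As far as I understand, jME's real Spatial similarly refreshes its cached world transform before it is read, which is why the experiment above saw the update immediately.

```java
// Toy stand-in for a scene-graph node: setting the local translation takes
// effect immediately; rendering would only read the current state afterwards.
public class ToyNode {
    private final ToyNode parent;
    private double localX; // 1-D "translation" to keep the sketch minimal

    public ToyNode(ToyNode parent) { this.parent = parent; }

    public void setLocalX(double x) { localX = x; } // instant state change

    /** World position computed on demand from the current state. */
    public double getWorldX() {
        return (parent == null ? 0 : parent.getWorldX()) + localX;
    }
}
```

Here getWorldX() reflects a setLocalX() made in the very same "frame"; no render call is involved at any point.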

Thanks for the answer. That makes things much easier for me :slight_smile: