Isadora Kinect Tracking Tutorial – Part 3

by Montgomery Martin
(Based on techniques presented during Troika Ranch’s Live-I Workshop 2015)

Receiving the Kinect Data in Isadora

In Part 3 of this tutorial, you will learn how to set up Isadora to receive the OSC and Syphon feeds that Processing broadcasts from the Xbox 360 Kinect camera.

You should complete Part 1 (Introduction to Motion Tracking using the Xbox 360 Kinect Camera) and Part 2 (Configuring Processing for Kinect Tracking) of this tutorial before continuing here.

For help with working with this data within Isadora itself, you may also wish to review the following video tutorials, accessed from Isadora’s Help menu: Tutorial 5: Using Effects, Tutorial 6: Value Scaling, and Tutorial 14: Creating Your Own Actors.

Setting up the Syphon Feed

We can quickly and easily transmit the Depth feed from the Kinect camera to Isadora via Syphon (Mac OS) or Spout (Windows). This camera feed can then be used within Isadora like any other video source, such as a live camera input or movie player. The unique image created by the Depth camera is especially useful for creating body-shaped video masks for compositing, or for blob-detection-based motion tracking using the Eyes++ actor.

  1. Mac OS: open the file “Isadora_Kinect_Tracking_Mac.pde.”
    Windows: open the file “Isadora_Kinect_Tracking_Win.pde.”
  2. Processing launches and the Sketch Editor appears. Choose Sketch > Run to start capturing from the Kinect.
  3. Launch Isadora and start a new project.
  4. Mac OS: Add a Syphon Receiver actor to the Scene Editor, and click on its ‘server’ input. A drop down dialog appears, displaying all the active Syphon servers on your computer. Select the option “Depth ::: Isadora_Kinect_Tracking_Mac”
    Windows: Add a SpoutReceiver2 actor to the Scene Editor, and click on its ‘server’ input. A drop down dialog appears, displaying all the active Spout senders on your computer. Select the option “Depth ::: Skeleton_OSC_Syphon_2D”
    Note: If you have written your own Processing code or are using a different file from the one provided for these tutorials, the name of the Syphon or Spout server should match the filename of your sketch.
  5. From the toolbox, select a Projector actor and drag it into the scene editor.
  6. Connect the Video Output on the Syphon Receiver or SpoutReceiver2 actor to the Video Input on the Projector actor, just like you would for any other video source.
  7. Show your stages by choosing Output > Show Stages or a stage preview by choosing Output > Force Stage Preview. The depth camera image from the Kinect camera appears onscreen.
Receiving Skeleton Data via Open Sound Control

Processing sends positional data from the tracked skeleton to Isadora via Open Sound Control. The result is a large array of numbers we can use to dynamically control interactive elements within Isadora. The positions transmitted by the Kinect are measured in millimeters (a performer standing about 2.5 meters from the sensor reports a Z value of roughly 2500), so we will need to take full advantage of Isadora’s powerful value-scaling tools to translate the raw positional data from the Kinect sensor into a useful range.

 TIP: It is often useful to have the Syphon stream available for reference while you are working with Open Sound Control, especially if you are moving 3D objects or particles using the motion sensor. You may wish to start a new project, or continue using the project you created for the Syphon part of this tutorial.

  1. From the menu bar, select Communications > Stream Setup. The Stream Setup dialog window appears.
  2. In the Stream Setup window, click the “Stream Select” drop-down menu, and select “Open Sound Control”.
  3. Beside the drop down menu, check the “Auto-Detect Input” box.
  4. Ask your performer to stand in front of the Kinect sensor. A large amount of data appears. As your performer moves about, the numbers in the Data field should change. If possible, have your performer stand still and move each body part separately to verify that your system is tracking the right points. The detected stream addresses appear in the list in the Stream Setup dialog.

    (Your exact port assignments might differ slightly.)
  5. Click the “renumber ports” button. Isadora automatically assigns a port number to each OSC stream address. Quickly scan down the stream list and ensure that the “Enable” check box is checked for each port. Remember, the Processing sketch sends 15 stream addresses in total, one for each point on the body it tracks.

Notice that Isadora assigns the stream addresses to port numbers in increments of three. This is because each stream address actually broadcasts three numbers: the X, Y, and Z coordinates of a body part, which Isadora assigns to consecutive ports in that order.

Therefore, in this example, Port 1 corresponds to the X position of the head, Port 2 to its Y position, and Port 3 to its Z position, and so on for each of the 15 tracked points on the skeleton.

 IMPORTANT! You may wish to write down the port numbers for each tracking point for quick reference. In total, tracking a single person uses 45 OSC ports. Remember that these port assignments are saved along with your Isadora project file.
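
If you prefer not to copy the chart by hand, the short helper below (not part of the tutorial files, just something you can paste into an empty Processing sketch) prints the same reference chart to the Processing console. The list of body points is an assumption that follows the channel order shown in the table later in this tutorial; if your Stream Setup window lists the joints in a different order, edit the list to match.

    // Prints a quick-reference chart of the 45 OSC port assignments for a
    // single tracked performer: 15 body points x 3 axes (X, Y, Z).
    // Assumption: the points are listed in the same order as the channel
    // table in this tutorial; edit the list if your Stream Setup differs.
    String[] points = {
      "Head", "Neck", "Torso",
      "Left Shoulder", "Left Elbow", "Left Hand",
      "Right Shoulder", "Right Elbow", "Right Hand",
      "Left Hip", "Left Knee", "Left Foot",
      "Right Hip", "Right Knee", "Right Foot"
    };

    void setup() {
      for (int i = 0; i < points.length; i++) {
        int basePort = i * 3 + 1;  // ports are assigned in groups of three
        println(points[i] + ":  X = port " + basePort +
                ",  Y = port " + (basePort + 1) +
                ",  Z = port " + (basePort + 2));
      }
      exit();
    }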

  1. Close the Stream Setup window.
  2. In the Toolbox, select the OSC Listener actor. Drag three copies of this actor into the scene editor.
  3. Set the “Channel” property on each instance of the OSC Listener to 1, 2, and 3. In our example configuration, these should correspond to the X, Y, and Z position of the head respectively.
  4. Ask your performer to move around the activation space. You should see the “value” output of each OSC listener changing rapidly as they move around.

At this point, you can connect the value output of the OSC Listener actors to virtually any numeric input within Isadora to control that value with the movements of the performer’s body.
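
If you want to test this part of your patch without a performer (or even without the Kinect attached), a small Processing sketch can stand in for the camera by broadcasting made-up skeleton values over OSC. The sketch below is only a rough stand-in, not part of the tutorial files: it uses the oscP5 library, assumes Isadora is listening on its default OSC input port (1234), and uses placeholder address patterns (“/joint/0” through “/joint/14”) rather than the addresses the tutorial’s sketch actually sends, so you will need to let Auto-Detect Input pick up the addresses and renumber the ports again while it is running.

    // A stand-in for the Kinect sketch: broadcasts 15 fake body points
    // (three floats each) over OSC so you can test value scaling and
    // triggers in Isadora without a performer.
    // Assumptions: Isadora runs on the same machine and listens on its
    // default OSC input port, 1234; the "/joint/n" addresses are
    // placeholders, not the addresses used by the tutorial's sketch.
    // Requires the oscP5 library (Sketch > Import Library).
    import oscP5.*;
    import netP5.*;

    OscP5 osc;
    NetAddress isadora;

    void setup() {
      osc = new OscP5(this, 9000);                  // local port (not used for sending)
      isadora = new NetAddress("127.0.0.1", 1234);  // Isadora's OSC input
      frameRate(30);
    }

    void draw() {
      for (int i = 0; i < 15; i++) {
        OscMessage m = new OscMessage("/joint/" + i);
        // Fake positions in millimeters, roughly the range a Kinect reports
        m.add(1000 * sin(frameCount * 0.02 + i));   // X
        m.add(500 * cos(frameCount * 0.02 + i));    // Y
        m.add(2500.0);                              // Z: about 2.5 m from the sensor
        osc.send(m, isadora);
      }
    }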

The Kinect Point User Actor

Since the Kinect skeleton produces 45 separate data values, it is very useful to group the individual OSC Listener actors by body point inside a user actor. We have created a user actor that handles this grouping for you.


With this user actor, we only need to remember the point assignment for each major body part, numbered 1 to 15, instead of 45 different port numbers. By changing the “Channel” input on the user actor, Isadora automatically calculates the corresponding OSC port numbers for the X, Y, and Z of that point. The X, Y, and Z data of the point are then returned as outputs, ready for you to connect to any interactive input. The channel assignments are as follows:


Channel    Body Point
1          Head
2          Neck
3          Torso
4          Left Shoulder
5          Left Elbow
6          Left Hand
7          Right Shoulder
8          Right Elbow
9          Right Hand
10         Left Hip
11         Left Knee
12         Left Foot
13         Right Hip
14         Right Knee
15         Right Foot

(As with the port numbers, confirm these assignments against your own Stream Setup window; the exact order can vary depending on the sketch you are running.)


Tips on Working with the Kinect Skeleton Data

The positional data reported by the Kinect is measured in millimeters. As a result, the values are so large as to be somewhat unwieldy within Isadora.

Instead of linking the raw OSC Listener output directly to Isadora’s interactive inputs, it is often useful to connect a Calculator actor to the output of each OSC Listener actor and divide the incoming Kinect data by 10, or even 100.

Of course, you may need to experiment extensively to develop the value scaling that suits the needs of your project. Consider carefully the movements you want to track, and why you want to track them!

The Min Value Hold, Max Value Hold, and Limit-Scale Value actors are also very useful for scaling the Kinect data, particularly if you don’t need the full range of motion from a particular body part.
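
For reference, this kind of limit-and-scale operation is an ordinary linear re-mapping with the input clamped to a chosen range. A minimal sketch of the arithmetic is shown below; the ranges used (0–3000 mm in, 0–100 out) are made-up examples, not values from the tutorial.

    // Limit-and-scale as code: clamp a raw Kinect value to an input range,
    // then map that range onto an output range.
    // The ranges below are made-up examples -- substitute your own.
    float limitScale(float value, float inMin, float inMax,
                     float outMin, float outMax) {
      float clamped = constrain(value, inMin, inMax);
      return map(clamped, inMin, inMax, outMin, outMax);
    }

    void setup() {
      float z = 2500;                           // a raw Z position in millimeters
      println(limitScale(z, 0, 3000, 0, 100));  // about 83.3
      exit();
    }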

Finally, the Comparator actor can be used to create “triggers” that activate scenes, audio files, or movies when a body part passes through a certain point in space.
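
In code form, that kind of trigger is simply an edge test: fire once when a value crosses a threshold, rather than continuously while it stays beyond it. The sketch below illustrates the idea with a made-up 1500 mm threshold; in Isadora you would build the equivalent around the Comparator actor and whatever you want it to trigger.

    // A Comparator-style trigger as code: fire once each time a value
    // crosses a threshold, rather than continuously while it is beyond it.
    // The 1500 mm threshold is a made-up example.
    float threshold = 1500;
    boolean wasBeyond = false;

    void checkTrigger(float z) {
      boolean isBeyond = (z < threshold);   // e.g. performer closer than 1.5 m
      if (isBeyond && !wasBeyond) {
        println("trigger!");                // start a movie, jump scenes, etc.
      }
      wasBeyond = isBeyond;
    }

    void setup() {
      // Simulate a performer walking toward the sensor and stepping back once
      float[] zs = { 3000, 2500, 2000, 1400, 1300, 1600, 1200 };
      for (float z : zs) checkTrigger(z);   // prints "trigger!" twice
      exit();
    }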
