Biomechanics of Expressivity: Visualization Work

Expressivity Visualization [front]

As I’ve noted previously, meaningfully visualizing my expressivity vectors presents certain challenges. The dimensionality of the data is difficult to take in from a single vantage point. A good sense of motion seems to be key, even though the visualization is somewhat abstracted from the full range of motion of an upper body. And, of course, since this is human expression, I want the humanness to come through in the visualized numbers.

In our last class period, Sanniti and I looked at several different options for better visualizing my output. In the end I settled on an approach incorporating spheres in a 3D volume.

The images at right are screenshots of a Processing sketch I used to work out the details. The viewing volume is navigable by mouse drags thanks to an available camera library. (Another library, well suited to animating paths in space, proved to be broken, so I had to abandon it.) The position and size of the spheres are controlled by simple keyboard input. These screenshots show front and top views of the same arrangement of spheres. Note the small cubes that mark the locations of the individuals being tracked.
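The post doesn’t name the camera library, so the sketch below assumes PeasyCam, a common Processing library for mouse-drag orbit cameras; everything else (sphere layout, key bindings, the marker cube) is illustrative rather than the original code:

```processing
// Minimal Processing sketch along these lines: spheres in a 3D volume,
// an orbit camera driven by mouse drags via PeasyCam (an assumption --
// the post does not name the library), and keyboard control of size.
import peasy.*;

PeasyCam cam;
float sphereSize = 40;

void setup() {
  size(800, 600, P3D);
  cam = new PeasyCam(this, 600);  // camera starts 600 units back
}

void draw() {
  background(30);
  lights();
  // a simple arrangement of spheres
  for (int i = -1; i <= 1; i++) {
    pushMatrix();
    translate(i * 120, 0, 0);
    sphere(sphereSize);
    popMatrix();
  }
  // a small cube marking a tracked individual's location
  pushMatrix();
  translate(0, 150, 0);
  box(15);
  popMatrix();
}

void keyPressed() {
  if (key == '+') sphereSize += 5;                   // grow spheres
  if (key == '-') sphereSize = max(5, sphereSize - 5); // shrink spheres
}
```

Dragging the mouse orbits the camera around the arrangement, which is what makes the front and top views in the screenshots reachable from a single running sketch.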

Expressivity Visualization [top]

I first tried various schemes with elongated spheres (ellipsoids). These proved both unwieldy to generate and too similar to the simple (and underwhelming) pylon in 3D space I’ve been using for development. A simple arrangement of spheres is much more compelling. Size maps to the magnitude of expressivity, which naturally conveys the energy of the gesturing. Though not quite depicted here, the Processing sketch sandbox let me play with scaling limits and position limits. The Kinect data lets me discover the approximate size of individuals in front of the system, so I can choose arm length (rather than free-floating markers in space) as a limit on the scaling and placement of the spheres. Though I can’t yet see this in use, I believe it will provide a subtle amount of context that maps to human dimensions.
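The mapping itself is simple enough to sketch in isolation. This is a hypothetical reconstruction, not the original code: the function names, the 0.5 arm-length fraction, and the sample values are all assumptions, but it shows the shape of the idea of scaling sphere size by expressivity magnitude and clamping against a per-person arm length estimated from the Kinect skeleton:

```java
// Sketch of mapping an expressivity magnitude to a sphere radius,
// clamped by an arm-length estimate from Kinect skeleton data.
// Names and constants here are illustrative assumptions.
public class SphereScaling {
    // Linear remap from [inLo, inHi] to [outLo, outHi],
    // mirroring Processing's built-in map().
    static float map(float v, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (outHi - outLo) * (v - inLo) / (inHi - inLo);
    }

    static float clamp(float v, float lo, float hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    // Radius grows with expressivity magnitude but never exceeds
    // half the tracked person's arm length (assumed fraction),
    // so the spheres stay on a human scale.
    static float sphereRadius(float magnitude, float maxMagnitude, float armLength) {
        float r = map(magnitude, 0f, maxMagnitude, 0f, armLength * 0.5f);
        return clamp(r, 0f, armLength * 0.5f);
    }

    public static void main(String[] args) {
        // e.g. magnitude 0.8 of 1.0, arm length 60 cm -> radius 24.0
        System.out.println(sphereRadius(0.8f, 1.0f, 60f));
    }
}
```

The clamp is what ties the visualization back to the tracked person: no matter how energetic the gesture, a sphere never grows past the reach of the body that produced it.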

I also found that there’s really no good way to absorb the full dimensionality of the data at once (at least not without far more time and work than I have available). But being able to freely move and zoom the camera significantly helps a viewer build up a mental model of what they are seeing on screen.
