Biomechanics of Expressivity: Literature Review

Expressivity in human motion is traditionally described in qualitative terms. When analyzed quantitatively, it is most often treated biomechanically as the movement of joints through space. However, by inspecting the biomechanical “cost” of bodily motion, we may be able to describe the expressive (and affective) content of non-verbal communication between humans in even richer quantitative terms.

Expressive comes from a Latin root meaning to press out. That is to say, even the etymology of expressive contains some idea of a force necessary to convey a thought or feeling. A review of the relevant literature certainly reveals that expressivity has been analyzed from the standpoint of biomechanics. However, the existing work seems concerned chiefly with kinematics: describing the nature of the movement of limbs and joints through space. Typically, this work is concerned with:

  • Frameworks for describing motion, often borrowed from theater and dance
  • Categorizations of emotion exhibited in movement
  • Mapping human motion to animation

Further, the existing work looks at individuals. By contrast, work involving two or more humans tends to take the form of gesture studies of the non-verbal language between individuals, without a biomechanical perspective. Finally, when these motions are measured and recorded, expensive motion capture systems are typically employed.

My proposed exploration of this area will take a number of different approaches. I propose examining expressivity as a biomechanical phenomenon incorporating not only kinematics but also a simple kinetic model of human motion. Specifically, the goal is to correlate emotional “force” with the forces involved in expressive gestures — i.e. bigger emotional displays require bigger biomechanical forces. Such expressive gestural displays are most typically found in the interactions of two or more people. Consequently, this work will look at non-verbal gesturing between individuals. Lastly, where fine-grained motion analysis with expensive equipment is typical of existing approaches, this work will use coarse-grained analysis by way of inexpensive depth cameras such as the Kinect.

Works Consulted:

  1. Behavioral Biomechanics Lab, Department of Kinesiology, University of Michigan

    The Behavioral Biomechanics Lab studies the connection of emotional expression and biomechanics. The lab’s stated purpose and its project list (http://www.sitemaker.umich.edu/mgrosslab/projects) make plain the emphasis on only the kinematics of emotion in individuals. Their linked work reveals a heavy reliance on elaborate motion capture systems.

  2. Crane, Elizabeth, and Melissa Gross. “Methodological Considerations for Quantifying Emotionally Expressive Movement Style.” University of Michigan, Ann Arbor, 2007.

    The authors (part of the Behavioral Biomechanics Lab at the University of Michigan) note that whole body movement analysis is difficult and that methods for analyzing it often rely on coding rather than quantitative measures. The authors’ solution to this problem is to use motion capture systems to analyze the kinematics of body movement. The paper discusses many methodological issues in yielding good motion capture from test subjects.

  3. Pelachaud, Catherine, et al. “Expressive Gestures Displayed by a Humanoid Robot during a Storytelling Application.” New Frontiers in Human-Robot Interaction (AISB), Leicester, GB (2010).

    Here the authors describe early work to present stories read by a robot in a physically expressive manner complementary to the story. Ultimately, the authors develop a description framework from a video corpus of story readers that allows a robot to mimic human expressiveness. The approach regards expressivity as a matter of replicating kinematics.

  4. Hertzmann, Aaron, Carol O’Sullivan, and Ken Perlin. “Realistic human body movement for emotional expressiveness.” ACM SIGGRAPH 2009 Courses. ACM, 2009.

    The authors present an overview of the issues of expressiveness in animation as a course for SIGGRAPH 2009. While they cover many issues already noted elsewhere in this review, they also touch on the early failures and current progress toward physics-based (i.e. truly biomechanical) character animation. Rather than simply replaying the kinematics of a motion capture (with some latitude), a fully physics-based character could move through space as a body truly would, regardless of circumstance. The authors present small mathematical models necessary for such animation.

  5. Meyerhold’s Biomechanics

    Vsevolod Meyerhold developed a method of actor training he called Biomechanics (only loosely related to the engineering discipline). The practice and training are built around the interrelation of psychology and physiology to aid in emotional expression on stage.

  6. Chi, Diane M. A motion control scheme for animating expressive arm movements. Diss. University of Pennsylvania, 1999.

    This dissertation evaluates existing biomechanical models as far too limited to address the fine aspects of human motion for use in analyzing and replicating expressive movement. Consequently, the author chooses to draw from artistic approaches to movement, ultimately proceduralizing components of Laban Movement Analysis to produce expressive arm movements in an animated figure.

Movement on Project Ideas

Skin Distortion Tracking

This is a tricky problem. Bend and stretch sensors are generally uni-axial (sensing in one direction). Further, applying them to the skin will alter the movement of the skin they might try to measure. But NYU’s very own Movement Lab may have just the thing in its Nonrigid Motion Acquisition and Modeling work. A dancer or model with a structured pattern of marks applied to the skin surrounding his or her joints could potentially be analyzed using the Movement Lab’s video-based nonrigid motion acquisition and modeling techniques.

Kinect / Optical Motion Capture Accuracy Comparison

The ideal setup for comparing the inexpensive Kinect to a much more capable (and expensive) system such as that produced by Vicon is to run both simultaneously, performing the same motion capture. However, as both use infrared light (in different ways), this seemed highly problematic. After some digging, I learned that the Vicon cameras have multiple lighting modes that should work fine alongside a Kinect. And, in fact, this flying robot that relies on both a Kinect and a Vicon system demonstrates that the two can play nicely together.

All this being said, further searching revealed that several recent projects have already compared the accuracy of the Kinect against a traditional optical motion capture system. One looks at passive in-home measurement of stride-to-stride gait variability. Another compares the two systems in terms of human perception of their resultant data streams. And a third project conducted a fairly extensive biomedical validation of upper and lower body joint movements between the two technologies. In terms of a class project, this work could be validated and/or extended to motions not already specifically tested.

Biomechanics of Expressivity

My eventual dissertation work in Human Computer Interaction is likely to involve free-space gesture. I am particularly interested in the idea of sensing the gesturing between humans (i.e. dyadic non-verbal communication) as an input to a system, as opposed to gesturing directly at a system (e.g. Kinect-based games). A component of working with such human-to-human interaction is the expressivity of the individuals interacting with one another. Of course, expressivity can be a highly subjective term, incorporating senses of artistry, subtlety, liveliness, etc. in different contexts.

The combination of expressivity and biomechanics has attracted little work in the academic literature (such references do appear fairly often in discussions of dance and theater — though not usually in a strict engineering sense). Most often, expressive biomechanics appears in the context of computer animation or of human mimicry in robots. For my purposes, I’d like to begin working with expressivity by objectively measuring its “energy” content in an individual and/or between individuals. Quantifying expressivity — especially between people — could be used to automatically code video of interactions, act as an input to novel games, or serve as an input to environmental control.

A naive way to do this might be to analyze frame-to-frame change in 2D video or volume-to-volume change in 3D depth sensing, as in the sketch below. However, a far more nuanced approach might be to use knowledge of the range of motion of human joints to yield a measure of expressivity. Calculating the kinematics of the human body using a technology like the Kinect could produce a reasonably objective measure of body language expressivity.
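
As a concrete (if crude) illustration of the frame-differencing idea, here is a minimal sketch in Python, assuming OpenCV and a webcam at device index 0; the mean absolute difference between frames is only a rough stand-in for true expressivity.

```python
# Minimal sketch: crude "expressivity" as mean frame-to-frame pixel
# change. Assumes OpenCV (cv2) and a webcam at device index 0.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
if not ok:
    raise RuntimeError("could not read from camera")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Mean absolute difference between consecutive frames is a rough
    # proxy for how much motion is present in the scene.
    motion = cv2.absdiff(gray, prev).mean()
    print("motion energy: %.2f" % motion)
    prev = gray

cap.release()
```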

I propose tracking joint orientation and limb acceleration and velocity up the kinetic chain of the body, possibly incorporating an estimated model of human mass to properly weight (in the mathematical sense) a total expressivity. For example, free-space motions at the shoulder require greater force than motions at the wrist. It so happens that motions at the shoulder also tend to yield larger motions in volumetric space than motions at the wrist. Thus, the total measured forces and subjective evaluations of expressivity are likely well correlated. Clearly, a precise definition of expressivity is needed, but it is my hope that this sketch communicates the idea well enough. A further extension might be to sum all gestural vectors to arrive at a single “affective vector.” For example, a downward affective vector might map to boredom or embarrassment.
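
To make the weighting concrete, here is a minimal sketch of a mass-weighted expressivity measure, assuming per-frame joint positions have already been extracted from a depth camera such as the Kinect. The segment mass fractions and the function name are illustrative assumptions, not settled values.

```python
# Minimal sketch: mass-weighted "expressivity" as summed kinetic
# energy of arm segments. Mass fractions are rough, illustrative
# values; real work would use published anthropometric tables.
import numpy as np

MASS_FRACTION = {"hand": 0.006, "forearm": 0.016, "upper_arm": 0.028}

def expressivity(frames, body_mass_kg, dt):
    """frames: sequence of {joint_name: np.array([x, y, z])} in meters,
    sampled every dt seconds."""
    total = 0.0
    for prev, curr in zip(frames, frames[1:]):
        for joint, frac in MASS_FRACTION.items():
            v = (curr[joint] - prev[joint]) / dt    # joint velocity (m/s)
            m = frac * body_mass_kg                 # estimated segment mass
            total += 0.5 * m * float(np.dot(v, v))  # kinetic energy term
    return total
```

Summing the per-segment velocity (or force) vectors themselves, rather than their scalar energies, would yield the single “affective vector” described above.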

Body language is most fully expressed between two people. While the Kinect can track two skeletons, it generally works best facing the individuals it is tracking. So measuring expressivity between two individuals facing one another will likely require an arrangement of two Kinects. The two devices’ overlapping lines of sight might cause them to interfere with one another, and their sensing angles could also be less than ideal. I believe these problems can be solved with two different approaches: vibrating the cameras to eliminate interference, and using an SDK other than the default one that can sense the human form even at oblique orientations to the Kinect itself.

MYO — Incredible new wearable gesture sensing

The MYO is a wearable armband that senses the electrical activity of the muscles in the forearm — down to the motion of individual fingers. It also features a 6-axis inertial measurement unit. By combining motion sensing and muscle activity sensing, it can do incredible things. Interestingly, because it measures electrical activity in the forearm muscles, gesture sensing can seem instantaneous: the muscles fire before the hand or fingers start to move. Read the FAQ for more. Pre-orders are open now, with devices available in late 2013. A future class is going to have a lot of fun with this thing.

http://www.youtube.com/watch?v=oWu9TFJjHaM


Sharp criticisms of a redesigned pointy object

The title of our class is Biomechanics for Interactive Design. That is, we are being introduced to the field of biomechanics with an eye toward applying an understanding of the mechanics of bodies in design applications. To that end, we read the paper entitled “The effect of a new syringe design on the ability of rheumatoid arthritis patients to inject a biological medication,” co-authored by our recent guest speaker Dan Formosa.

In summary, the work represented in the paper succeeds at applying an understanding of biomechanics and design to solve a narrowly defined problem: maximizing the conversion of hand strength to measurable force in a syringe. However, the paper itself fails on several counts to properly limit its scope and to support its explicit and implicit arguments. Further, the design solution itself, as presented, is problematic with regard to the true goal of the project. First, I will address the success of the work as represented. Then, I will delineate the various problems with the paper and with the assertions and assumptions made.

At its heart, the paper successfully demonstrates the connection between the design changes from the standard R syringe to the new N syringe and an increase in injection force applied by a test population of users suffering from rheumatoid arthritis. The numbers show quite conclusively that the new N syringe successfully translates existing hand strength to a measurably improved injection force within the study setup. The study method is sound, and the improvement in force numbers is significant enough to outweigh any methodological nitpicking. The interrelation of biomechanics and design successfully led to greater injection force. Intuitively, the design changes are good ones and map well to the corresponding evidence. This is commendable work involving a number of smart insights.

Beyond the preceding, however, the authors make problematic claims and imply they have successfully solved a problem when, in fact, the solution has yet to be conclusively demonstrated.

The objective measures of force in use that the authors present are sound. However, the subjective measures of preference and comfort as collected by structured survey are notably questionable. First of all, the study participants could very likely have experienced priming effects. The N syringe is explicitly and/or implicitly presented as “new and improved” in language and appearance. This could significantly influence study participants’ opinions. Further, the study participants received an honorarium, possibly predisposing them to pleasing their study administrators. Further still, given the generally high place of medicine in our society and empathy for the target user population, the prospect of helping medical product design could sway study participants’ opinions in favor of the new design.

The authors state that the target user population, in fact, makes use of several grip styles; only one was tested. Within this constraint, the force numbers look solid and are persuasive toward the design solution. Yet the product design cannot be considered successful, given the aims of its application, unless it leads to better injections overall, accomplished by a larger percentage of the target population as compared to the R syringe. If the new design limits grip style or inhibits the application of force in other grips, it may have optimized a single grip at the cost of the others, and thus fail overall. It may even be that the existing R syringe affords a greater level of injection success on average than the new N design. To their credit, the authors call out grip styles as well as the issue of varying injection angles, but they do not appropriately scope the success of their work in the context of these issues.

Finally, the most significant criticism I have to offer concerns the false attribution of success in a numerically quantified ergonomic change to the larger aims of the product design. In short, if the target population does not adopt the new syringe design and, further, if the new design does not yield improved injections, the design is a failure. These are issues of user behavior and efficacy that must be addressed in the field, and in a way that limits priming effects — most likely through some sort of double-blind study over a statistically meaningful time period and test population. The authors subtly suggest that their measurable success in a limited aspect of ergonomically improved design will positively correlate with behavioral change and overall injection success. This is yet to be seen.

The authors overreach in their assertions as to user comfort and preference and in their implication of a successful design intervention. Within a limited context, the application of biomechanics and ergonomic design is sound and to be commended. The other results collected are unsound, as is the larger assumption of product design success.

This paper illustrates what I believe may be a central challenge in incorporating biomechanics into interactive design. By virtue of the engineering and the numbers attached to biomechanics, we may be tempted to label an entire project a success for the sake of only those aspects of the design that are quantified before us.

Free Bodies. Diagrammed… Shoulder in the Coronal Plane

I wrote a Kinect-based application I call “Free Bodies. Diagrammed.” whose purpose is to reveal the real physics of a physical experience.

The range of motion of a healthy human shoulder moving in the coronal plane is not particularly difficult to measure. With no tools and only a brief explanation of basic geometry and anatomical planes, the average person could easily identify 180+ degrees of rotation simply by moving their arm. That said, I found the notion of interactively demonstrating some of the basic physics we have covered in class to be a worthy project for our second assignment.

The application maps the rotation of the left shoulder in real time and performs several other measurements and calculations along the way. It measures the angle of travel in the coronal plane away from the position of the arm at rest. It also measures the distance of the left hand from the left shoulder (only in the coronal plane). Inspired by the example in class, from these it instantaneously calculates and displays the torque necessary to statically suspend an imaginary can of soup at one’s side (the arm itself is considered to be massless). The intended use is to hold a can of soup and feel the experience of the torque while mapping the visceral experience to a live data display. Varying one’s shoulder angle, and even bending one’s elbow, instantaneously reveals the changes in torque felt at the shoulder. In this way, the relationship of angle and perpendicular distance to torque can be directly experienced, bodily and mentally.
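
For reference, here is a minimal sketch of the torque calculation under the same assumptions stated above (a massless arm, the can treated as a point mass in the hand); the function name and coordinate convention are mine, not the application’s actual code.

```python
# Minimal sketch: static torque about the shoulder from a held soup
# can. Torque = weight of the can times the perpendicular (horizontal)
# distance from shoulder to hand in the coronal plane.
G = 9.81  # gravitational acceleration (m/s^2)

def shoulder_torque(can_mass_kg, shoulder_xy, hand_xy):
    """shoulder_xy, hand_xy: (x, y) positions in meters, projected
    onto the coronal plane."""
    lever = abs(hand_xy[0] - shoulder_xy[0])  # perpendicular lever arm
    return can_mass_kg * G * lever            # torque in newton-meters

# Example: a 0.35 kg can held 0.6 m out to the side yields about
# 0.35 * 9.81 * 0.6 ≈ 2.06 N·m at the shoulder.
print(shoulder_torque(0.35, (0.0, 1.4), (0.6, 1.3)))
```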

While not a perfect approach, largely ignoring the depth components of the skeleton makes the X-Y plot of the joint data a natural projection of the coronal plane of the body. The Kinect can track up to two skeletons. By using the calculated arm length and shoulder angle together with various joint coordinates, the application is able to draw the data hovering near relevant locations on the body; in this way, two skeletons and their data can be naturally displayed (people cannot occupy the same points in 3D space, and thus, as an added benefit, will naturally position themselves for the best data display).
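
A minimal sketch of the projection and angle measurement might look like the following; the joint tuple format and function name are hypothetical stand-ins for whatever the Kinect SDK actually provides.

```python
# Minimal sketch: project Kinect joints onto the coronal (X-Y) plane
# by dropping z, then measure the shoulder angle away from rest.
import math

def coronal_shoulder_angle(shoulder_xyz, hand_xyz):
    """Angle (degrees) of the shoulder-to-hand vector away from the
    arm's rest position (hanging straight down, along -Y)."""
    dx = hand_xyz[0] - shoulder_xyz[0]
    dy = hand_xyz[1] - shoulder_xyz[1]
    return math.degrees(math.atan2(dx, -dy))

# Example: hand raised straight out to the side, at shoulder height.
print(coronal_shoulder_angle((0.0, 1.4, 2.0), (0.6, 1.4, 2.0)))  # ~90.0
```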


[Screenshots: Coronal Shoulder 1, 2, and 3]

Bodily Orientations Around Mobiles: Lessons Learnt in the Island Nation of Vanuatu

Paper: Bodily Orientations Around Mobiles: Lessons Learnt in Vanuatu [ACM reference]

Abstract:

Since we started carrying mobile phones, they have altered the ways in which we orient our bodies in the world. Many of those changes are invisible to us – they have become habits, deeply engrained in our society. To make us more aware of our bodily ways of living with mobiles and open the design space for novel ways of designing mobiles and their interactions, we decided to study one of the last groups of users on earth who had not been exposed to mobiles: the people of Vanuatu. As they had so recently started using mobiles, their use was still in flux: the fragility of the mobile was unusual to them as was the need to move in order to find coverage. They were still getting used to carrying their mobiles and keeping them safe. Their encounters with mobile use exposed the need to consider somaesthetics practices when designing mobiles as they profoundly affect our bodily ways of being in the world.

Excerpts (much more in full paper):

Brian had apparently completely forgotten the presence of the mobile phones hanging down from his neck, which was an extremely common way for people in Rah to carry their phones around. He was engaged in an everyday activity. As Jorege put it at one time: “sometimes we just lean over to look at something, or to get some water out of the canoe […] we forget that we have the phone on our chest [laughter] and then the phone goes in the water [followed by generalized laughter]”.

Above we identified the […] following themes:

  • Somaesthetic implications: tensions in posture or muscles from wearing mobiles
  • Competing for bodily space: wearing the mobile requires finding a ‘space’ on your body where it can be worn

We need to better understand how technology, like mobiles, alters our bodily ways of being in the world: the movements of our body, the stiffening of certain muscles, the way we move through the landscape, how we appropriate it, wear it and find bodily and social space for it. Obviously, this process is developing over time – we get a socio-digital material (or socio-bodily-digital material) that is over time, more or less, fitted to the setting. But, by altering the design, we might alter body schemas to be better adjusted to social norms, bodily practices, but also better adjusted to what is somaesthetically pleasing — giving rise to better experiences with the device.

Dr. Cynthia Breazeal: Biomechanics in Social Robotics

Dr. Breazeal is an Associate Professor of Media Arts & Sciences at the MIT Media Lab and is the founder and director of the Personal Robots Group. Her background is in electrical and computer engineering as well as computer science. Her unique contributions to the field of biomechanics are related to human-robot interaction. Breazeal’s bio explains:

Her research focuses on developing the principles, techniques, and technologies for personal robots that are socially intelligent, interact and communicate with people in human-centric terms, work with humans as peers, and learn from people as an apprentice.

Breazeal’s work incorporates the biomechanics of human interactions in robotic form. The hardware and software systems she and her group build aim to create natural, comfortable, socially appropriate interactions between humans and robots through these robots’ motions, posture, and facial expressions — behavioral biomimetics might be an appropriate term here. A related direction to this work is Breazeal’s interest in building robots able to recognize and mimic human actions as training for task completion (instructing robots through demonstration rather than programming).

Publications

[Image: Biologically Inspired Intelligent Robots book cover]

Dr. Breazeal has over 100 publications to her credit, including three books. Of perhaps greatest interest to those in the field of biomechanics is the book she edited entitled Biologically Inspired Intelligent Robots.

From the book’s summary:

“Advances in biologically-inspired technologies, such as artificial intelligence and artificial muscles, are making the possibility of engineering robots that look and behave like humans a closer reality. The multidisciplinary issues involved in the development of these robots include materials, actuators, sensors, structures, functionality, control, intelligence, and autonomy. This book reviews various aspects ranging from the biological model to the vision for the future.”

Selected Projects

Cog

As a graduate student, Breazeal worked with Dr. Rodney Brooks at the MIT Artificial Intelligence Lab. The robot Cog was a project of Brooks’ Humanoid Robotics Group in the late 1990s and early 2000s. Breazeal was Cog’s chief architect. The impetus for Cog was to explore intelligence as an embodied phenomenon. That is, Cog’s AI systems were to learn by doing and experiencing the world physically as we humans do — akin to human babies.

Cog was built with the biomechanics and sensory abilities of a human torso. For instance, its visual system simultaneously incorporated wide-angle and fine-focus video and replicated our saccades (the rapid movements of the human eye between precise fixation points). Cog’s limbs were built to respond with the elasticity of human joints and connective tissues rather than the rigidity of a traditional industrial robotic arm.

Breazeal’s experiences with Cog inspired her future direction.

The following video gives a good introduction to Cog (note: Breazeal does not make an appearance in this particular video).

Nexi

In more recent years, Breazeal’s attention has turned to social robotics. Nexi is an instance of Breazeal’s group’s Mobile/Dexterous/Social project. Nexi’s dexterity moderately approximates that of human arms and hands (a great improvement over Cog). Nexi’s neck, head, and face were specifically designed to recreate human facial expressions and posture and to do so with the speed of the human musculoskeletal system.

As an example of Nexi’s abilities, she has been used to study human trust in relation to social cues [see video accompanying article]. In conversation with human study participants, Nexi was used to reliably repeat facial and posture cues thought to be related to deception — even the best human actors could not hope to do so with sufficient consistency for controlled studies. After interacting with Nexi, study participants played a variation of the classic prisoner’s dilemma game that included a financial reward; their decisions in this game, measured in terms of the bets placed, were primed by the level of trustworthiness Nexi expressed.

The following video shows a demonstration of Nexi’s dexterity and emotive expressiveness.