Using Kinect to Puppeteer my Avatar? My Arms are Getting Tired Just Thinking About It

Wagner James Au pointed me to his New World Notes blog post about Microsoft’s Kinect – hooked up to Second Life. It highlights a video made by Thai Phan of ICT showing a way to control SL avatars using Kinect.

The Kinect offers huge potential for revolutionizing user interaction design. Watch this video and you will agree. But I would warn readers against the knee-jerk conclusion that the ultimate goal of gestural interfaces is to allow us to just be ourselves. Let me explain.

Multi-Level Puppeteering

Either because of the clunky interfaces to the Second Life avatar, or because Thai is smart about virtual body language messaging (or both), he has rigged up the system to recognize gestures – emblems, if you will. These are interpreted into semantic units that trigger common avatar animations. One nice procedural piece he added is the ability to start an animation, hold it in place, and then stop it, all driven by the user’s movements.
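(For the technically curious, here’s a rough sketch of how such a gesture-to-animation layer might be wired up. All the names below are hypothetical – this is not Thai’s actual code, nor the real Second Life API.)

```python
# A minimal sketch of the gesture-to-animation idea described above.
# GESTURE_TO_ANIM and send_command are hypothetical stand-ins.

# Emblematic gestures recognized from Kinect input, mapped to
# named avatar animations that already exist in the virtual world.
GESTURE_TO_ANIM = {
    "wave": "avatar_wave",
    "arms_crossed": "avatar_arms_crossed",
    "point": "avatar_point",
}

class AnimationPuppeteer:
    """Triggers avatar animations from recognized gestures,
    holding each animation in place until the user releases it."""

    def __init__(self, send_command):
        self.send_command = send_command  # callable into the avatar layer
        self.active = set()

    def on_gesture_start(self, gesture):
        anim = GESTURE_TO_ANIM.get(gesture)
        if anim and anim not in self.active:
            self.active.add(anim)
            self.send_command(("START", anim))  # start and hold the pose

    def on_gesture_end(self, gesture):
        anim = GESTURE_TO_ANIM.get(gesture)
        if anim in self.active:
            self.active.discard(anim)
            self.send_command(("STOP", anim))   # release the pose
```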

Wagner James Au points out the powerful effect and utility that would come about if we had a more direct-manipulation approach, whereby all the body’s motions are mapped onto the avatar. He makes an open call for “different variations of Kinect-to-SL interaction, experimenting with the most natural body-to-avatar UI”. He cites the avatar puppeteering work I did while I was at Linden Lab. He also cites the continuing thread among residents hoping to have puppeteering revived. We are all of similar minds as to the power of direct-manipulation avatar animation in SL.

But, as my book points out, direct gestural interfaces are not for everyone, and … not all the time! Also, some people have physical disabilities, and so they cannot “be themselves” gesturally. They have no choice but to use virtual body language to control their avatar expressions.

And for people like Stephen Hawking, the Kinect is useless. Personally, I would LOVE it if someone could invent a virtual body language interface as an extension to Hawking’s speech synthesizer, to drive a Hawking avatar. Wouldn’t it be cool to witness the excitement in Hawking’s whole being as he describes the Big Bang?

Throttling the Gestural Pipeline

But, getting back to the subject of those of us who are able to move our bodies…

The question I have is…WHEN is whole-body gestural input a good thing, and WHEN is it unnecessary and cumbersome? Or moot?

Here’s a prediction: eventually we will have Kinect-like devices installed everywhere – in our homes, our business offices, even our cars. Public environments will be outfitted with the equivalent of a Vicon motion capture studio. Natural body language will be continually sucked into a multitude of ubiquitous computer input devices. They will watch our every move.

With the likelihood of large screens or augmented reality displays showing our avatars among remote users, we will be able to have our motions mapped onto our avatars.

Or not.

And that’s the point: we will want to be able to control when, and to what degree, our movements get broadcast to the cloud. Ultimately, WE need to have the ability to turn on or off the distribution of our direct-manipulation body language. The Design Challenge is how to provide intuitive controls.
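To make the design challenge concrete, imagine a throttle sitting between the capture device and the network, entirely under the user’s control. Here’s a minimal sketch, with made-up names (nothing here is a real Kinect or SL API):

```python
# A minimal sketch of a user-controlled "gestural throttle":
# the user decides whether, and how much of, their captured motion
# is broadcast. All names here are hypothetical.

class GesturalThrottle:
    def __init__(self):
        self.broadcasting = False  # master on/off switch
        self.gain = 1.0            # 0.0 = nothing leaves the room, 1.0 = full motion

    def set_broadcast(self, enabled):
        self.broadcasting = enabled

    def set_gain(self, gain):
        self.gain = max(0.0, min(1.0, gain))

    def filter(self, rest_pose, captured_pose):
        """Blend between a neutral rest pose and the captured pose
        before anything is sent to the cloud."""
        if not self.broadcasting:
            return None  # nothing gets distributed
        return [r + self.gain * (c - r)
                for r, c in zip(rest_pose, captured_pose)]
```

The point of the gain parameter is that broadcast need not be all-or-nothing: you might share only a muted version of your motions – the gestural equivalent of speaking in a lower voice.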

The Homuncular Kinection

I have a crew of puppeteers in my brain: body maps, homunculi, controllers in my prefrontal cortex, mirror neurons, and other neural structures. So that my arms don’t get tired from having to do all my avatar’s gesturing, my neural puppeteers are activated when I want to evoke mediated body language. I do it in many ways, and across many media (including text: e.g., emoticons).

Stephen Hawking also has homuncular puppeteers in his brain.

Ultimately, I want several layers of control permitting me to provide small-motion substitutions for large motions, or complex bodily expressions. Virtual Body Language will permit an infinite variety of ways for me to control my avatar…including wiggling my index finger to make my avatar nod “yes”.
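Here’s a minimal sketch of what one such layer of control might look like – a substitution table mapping small real-world movements onto larger avatar expressions (all names hypothetical):

```python
# A minimal sketch of small-motion substitution: a tiny real-world
# movement stands in for a larger avatar expression. The trigger
# names and the play_expression() callback are hypothetical.

SUBSTITUTIONS = {
    "index_finger_wiggle": "nod_yes",      # the example from the text
    "small_shoulder_shrug": "big_shrug",
    "eyebrow_raise": "full_body_surprise",
}

def route_micro_gesture(trigger, play_expression):
    """Map a detected micro-gesture onto a full avatar expression."""
    expression = SUBSTITUTIONS.get(trigger)
    if expression is not None:
        play_expression(expression)

# Usage: route_micro_gesture("index_finger_wiggle", my_avatar.play)
# makes the avatar nod "yes" without the user moving their head.
```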

THIS is where the magic will happen for the future of Kinect interfaces for avatar puppeteering.

4 Responses to Using Kinect to Puppeteer my Avatar? My Arms are Getting Tired Just Thinking About It

  1. I’d be interested in using this capability to work with patients who don’t comply with their exercise regimen because they have co-morbid depression or body image issues that prevent them from attending group exercise (which adds peer pressure). If they could try from home, with an SL class expecting them, and Kinect showed what they were doing in RL… would that increase compliance and eventually help them overcome their RL issues regarding exercise?

  2. OMG
    You got a blog now Jeffrey?

    Loving the articles, please keep going! This is incredibly interesting stuff.

  3. Magnificent post, very informative. I wonder why the other experts in this sector don’t notice this. You should continue your writing. I’m confident you already have a huge reader base!

  4. Evan says:

    This post is really a nice one; it helps new net visitors who are interested in blogging.
