Screensharing: Don’t Look at Me

January 11, 2012

Imagine discussing a project you are doing with a small group: a web site, a drawing, a contraption you are building; whatever. You would not expect the others to be looking at your face the whole time. Much of the time you will all be gazing around at different parts of the project. You may be pointing your fingers around, using terms like “this”, “that”, “here” and “there”.

When people have their focus on something separate from their own bodies, that thing becomes an extension of their bodies. Bodymind is not bound by skin. And collaborating, communicating bodyminds meld on an object of common interest.


The internet is dispersing our workspaces globally, and the same is happening to our bodies.

The anthropologist Ray Birdwhistell coined the term “kinesics”, referring to the interpretation, science, or study of body language.

I invented a word: “telekinesics”. I define it as “the science of body language as conducted over remote distances via some medium, including the internet” (ref).

My primary interest is the creation of body language using remote manifestations of ourselves, such as with avatars and other visual-interactive forms. I don’t consider video conferencing to be a form of virtual body language, because it is essentially a re-creation of one’s literal appearance and sounds. It is an extension of telephony.

But it is virtual in one sense: it is remote from your real body.

Video conferencing applications like Skype are extremely useful. I use Skype all the time to chat with friends or colleagues. Seeing my collaborator’s face helps tremendously to fill in the missing nonverbal signals in telephony. But if the subject of conversation is a project we are working on, then “face time” is not helpful. We need to enter into, and embody, the space of our collaboration.

Screen Sharing

This is why screen sharing is so useful. Screen sharing happens when you flip a switch on your Skype (or whatever) application that changes the output signal from your camera to your computer screen. Your mouse cursor becomes a tiny Vanna White – annotating, referencing, directing people’s gazes.

Michael Braun, in the blog post: Screen Sharing for Face Time, says that seeing your chat partner is not always helpful, while screen sharing “has been shown to increase productivity. When remote participants had access to a shared workspace (for example, seeing the same spreadsheet or computer program), then their productivity improved. This is not especially surprising to anyone who has tried to give someone computer help over the phone. Not being able to see that person’s screen can be maddening, because the person needing help has to describe everything and the person giving help has to reconstruct the problem in her mind.”

Many software applications include cute features like collaborative drawing spaces, intended for co-collaborators to co-create, co-communicate, and co-mess up each other’s co-work. The interaction design (from what I’ve seen) is generally awkward. But more to the point: we don’t yet have a good sense of how people can and should interact in such collaborative virtual spaces. The technology is still frothing like tadpole eggs.

Some proponents of gestural theory believe that one reason speech emerged out of gestural communication was because it freed up the “talking hands” so that they could do physical work – so our mouths started to do the talking. Result: we can put our hands to work, look at our work, and talk about it, all at the same time.

Screen sharing may be a natural evolutionary trend – a continuing thread of this ancient activity – as manifested in the virtual world of internet communications.



Virtual Sentience Requires a Gaze

November 28, 2011

(This blog post is re-published from an earlier blog of mine called “avatar puppetry” – the nonverbal internet.  I originally wrote it in September of 2009. I’ll be phasing out that earlier blog, so I’m migrating a few of those earlier posts here before I trash it).


I was speaking with my colleague Michael Nixon at the School of Interactive Art and Technology. We were talking about body language in non-human animated characters. He commented that before you can imbue a virtual character with apparent sentience, it has to have the ability to GAZE – in other words, to look at something. Which means it has a head with eyes. Or maybe just a head. Or… a “head”.

Here’s the thing about gaze: it pokes out of the local (“lonely”) coordinate system of the character and into the global (“social”) coordinate system of the world and other sentient beings. Gaze is the psychic vector that connects a character with the world. The character “places its gaze upon the world”. Luxo Jr. is a great example of imbuing an otherwise inanimate object with sentience (and lots of personality besides) by using body language such as gaze.
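That local-to-global hop can be made concrete: a gaze is just a world-space direction from a character’s head to a target outside the character’s own coordinate frame. Here is a minimal sketch in Python (the function name and conventions are my own illustration, not from any particular engine):

```python
import math

def gaze_vector(head_pos, target_pos):
    """Unit vector from a character's head to a target, in world space.

    The head lives in the character's 'lonely' local frame; the gaze is
    inherently a 'social', world-space quantity, because its endpoint
    belongs to something outside the character.
    """
    dx = target_pos[0] - head_pos[0]
    dy = target_pos[1] - head_pos[1]
    dz = target_pos[2] - head_pos[2]
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    if length == 0:
        return (0.0, 0.0, 1.0)  # degenerate case: fall back to 'forward'
    return (dx / length, dy / length, dz / length)

# A character at the origin gazing at a point on the x-axis:
print(gaze_vector((0.0, 0.0, 0.0), (5.0, 0.0, 0.0)))  # → (1.0, 0.0, 0.0)
```

An animation system would then rotate the head (and, more subtly, the eyes) toward this vector rather than snapping to it.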

I have observed something missing in video conferencing. Gaze. Notice in this set of four images how the video chat participants cannot make eye-contact with each other. This is because they are not sharing the same physical 3D space. Nor are they sharing the same virtual 3D space!

Gaze is one of the most powerful communicative elements of natural language, along with the musicality of speech, and of course facial and bodily gesture. This is especially true among groups of young single people in which hormones are flying, and flirtation, coyness, and jealousy create a symphony of psychic vectors…

At, I designed the initial avatar gaze system. With the help of Chuck Clanton, I created an “intimacam”, which aimed perpendicular to the consensual gaze of the avatars, and zoomed-in closer when the avatar heads came closer to each other.
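The intimacam logic can be sketched roughly as follows: place the camera on a line perpendicular to the segment between the two heads, backed off in proportion to the heads’ separation, so that it zooms in as they approach. This is my own reconstruction from the description above; the names and constants are illustrative, not the original code:

```python
import math

def intimacam(head_a, head_b, distance_scale=1.5, min_dist=0.5):
    """Place a camera perpendicular to the mutual gaze line of two heads.

    Returns (camera_pos, look_at). The camera backs off in proportion to
    the heads' separation, so it zooms in as they come closer together.
    Operates in the horizontal (x, z) plane for simplicity.
    """
    ax, ay, az = head_a
    bx, by, bz = head_b
    mid = ((ax + bx) / 2, (ay + by) / 2, (az + bz) / 2)
    dx, dz = bx - ax, bz - az
    sep = math.sqrt(dx * dx + dz * dz)
    if sep == 0:
        return (mid[0], mid[1], mid[2] + min_dist), mid
    # Unit vector perpendicular to the a→b segment, in the ground plane:
    px, pz = -dz / sep, dx / sep
    back = max(min_dist, sep * distance_scale)
    camera_pos = (mid[0] + px * back, mid[1], mid[2] + pz * back)
    return camera_pos, mid

cam, look = intimacam((0.0, 1.7, 0.0), (2.0, 1.7, 0.0))
print(cam, look)  # camera sits off to the side, aimed at the midpoint
```

The side-on framing is what makes it “intimate”: you see both faces in profile, like a film two-shot.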

The greatest animators have known about the power of gaze for as long as the craft has existed. This highly-social component of body language has a mathematical manifestation in the virtual spaces of cartoons, computer games, and virtual worlds. And it is one of the many elements that will become refined and codified and included into the virtual body language of the internet.

Human communication is migrating over to the internet – the geo-cortex of posthumanity. Text is leading the way. Body language has some catching up to do. Brian Rotman has some interesting things to say along these lines in his book, Becoming Beside Ourselves.

We can learn a lot from Pixar animators, as well as psychologists and actors, as we develop virtual worlds and collaborative workspaces.


In response to my earlier post, Laban-for-animators expert Leslie Bishko made this comment:

“My .2c – breath promotes the illusion of sentience, gaze promotes the illusion of interaction and relationship!”

Without a Body, Our Conversations Bifurcate

August 23, 2011

While talking on the phone or texting with a friend, it is impossible to give your friend visual signals that indicate understanding, affirmation, confusion, or levels of attention. These indicators are typically provided by head motions, facial expressions, hand movements, and posturing. In natural face-to-face interaction, these signals happen in real time, and they are coverbal; they are often tightly-synchronized with the words being exchanged.

You may have had the following experience: you are exchanging texts in an online chat with a friend. There is a long period of no response after you send a text. Did you annoy your friend? Maybe your friend has gone to the bathroom? Is your friend still thinking about what you said? One problem that ensues is cross-dialog: during the silent period, you may change the subject by issuing a new text, but unknowingly, your friend had been writing some text as a response to your last text on the previous topic. You get that text, and – relieved that you didn’t annoy your friend – you quickly switch to the previous topic. Meanwhile, your friend has just begun to respond to your text on the new topic. The conversation bifurcates – simply due to a lack of nonverbal signaling.
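This is exactly the gap that “typing…” indicators later filled: a minimal nonverbal back-channel sent alongside the text. Here is a toy sketch of the idea in Python (the message shapes and class names are invented for illustration, not any real chat protocol):

```python
class ChatEndpoint:
    """Toy chat peer that emits a 'typing' presence event alongside text.

    The presence event is the point: it is the minimal nonverbal signal
    that removes the silent-gap ambiguity described above.
    """
    def __init__(self, name, wire):
        self.name = name
        self.wire = wire  # shared list standing in for the network

    def start_typing(self):
        self.wire.append({"from": self.name, "kind": "typing"})

    def send(self, text):
        self.wire.append({"from": self.name, "kind": "message", "text": text})

    def peer_is_typing(self):
        # A real client would expire stale typing events after a few seconds.
        return any(e["kind"] == "typing" and e["from"] != self.name
                   for e in self.wire)

wire = []
alice = ChatEndpoint("alice", wire)
bob = ChatEndpoint("bob", wire)
bob.start_typing()
# Before switching topics, alice can see that bob is mid-reply:
print(alice.peer_is_typing())  # → True
```

One bit of body language – “I am composing a reply” – is enough to keep the conversation from forking.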

Like frogs in boiling water, most of us are not aware that our bodies are slowly dissolving as we engage increasingly in text-based communication, which is often asynchronous (or at least running at lower than conversational rates). My theory: new forms of body language are emerging in the absence of our real bodies. Smart design of visual/interactive interfaces can adapt to this natural evolution. I don’t see it as a choice. It’s simply a part of our evolution – our adaptability.

Jill Chivers, in the blog, “I’m Listening – the Power and Magic of Listening in Everyday Lives”, makes a great case for reaching for the phone when repeated email pings are not getting through to someone, or for going face-to-face, when phone calls are left unanswered.

Call her old-fashioned, call her a Luddite. But she is simply suggesting that we all need to stay connected in ways that maximize our body language. It’s not an anti-technology stance. In fact, I would argue that we need more technology and smarter technology – just that it has to be the kind of technology that manifests embodiment over the internet – in whatever forms it takes. Without bodies, virtual or otherwise, and without the synchrony of realtime bodies, voices, and some stream of co-presence, we tend to fragment into text-like pieces.

Some people like deconstructing themselves into textual fragments. Sometimes I like it – I can hide behind my well-crafted words. But I don’t like the fact that I like it. I don’t want to like it any more than I do. I would prefer to like connecting with people more in realtime, like I used to – before the world was wired.

Finally, here’s a relevant piece by Si Dawson

Avatar Gaze Breaks Fourth Wall

July 17, 2011

I’ve done a lot of pontificating about avatar gaze in virtual worlds (eye contact, and all the emotional and sometimes strange effects that this causes). I’m writing a paper called Virtual Gaze, soon to be published in a book by ETC Press. While checking on other references to avatar gaze, I came across this blog post by Ironyca about the avatars in Blue Mars. Here’s a picture from that blog:

That avatar is looking at ME! Creepy.

Wagner James Au comments on this effect in a blog post. He quotes the engineer who added this feature (Koji Nagashima):

“On a cinematic project,” Koji explains, “all animators carefully make animation for eyes. But in our world, the program needs to take care of that.” Eye animation in a virtual world or MMO is challenging because the avatar’s position or the user’s camera changes so often. “That’s very interesting for me,” Koji says.

Okay guys. It’s interesting, I agree. And it’s a cute trick. But it makes no sense to me. Have you thought about the media effects? Do you understand how this creeps out users? In cinema, there is this concept of Breaking the Fourth Wall, which refers to a fictional character acknowledging the reader/viewer/audience, and acknowledging the fact that he or she is fictional. The fourth wall, in this case, is the computer screen. So, what is the reason you broke this wall?

I developed avatar gaze code for This is described in my chapter, The Three Dimensional Music of Gaze. Gaze is a powerful form of body language, and virtual worlds, by and large, are still lacking in this form of expressive connectivity. The silent act of shooting a bit of eye contact at someone’s avatar can be a signal to start a conversation. It can show attraction, and it can show WHO YOU ARE TALKING TO (one bit of affordance that is often missing in virtual worlds). It can also create romantic effects, as shown in these pictures from (the left image shows a prototype I developed with Chuck Clanton for chat props).
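The WHO-YOU-ARE-TALKING-TO affordance can be sketched as a simple cone test: given an eye position, a normalized gaze direction, and the head positions of nearby avatars, pick whichever avatar falls inside a narrow gaze cone. This is my own illustrative sketch, not the actual code; all names and thresholds are invented:

```python
import math

def gaze_target(eye_pos, gaze_dir, avatars, cone_deg=10.0):
    """Return the name of the avatar a gaze cone lands on, or None.

    avatars: dict mapping name -> (x, y, z) head position.
    gaze_dir is assumed to be a normalized direction vector.
    """
    best, best_angle = None, math.radians(cone_deg)
    for name, pos in avatars.items():
        offset = [p - e for p, e in zip(pos, eye_pos)]
        dist = math.sqrt(sum(d * d for d in offset))
        if dist == 0:
            continue  # an avatar at the eye position cannot be gazed at
        dot = sum((d / dist) * g for d, g in zip(offset, gaze_dir))
        angle = math.acos(max(-1.0, min(1.0, dot)))
        if angle < best_angle:  # keep the avatar nearest the gaze axis
            best, best_angle = name, angle
    return best

others = {"ann": (10.0, 0.0, 0.5), "ben": (0.0, 0.0, 10.0)}
print(gaze_target((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), others))  # → ann
```

Once the system knows who the gaze lands on, it can light up the addressee, aim the speaker’s eyes, or route the chat bubble – all of which restore that missing bit of affordance.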

The Look

In real life, my wife only needs to make one quick glance at me, and I know that it is time to take out the trash. Or…depending on the timing, or the situation…it may mean something else entirely: something which is more fun than taking out the trash. This simple bit of body language is powerful indeed. Virtual gaze can enliven virtual worlds – infusing silent communicative energy between avatar faces. Virtual gaze gives virtual worlds increased validation as a communication medium of embodiment.


As far as Koji’s trick of having avatars break the fourth wall, I’d like to hear what you think. How does this affect the Blue Mars experience? Ironyca’s critique says it better than I could:

“At least she kept looking at me! I have always thought of an avatar as a virtual representation of me in cyberspace, but perhaps Blue Mars disagrees. To me, the identity immersion was completely broken by the fact my avatar liked to look at me (i.e. gaze at the “camera”), and sometimes her posing even looked flirtatious. I was highly disturbed by the fact that my idea of her and me being the same was thrown overboard, when she continuously decided to turn her head and smile at me. If she is looking at ME, I can’t be HER.”

Jaron Lanier Yanks my Homunculus

May 18, 2011

While doing research for my book, I realized that I didn’t know squat about the homunculus. Okay, I knew that there were two of them somewhere in the brain: one that corresponds to what your body parts feel and another that corresponds to what your muscles can do. But I didn’t realize that there are really many “homunculi” strewn throughout your brain! There are even homunculi in the cerebellum. I am referring quite loosely to “bodymaps”, all of varying shapes, sizes, and levels of definition. The aggregate of all of them, and their effect, is what authors Blakeslee and Blakeslee call the “Body Mandala”, in their book, The Body Has a Mind of Its Own.

Jaron Lanier believes that humans of the future will be truly “multihomuncular”. Why? Because with virtual reality, and new flavors of virtual worlds and augmented reality, we will be able to inhabit avatars that do not necessarily look like us. Jaron’s favorite example is the cuttlefish – a mollusk that is able to animate its skin in order to express itself. But he goes further than this. In virtual reality, you might be able to wiggle your toes to make the clouds move. If you do this kind of thing enough times, your homunculi will start to morph to accommodate this part of the environment as an extension of your body. And as far as the brain is concerned, there is no clear difference between a virtual body and a real one.

At Microsoft, Lanier is working on a project called Somatic Computing. His team is exploring the Kinect as a way to translate real body movement into “computational gestures”.

This video shows Lanier being interviewed by a very non-Jaron-Lanier-looking person. At the end he explains how “being” a molecule can help you achieve deep intuition for what it is like to dock with another molecule. This has radical implications for education.

Personally, I find most uses of virtual worlds for education to be dull. There are still universities constructing virtual classrooms with chairs, podiums, and chalk boards (yawn).

A teacher should be able to demonstrate the chemistry of water by turning into a water molecule. Not only that, the teacher should be able to control all the students’ virtual cameras to swoop into the scene. Even better: all the students should be able to turn themselves into hydrophilic molecules and dance with the teacher. Not trying to get kinky here. Just sayin’.

The current popular understanding of virtual worlds is quite unimaginative. And its true potential to expand minds and offer shared experiences has barely been tapped. Lanier has a following and the publicity to maybe help shift the conversation back to what it was in those early hippy days when he and his friends were trying this stuff out for the first time.

Prepare for mind expansion. Without acid.