Wednesday, October 13, 2010

From Emoticon to Emoti-bot

In mid-August of this year, the news feeds exploded with stories about the Nao robot and its ability to use emotional responses to interact with people. The little humanoid robot displays its feelings through physical postures, hunching its shoulders when it’s sad or opening up its arms for hugs when it’s happy.

It’s a fantastic technological development, and some would say a terrifying portent of our impending machine-based doom. But despite the promising work the robot is doing with autistic children and its potential as an aid for the disabled, all I could think about while reading these articles was: what do they mean by emotions?

Later that month, Popular Science published an article about a computer that has independently read some of Aesop’s fables and developed an emotional response to them. It read a series of stories about birds and responded, “I feel bad for the bird,” without having been programmed to respond specifically with sympathy rather than happiness or fear. It did this using a markup language called Emotion Markup Language, a counterpart to the better-known HTML, or hypertext markup language, which powers most of the internet.

Emotion Markup Language is still quite new and hasn’t yet built up a strong repertoire of tags. In 2006, a group of developers organized a forum to “investigate a language to represent the emotional states of users and the emotional states simulated by user interfaces.” Backed by the World Wide Web Consortium, the international collaborative that builds standards for the web, the venture began from the premise that emotion and intelligence are intertwined, and that computers need emotional responsiveness because their users, humans, have emotional states.
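
To give a sense of what this looks like, here’s a minimal sketch of how an EmotionML document might annotate the fable-reading computer’s reaction. The element names follow the general shape of the W3C’s draft specification, but the vocabulary reference, the numeric value, and the scenario itself are my own illustration, not anything quoted from the article.

    <!-- A minimal, illustrative EmotionML sketch: annotating an
         "I feel bad for the bird" response. Element names follow the
         W3C draft; the vocabulary URI and value are assumptions. -->
    <emotionml xmlns="http://www.w3.org/2009/10/emotionml"
               category-set="http://www.w3.org/TR/emotion-voc/xml#everyday-categories">
      <emotion>
        <!-- the recognized feeling and its intensity on a 0-to-1 scale -->
        <category name="sadness" value="0.6"/>
      </emotion>
    </emotionml>
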
The language has a long way to go before it establishes a consistent format for widespread use, but even then, will it ever be emotion? Given that science doesn’t have a definitive, widely accepted model of what emotion is or how it works, there may be no way to know. What do we compare computer emotion to if we can’t point to, or draw a picture of, what emotion is in people?

At best, these current machines manage an affectation of emotion, displaying postures that humans recognize as being associated with certain feelings. That’s nowhere near the doomsday scenario people worry about, in which computers grow resentful of humans and destroy us all. The reality will probably be much simpler and more mundane than that.

Imagine this – it’s the future and you’re strolling down a virtual-reality street in New England. You see a newly inserted café that you’d like to click into, but you can’t figure out how to work its newfangled Flash doorknob. After a few attempts, you become a little less gentle: you rattle the door, peer in the windows, and maybe curse a little. The computer, sensing your frustration, offers up a little helper shaped like a paperclip, which asks, “You seem to be trying to enter. Would you like help with that?” And maybe the paperclip has a comforting look on its face.

(There, there.  I understand.)
