The visual expression of emotion is a complex process. Reading and interpreting the emotions others express remains a puzzle for many of us.
This may soon change as emotion-reader technology enters our lives, enabling us to read and understand the emotions of the people around us more easily.
Several research groups around the world have created algorithms that train computer systems to read human emotions from facial expressions, tone of voice, body movement, lip patterns, and other cues.
MIT’s Media Lab and its cooperation partners recently developed glasses that identify 24 feature points on a person’s face and interpret micro-expressions, making it possible to ‘read’ visual emotional signs.
The wearer receives information about what the other person is feeling through earphones and a green-red light system on the lens of the glasses (positive emotions = green, negative emotions = red).
Another MIT Media Lab group developed the “jerk-o-meter” – a technology worn as an electronic badge around the neck that interprets gesture mirroring and variations in tone of voice. Among other factors, its audio sensors measure the wearer’s degree of vocal aggression, voice pitch, and loudness.
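The badge’s actual signal processing has not been described here, but two of the cues it reads – loudness and pitch – can be estimated from raw audio with standard techniques. A minimal sketch, assuming RMS energy as a loudness proxy and an autocorrelation peak as a pitch estimate (the function name, frame length, and frequency range are illustrative choices, not the badge’s real design):

```python
import numpy as np

def frame_features(frame, sample_rate=16000):
    """Estimate loudness (RMS) and pitch (autocorrelation peak) for one
    audio frame. A simplified sketch, not the jerk-o-meter's actual code."""
    frame = frame - frame.mean()                # remove DC offset
    rms = float(np.sqrt(np.mean(frame ** 2)))   # loudness proxy

    # Autocorrelation-based pitch estimate, restricted to the rough
    # 60-400 Hz range of speaking voices.
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = sample_rate // 400, sample_rate // 60
    lag = lo + int(np.argmax(corr[lo:hi]))
    pitch_hz = sample_rate / lag
    return rms, pitch_hz

# Example: a synthetic 200 Hz tone, 50 ms at 16 kHz.
t = np.arange(0, 0.05, 1 / 16000)
rms, pitch = frame_features(0.5 * np.sin(2 * np.pi * 200 * t))
```

For a pure 200 Hz tone the autocorrelation peaks at one period (80 samples at 16 kHz), so the estimate lands at the true pitch; real speech would need framing, windowing, and voicing detection on top of this.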
The badge’s data can be sent to other devices, such as smartphones, where it can be displayed graphically.
Most recently, a group at the International University in Selangor, Malaysia developed a genetic algorithm that interprets the shape of the human mouth as it displays different emotions. Their research builds on the knowledge that the lips are vital to the outward expression of emotion. The algorithm analyzes the upper and lower lips as two separate ellipses. The researchers used photographs of people to train a computer to identify the six most common human emotions – happiness, sadness, fear, anger, disgust, and surprise – as well as a neutral expression.
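The Malaysian group’s exact formulation is not given here, but the core idea – using a genetic algorithm to fit an ellipse to a lip contour – can be sketched with a toy GA. Everything below (the origin-centred ellipse model, parameter ranges, mutation operator) is an assumption for illustration, not the published method:

```python
import math
import random

def ga_fit_ellipse(points, generations=200, pop_size=30, seed=0):
    """Fit the semi-axes (a, b) of an origin-centred ellipse to lip-contour
    points with a toy genetic algorithm. Illustrative sketch only; the
    published algorithm's encoding and operators would differ."""
    rng = random.Random(seed)

    def fitness(a, b):
        # Total deviation of the points from the ellipse (x/a)^2 + (y/b)^2 = 1.
        return sum(abs((x / a) ** 2 + (y / b) ** 2 - 1) for x, y in points)

    # Random initial population of candidate semi-axis pairs.
    pop = [(rng.uniform(0.5, 3.0), rng.uniform(0.1, 2.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ab: fitness(*ab))
        parents = pop[: pop_size // 2]              # keep the fitter half
        children = [
            (max(0.05, a + rng.gauss(0, 0.05)),     # mutate both semi-axes
             max(0.05, b + rng.gauss(0, 0.05)))
            for a, b in parents
        ]
        pop = parents + children
    return min(pop, key=lambda ab: fitness(*ab))

# Points sampled from an ellipse with a=2.0, b=0.5 (an idealised "upper lip").
pts = [(2.0 * math.cos(t), 0.5 * math.sin(t))
       for t in (i * math.pi / 8 for i in range(16))]
a, b = ga_fit_ellipse(pts)
```

In a full pipeline, the fitted parameters of the upper- and lower-lip ellipses would then serve as the features from which an emotion class is predicted.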
One day, such an emotion detector may help people with speech impairments interact more effectively with computer-based devices for information exchange, and may even enable improved voice synthesizers that facilitate communication for disabled individuals.
These emerging technologies are improving all aspects of interaction between humans and computers, especially in the area of human emotion recognition. They will open up new possibilities for how we interact with our devices, how our devices interact with us, and even how we interact with each other.