Facial Expressions

Humans have always been very good at recognizing emotions on people’s faces, but what if computers could do that too? Recent advances in machine learning and artificial intelligence are allowing computer scientists to create smarter apps that can identify things like sounds, words, images, and even facial expressions.

Earlier this week, the Microsoft Project Oxford team announced plans to release public beta versions of new tools that help developers take advantage of those capabilities, including one that can recognize emotion. Chris Bishop, head of Microsoft Research Cambridge in the United Kingdom, showed off the emotion tool in a keynote talk at Future Decoded, a Microsoft conference on the future of business and technology.

Facial recognition itself is not entirely new technology; Snapchat, for instance, has begun building face-detection features into its latest updates. But what Microsoft is describing reaches a new level.

Microsoft, unlike Snapchat, says it has successfully trained a computer not only to recognize facial features, but to read the emotions visible in facial expressions. The new tool analyzes your eyes, mouth, eyebrows and other features to distinguish between different expressions, and it trains computers to recognize eight core emotional states: anger, contempt, fear, disgust, happiness, neutral, sadness and surprise.
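In practice, an emotion-recognition service like this returns a confidence score for each of the eight states for every face it detects, and an app then picks the highest-scoring one. The sketch below illustrates that idea in Python; the JSON field names ("faceRectangle", "scores") mirror what Project Oxford's emotion endpoint is reported to return, but treat the exact response shape here as an assumption.

```python
import json

# Hypothetical response from an emotion-recognition endpoint: one entry per
# detected face, with a confidence score (0-1) for each of the eight core
# emotional states. The field names are assumptions for illustration.
sample_response = json.loads("""
[
  {
    "faceRectangle": {"left": 68, "top": 97, "width": 64, "height": 97},
    "scores": {
      "anger": 0.001, "contempt": 0.002, "disgust": 0.001, "fear": 0.0005,
      "happiness": 0.95, "neutral": 0.04, "sadness": 0.003, "surprise": 0.0025
    }
  }
]
""")

def dominant_emotion(face):
    """Return the (emotion, score) pair with the highest confidence."""
    return max(face["scores"].items(), key=lambda kv: kv[1])

for face in sample_response:
    emotion, score = dominant_emotion(face)
    print(f"Detected face: {emotion} ({score:.2f})")
```

An app consuming such a service would typically threshold the top score as well, treating low-confidence faces as "neutral" or "unknown" rather than forcing a label.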

As of now, the tool seems less than foolproof and doesn’t have the capacity to recognize the full spectrum of human emotion. However, the potential is there, and Microsoft hopes developers who don’t necessarily have expertise in machine learning or artificial intelligence will be able to use the tools to build such features into their apps.

In addition, Microsoft is releasing public beta versions of several other new tools by the end of the year, available for a limited free trial. They include:

  • Spell check – This spell check tool recognizes slang words such as “gonna,” as well as brand names, common name errors and difficult-to-spot errors such as confusing “four” with “for.” It also learns new brand names and popular expressions as they emerge.
  • Video – This tool lets customers easily analyze and automatically edit videos by doing things like tracking faces, detecting motion and stabilizing shaky video.
  • Speaker recognition – This tool can be used to recognize who is speaking based on learning the particulars of an individual’s voice.
  • Custom Recognition Intelligent Services (CRIS) – This tool makes it easier for people to customize speech recognition for challenging environments, such as a noisy public space. It also could be used to help an app better understand people who have traditionally had trouble with voice recognition, such as non-native speakers or those with disabilities.
  • Updates to face APIs – In addition to the new tools, Microsoft Project Oxford’s existing face detection tool will be updated to include facial hair and smile prediction, along with improved visual age estimation and gender identification.

Developers who are interested in these tools can find out more about them and give them a try by visiting the Microsoft Project Oxford website.
