Can I use HMMs in the facial expression recognition field? What descriptors (to characterize human emotions) can be used in this domain? Are there any codes in Matlab or OpenCV to implement these descriptors?
- Extract shape- or appearance-based features to describe the face
- Classify the presence of facial expressions using supervised learning
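To make these two steps concrete, here is a minimal sketch in Python with OpenCV and scikit-learn. The file names and labels are placeholders, and the HOG-plus-SVM combination shown is just one common choice among many:

```python
# Minimal sketch of the two steps above: appearance features (HOG) plus a
# supervised classifier (SVM). Image paths and labels are placeholders.
import cv2
import numpy as np
from sklearn.svm import SVC

# Face detector shipped with OpenCV
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# HOG descriptor configured for 64x64 face crops:
# (winSize, blockSize, blockStride, cellSize, nbins)
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def face_descriptor(image_path):
    """Detect the largest face in an image and return its HOG vector."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # biggest box wins
    crop = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    return hog.compute(crop).ravel()

# Placeholder training set: (image path, expression label) pairs
samples = [("smile_01.png", "happy"), ("neutral_01.png", "neutral")]
X = [face_descriptor(path) for path, _ in samples]
y = [label for _, label in samples]

clf = SVC(kernel="rbf").fit(X, y)                  # supervised learning step
print(clf.predict([face_descriptor("test.png")]))  # classify a new image
```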
There are a number of tools for each of these steps and more are being released each year. There are also several packages that attempt to accomplish all the steps for you.
However, as attractive as an off-the-shelf toolbox is, transferring from the databases these systems were trained on to novel data is a very challenging task. As Christian Wallraven aptly said, validation is king here. I wouldn't recommend using a system that isn't transparent about its performance on spontaneous and novel databases. To my current knowledge, only CERT has done this (see the discussion here).
I would argue that the best solution right now is to work with a computer scientist to develop classifiers for your specific data. You'll still have to do some manual coding, but that initial investment can pay off when you automatically code the rest of the database. Attached is one of my papers that does this on a real psychological database (with all the challenges that implies), carefully validates it, and quantifies its operational parameters. When done this way, it works exceptionally well.
Article Spontaneous facial expression in unscripted social interacti...
Yes, of course you can use HMMs for facial expressions. In fact, graphical models such as HMMs with state transitions (directed or undirected) are one of the main methods for analyzing expressions. It should be stressed, though, that it makes most sense to apply such frameworks to **video** analysis rather than to static images.
Most applications in this area work by first extracting a set of features on the face and then tracking those. The HMM is then used to train expression-specific models of the feature positions on the face over time. Extensions such as Conditional Random Fields (CRFs) and their variants often show superior performance compared to HMMs, but typically require a lot more training data. Additional processing steps may include first classifying the feature movements into so-called Action Units (see the FACS framework by Ekman and Friesen).
There are a lot of review papers out there that describe the current state of the art in facial expression analysis.
- Chapter 19 in the excellent Handbook of Face Recognition by Li and Jain deals with Facial Expressions
- labs that publish a lot on facial expression analysis include the lab of Jeff Cohn (who, together with Takeo Kanade, published a benchmark database of expressions) and Maja Pantic's lab in London. Just go to their webpages and check out some recent review papers.
As for code, you can use any of the HMM toolboxes for Matlab to train HMMs. The generalization of HMMs (undirected graphical models) can be trained using this toolbox, for example:
http://www.cs.ubc.ca/~schmidtm/Software/UGM.html
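If you prefer Python to Matlab, the hmmlearn package covers the same ground. Below is a hedged sketch of the standard recipe described above: train one Gaussian HMM per expression on tracked-feature sequences and classify a new sequence by maximum log-likelihood. The random arrays are placeholders standing in for real feature trajectories:

```python
# Sketch: one Gaussian HMM per expression class; a new sequence is labeled
# by the model that assigns it the highest log-likelihood. Requires the
# hmmlearn package; the random arrays below stand in for tracked
# facial-feature trajectories of shape (frames, features).
import numpy as np
from hmmlearn import hmm

def train_hmm(sequences, n_states=4):
    """Fit one HMM on a list of (n_frames, n_features) arrays."""
    X = np.vstack(sequences)                   # hmmlearn wants stacked frames
    lengths = [len(seq) for seq in sequences]  # ...plus per-sequence lengths
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=100)
    model.fit(X, lengths)
    return model

rng = np.random.default_rng(0)
def fake_sequences():  # placeholder for real per-expression training data
    return [rng.normal(size=(30, 10)) for _ in range(5)]

models = {"happy": train_hmm(fake_sequences()),
          "surprise": train_hmm(fake_sequences())}

test_seq = rng.normal(size=(30, 10))
scores = {name: m.score(test_seq) for name, m in models.items()}
print(max(scores, key=scores.get))  # label of the best-scoring model
```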
As for features, this is a little trickier. Here's some Matlab code for face recognition that is based on feature tracking:
http://www.robots.ox.ac.uk/~vgg/research/nface/
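To give a rough idea of what such feature tracking looks like in practice, here is a minimal OpenCV sketch using Shi-Tomasi corners and pyramidal Lucas-Kanade optical flow ("face.avi" is a placeholder, and a real system would restrict the points to a detected face region):

```python
# Sketch: track feature points across a face video with pyramidal
# Lucas-Kanade optical flow in OpenCV. "face.avi" is a placeholder.
import cv2

cap = cv2.VideoCapture("face.avi")
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Pick salient (Shi-Tomasi) corner points on the first frame
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                 qualityLevel=0.01, minDistance=7)
trajectories = [[tuple(p.ravel())] for p in points]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    for traj, p, s in zip(trajectories, new_pts, status):
        if s[0]:  # point was successfully tracked into this frame
            traj.append(tuple(p.ravel()))
    prev_gray, points = gray, new_pts

cap.release()
# Each trajectory is a per-frame (x, y) track that can serve as an
# observation sequence for an HMM as described above.
print(len(trajectories), "tracked points")
```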
Finally, there are "turnkey" solutions out there that can be quite good; the Emotient software discussed below is one example.
I'm a psychologist and I work with software for automatic facial expression recognition and analysis.
I do not know what you are going to do, but let me briefly summarize a few concepts that are important from our point of view (research in the psychology of emotions):
1. Today a lot of products apply the (WRONG) equation: Action Unit 12 = "Happy" (the same applies to other discrete emotions). The "one muscle = one emotion" approach is unscientific and misleading for any applied use. From our point of view these products are useless.
2. A system capable of dealing with a substantial number of Action Units (see FACS, 2nd edition; Ekman, Friesen & Hager, 2002) is desirable (so far, products on the market are not fully mature).
In the case of the Emotient software I mentioned, however, they will give you access to the actual Action Unit activations (as well as to their own "interpretation" of these in terms of compound expressions).
Having said this, as always, any computer vision algorithm can fail to an almost arbitrary degree depending on lighting, pose, ethnicity, resolution, sequence length, etc.
Fully automatic analysis of expressions is still a long way off, and any computer algorithm should be carefully tested and validated under close human supervision...
See: "The Human Face" a comprehensive book including perception and recognition studies, Mary Katsikitis (ed) 2003 Kluwer Academic Publishers. Among th bulk of publications on facial recognition, this book covers the most important facets of the topic.
With all due respect to Jeffrey, I have to disagree with him.
FACET was developed by Emotient, which is an outgrowth of a team at the University of California, San Diego that pioneered the use of machine learning for expression recognition. Back at UCSD we collected some of the largest datasets of spontaneous facial expressions and shared them with several academic groups, including the group at Pitt/CMU, which is led by Fernando and Jeff and has been another leading team in academia.
Back in 2001 we wrote our first paper on automatic recognition of spontaneous facial expressions, and since then we have been one of the leading groups in academia emphasizing the differences in dynamics and morphology of spontaneous expressions and focusing laser-sharp on their automatic recognition.
While at Emotient we have collected and expert-coded what we believe is the largest collection of videos of spontaneous expressions. This dataset is several orders of magnitude larger than the ones used in academia, and it includes a very wide range of poses, illuminations, imaging conditions, ages, and ethnicities.
As far as we know, our technology has the highest spontaneous facial expression accuracy under unconstrained daily-life conditions. The technology is used in iMotions' Attention Tool, which combines eye tracking and facial expression recognition in a very easy-to-use research tool.
You can learn more about our technology at Emotient.com
You can learn more about iMotions' Attention Tool at imotionsglobal.com
You can learn more about the academic research from the UCSD team that founded Emotient at mplab.ucsd.edu by following the links to the publications page.
Hi Javier. I could indeed be misinformed or outdated in my opinion, or perhaps CERT is an exception to my rule. I just looked through the sites you mentioned and found the attached paper; hopefully this is the one you were referring to.
From this article, it looks like your group is well aware of the importance of validation and the complexity of domain transfer. I was particularly impressed by your double cross-validation procedure. It appears that your large and varied training set does help domain transfer, and your transfer results were better than I expected them to be (2AFC=0.72). However, in defense of my earlier post, performance was higher in the original database (2AFC=0.81) and better transfer was achieved by retraining CERT on a combination of the original and novel data (2AFC=0.76).
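As an aside for readers unfamiliar with the metric: a binary detector's 2AFC score is equivalent to the area under its ROC curve (the probability that a randomly chosen positive example outscores a randomly chosen negative one), so it can be computed directly from detector outputs. The labels and scores in this sketch are placeholders:

```python
# A binary detector's 2AFC score equals the area under its ROC curve:
# the probability that a randomly drawn positive example outscores a
# randomly drawn negative one. Labels and scores below are placeholders.
from sklearn.metrics import roc_auc_score

y_true = [1, 1, 0, 0, 1, 0]                  # AU present (1) or absent (0)
y_score = [0.9, 0.4, 0.35, 0.1, 0.8, 0.45]   # detector outputs
print(roc_auc_score(y_true, y_score))        # 2AFC; 0.5 is chance level
```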
Anyway, kudos to you guys for being so transparent with CERT. I'll edit my earlier answer in light of this information.
Conference Paper Action unit recognition transfer across datasets
But as regards software, FACET is gone (it seems to be one of the very few things that has actually left the internet completely) and Emotient has been buried inside Apple.
The only real alternative I know of is OpenFace https://github.com/TadasBaltrusaitis/OpenFace/wiki . My tests show that in some respects it is better than Emotient used to be, and in others weaker.
PS: the latest version at the moment is https://github.com/TadasBaltrusaitis/OpenFace/releases/download/OpenFace_2.0.6/OpenFace_2.0.6_win_x64.zip even though it is not the one linked on the wiki page.
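For anyone trying OpenFace, the usual workflow is to run its FeatureExtraction tool on a video and then read the per-frame Action Unit intensities from the CSV it writes. A minimal sketch (the video path is a placeholder, and the flags follow the OpenFace wiki, so double-check them against your version):

```python
# Run OpenFace first (command line; "my_video.avi" is a placeholder):
#   FeatureExtraction -f my_video.avi -aus -out_dir processed
# This writes processed/my_video.csv with per-frame AU columns.
import pandas as pd

df = pd.read_csv("processed/my_video.csv", skipinitialspace=True)
# Intensity columns end in "_r" (e.g. AU01_r); presence columns end in "_c"
au_intensity = df[[c for c in df.columns if c.endswith("_r")]]
print(au_intensity.describe())
```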