I think you might find my work (Dynamic Visual Analogue Mood Scales) of interest. Although they are primarily intended to enable people with severe communication problems to communicate their mood, it has been suggested that they could also be used for purposes such as yours. For example, you could use images from different parts of the scales in a forced-choice task in which participants classify emotional expressions of varying intensity; their responses could then be used to rate their ability to recognise those expressions.
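To make that concrete, here is a minimal sketch of such a forced-choice task in Python. The emotion labels, intensity levels, file-naming scheme, and the `present` callback are all illustrative assumptions on my part, not part of DVAMS itself.

```python
import random

# Hypothetical stimulus set: images sampled from different points along a
# scale, each labelled with its intended emotion and peak intensity (%).
stimuli = [
    {"file": f"{emotion}_{intensity}.png", "emotion": emotion, "intensity": intensity}
    for emotion in ("happy", "sad")
    for intensity in (25, 50, 75, 100)
]

def run_forced_choice(stimuli, present, options=("happy", "sad")):
    """Show each image once in random order; return one record per trial.

    `present` stands in for whatever display/response routine you use
    (PsychoPy, a web form, ...); it should return the chosen label.
    """
    trials = list(stimuli)
    random.shuffle(trials)
    records = []
    for stim in trials:
        response = present(stim["file"], options)
        records.append({**stim, "response": response,
                        "correct": response == stim["emotion"]})
    return records

def accuracy_by_intensity(records):
    """Recognition ability as proportion correct at each intensity level."""
    by_level = {}
    for r in records:
        by_level.setdefault(r["intensity"], []).append(r["correct"])
    return {level: sum(hits) / len(hits)
            for level, hits in sorted(by_level.items())}
```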
Take a look at the project posters on my page and the DVAMS scales themselves, which are online at dvams.com.
The Continuous Evaluation Procedure (CEP) by Claudia Muth and colleagues measures evaluative judgements (liking), recognizability ('determinacy'), and other scales on dynamic displays/movies. It should be applicable to emotion recognition as well.
Muth, C., Raab, M. H., & Carbon, C.-C. (2015). The stream of experience when watching artistic movies: Dynamic aesthetic effects revealed by the Continuous Evaluation Procedure (CEP). Frontiers in Psychology, 6(365).
You can download it at https://www.researchgate.net/profile/Claudia_Muth
Since our department is interested in facial emotion recognition in children and adolescents with psychiatric disorders in general, and with high-functioning autism spectrum disorders in particular, some time ago we compiled studies from the literature that assessed facial emotion recognition (static vs. dynamic; natural vs. virtual characters, ...) — see our Technical Report "Categorical Perception of Emotional Facial Expressions in Video Clips with Natural and Artificial Actors: A Pilot Study". In this TechRep we also briefly describe the pilot version of our DECT (Dynamic Emotion Categorization Test). We have used an updated version (with 2 natural and 2 virtual characters) in several studies, for example in pre-/post designs for the evaluation of our social skills programme TOMTASS.
So perhaps you will find relevant work in Table 1 of our TechRep, or one of our various DECT versions may be of interest to you (together with our project partners from Computer Science / Computer Graphics, we are currently developing versions with virtual characters with and without non-photorealistic rendering).
Hi (sorry for any mistakes, I am French Canadian!)
As part of my master's degree, I developed a set of synthetic characters dynamically displaying the six basic emotions. I was interested in the relationship between emotion recognition and psychopathy, but as a further objective I also wanted to develop a measure with better ecological validity (instead of using static expressions displayed at 100% intensity).
The animations vary by emotion, intensity (0-40%, 0-60%, 0-100%), and angle (frontal view, 45 degrees, profile), and feature 6 characters (3 male, 3 female) from different ethnic backgrounds. Each clip lasts 2.5 s, but we also have 5 s and 10 s versions in full frontal view.
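For concreteness, the factorial structure of the set described above can be written out as follows; the label strings and file-name pattern are my own illustrative assumptions, not the actual naming scheme of the stimuli.

```python
from itertools import product

# Stimulus design as described above: 6 characters x 6 basic emotions
# x 3 peak-intensity ranges x 3 viewing angles. All names are hypothetical.
CHARACTERS  = ("M1", "M2", "M3", "F1", "F2", "F3")  # 3 male, 3 female
EMOTIONS    = ("anger", "disgust", "fear", "happiness", "sadness", "surprise")
INTENSITIES = ("0-40", "0-60", "0-100")             # % of full expression
ANGLES      = ("frontal", "45deg", "profile")

clips = [f"{c}_{e}_{i}_{a}_2.5s.mp4"
         for c, e, i, a in product(CHARACTERS, EMOTIONS, INTENSITIES, ANGLES)]
print(len(clips))  # 6 * 6 * 3 * 3 = 324 animations
```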
You can see a validation here: Joyal, C. C., Jacob, L., Cigna, M. H., Guay, J. P., & Renaud, P. (2014). Virtual faces expressing emotions: An initial concomitant and construct validity study. Frontiers in Human Neuroscience, 8(787), 1-6.
An article was also published in Criminologie, but it is in French, as is my master's thesis.