FACS (Facial Action Coding System)

How we behave and perceive the world often depends on our emotions. For example, the product we choose from the shelf may be influenced by a funny or touching advertisement we saw on TV. That’s why, in the neuromarketing world, it is so important to understand the emotions a creative evokes.

How can we recognize emotions?

To analyze emotions, we can naturally ask study participants how they feel. But an even better idea is to collect data about people's emotions without asking them a single question. To do so, we can use a webcam-based tool with the Facial Action Coding System implemented.

The Facial Action Coding System (FACS) is a system for quantifying and measuring facial movements. It was originally developed by the Swedish anatomist Carl-Herman Hjortsjö and later adopted by Paul Ekman and Wallace V. Friesen, who published it in 1978 and updated it in 2002. FACS encodes the movements of individual facial muscles based on slight, momentary changes in facial appearance.

There are seven universal emotions: fear, anger, sadness, disgust, contempt, surprise, and enjoyment (happiness). It has been shown that even blind and visually impaired people display spontaneous facial expressions, which suggests these expressions are innate rather than visually learned. Using FACS, trained coders can describe almost any human facial behavior by breaking it down into specific action units (AUs), defined as contractions or relaxations of one or more facial muscles. The FACS manual runs to over 500 pages and describes all the action units along with their meaning according to Ekman’s interpretation.

Measuring facial movement

Researchers can learn to read and interpret the action units from manuals and workshops in order to code facial actions manually, but nowadays computers are getting better and better at recognizing and identifying the FACS codes. Let’s take a look at the main codes used for facial expression recognition:

AU number - FACS name
0 - Neutral face
1 - Inner brow raiser
2 - Outer brow raiser
4 - Brow lowerer (AU 3 is not used in FACS)
5 - Upper lid raiser
6 - Cheek raiser
7 - Lid tightener
8 - Lips toward each other
9 - Nose wrinkler
10 - Upper lip raiser
11 - Nasolabial deepener
12 - Lip corner puller
13 - Sharp lip puller
14 - Dimpler
15 - Lip corner depressor
16 - Lower lip depressor
17 - Chin raiser
18 - Lip pucker
19 - Tongue show
20 - Lip stretcher
21 - Neck tightener
22 - Lip funneler
23 - Lip tightener
24 - Lip pressor
25 - Lips part
26 - Jaw drop
27 - Mouth stretch
28 - Lip suck

Of course, there are many more FACS codes, such as head movement codes, eye movement codes, visibility codes, and gross behavior codes. Each code can occur with a different intensity, described with letters from A (the minimal intensity possible for the given person) to E (the maximal intensity):
A - trace,
B - slight,
C - marked or pronounced,
D - severe or extreme,
E - maximum.
Sometimes additional letters are added:
R - if the action happens only on the right side of the face,
L - if the action happens only on the left side of the face,
U - if the action occurs on only one side of the face but has no specific side,
A - if the action is bilateral but is stronger on one side of the face.
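For illustration, here is a minimal Python sketch (a hypothetical helper, not part of FACS or the RealEye tool) that splits a written AU code such as "5B" or "R12A" into its AU number, intensity letter, and side marker:

```python
import re
from dataclasses import dataclass
from typing import Optional

# Pattern for a written AU code: optional side prefix (L/R/U/A),
# the AU number, and an optional intensity letter (A-E).
AU_CODE = re.compile(r"^(?P<side>[LRUA])?(?P<au>\d+)(?P<intensity>[A-E])?$")

@dataclass
class ActionUnit:
    number: int
    intensity: Optional[str]  # "A" (trace) ... "E" (maximum), or None if unspecified
    side: Optional[str]       # "L", "R", "U", "A", or None for a plain bilateral code

def parse_au(code: str) -> ActionUnit:
    """Parse a single written AU code (hypothetical helper for illustration)."""
    match = AU_CODE.match(code.strip())
    if match is None:
        raise ValueError(f"Not a valid AU code: {code!r}")
    return ActionUnit(
        number=int(match.group("au")),
        intensity=match.group("intensity"),
        side=match.group("side"),
    )

print(parse_au("5B"))    # ActionUnit(number=5, intensity='B', side=None)
print(parse_au("R12A"))  # ActionUnit(number=12, intensity='A', side='R')
```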
Knowing all of that, let’s take a look at some emotions decoded by the FACS:

Emotion - Action units
Surprise - 1+2+5B+26
Sadness - 1+4+15
Happiness - 6+12
Fear - 1+2+4+5+7+20+26
Disgust - 9+15+17
Contempt - R12A+R14A
Anger - 4+5+7+23

FACS lists the facial action units responsible for a specific facial expression. Let’s take a look at “surprise”. It consists of:
1 - inner brow raiser,
2 - outer brow raiser,
5B - upper lid raiser (at slight intensity),
26 - jaw drop.
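As a rough illustration of how such prototypes can be used, here is a short Python sketch (our own simplification, not RealEye’s implementation) that stores the table above as sets of action units and checks which prototypes are fully covered by the action units detected in a frame; intensity and side qualifiers are dropped for simplicity:

```python
# Emotion prototypes from the table above, reduced to plain AU numbers.
EMOTION_PROTOTYPES = {
    "surprise":  {1, 2, 5, 26},
    "sadness":   {1, 4, 15},
    "happiness": {6, 12},
    "fear":      {1, 2, 4, 5, 7, 20, 26},
    "disgust":   {9, 15, 17},
    "contempt":  {12, 14},   # unilateral in the table (R12A + R14A)
    "anger":     {4, 5, 7, 23},
}

def match_emotions(active_aus: set[int]) -> list[str]:
    """Return every prototype whose action units are all active in the frame."""
    return [
        emotion
        for emotion, required in EMOTION_PROTOTYPES.items()
        if required <= active_aus
    ]

# Inner brow raiser (1), outer brow raiser (2), upper lid raiser (5), jaw drop (26)
print(match_emotions({1, 2, 5, 26}))  # ['surprise']
```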

RealEye automated facial action coding feature

In RealEye, we also use FACS (by the Paul Ekman Group) to code emotions in our facial expression recognition system. There are three plots available:
happy - the cheeks are raised and the lip corners are pulled (the person is smiling),
surprise - the brows are raised, the upper lid is slightly raised, and the jaw drops,
neutral - no facial movement.

The mask used in RealEye for automatic recognition of facial expressions

The data is captured at a 30 Hz sampling rate (a sample is taken roughly every 33 ms). All the facial motion data can be exported to a CSV file (values are normalized from 0 to 1).
Keep in mind that talking during the study (e.g., in moderated sessions) can influence facial features and therefore the emotion readings. It is also possible to run RealEye studies with the facial coding feature disabled.
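As an example of working with the export, here is a hedged Python/pandas sketch that averages the facial-coding values per second; the file name and the column names (timestamp_ms, happy, surprise, neutral) are assumptions made for illustration, so check the headers of your actual CSV export:

```python
import pandas as pd

# Hypothetical file and column names - adjust to match the real export.
df = pd.read_csv("facial_coding_export.csv")

# Values are normalized to the 0-1 range and sampled at roughly 30 Hz,
# so about 30 rows correspond to one second of recording.
per_second = (
    df.assign(second=df["timestamp_ms"] // 1000)
      .groupby("second")[["happy", "surprise", "neutral"]]
      .mean()
)
print(per_second.head())
```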

Why don’t we provide more emotions?

We have run many internal tests, had many discussions within the team, and talked extensively with our customers to understand their point of view. All of that leads us to the conclusion that the three emotions offered by our tool are the most common to observe and, at the same time, the most important.
Before implementing FACS in our tool, we checked and tested several tools and platforms dedicated to facial coding (all of them also based on FACS). In the great majority of results, only surprise and happiness showed meaningful activity - the graphs for the other emotions indicated almost no change at all. When sitting in front of a screen, people usually don’t show many emotions, especially anger, sadness, disgust, or fear; we did not see data for these emotions no matter what the stimulus was. After further investigation, we concluded that we couldn’t guarantee that emotions other than happiness, surprise, neutrality, and sadness would be actual emotions rather than face-reading errors. It’s also worth noting that the RealEye tool performs an individual facial calibration for every participant, which isn’t standard among facial coding tools - that’s why those tools decode sadness too often (as in the “resting face” case). To illustrate this, here’s a plot showing the emotions detected by one of the mentioned tools while study participants watched an entertaining/neutral ad.

In our opinion, based on our experience and tests, the detected sadness is caused by the resting-face effect, which makes those results less trustworthy for us.

Considering all of that, and to let you analyze facial expressions with a high level of confidence, we decided to add this feature to our tool but to narrow the spectrum of detected emotions to the ones that are most common and that we are sure about, so the data has the lowest possible error rate. From a technical point of view, we could extend the tool to capture sadness as well.

Want to know more? Contact us!

"It’s been a pleasure working with RealEye. Their customer service is prompt, valuable, and always friendly. The quick turnarounds on custom development requests are the most impressive. The RealEye team delivers great tailored solutions. Thank you for being a wonderful partner!"

Sam Albert
Chief Digital Officer

"I'm really impressed with what Adam has created with RealEye. It's astounding how easy and fast it is to track and report on eye movement for a page or design."

David Darmanin
CEO, hotjar.com

"Webcam-based eye-tracking has vast potential within market research and RealEye made a great effort customizing their solutions to our needs. We succeeded in having live online interviews with eye-tracking included and we look forward to build on this pilot study to take further advantage of this solution in future research."

Stefan Papadakis
Insight Consultant, IPSOS
Trusted by freelancers, companies of all sizes, students, and universities.

Frequently Asked Questions

How accurate are the eye-tracking results?

RealEye studies are proven to be accurate to around 110 px. This allows you to analyze users’ interaction with a website with precision reaching the size of a single button. We predict the gaze point at a frequency of up to 60 Hz. For an in-depth analysis of webcam eye-tracking accuracy, check the following articles:
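For illustration only, here is a small Python sketch of one way the ~110 px accuracy figure can be taken into account when testing whether a gaze sample falls on an area of interest (AOI); the function and the example numbers are hypothetical, not part of the RealEye API:

```python
# Pad the AOI by the expected accuracy margin so borderline samples
# are not discarded purely because of measurement error.
ACCURACY_PX = 110  # approximate accuracy reported for RealEye studies

def gaze_in_aoi(gx: float, gy: float,
                aoi_left: float, aoi_top: float,
                aoi_width: float, aoi_height: float,
                margin: float = ACCURACY_PX) -> bool:
    """Return True if the gaze point lies inside the AOI expanded by the margin."""
    return (aoi_left - margin <= gx <= aoi_left + aoi_width + margin
            and aoi_top - margin <= gy <= aoi_top + aoi_height + margin)

# Hypothetical gaze sample and button-sized AOI.
print(gaze_in_aoi(640, 420, aoi_left=600, aoi_top=380, aoi_width=200, aoi_height=60))  # True
```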

Who can participate in my study?

Either your own participants or RealEye participants. You can invite your users or panel and share the study with them using a participation link. All they need is a laptop or PC with a webcam. We also have a network of panelists from all over the world - mainly from the UK and the US. Randomly picked users can be assigned to your task; they are called RealEye participants. We will not show them your stimuli before the test starts, so their interaction will be natural.

Note: RealEye participants can't take part in 'Live Website' studies or in studies longer than 10 minutes. Read more about the limitations here.

Can I pay per study?

There are only monthly payments, so it isn’t possible to pay per study. But keep in mind that you can cancel your license at any time (even in the same month) - you’ll keep access to your account until the end of the billing period (30 days from the last payment).

Can I integrate RealEye with other tools?

You can easily integrate the RealEye tool with external tools (e.g., surveys), and you can also compare the results obtained from other tools with ours using the CSV export (e.g., by matching timestamps), as sketched below.
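As a hedged example, here is a short Python/pandas sketch of aligning the RealEye CSV export with data from another tool by timestamp; the file names and the timestamp_ms column are assumptions made for illustration:

```python
import pandas as pd

# Hypothetical exports - adjust file and column names to your actual data.
realeye = pd.read_csv("realeye_export.csv").sort_values("timestamp_ms")
other_tool = pd.read_csv("other_tool_export.csv").sort_values("timestamp_ms")

# Match each RealEye sample to the nearest record from the other tool
# that is at most 50 ms away.
merged = pd.merge_asof(
    realeye, other_tool,
    on="timestamp_ms",
    direction="nearest",
    tolerance=50,
)
print(merged.head())
```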