No Lies: The Problem with Biometric Emotion Detection 

 
 

Bibi Imre-Millei, Online Assistant Editor

March 20, 2021


Can algorithms distinguish between human emotions? The short answer is no. 

 

Humans have been fascinated with reading and pathologizing the inner thoughts and feelings of other humans for as long as we have written records. Ancient Babylonians observed and catalogued the signs of various mental illnesses we may recognize today. In other settings, European fables noted pulse monitoring as a way to detect lies, and Chinese inquisitors measured the salivation of suspected liars. The cataloguing of human behaviour and emotion has a long history filled with bias. As humans have researched communities similar to and different from our own, we have projected societal beliefs onto our observations, building racist, sexist, and ableist paradigms into often fundamentally colonial endeavours. Theories of human emotion and intention are linked to the history of biometrics. Biometrics are simply defined as the supposedly unique physical and behavioural characteristics of humans, such as fingerprints, cardiac signatures, or retinal scans. Biometrics also include “soft biometrics” such as gait analysis or reading facial expressions.

The first biometrics are thought to be the fingerprint signatures Egyptian potters used to sign their work, a practice ancient Chinese landowners also used to authenticate deeds. By the nineteenth century, biometrics were being used in Europe and by various colonial administrations. Pioneers of biometrics like Francis Galton and William Herschel relied on measurements of people’s bodies and faces to identify them, including by ethnicity. Soon, composite portraits and body part measurements were used to create a picture of what particular “races” looked like. Many European scientists at this time went further, claiming that certain races were predisposed to certain temperaments and mental states. In the 1800s, theories of race were also linked to notions of hierarchy, with Europeans representing the most evolved race: the most attractive, the most intelligent, and the most trustworthy. I do not want to repeat more racist assumptions here; however, it is important to note that various pseudosciences used in emotion detection today, including synergology (the study of body language) and anthropometry, derived from some of these assumptions. Lie detection, the main example in this article, was originally based on blood pressure; it emerged in the 1920s and was being sold to police departments in the United States by the 1930s. The raw data from blood pressure lie detectors is read and interpreted by humans, who project social and cultural ideas onto these readings.

 

By the 1950s, race science was being debunked by anthropological and archaeological findings. Harvard biologist Stephen Jay Gould dismantled anthropometry in The Mismeasure of Man (1981), and the American Association of Physical Anthropologists publicly rejected any biological theory of race. But these theories remain embedded in the science and technology we use today.

What do scientific theories of race have to do with companies which claim their algorithms read human emotions?

Algorithms which use biometric measurements to read human emotions, thoughts, or intentions have recently regained prominence, after some failures in the early 2000s. The latest example is Silent Talker, a company which claims to detect lies and other emotional expressions by connecting body language to psychology. But other biometrically based algorithms are also in the spotlight: one company claims its algorithm can rate attractiveness, and another claims it can read children’s emotions as they learn. Corporations and governments are beginning to rely on algorithms to make decisions about a person’s trustworthiness at borders and in hiring practices.

Silent Talker’s system, along with other algorithms that companies claim can read human emotion or intention, is already in use at European Union border crossings. This class of algorithms is founded on theories of micro-expressions and some of the same measurements polygraphs use: they are based on biometric collection and analysis. Polygraphs, for example, are banned for use in private employment in most places and are usually not admissible in court, but they are still used in United States government hiring and by police departments.

As the ACLU asserts, lie detectors can re-entrench racial bias since they only provide raw data: determinations rest on the examiner’s interpretation, and a blood pressure peak, for example, might be read as a sign of deception. In the 1990s, a study by the US Department of Defense found that Black participants were more likely than white participants to receive false positives on polygraphs; that is, Black participants were more often judged to be lying when they were not. However, there is little current data on lie detection and race, as police departments and businesses often do not keep aggregate data on lie detection tests.
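
To make the interpretation problem concrete, consider a minimal sketch in Python. The groups, baselines, and threshold below are entirely hypothetical, invented only to illustrate the mechanism: when one fixed cut-off is applied to a “raw” physiological signal, and the test situation itself raises baseline responses more for one group than another, the flagged-as-deceptive rate diverges even though everyone is telling the truth.

```python
import random

random.seed(0)

# Purely hypothetical illustration: two groups of truthful examinees whose
# baseline stress responses differ (for instance, because the test situation
# itself is more threatening for one group). Nobody here is lying.
def simulate_truthful_responses(baseline, n=10_000):
    # Blood-pressure-style "peak" values, in arbitrary units.
    return [random.gauss(baseline, 10) for _ in range(n)]

group_a = simulate_truthful_responses(baseline=100)  # hypothetical group A
group_b = simulate_truthful_responses(baseline=108)  # hypothetical group B

# The examiner (or algorithm) applies one fixed cut-off: any peak above it
# is interpreted as "deception".
THRESHOLD = 120

def false_positive_rate(responses, threshold=THRESHOLD):
    flagged = sum(1 for r in responses if r > threshold)
    return flagged / len(responses)

print(f"Group A false positives: {false_positive_rate(group_a):.1%}")
print(f"Group B false positives: {false_positive_rate(group_b):.1%}")
# Everyone is truthful, yet group B is flagged far more often, simply
# because the interpretive rule ignores differing baselines.
```

Nothing in this sketch measures deception; the disparity comes entirely from the rule that turns a number into a verdict.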

Micro-expression analysis, on the other hand, is based on the idea that psychology is revealed through the face in tiny muscle spasms and movements. The claim that micro-expressions can reveal our thoughts, including our lies, is mostly based on the work of Paul Ekman, who kept the datasets from his studies secret, citing national security concerns. But deception and other emotions are psychological constructs, and no matter how well an algorithm can catalogue each tiny movement in a face, each spike in blood pressure, each fiddle of the hands, the link between expression and actual thoughts, emotions, and intentions is social and cultural. Even within fairly homogeneous cultures, some liars try to convince you they are truthful by looking you in the eye, while others cannot help but look away. Models that use one indicator or another also privilege a certain neurochemistry: people with anxiety disorders, for example, often find it difficult to look others in the eye regardless of truthfulness.
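
The gap between measuring a movement and interpreting it can also be sketched in code. The toy pipeline below is hypothetical (the data structure, thresholds, and rules are invented for illustration and are not drawn from Silent Talker or any real system): the measurement step can be arbitrarily precise, but the verdict comes from a hand-authored mapping that simply encodes assumptions like “gaze aversion means lying.”

```python
from dataclasses import dataclass

@dataclass
class FaceMeasurements:
    # Hypothetical, precisely measured "soft biometric" signals.
    gaze_aversion_seconds: float
    brow_twitch_count: int
    blood_pressure_peak: float

def interpret(m: FaceMeasurements) -> str:
    # This is where the pseudoscience lives: the mapping from measurement to
    # meaning is not measured at all, it is asserted. Each rule below encodes
    # a cultural (and ableist) assumption, e.g. that avoiding eye contact
    # signals deception rather than, say, anxiety.
    score = 0
    if m.gaze_aversion_seconds > 2.0:
        score += 1
    if m.brow_twitch_count > 3:
        score += 1
    if m.blood_pressure_peak > 120:
        score += 1
    return "deceptive" if score >= 2 else "truthful"

# An anxious but truthful person can easily be scored as "deceptive".
anxious_truth_teller = FaceMeasurements(
    gaze_aversion_seconds=4.5, brow_twitch_count=5, blood_pressure_peak=115
)
print(interpret(anxious_truth_teller))  # -> "deceptive"
```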

Lie detection is the perfect example of why the overall push for various types of emotion detection is dangerous. Indeed, Silent Talker was originally designed to determine how consumers felt about products by reading their facial expressions. The rapid securitization of these kinds of technologies points to the lure of government contracts, not to any real accuracy or rigour behind the algorithms as readers of emotion. Emotion reading has the same pitfalls as biometrics in general: it is most accurate on white men, and its human technicians tend to be unaware of (or indifferent to) the social and cultural relationships embedded in their analysis of the data. More and more soft biometrics, such as gait analysis, are making their way into military biometric databases, where fingerprint and retinal scan data have long resided. Biometrics in the military are not only used for securing areas in theatre, but also for lethal targeting. The militarization and securitization of pseudoscience is not new, but in a world where many of us have easy access to information, we may have arrived at a moment where we can question and act against it.

When investigating companies that claim to read emotions, detect lies, or estimate attractiveness using algorithms, it is helpful to begin with a set of questions and facts. Algorithms do not feel attraction, and they cannot read emotions. Algorithms rely on whatever data and parameters their programmers feed them, and from these they learn or generate conclusions. When a company claims its algorithm assesses attractiveness, we must ask: attractiveness to whom? When a company claims its algorithm assesses suspiciousness, we must ask: who and what determines suspicious behaviour? When a company claims its algorithm can detect any kind of emotion, we must ask: based on what data and what studies? Some red flags to look out for include reliance on pseudosciences such as body language analysis, gait analysis, polygraphs, readings of micro-spasms in the face or other body parts, and, of course, anthropometry. We must remember that even supposedly neutral practices like fingerprinting are immersed in social politics. Children, older individuals, Asian women, and people who work in manual labour or jobs that involve handling chemicals are all more likely to have issues when getting their fingerprints scanned. Fingerprinting and other biometrics have also been used in genocides and colonization, so we must be sensitive to the concerns of marginalized groups, and mindful of possible violent outcomes. Technologies are steeped in the politics of age, race, ability, sex, and class.
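
One way to see why “attractiveness to whom?” is the right question is to notice that a supervised model can only echo its training labels. The sketch below is a deliberately trivial, hypothetical example (invented feature vectors and ratings, a plain nearest-neighbour average, no real dataset or product): swap in a different group of raters and the same “algorithm” reverses its verdict on the same face.

```python
# A toy "attractiveness" model: it has no notion of attractiveness at all;
# it only averages the scores that human raters attached to similar faces.
# The feature vectors and ratings below are invented for illustration.

def rater_echo_score(query, labelled_faces, k=2):
    """Return the mean rating of the k labelled faces most similar to `query`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    ranked = sorted(labelled_faces, key=lambda item: distance(item[0], query))
    return sum(rating for _, rating in ranked[:k]) / k

# Two hypothetical groups of raters scoring the same four feature vectors.
faces = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.1), (0.8, 0.2)]
raters_group_1 = list(zip(faces, [9, 8, 2, 3]))  # prefer one kind of face
raters_group_2 = list(zip(faces, [2, 3, 9, 8]))  # prefer the opposite

query_face = (0.15, 0.85)
print(rater_echo_score(query_face, raters_group_1))  # high score (8.5)
print(rater_echo_score(query_face, raters_group_2))  # low score (2.5)
# Same face, same "algorithm", opposite verdicts: the output is just the
# raters' preferences reflected back.
```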

Marginalized communities, including Black communities, Indigenous peoples, people of colour, various sectors of disabled communities, the 2SLGBTQIA+ community, and others, have been warning us of the discriminatory and violent genealogies of biometrics. We need to listen. The solution is not simply to test so-called emotion-reading algorithms on marginalized communities for more accurate readings: this is a trap that only ensures more data is collected on those most vulnerable to its abuse. Instead, we must abandon our notions of technological determinism. We must reject a modernity that relies on pseudoscientific ideas into which we have entrenched our most unsavoury and violent biases. We must reject the view that more and more accurate technology is a good in and of itself. Indeed, we must move away from grounding our conception of progress in a sense of order and adopt a framework of justice.
