
Why AI-Enabled Emotion Recognition Technology Is A Cause For Concern


In Nineteen Eighty-Four, the dystopian novel published in 1949, George Orwell conceived the Thought Police (Thinkpol), the secret police force of the superstate Oceania that monitors and punishes thoughtcrime: personal and political ideas that go against the diktats of the ruling regime.

However, with the advent of facial and emotion recognition technology, the idea no longer seems as far-fetched as it might have when the book was first published. In the twenty-first century, a Thinkpol would not need neighbours to report dissenting thoughts; they could be monitored directly through facial and emotion recognition technology.

CCTV cameras. (Source: piqsels)

Initially conceived as a surveillance research programme (SPOT, Screening of Passengers by Observation Techniques) by the U.S. Transportation Security Administration (TSA) in 2003 to detect potential terrorists by reading their facial expressions and emotions, the technology has since grown into a billion-dollar market.

The programme, which had little scientific basis, resulted mainly in unfair racial profiling and virtually no arrests, according to reports published by the Government Accountability Office and the ACLU. Despite the shoddy science behind the technology, the market is projected to grow from $19.5 billion in 2020 to $37.1 billion by 2026.

What Is Emotion Recognition Technology?

In simple words, emotion recognition technology, a subset of artificial intelligence, identifies a subject’s emotional state from facial expressions and bodily cues gathered from images and videos. The algorithms are trained to detect facial actions such as a lip curl, eyebrow raise, nose scrunch or brow furrow.

The science behind the technology rests on the outdated concept of seven universal emotions, i.e., joy, anger, fear, disgust, contempt, sadness and surprise, popularised by the psychologist Paul Ekman.
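To make the mechanics concrete, here is a minimal, hypothetical sketch of how such a pipeline might map detected facial “action units” onto Ekman-style categories. The action-unit names, scores and thresholds below are illustrative assumptions, not any vendor’s actual model; commercial systems use trained classifiers rather than hand-written rules, but the underlying leap from expression to emotion is the same.

```python
# Hypothetical sketch: mapping facial "action unit" scores to Ekman's
# seven "universal" emotions. All names, values and thresholds are
# invented for illustration; no real product works exactly this way.

# Scores (0.0 to 1.0) as a face-analysis model might emit them.
action_units = {
    "lip_corner_pull": 0.8,  # associated with smiling
    "brow_furrow": 0.1,
    "nose_scrunch": 0.0,
    "eyebrow_raise": 0.2,
    "lip_press": 0.0,
}

# Crude rules in the spirit of the seven-emotion taxonomy.
RULES = [
    ("joy",      lambda au: au["lip_corner_pull"] > 0.6),
    ("anger",    lambda au: au["brow_furrow"] > 0.6 and au["lip_press"] > 0.4),
    ("disgust",  lambda au: au["nose_scrunch"] > 0.5),
    ("surprise", lambda au: au["eyebrow_raise"] > 0.6),
]

def classify(au: dict) -> str:
    """Return the first emotion whose rule fires, else 'neutral'."""
    for emotion, rule in RULES:
        if rule(au):
            return emotion
    return "neutral"

print(classify(action_units))  # -> joy
```

Even this toy version exposes the fragility critics point to: a polite or nervous smile would trip the same “joy” rule as genuine happiness.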

In 2019, a scientific review led by the neuroscientist Lisa Feldman Barrett found no reliable scientific evidence behind emotion-recognition technology. The review found that people’s facial expressions match their emotional state only 20%–30% of the time.

The study concluded, “It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be the scientific facts.”
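That 20%–30% figure is best read as an agreement rate: how often the label inferred from an expression matches what the person actually reports feeling. A hypothetical check of this kind, on entirely invented data, might look like the following sketch:

```python
# Hypothetical agreement check between expression-derived labels and
# self-reported emotional states. The data is invented purely to
# illustrate what a 20-30% agreement rate means in practice.
predicted = ["joy", "anger", "joy", "sadness", "joy",
             "fear", "joy", "anger", "joy", "joy"]
reported  = ["joy", "neutral", "anxiety", "sadness", "neutral",
             "neutral", "joy", "sadness", "neutral", "anger"]

matches = sum(p == r for p, r in zip(predicted, reported))
print(f"Agreement: {matches / len(predicted):.0%}")  # -> Agreement: 30%
```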

The AI Now Institute at New York University recommended new legislation to restrict the use of the technology, stating that the “research shows affect recognition is built on markedly shaky foundations”.

Another study found racial bias in emotion recognition technology: the software consistently assigned more negative emotions to the faces of Black men than to those of white men. Such bias is already well documented in facial recognition technology.

Deployment Of Emotion Recognition Technology

Despite the questionable science behind it, the technology is being increasingly adopted by governments and the private sector alike for mass surveillance, workplace hiring, virtual classrooms, digital advertising and more.

In India, the Lucknow Police has decided to set up a network of AI-enabled cameras to monitor and identify the facial expressions of distressed women and notify the nearest police station.

Taigusys, a Chinese firm, has developed an AI emotion recognition system that detects the facial expressions and emotions of workers. According to the developers, the “good” emotions include happiness, surprise and feeling moved by something positive. The program also detects negative emotions like disgust, sorrow, confusion, scorn and anger, as well as neutral states such as focus on a task.

As per the claims made by the company, the program can also detect if someone is faking a smile.

AI-enabled emotional recognition is being used in various fields. (Source: pixabay)

Another shocking revelation came when the BBC reported that a software engineer who had installed AI and facial recognition systems in China claimed the software was tested on Uyghurs. The engineer told the BBC, “The Chinese government use Uyghurs as test subjects for various experiments just like rats are used in laboratories.”

The technology is already being used in hiring. In 2019, Unilever claimed to use an AI system in its recruitment process, measuring candidates’ facial expressions, body language and choice of words against predictors of job success.

Another AI program, deployed during the pandemic, claims to read children’s emotions as they learn in virtual classrooms. The program analyses muscle points on faces to identify emotions like happiness, sadness, anger and fear.

Why Is There Concern Surrounding The Technology?

The primary concern, as stated above, is the technology’s questionable scientific foundation. Ekman’s framework fails to account for how the expression of emotion varies across cultures and contexts. Daniel Leufer, a European policy analyst, stated that even if the technology could work despite its shaky scientific basis, it would constitute a “gross violation” of several human rights.

Another major issue concerning the technology is privacy and the freedom of speech and expression. Can the State or an employer infringe a person’s privacy by encroaching upon their thoughts and expressions through technology merely because the individual is in a public space? The ever-pervasive growth of the technology can reduce individuals to mere bystanders in the drama of their own lives.

Law enforcement agencies can use emotion-detection technology to identify criminal behaviour. A UK-based firm, WeSee, has already claimed that it can spot criminally suspicious behaviour by analysing facial cues.

Critics claim the technology will be used to reward some and punish others.

“Emotions, such as doubt and anger, might be hidden under the surface, in contrast to the language a person is using.” However, can a person’s emotions be used against them, and what would the implications be vis-à-vis the internationally recognised right against self-incrimination?

According to the Internet Freedom Foundation in India, the move by the Lucknow Police can easily result in over-policing and unnecessary harassment.

Moreover, in the absence of transparency, both which expressions are tracked and how accurate the tracking is remain open questions. An expression of distress could just as easily stem from other factors, such as an argument over the phone with a friend, as from harassment.

In a report, Article 19, a British human rights organisation, expressed concern that the intrusive technology is being developed without consulting the people affected and could cause harm, especially to minorities and fringe groups.

Legal Framework Specific To The Indian Jurisdiction

At present, emotion recognition technology is more or less unregulated. The Information Technology Act, 2000 classifies biometric data as sensitive personal data, and contains provisions regarding the collection, disclosure and sharing of such information.

Section 43A of the Information Technology Act provides the remedy for violations: it makes a body corporate liable to pay damages as compensation where its negligence in possessing, dealing with or handling sensitive personal data or information causes wrongful loss or wrongful gain to any person.

However, the biggest loophole is its limited application to body corporates, leaving the field wide open to misuse by the government and its affiliated agencies.

Another legislation that deals with the collection, storage and processing of biometric data is the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016. The Act classifies biometric information as “sensitive personal data or information” as defined under the IT Act, 2000.

The Aadhaar Act allows a requesting entity to seek authentication of an individual’s unique identifier against their biometric information. The Act contains provisions on obtaining the individual’s consent and keeping them informed about how their information or data will be used; violations may attract imprisonment and a fine.

The draft Personal Data Protection Bill, 2019 classifies biometric data, which covers the inputs to facial recognition technology, as sensitive personal data.

The Bill defines biometric data as “facial images, fingerprints, iris scans or any other similar personal data resulting from measurements or technical processing operations carried out on physical, physiological, or behavioural characteristics of a data principal, which allow or confirm the unique identification of that natural person”.

The underlying privacy concerns were addressed by the Supreme Court of India in the Puttaswamy judgment, which declared the right to privacy a fundamental right and limited the State’s power to curtail it. After the judgment, any law or regulation invading the right to privacy must satisfy a triple test: the requirement of a law, a legitimate State aim and proportionality.

The judgment specifically held that privacy is not lost or surrendered merely because the individual is in a public space. Privacy is attached to the person and not to the place and is an essential facet of human dignity.

The use of the technology by investigative agencies also raises questions in the context of the right against self-incrimination, although no direct case law exists on the subject.

However, the Supreme Court’s judgment in Selvi v. State of Karnataka (2010) on the use of neuroscientific techniques by investigative agencies may shed some light on the issue.

In that case, the Supreme Court held the involuntary use of neuroscientific techniques like narcoanalysis, BEAP (Brain Electrical Activation Profile, or “brain mapping”) and polygraph tests to be unconstitutional, as they constitute testimonial compulsion and violate the accused’s right against self-incrimination under Article 20(3) and the right to life and personal liberty under Article 21 of the Constitution.

The court recognised that the information provided under the influence of drugs and measurement of physiological responses would be an intrusion into the mental privacy of the individual, and therefore, the tests could not be administered without consent.

The court observed: “We must recognise the importance of personal autonomy in aspects such as the choice between remaining silent and speaking. An individual’s decision to make a statement is the product of a private choice and there should be no scope for any other individual to interfere with such autonomy, especially in circumstances where the person faces exposure to criminal charges or penalties.”

One reason for holding the tests unconstitutional was the inaccuracy of the underlying science; another that weighed with the court was that the individual has the choice to remain silent.

Should a case involving unauthorised intrusion into an individual’s emotions arise in a criminal prosecution, the reliability of a technology built on such a shaky scientific foundation will be tested on the anvil of the Constitution.

Featured image via maxpixel