Should We Become Emotional With AI?: Performative Engagements With an Affective Algorithm

Avital Meshi
Copyright: © 2022 | Pages: 13
DOI: 10.4018/IJACDT.316136

Abstract

Affective computing algorithms are becoming increasingly prevalent in devices and services with which we interact daily. Despite positive intentions behind the development of this technology, its impact on our society is still unclear. This paper describes four art performances using an affective algorithm designed to infer emotions based on facial expressions. These performances propose alternative scenarios which imagine possible entanglements with this technology. One performance invites people to consider how their bodies are seen through the lens of an affective algorithm. Another performance displays a dystopian scenario in which this algorithm is deployed as an instrument of the law. The third artwork invites participants to reclaim their agency when confronted with such algorithms. The fourth artwork considers a utopian scenario in which a symbiotic relationship between humans and emotional AI yields higher capacities for both. The purpose of these artworks is to encourage a critical discussion regarding the possible futures of our engagement with affective algorithms.

Introduction

Here is a simple exercise. Lift your eyes and look at the nearest person. What can you know about this person’s emotions? Is this person happy? Upset? Satisfied? Disappointed? Do they seem to be having a successful day, or maybe a terrible one? Do you think you can tell? Whether or not your assessment is accurate, you would probably agree that this simple exercise is something people do all the time. We look at one another and try to glean information from each other’s facial expressions, tone of voice, and body language. We are social and emotional beings, and the cues we gain from looking at each other help us decide whom to love, whom to trust, who may need our attention, and who might pose a threat. Being able to recognize other people’s emotions accurately is considered a fundamental component of Emotional Intelligence (Mayer et al., 2001), and it has been shown to improve people’s social adaptation and mental health (Nowicki & Duke, 1994; Carton et al., 1999; DePaulo & Rosenthal, 1979). Now, do you think computers can do the same?

The idea that machines might perform human-like skills is not new. While developing the first digital computer, Alan Turing himself urged us to consider the possibility that machines can think (Turing, 2009). Decades later, contemporary futurists continually promise that we are almost there (Kurzweil et al., 1990), and some people (e.g., ex-Google engineers) believe that machines are already alive and kicking (Tiku, 2022). While many of us agree that this idea still requires a leap of faith, it is essential to acknowledge that some machines show signs of Artificial Emotional Intelligence. Designed within a field of computer science known as affective computing, these machines are able to recognize, interpret, replicate, and even manipulate human emotions (Yonck, 2020). But why do we need machines to do that? Why is it important for algorithms to “understand” human emotions?

MIT researcher Rosalind Picard, one of the founders of affective computing, argued that computers require emotional abilities less for improving their own intelligence and more for facilitating people’s natural abilities. In her manifesto, published in 2000, she posited that the long-term influence of interacting with non-affective computers might gradually erode users’ emotional skills. Affective computing is thus introduced as a step toward validating people’s emotions and fixing an unhealthy environment in which feelings may be deemed worthless (Picard, 2000). Since the publication of these ideas, the field of affective computing has grown tremendously. A recent report estimates that this market will grow from 28.6 billion USD in 2020 to 140 billion USD in 2025 (MarketsAndMarkets, 2021). In 2009, Picard and Rana el Kaliouby, another leading affective computing researcher, co-founded Affectiva, a company producing affective computing models designed for commercial use. Affectiva was one of the first companies to offer affective products, but today many other corporations and startups are integrating affective computing models into services and products, making them sensitive and responsive to human emotions.

In her 2015 TED talk, Rana el Kaliouby suggested that soon all of our devices would have an emotion chip. She gave the example of a fridge that locks itself when it detects that its owner is upset, thereby blocking binge eating. She argued that soon, “we won’t remember what it was like when we couldn’t just frown at our device, and our device would say, ‘Hmm, you didn’t like that, did you?’” El Kaliouby acknowledged that this technology carries risks, but claimed that its potential to benefit humanity far outweighs its potential for misuse. Following Picard’s framing of affective computing as a technological fix, el Kaliouby argued that technology has separated us, yet that by embedding emotional intelligence into our devices we can bring ourselves back together again (TED, 2015). Is this necessarily so? Will this technology indeed validate our emotions and bring us closer to one another? Is it true that the benefits outweigh the risks?
