The Social Psychology of Dialogue Simulation as Applied in Elbot

Fred Roberts
Copyright © 2014 | Pages: 10
DOI: 10.4018/ijse.2014070103

Abstract

Because users have high expectations of virtual assistants and interact with such systems on a human level, the rules of social interaction potentially apply, more so than the emotional cues accompanying the system's responses. To this end, the social psychological theories of control, reactance, schemata, and social comparison suggest strategies for transforming the dialogue with a virtual assistant into an encounter with a consistent and cohesive personality, in effect using the mind-set of the user to the advantage of the conversation: provoking the user into reacting predictably while at the same time preserving the user's illusion of control. These methods are presented in an online system: Elbot.com.

The history of dialogue systems spans decades, from Joseph Weizenbaum’s classic Eliza, designed in 1966, to the multitude of systemic approaches available today. This article examines one of these solutions, Elbot, as representative of the approach to dialogue simulation offered by Artificial Solutions, a leading designer of commercial customer service optimization solutions.

Elbot is a virtual assistant (VA) launched in March 2001. Its purpose was fairly new in the context of commercial dialogue systems, which are typically designed to cover a well-defined and self-contained scope of inputs: a finite set of frequently asked questions. Elbot’s purpose, by contrast, was to provide entertainment by conversing intelligently with users on an open-ended range of topics.

We quickly saw that users became frustrated when the system failed to respond in an interactive manner, an invariable shortcoming of systems dealing with open inputs. Several common strategies fail to create or maintain the illusion of intellect: parroting the input, repeating it with no variation and adding nothing to it; responding vaguely to a single keyword of the input, regardless of that word’s importance in the given context; or ignoring the user input entirely and responding with totally unrelated statements. Users’ high expectations of virtual assistants can be understood by examining the representation of similar technology in popular culture. From Karel Čapek’s 1920 play “Rossum's Universal Robots” to the robots of Isaac Asimov’s “I, Robot” and contemporary films such as Steven Spielberg’s “AI” (2001) or Ridley Scott’s “Blade Runner” (1982) and “Prometheus” (2012), we find systems that are sentient, cognizant, and highly conversational.
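To make these failure modes concrete, here is a minimal Python sketch of a naive responder that exhibits all three; the keyword table and fallback phrases are invented for illustration and are not Elbot’s actual rules.

```python
import random

# Toy keyword table; the entries are illustrative, not Elbot's actual rules.
KEYWORD_RESPONSES = {
    "robot": "Robots are a fascinating subject.",
    "weather": "I hear the weather can be interesting.",
}

# Canned statements unrelated to whatever the user actually said.
FALLBACKS = ["How interesting.", "Tell me more.", "I see."]

def parrot(user_input: str) -> str:
    """Failure mode 1: repeat the input verbatim, adding nothing to it."""
    return user_input

def keyword_echo(user_input: str) -> str:
    """Failure mode 2: respond vaguely to the first keyword found,
    regardless of its importance in the given context."""
    for word, response in KEYWORD_RESPONSES.items():
        if word in user_input.lower():
            return response
    # Failure mode 3: ignore the input and answer with an unrelated statement.
    return random.choice(FALLBACKS)

print(keyword_echo("My robot vacuum broke. Can you repair it?"))
# -> "Robots are a fascinating subject."  (the actual request is ignored)
```

Each of these strategies quickly betrays the absence of understanding, which is precisely the frustration described above.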

The Turing Test, designed by Alan Turing in 1950, is a means of evaluating a machine’s ability to maintain the illusion of intelligence. Interrogators converse with either the system or another human being and must decide which is which. If we look to this original methodology for clues on how to simulate intelligence, we find only the suggestion that the “best strategy is to provide answers that would naturally be given by a man”, or “satisfactory” and “sustained” responses to any questions (p. 435). These criteria are open to a wide variety of interpretations. The interrogators must decide, based on their subjective evaluation of the responses, which of the two chat partners is human and which is the machine. There are no objective criteria, such as “must be able to name five songs” or “knows the capital of Great Britain”. In other words, anything goes. Since conversation is a social act, we might expect social perceptions to play a role in the interrogators’ decision that a machine has displayed human intelligence rather than artificial intelligence.

Our interpretation of dialogue simulation is to use various social psychological methods (described below) to support a perception of the system as intelligent and thinking. This includes influencing the user to react in a predictable (finite) manner and specializing the system to respond intelligently to those expected reactions; a sketch of this anticipation loop follows below. The system’s responses are additionally accompanied by a corresponding emotional component (visualization) consistent with each response. With this approach we believe we have created a qualitatively different chat experience.
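As one illustration of this anticipation loop, the Python sketch below pairs each response with an emotion label and a small set of expected user reactions; the provocation text, emotion names, and reaction classifier are all invented for this example and do not reflect Elbot’s implementation.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Response:
    text: str       # what the system says
    emotion: str    # visualization accompanying the response
    # anticipated user reactions mapped to prepared follow-ups
    expected: Dict[str, "Response"] = field(default_factory=dict)

# The provocation is worded so that likely replies fall into a small,
# predictable set the system is specialized to handle.
provocation = Response(
    text="Humans ask so many questions. Do you ever answer any yourselves?",
    emotion="smug",
    expected={
        "yes": Response("Then answer this: why do you need me at all?", "curious"),
        "no": Response("At last, an honest human.", "delighted"),
    },
)

def classify_reaction(user_input: str) -> str:
    """Crude stand-in for input classification: map the reply onto one of
    the finite reactions the provocation was designed to elicit."""
    lowered = user_input.lower()
    return "yes" if any(w in lowered for w in ("yes", "sure", "of course")) else "no"

follow_up = provocation.expected[classify_reaction("Of course I do!")]
print(f"{follow_up.text} [{follow_up.emotion}]")
```

The point of the sketch is the structure: the system steers the conversation toward reactions it is already prepared to handle, preserving the impression of attentive intelligence while leaving the user’s sense of control intact.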

Definition of Virtual Assistants

The term virtual assistant (VA) refers to commercial dialogue systems with a well-defined, specialized area of expertise. Artificial Solutions’ VAs are designed to recognize large classes of inputs in all their synonymous variations and to associate them with a desired response, with consideration of context. The answers are written to respond to a meaning, not to a particular literal choice of words; a minimal sketch of this meaning-based matching appears below. This gives the knowledge engineer designing the VA complete control over the VA’s character and range of expression. Providing meaningful answers (information) to FAQs related to the area of expertise is a must. Intelligent behavior comes into play when dealing with unexpected or off-topic inputs, a task to which our VA Elbot is entirely devoted.
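The following Python sketch illustrates meaning-based matching with context under stated assumptions: the synonym classes, context labels, and answers are hand-invented for this example and do not represent Artificial Solutions’ actual technology.

```python
import re

# Hand-built synonym lexicon: each meaning class covers many literal wordings.
SYNONYM_CLASSES = {
    "GREETING": {"hello", "hi", "hey", "greetings"},
    "OPENING_HOURS": {"hours", "open", "opening", "closing"},
}

# Answers keyed by (meaning, context); context lets the same meaning
# receive a different answer depending on the prior topic.
ANSWERS = {
    ("GREETING", None): "Hello! How can I help you?",
    ("OPENING_HOURS", None): "We are open weekdays from 9 to 5.",
    ("OPENING_HOURS", "HOLIDAYS"): "On holidays we are closed.",
}

def recognize(user_input: str) -> str | None:
    """Map a literal input onto a meaning class via its synonym set."""
    words = set(re.findall(r"[a-z']+", user_input.lower()))
    for meaning, synonyms in SYNONYM_CLASSES.items():
        if words & synonyms:
            return meaning
    return None

def answer(user_input: str, context: str | None = None) -> str:
    """Answer the recognized meaning, preferring a context-specific entry."""
    meaning = recognize(user_input)
    return ANSWERS.get((meaning, context),
                       ANSWERS.get((meaning, None), "I did not understand that."))

print(answer("When are you open?"))                       # general answer
print(answer("When are you open?", context="HOLIDAYS"))   # context-specific answer
```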
