How Responsible Is AI?: Identification of Key Public Concerns Using Sentiment Analysis and Topic Modeling

Copyright: © 2022 | Pages: 14
DOI: 10.4018/IJIRR.298646

Abstract

Many businesses around the world are adopting AI with the hope of improving their top-line and bottom-line numbers, and the COVID-19 pandemic has further accelerated that journey. While AI technology promises enormous benefits, it brings challenges in similar proportions. In their current form, the requirements for transparency and trust in AI systems are relatively low; at the same time, there is considerable regulatory pressure for AI systems to be trustworthy and responsible. Challenges remain both on the methods and theory side and in how explanations are used in practice. The objective of this paper is to analyze Twitter data to extract sentiments and opinions from unstructured text. We use contextual text analytics to categorize the Twitter data, identify positive and negative sentiments and feelings toward AI ethical challenges, and highlight the key concerns. Text clustering is then performed separately on the positive and negative sentiments to understand the key themes behind people's concerns.
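The two-stage pipeline described above (classify each tweet's sentiment, then extract themes separately within the positive and negative groups) can be sketched as follows. This is a minimal illustration only: the lexicon, tweets, and term-frequency "theming" here are toy stand-ins for the contextual sentiment analysis and text clustering the study actually performed.

```python
from collections import Counter

# Toy sentiment lexicon and sample tweets: hypothetical stand-ins for the
# study's real Twitter corpus and contextual sentiment model.
POSITIVE = {"benefit", "helpful", "trust", "improve"}
NEGATIVE = {"bias", "privacy", "discrimination", "unfair"}

tweets = [
    "AI can improve healthcare and benefit society",
    "worried about bias and discrimination in AI hiring",
    "AI assistants are helpful and I trust them",
    "facial recognition is a privacy nightmare, so unfair",
]

def polarity(text):
    """Label a tweet by counting lexicon hits (a crude proxy for a model)."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Stage 1: split the corpus by sentiment.
groups = {}
for t in tweets:
    groups.setdefault(polarity(t), []).append(t)

# Stage 2: surface dominant terms within each polarity group, standing in
# for the paper's text clustering of positive and negative tweets.
def top_terms(texts, n=3):
    counts = Counter(w for t in texts for w in t.lower().split() if len(w) > 4)
    return [w for w, _ in counts.most_common(n)]

for label, texts in groups.items():
    print(label, top_terms(texts))
```

In the published study, the lexicon lookup would be replaced by a trained sentiment classifier and the term counts by a proper clustering or topic-modeling step; the control flow, however, mirrors the sentiment-first, themes-second design the abstract describes.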
Article Preview

1. Introduction

Humans are always in search of new tools and technologies to lead better and more productive lives. Over the last three industrial revolutions, we have seen a massive shift from muscle power to mechanical power. Advances in digitization, technology, and data analytics now focus on further enhancing human capability by exploiting cognitive principles, a theme known worldwide as Artificial Intelligence (AI) (Schwab, 2017). John McCarthy, the founder of the AI discipline, explains that Artificial Intelligence is the “science and engineering of making intelligent machines” (Walch, 2018).

Artificial Intelligence solutions are permeating every walk of life. Tasks ranging from basic ones, such as searching for information on Google, writing content, and drawing, to complex activities, such as consulting a digital assistant or robo-advisor from a service provider, are powered by AI algorithms. These algorithms are trained and validated on available societal behavioral data combined with human ingenuity. While these automated decision-making systems bring enormous benefits to society, they can bring challenges, too, unless handled with utmost care.

There has been disagreement around the scientific definition of humans and their origin. However, it is commonly accepted that humans first appeared between two and three million years ago (Barras, 2016). Despite evolving over millions of years, humans still make irrational decisions and mistakes. Human decisions are colored by the amount of information available, cognitive ability, socio-economic condition, and many other dimensions. Similarly, AI systems built by human endeavor on past societal behavior captured in data can be equally irrational and biased. These irrationalities or biases can appear in the form of infringement of privacy, discrimination, societal exclusion, accidents, and the rigging of political systems (Cheatham, Javanmardian, and Samandari, 2019). After all, humans play a critical role in building these intelligent systems.

The global artificial intelligence market was valued at USD 62.35 billion in 2020 and is expected to grow at a compound annual growth rate (CAGR) of 40.2% from 2021 to 2028 (GrandViewResearch, 2021). AI is expected to add a 21% incremental impact to the GDP of the United States of America by 2030 (Bughin et al., 2018). Open databases have supported the rapid development of AI algorithms, leading to significant outcomes from which different stakeholders have benefited to a great extent. Ntoutsi et al. (2020) discuss the far-reaching impact of AI on individuals and society: its decisions might affect everyone, everywhere, and at any time, raising concerns about potential human rights issues.

Artificial Intelligence systems can be a double-edged sword. While they bring substantial benefits to the decision-making process, wrong decisions can lead to loss of life, reputational damage, revenue loss, societal unrest, regulatory backlash, criminal investigation, and diminished public trust (Cheatham, Javanmardian, and Samandari, 2019). The problem is magnified when AI systems are built on automated learning and deployed at scale. AI mistakes can arise anywhere from the ideation of a problem through the design and deployment of the solution. They can range from unintentional errors to malicious intent, whether exploiting market conditions or economically and politically defaming and defeating certain sections of society.
