The Ethical and Technological Concerns of Autonomous Machines being Used in War
Research Anthology on Unmanned Aerial Vehicles | Mehdi Khosrow-Pour, D.B.A.
©2019 | 558 pgs. | EISBN: 9781522583660
- Content Hand-Selected from Expert Editorial Team
- Insights from 12+ Countries
- Covers AI, Drones, & Mixed-Integer Linear Programming
An institute within the UN recently released a report discussing the shortcomings of using autonomous machines in war and as weapons. According to a recent Popular Science article, the report is aimed at the international community and lawmakers, underscoring the inherent risks of autonomous machines: because they operate by collecting data and applying artificial intelligence and deep learning, they are highly susceptible to malfunction caused by harsh environments, errors in data collection and processing, and cybersecurity attacks. The report also raised the debate over whether humans should be held accountable for the actions of these machines.
Although there are inherent risks, many military planners, robotics researchers, and lawmakers argue that autonomous machines in warfare could:
- Provide long-term savings, as “each soldier in Afghanistan costs the Pentagon roughly $850,000 per year,” versus a smaller machine that costs US$230,000.
- Be sent on more complex missions or utilized in harsher environments.
- Act more humanely and not be influenced by unconscious bias and emotions during battle.
- Save soldiers’ lives and mitigate the need for humans to be put in danger.
Understanding the importance of this international debate, Prof. Jai Galliott (The University of New South Wales, Australia) discusses this, military technoethics, and military technology in his article, “War 2.0: Drones, Distance and Death,” featured in the Research Anthology on Unmanned Aerial Vehicles (IGI Global).
View a Preview of the Complimentary Chapter Below
War is an all-too-human affair and will probably require the endangerment of human lives in some shape or form, but military robots known as ‘drones’ or ‘unmanned systems’ promise to significantly offset the human cost of war by removing warfighters from the physical dangers of the battle zone and facilitating the conduct of what is purported to be more precise killing. However, the use of these systems toward such ends is not without other implications for thinking at the intersection of military technoethics and just warfare. In this paper, I examine the efficacy of unmanned systems with a particular focus on the mindset-altering dimensions of unmanned warfare and their impact on principal warmaking agents, namely unmanned systems operators. This is because many of the unintended effects of this technology cannot be attributed to the machine, but to human psychology. I first examine some problems associated with technologically mediated fighting and suggest that through a process of moral disengagement and desensitisation, the barriers to immoral conduct in war may be reduced. Having considered the impact on the long distance warrior’s capacity or willingness to adhere to the rules/laws of war, the next section examines the impact on the personal wellbeing of the operators themselves. Here, among other things, the impact of being simultaneously present in contrasting environments is considered in arguing that this, if nothing else, may lead to serious transgressions of just war principles. Toward the end of the paper, I consider whether we can eliminate or relieve some of these technologically mediated but distinctly human moral problems by automating elements of the decision making process. 
It is concluded that while greater automation certainly has the potential to alleviate some moral concerns generated by these systems, there is a strong case for keeping humans in the decision making chain, even if it involves having to make a delicate moral tradeoff between maintaining and/or improving warfighting capability and limiting harm to noncombatants.
THE ROLE OF THE INDIVIDUAL SOLDIER
While many of the campaigns to halt the development of ‘killer robots’ focus on high-level decision makers, as they are central to the initial decision to develop said systems and engage them in warfare, it is the individual soldier who defends his state and society that must be most unconditional in exercising moral restraint and adhering to just war theory. Michael Ignatieff (1998) writes that, more than any other group of warmaking agents, it is the soldiers who actually conduct war that have the most influence on its outcomes and the ability to introduce the moral component. In his words, ‘the decisive restraint on inhuman practice on the battlefield lies within the warrior himself – in his conception of what is honourable or dishonourable for a man to do with weapons’ (Ignatieff, 1998, p. 118). Ironically, soldiers are the primary agents of both physical violence and of compassion and moral arbitration in war. As Darren Bowyer (1998) remarks, they deliver ‘death and destruction one moment ... [and deal] out succour to the wounded (of both sides) and assistance to the unwittingly involved civilian population, the next’ (p. 276). The specific concern examined here is whether, by removing soldiers from the battlefield and training them to fight via a technologically mediated proxy, we may, through a process of psycho-moral disengagement and emotional desensitisation, lower their ability or willingness to exercise restraint and compassion in warfare and adhere to the moral laws of war, namely the principles of discrimination and proportionality enshrined within just war theory, which respectively require that war be directed only at legitimate targets and involve a morally appropriate level of force. It will be argued that the employment of unmanned systems tracks unethical decision-making and/or lowers barriers to killing, endangering the moral conduct of warfare and countering much of the benefit of using these systems.
Complimentary Research Articles and Chapters on Military Robotics and Ethics

Exploring Security in Software Architecture and Design | Profs. Michael Felderer (University of Innsbruck, Austria) et al.
©2019 | 349 pgs. | EISBN: 9781522563143
- Edited by Leading Researchers in Cybersecurity
- 30+ Contributors
- Covers Security Risk Analysis, Software Architecture, & Engineering
View All Chapters and Articles on This Topic

The “View All Chapters and Articles on This Topic” link navigates to IGI Global’s Demo Account, which provides a sample of the IGI Global content available through IGI Global’s e-Book Collection (6,600+ e-books) and e-Journal Collection (140+ e-journals) databases. If interested in having full access to this peer-reviewed research content, Recommend These Valuable Research Tools to Your Library.

For Journalists Interested in Additional Trending Research:
Contact IGI Global’s Marketing Team at marketing@igi-global.com or 717-533-8845 ext. 100 to access additional peer-reviewed resources to integrate into your latest news stories.
Founded in 1988, IGI Global, an international academic publisher, is committed to producing the highest quality research (as an active full member of the Committee on Publication Ethics “COPE”) and ensuring the timely dissemination of innovative research findings through an expeditious and technologically advanced publishing process. Through its commitment to supporting the research community ahead of profitability, and to taking a chance on virtually untapped topic coverage, IGI Global has been able to collaborate with 100,000+ researchers from some of the most prominent research institutions around the world to publish emerging, peer-reviewed research across 350+ topics in 11 subject areas, including business, computer science, education, engineering, social sciences, and more. To learn more about IGI Global, click here.
Caroline Campbell
Assistant Director of Marketing and Sales
(717) 533-8845, ext. 144
ccampbell@igi-global.com
www.igi-global.com