The Case Against an Autonomous Military

  • April 10, 2018

The potential harm of A.I.s deliberately designed to kill in warfare is much more pressing. The U.S. and other countries are working hard to develop military A.I. in the form of automated weapons that enhance battlefield capabilities while exposing fewer soldiers to injury or death. For the U.S., this would be a natural extension of its existing, imperfect drone warfare program, in which failures of military intelligence have already led to the mistaken killing of non-combatants in Iraq.

The Pentagon says it has no plans to remove humans from the decision process that approves the use of lethal force, but A.I. technology is outperforming humans in a growing number of domains so quickly that many fear a runaway global arms race, one that could easily accelerate toward completely autonomous weaponry: autonomous, but not necessarily possessed of good judgment.

Source: nautil.us

