The Case Against an Autonomous Military

  • April 10, 2018

The potential harm of A.I.s deliberately designed to kill in warfare is much more pressing. The U.S. and other countries are working hard to develop military A.I., in the form of automated weapons that enhance battlefield capabilities while exposing fewer soldiers to injury or death. For the U.S., this would be a natural extension of its existing, imperfect drone warfare program, in which failures of military intelligence have already led to the mistaken killing of non-combatants in Iraq.

The Pentagon says it has no plans to remove humans from the decision process that approves the use of lethal force. But A.I. technology is out-performing humans in a growing number of domains so quickly that many fear a runaway global arms race, one that could easily accelerate toward fully autonomous weaponry: autonomous, but not necessarily possessed of good judgment.

Source: nautil.us

