If AI Thinks Like a Human It May Get Depressed

  • April 12, 2018

Artificial intelligences may need a built-in control mechanism that functions similarly to the way serotonin works in the human brain, Mainen said. Such a mechanism would let the machines adapt to new situations quickly, but it could also entrench certain thought patterns, leaving the machines vulnerable to something like depression.

Source: vice.com


Related Posts

Lessons Learned Reproducing a Deep Reinforcement Learning Paper

There are a lot of neat things going on in deep reinforcement learning. One of the coolest things from last year was OpenAI and DeepMind’s work on training an agent using feedback from a human rather than a classical reward signal. There’s a great blog post about it at Learning from Human Preferences, and the original paper is at Deep Reinforcement Learning from Human Preferences.
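The core idea of that work — replacing a hand-coded reward signal with human feedback — can be sketched as fitting a reward model to pairwise preferences over trajectory segments. The following is a minimal, hypothetical illustration (a linear reward model trained with a Bradley-Terry-style logistic loss on synthetic data), not the paper's actual implementation, which uses neural-network reward models and segments gathered from a live RL agent:

```python
import numpy as np

def fit_reward_model(prefs, dim, lr=0.05, epochs=100):
    """Fit a linear reward model r(s) = theta @ s from pairwise preferences.

    prefs: list of (seg_a, seg_b) pairs, where the human preferred seg_a.
    Each segment is a (T, dim) array of state features.
    Uses a Bradley-Terry / logistic preference loss:
        P(seg_a preferred) = exp(R_a) / (exp(R_a) + exp(R_b)),
    where R is the total predicted reward over the segment.
    """
    theta = np.zeros(dim)
    for _ in range(epochs):
        for seg_a, seg_b in prefs:
            feat_a = seg_a.sum(axis=0)  # d(R_a)/d(theta)
            feat_b = seg_b.sum(axis=0)  # d(R_b)/d(theta)
            r_a, r_b = theta @ feat_a, theta @ feat_b
            p_a = 1.0 / (1.0 + np.exp(r_b - r_a))  # P(seg_a preferred)
            # Gradient ascent on log P(seg_a preferred).
            theta += lr * (1.0 - p_a) * (feat_a - feat_b)
    return theta
```

An agent would then be trained against the learned reward `theta @ s` in place of the environment's reward signal.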

Read More
Differentiable Plasticity: A New Method for Learning to Learn

Neural networks, which underlie many of Uber’s machine learning systems, have proven highly successful in solving complex problems, including image recognition, language understanding, and game-playing. However, these networks are usually trained to a stopping point through gradient descent, which incrementally adjusts the connections of the network based on its performance over many trials. Once the training is complete, the network is fixed and the connections can no longer change; as a result, barring any later re-training (again requiring many examples), the network in effect stops learning at the moment training ends.
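The fixed-after-training limitation described above is what differentiable plasticity addresses: each connection carries, in addition to a conventional weight, a plasticity coefficient that scales a Hebbian trace, and that trace keeps updating from activity even after gradient-descent training ends. Below is a minimal illustrative sketch of such a plastic layer — not Uber's implementation; the class name, initialization scales, and learning-rate constant are assumptions:

```python
import numpy as np

class PlasticLayer:
    """A layer whose effective weights are w + alpha * hebb, where hebb is a
    Hebbian trace that keeps changing at run time."""

    def __init__(self, n_in, n_out, eta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0, 0.1, (n_out, n_in))      # fixed weights (would be learned by SGD)
        self.alpha = rng.normal(0, 0.1, (n_out, n_in))  # plasticity coefficients (also learned by SGD)
        self.hebb = np.zeros((n_out, n_in))             # Hebbian trace, updated during use
        self.eta = eta                                  # trace learning rate

    def forward(self, x):
        # Effective connection strength blends the fixed and plastic parts.
        y = np.tanh((self.w + self.alpha * self.hebb) @ x)
        # Hebbian update: the trace decays toward the outer product of
        # post- and pre-synaptic activity ("neurons that fire together...").
        self.hebb = (1 - self.eta) * self.hebb + self.eta * np.outer(y, x)
        return y
```

Because `w` and `alpha` are ordinary parameters, the whole system remains differentiable, so gradient descent can learn *how plastic* each connection should be — which is the "learning to learn" part.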

Read More
DeepMarks: A Digital Fingerprinting Framework for Deep Neural Networks

DeepMarks introduces the first fingerprinting methodology that enables the model owner to embed unique fingerprints within the parameters (weights) of her model and later identify undesired usages of her distributed models. The proposed framework embeds the fingerprints in the Probability Density Function (pdf) of trainable weights by leveraging the extra capacity available in contemporary DL models. DeepMarks is robust against fingerprint collusion as well as network transformation attacks, including model compression and model fine-tuning.
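The embedding idea can be illustrated at its simplest: a secret projection matrix maps the weights to a bit string, and a regularization term pushes that projection toward the owner's fingerprint code during training. The sketch below is hypothetical — it uses a plain gradient-descent loop on the regularizer alone (the task loss is omitted), and the function names and constants are assumptions, not the paper's actual procedure:

```python
import numpy as np

def embed_fingerprint(w, X, b, lam=1.0, lr=0.5, steps=500):
    """Nudge weight vector w so that sigmoid(X @ w) matches bit string b.

    w : flattened model weights to carry the fingerprint
    X : secret projection matrix known only to the owner
    b : the owner's fingerprint bits (0/1 floats)
    """
    w = w.copy()
    for _ in range(steps):
        z = 1.0 / (1.0 + np.exp(-(X @ w)))   # bits currently encoded in w
        # Gradient of the binary cross-entropy regularizer w.r.t. w.
        grad = X.T @ (z - b)
        w -= lr * lam * grad / len(b)
    return w

def extract_fingerprint(w, X):
    """Recover the embedded bits by thresholding the secret projection."""
    return (X @ w > 0).astype(int)
```

Because the projection matrix is secret and the weight space has far more capacity than the fingerprint needs, the embedded bits survive small weight perturbations — the intuition behind robustness to compression and fine-tuning.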

Read More