Towards a Virtual Stuntman


  • April 12, 2018

Motion control problems have become standard benchmarks for reinforcement learning, and deep RL methods have proven effective across a diverse suite of tasks ranging from manipulation to locomotion. However, characters trained with deep RL often exhibit unnatural behaviours, with artifacts such as jittering, asymmetric gaits, and excessive limb movement. Can we train our characters to produce more natural behaviours?

Source: berkeley.edu


Related Posts

DeepMarks: A Digital Fingerprinting Framework for Deep Neural Networks

DeepMarks introduces the first fingerprinting methodology that enables a model owner to embed unique fingerprints within the parameters (weights) of her model and later identify undesired usage of her distributed models. The proposed framework embeds the fingerprints in the probability density function (pdf) of the trainable weights by leveraging the extra capacity available in contemporary DL models. DeepMarks is robust against fingerprint collusion as well as network transformation attacks, including model compression and model fine-tuning.
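To make the weight-space embedding idea concrete, here is a minimal sketch of this general style of fingerprinting. Everything in it (the random-projection scheme, the loss, all variable names and sizes) is an illustrative assumption, not DeepMarks' exact construction: a binary fingerprint is pressed into a weight vector by minimizing a regularization loss that makes a secret projection of the weights decode to the fingerprint bits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (sizes and scheme are assumptions for illustration):
# embed a binary fingerprint b into weights w so that sigmoid(X @ w)
# rounds to b, where X is a secret projection known only to the owner.
n_weights, n_bits = 256, 32
w = rng.normal(scale=0.1, size=n_weights)   # stand-in for trainable weights
X = rng.normal(size=(n_bits, n_weights))    # owner's secret projection matrix
b = rng.integers(0, 2, size=n_bits)         # fingerprint codeword

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on the binary cross-entropy between the extracted
# bits and the target fingerprint -- the "embedding regularizer".
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - b) / n_bits           # d(BCE)/dw for logistic decoding
    w -= 0.05 * grad

# Extraction: anyone holding X can read the fingerprint back out.
extracted = (sigmoid(X @ w) > 0.5).astype(int)
print((extracted == b).all())               # prints True
```

In a real system this loss would be added to the task loss during training, exploiting the over-parameterization of the network so that embedding the fingerprint barely perturbs accuracy; robustness to compression and fine-tuning then comes from how the codewords and projection are designed.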

Lessons Learned Reproducing a Deep Reinforcement Learning Paper

There are a lot of neat things going on in deep reinforcement learning. One of the coolest things from last year was OpenAI and DeepMind’s work on training an agent using feedback from a human rather than a classical reward signal. There’s a great blog post about it at Learning from Human Preferences, and the original paper is at Deep Reinforcement Learning from Human Preferences.
