A Google Brain engineer's guide to entering AI

November 12, 2018

Note that this guide was written in November 2018 to complement an in-depth conversation on the 80,000 Hours Podcast with Catherine Olsson and Daniel Ziegler about how to transition from computer science and software engineering in general into ML engineering, with a focus on alignment and safety. If you like this guide, we'd strongly encourage you to check out the podcast episode, where we discuss some of the advice here along with other relevant guidance.

Technical AI safety is a multifaceted area of research, with many sub-questions in areas such as reward learning, robustness, and interpretability. These questions will all need to be answered in order to make sure AI development goes well for humanity as systems become more and more powerful.

Source: 80000hours.org

