A history of machine translation from the Cold War to deep learning

March 13, 2018

I open Google Translate twice as often as Facebook, and the instant translation of price tags no longer feels like cyberpunk to me. That’s what we call reality. It’s hard to imagine that this is the result of a nearly century-long struggle to build machine translation algorithms, and that for half of that time there was no visible success.

The developments I’ll discuss in this article laid the foundation of all modern language processing systems, from search engines to voice-controlled microwaves. I’m talking about the evolution and structure of online translation today.

Source: freecodecamp.org

