Train ALBERT for natural language processing with TensorFlow on Amazon SageMaker

May 28, 2020

At re:Invent 2019, AWS shared the fastest training times on the cloud for two popular machine learning (ML) models: BERT (natural language processing) and Mask-RCNN (object detection). To train BERT in 1 hour, we efficiently scaled out to 2,048 NVIDIA V100 GPUs by improving the underlying infrastructure, network, and ML framework. Today, we’re open-sourcing the optimized training code for ALBERT (A Lite BERT), a powerful BERT-based language model that achieves state-of-the-art performance on industry benchmarks while training 1.7 times faster and cheaper.

This post demonstrates how to train a faster, smaller, higher-quality model called ALBERT on Amazon SageMaker, a fully managed service that makes it easy to build, train, tune, and deploy ML models. Although ALBERT isn’t a new model, this is the first efficient distributed GPU implementation for TensorFlow 2. You can use the AWS training scripts to train ALBERT in Amazon SageMaker on p3dn and g4dn instances, for both single-node and distributed training.
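
To make the launch flow concrete, here is a minimal sketch of starting such a training job with the SageMaker Python SDK’s TensorFlow estimator. The entry-point script name, source directory, S3 path, and hyperparameters below are illustrative placeholders, not the actual names from the released code; consult the GitHub repo for the real ones.

```python
import sagemaker
from sagemaker.tensorflow import TensorFlow

# Hypothetical entry point, paths, and hyperparameters -- the real names
# live in the open-sourced repo linked below.
estimator = TensorFlow(
    entry_point="run_pretraining.py",        # assumed script name
    source_dir="albert",                     # assumed local source directory
    role=sagemaker.get_execution_role(),
    instance_count=8,                        # scale out for distributed training
    instance_type="ml.p3dn.24xlarge",        # or a g4dn instance type
    framework_version="2.1",
    py_version="py3",
    # Horovod-style MPI distribution, one process per GPU on a p3dn host.
    distribution={"mpi": {"enabled": True, "processes_per_host": 8}},
    hyperparameters={"max_seq_length": 512},  # illustrative
)

# Assumed S3 location for the preprocessed pretraining data.
estimator.fit({"train": "s3://my-bucket/albert-pretraining-data/"})
```

Setting `instance_count=1` gives the single-node case; raising it spreads the job across nodes with one worker process per GPU.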

The scripts use mixed-precision training and accelerated linear algebra (XLA) to complete training in under 24 hours (five times faster than without these optimizations), which allows data scientists to iterate faster and bring their models to production sooner. They use model architectures from the open-source Hugging Face transformers library. For more information, see the GitHub repo.
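
As a rough illustration of how those two optimizations are switched on in TensorFlow 2, and how an ALBERT architecture can be pulled from the transformers library, consider the sketch below. The `albert-base-v2` checkpoint and the exact calls are illustrative assumptions; the released training scripts may wire this up differently.

```python
import tensorflow as tf
from transformers import AlbertTokenizer, TFAlbertModel

# Turn on XLA (accelerated linear algebra) JIT compilation.
tf.config.optimizer.set_jit(True)

# Mixed precision: float16 compute with float32 variables.
# (On TF 2.4+, use tf.keras.mixed_precision.set_global_policy instead.)
tf.keras.mixed_precision.experimental.set_policy("mixed_float16")

# Load an ALBERT architecture and tokenizer from Hugging Face transformers.
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = TFAlbertModel.from_pretrained("albert-base-v2")

inputs = tokenizer("SageMaker trains ALBERT quickly.", return_tensors="tf")
outputs = model(inputs)
print(outputs[0].shape)  # last hidden states: (1, sequence_length, 768)
```

Mixed precision halves most activation memory and unlocks the V100’s Tensor Cores, while XLA fuses small ops into larger kernels; together they account for the roughly five-fold speedup cited above.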

Source: amazon.com

