Three Approaches to Scaling Machine Learning with Uber Seattle Engineering


  • September 14, 2019

Uber’s services require real-world coordination among a wide range of customers, including driver-partners, riders, restaurants, and eaters. Accurately forecasting quantities like rider demand and ETAs enables this coordination, making our services work as seamlessly as possible. To continually optimize our operations, serve our customers, and improve our systems, we leverage machine learning (ML).

In addition, we make many of our ML tools open source, sharing them with the community to advance the state of the art. In this spirit, members of our Seattle Engineering team shared their work at an April 2019 meetup on ML and AI at Uber. Below, we highlight three different approaches Uber Seattle Engineering is currently working on to improve our ML ecosystem and that of the tech community at large.

In his talk, Travis Addair, a senior software engineer on the ML Platform team, describes the power of deep learning and explains how Horovod, an open source distributed deep learning framework built at Uber, helps teams put it into practice, especially when used with Apache Spark. As a distributed training platform, Horovod lets companies scale their ML training to hundreds of machines. Horovod’s abstracted framework also helps infrastructure professionals and ML engineers focus on doing their best work without stepping on each other’s digital toes.

Travis details how distributed training with Horovod works and demonstrates why NVIDIA, Amazon, Alibaba, ORNL, and other major players are using it for their own ML platforms.
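Under the hood, Horovod scales training by averaging gradients across workers with a ring-allreduce collective. As a rough illustration of that idea (not Horovod's actual implementation, which moves tensors between processes over MPI or NCCL), here is a single-process Python simulation in which each "worker" is just a list of gradient chunks:

```python
# Pure-Python sketch of ring-allreduce, the collective Horovod uses to
# average gradients across workers during data-parallel training.

def ring_allreduce(worker_grads):
    """Return the averaged gradient, as every worker would hold it.

    worker_grads: one gradient list per worker, all the same length,
    with the length divisible by the number of workers.
    """
    n = len(worker_grads)
    size = len(worker_grads[0]) // n
    # Each worker views its gradient as n chunks.
    chunks = [[g[i * size:(i + 1) * size] for i in range(n)]
              for g in worker_grads]

    # Phase 1 (reduce-scatter): in each of n-1 steps, worker r sends one
    # chunk to its ring neighbor r+1, which adds it in. Afterwards worker r
    # holds the fully summed chunk (r + 1) % n.
    for step in range(n - 1):
        for r in range(n):
            c = (r - step) % n
            dst = (r + 1) % n
            chunks[dst][c] = [a + b
                              for a, b in zip(chunks[dst][c], chunks[r][c])]

    # Phase 2 (allgather): each worker circulates its finished chunk around
    # the ring until every worker holds all n reduced chunks.
    for step in range(n - 1):
        for r in range(n):
            c = (r + 1 - step) % n
            chunks[(r + 1) % n][c] = list(chunks[r][c])

    # All workers now agree; flatten one copy and divide by the worker count.
    summed = [x for chunk in chunks[0] for x in chunk]
    return [x / n for x in summed]

# Three simulated workers, each with a gradient from its own data shard.
grads = [
    [1.0, 2.0, 3.0],
    [3.0, 2.0, 1.0],
    [2.0, 2.0, 2.0],
]
print(ring_allreduce(grads))  # [2.0, 2.0, 2.0]
```

In real Horovod this exchange is triggered after each backward pass; the ring pattern keeps per-worker network traffic roughly constant as the worker count grows, which is what makes scaling to hundreds of machines practical.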

Source: uber.com


Related Posts

Teaching Computers to Answer Complex Questions


Computerized question-answering systems usually take one of two approaches. Either they do a text search and try to infer the semantic relationships between entities named in the text, or they explore a hand-curated knowledge graph, a data structure that directly encodes relationships among entities. With complex questions, however — such as “Which Nolan films won an Oscar but missed a Golden Globe?” — both of these approaches run into difficulties.
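The knowledge-graph approach can be sketched in a few lines: once relationships are stored as (subject, relation, object) triples, a complex question decomposes into set operations over graph lookups. The films and awards below are hypothetical placeholders, not real award data:

```python
# Toy knowledge graph: relationships encoded directly as
# (subject, relation, object) triples. Entities are made up.
triples = {
    ("FilmA", "directed_by", "Nolan"),
    ("FilmB", "directed_by", "Nolan"),
    ("FilmC", "directed_by", "OtherDirector"),
    ("FilmA", "won", "Oscar"),
    ("FilmB", "won", "Oscar"),
    ("FilmB", "won", "GoldenGlobe"),
    ("FilmC", "won", "Oscar"),
}

def objects_with(relation, obj):
    """All subjects s such that (s, relation, obj) is in the graph."""
    return {s for s, r, o in triples if r == relation and o == obj}

# "Which Nolan films won an Oscar but missed a Golden Globe?"
nolan_films = objects_with("directed_by", "Nolan")
oscar_winners = objects_with("won", "Oscar")
globe_winners = objects_with("won", "GoldenGlobe")

answer = (nolan_films & oscar_winners) - globe_winners
print(answer)  # {'FilmA'}
```

The hard part the post points to is not this set algebra but everything around it: real knowledge graphs are incomplete, and mapping a natural-language question onto the right relations is itself an open problem.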

Read More
First Programmable Memristor Computer


Michigan team builds memristors atop standard CMOS logic to demo a system that can do a variety of edge computing AI tasks.

Hoping to speed AI and neuromorphic computing and cut down on power consumption, startups, scientists, and established chip companies have all been looking to do more computing in memory rather than in a processor’s computing core. Memristors and other nonvolatile memory seem to lend themselves to the task particularly well. However, most demonstrations of in-memory computing have been in standalone accelerator chips that either are built for a particular type of AI problem or that need the off-chip resources of a separate processor in order to operate.
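The appeal of a memristor crossbar is that it computes a matrix-vector product physically: each device's conductance stores a weight, and applying input voltages to the rows makes each output column sum currents according to Ohm's and Kirchhoff's laws. A sketch of that arithmetic, evaluated digitally here with illustrative values:

```python
# A memristor crossbar performs a matrix-vector multiply in place:
# each column j collects current I[j] = sum_i G[i][j] * V[i], where
# G[i][j] is a device conductance and V[i] an input voltage.

def crossbar_mvm(conductances, voltages):
    """Column currents of a crossbar: I = G^T . V."""
    n_rows = len(conductances)
    n_cols = len(conductances[0])
    return [sum(conductances[i][j] * voltages[i] for i in range(n_rows))
            for j in range(n_cols)]

G = [[0.5, 1.0],   # conductances, one row per input line (illustrative)
     [2.0, 0.0]]
V = [1.0, 2.0]     # input voltages (illustrative)
print(crossbar_mvm(G, V))  # [4.5, 1.0]
```

Because the multiply-accumulate happens where the weights are stored, no data shuttles between memory and a compute core, which is the power and speed advantage the article describes.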

Read More
New advances in natural language processing


Natural language understanding (NLU) and language translation are key to a range of important applications, including identifying and removing harmful content at scale and connecting people across different languages worldwide. Although deep learning–based methods have accelerated progress in language processing in recent years, current systems are still limited when it comes to tasks for which large volumes of labeled training data are not readily available. Recently, Facebook AI has achieved impressive breakthroughs in NLP using semi-supervised and self-supervised learning techniques, which leverage unlabeled data to improve performance beyond purely supervised systems.
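One simple way unlabeled data can help, in the semi-supervised family the post refers to, is self-training (pseudo-labeling): a model trained on a small labeled set labels unlabeled examples, and its confident predictions are folded back into the training data. The toy keyword-count "classifier" below illustrates only that loop, not Facebook AI's actual methods:

```python
# Minimal self-training (pseudo-labeling) loop with a toy word-count model.

def train(labeled):
    """Count word occurrences per label -> a naive scoring 'model'."""
    counts = {}
    for text, label in labeled:
        for word in text.split():
            counts.setdefault(word, {}).setdefault(label, 0)
            counts[word][label] += 1
    return counts

def predict(model, text):
    """Return (best_label, margin) by summing per-label word counts."""
    scores = {}
    for word in text.split():
        for label, c in model.get(word, {}).items():
            scores[label] = scores.get(label, 0) + c
    if not scores:
        return None, 0
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    best, top = ranked[0]
    margin = top - (ranked[1][1] if len(ranked) > 1 else 0)
    return best, margin

labeled = [("great film loved it", "pos"), ("terrible film hated it", "neg")]
unlabeled = ["loved it great acting", "hated it so terrible", "film tonight"]

model = train(labeled)
# Keep only confident pseudo-labels (score margin >= 2), then retrain on
# the labeled set plus the pseudo-labeled examples.
pseudo = [(t, lab) for t in unlabeled
          for lab, m in [predict(model, t)] if lab and m >= 2]
model = train(labeled + pseudo)
```

The confidence threshold is the crux of the technique: it trades coverage of the unlabeled pool against the risk of reinforcing the model's own mistakes.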

Read More