Lyft’s Journey through Mobile Networking

January 23, 2020

In 5 years, the number of endpoints consumed by Lyft’s mobile apps grew to over 500, and the size of our mobile engineering team increased by more than 15x. To keep up with this growth, our infrastructure had to evolve dramatically, adopting advances in modern networking so we could continue to serve our users well. This post describes the evolution of Lyft’s mobile networking: how it’s changed, what we’ve learned, and why it’s important for us as a growing business.

The early iterations of the Lyft apps used well-known networking frameworks: URLSession on iOS and OkHttp on Android. All of our APIs were JSON over RESTful HTTP. The workflow for developing an endpoint looked something like the following diagram, where engineers hand-wrote API clients on each platform based on a tech spec:

(Diagram from the original lyft.com post: the endpoint development workflow, with engineers hand-writing the API on each platform from a shared tech spec.)
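To make that workflow concrete, here is a minimal sketch of what one such hand-written endpoint call might have looked like on iOS with URLSession; the endpoint path, query parameters, response fields, and types below are hypothetical illustrations, not Lyft’s actual API.

```swift
import Foundation

// Hypothetical response model, hand-written to mirror the JSON described in a tech spec.
struct NearbyDriversResponse: Decodable {
    struct Driver: Decodable {
        let id: String
        let latitude: Double
        let longitude: Double
    }
    let drivers: [Driver]
}

// A hand-rolled endpoint call over URLSession; the host, path, and fields are illustrative.
func fetchNearbyDrivers(latitude: Double,
                        longitude: Double,
                        completion: @escaping (Result<NearbyDriversResponse, Error>) -> Void) {
    var components = URLComponents(string: "https://api.example.com/v1/drivers/nearby")!
    components.queryItems = [
        URLQueryItem(name: "lat", value: String(latitude)),
        URLQueryItem(name: "lng", value: String(longitude)),
    ]

    let task = URLSession.shared.dataTask(with: components.url!) { data, _, error in
        if let error = error {
            completion(.failure(error))
            return
        }
        do {
            let response = try JSONDecoder().decode(NearbyDriversResponse.self, from: data ?? Data())
            completion(.success(response))
        } catch {
            completion(.failure(error))
        }
    }
    task.resume()
}
```

A similar client then had to be written by hand on Android with OkHttp, so every new endpoint meant duplicated, spec-driven work on each platform.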
