Down The Rabbit Hole of Performance Monitoring

  • September 8, 2019

Hi, I’m Tony, and I’m an engineer on League. This article is a follow-up to my performance series, where I talk about optimisation and profiling. It’s a high-level overview of how we monitor game performance in League of Legends, how we detect when a performance degradation has slipped through QA and escaped into the wild, and how we track global trends in frame times over many patches and millions of players.

I hope you enjoy it!

What’s In A Frame Rate?

A game’s frame rate is often an important indicator of game quality.

Not all games need to have a high frame rate, but some games depend on it – the better the frame rate, the better the player’s experience with that game. League is a game that’s better played at a high frame rate, so ensuring that it plays as fast as possible is a crucial part of being a League developer. How do we know how fast League is playing on a player’s machine?

How fast is fast enough? The first thing you need to realise is that to optimise something, you must first be able to measure it. If you can’t measure it, you can’t optimise it.
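
To make the “measure it first” point concrete, here’s a minimal sketch – not League’s actual instrumentation, just an assumed game loop – of timing each frame with a high-resolution clock so the samples can be recorded and aggregated later:

```cpp
#include <chrono>
#include <vector>

int main() {
    using Clock = std::chrono::steady_clock;
    std::vector<double> frameTimesMs;  // one sample per frame, in milliseconds

    for (int frame = 0; frame < 1000; ++frame) {
        const auto frameStart = Clock::now();

        // ... simulate, update, and render the frame here ...

        const auto frameEnd = Clock::now();
        frameTimesMs.push_back(
            std::chrono::duration<double, std::milli>(frameEnd - frameStart).count());
    }

    // frameTimesMs can now be summarised (averaged, bucketed, uploaded, etc.).
}
```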

In our case, we’re trying to optimise frame times, so we’ll start by measuring the average frame time for an entire game, as it’s a reasonable initial indicator of performance. We do need to keep in mind that this is not necessarily the most consistent metric for measuring the performance health of a game – imagine a situation where most of a match has a high frame rate, but team fights are extremely slow. While the average frame rate would look excellent, the player experience would be terrible.
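
As a rough illustration of why the average alone can mislead, here’s a small self-contained sketch with made-up numbers (not real telemetry): most frames are fast, a handful of “team fight” frames are very slow, and the average still looks healthy while a high percentile exposes the problem.

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // 950 fast frames at ~6 ms, 50 slow "team fight" frames at ~60 ms.
    std::vector<double> frameTimesMs(950, 6.0);
    frameTimesMs.insert(frameTimesMs.end(), 50, 60.0);

    const double average =
        std::accumulate(frameTimesMs.begin(), frameTimesMs.end(), 0.0) / frameTimesMs.size();

    std::sort(frameTimesMs.begin(), frameTimesMs.end());
    const double p99 = frameTimesMs[static_cast<size_t>(frameTimesMs.size() * 0.99)];

    std::cout << "average:         " << average << " ms\n";  // ~8.7 ms: looks fine
    std::cout << "99th percentile: " << p99 << " ms\n";      // 60 ms: clearly not fine
}
```

Tracking something like a high percentile alongside the average is one way to surface the “slow team fights” case described above.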

So once we have our first findings, we’ll need to validate our assumptions.

Source: riotgames.com
