Measuring Data Throughput

As our first step in estimating the peak performance of our program, we must figure out how much data we are processing, and how fast we are currently processing it.

Aug 16, 2023

This is the first video in Part 3 of the Performance-Aware Programming series. Please see the Table of Contents to quickly navigate through the rest of the course as it is updated weekly. The listings referenced in the video (listings 100 and 101) are available on GitHub.

In Part 2 of the course, we built a profiler that lets us easily figure out how much time is being spent in each part of our program. Thanks to that information, we've already discovered some things we might never have known if we hadn't stopped to investigate. For example, we've seen that not only does JSON parsing dominate our runtime, but even simple things like allocating and freeing nodes to represent the parsed JSON account for a considerable proportion of that time. So we've learned a lot just by inspecting the amount of time spent in various parts of our program.

But what we still don't know at this point, and what we'd like to know going forward, is how fast should our code be running?
