Size does matter
By Santiago • Issue #6
Hey, friend!
Taking a dry, theory-heavy concept and explaining it in a way that invites more people to learn about it takes a whole lot of work.
This is a constant struggle I face every day. Sometimes, getting past the formulas in machine learning theory when you only need a high-level summary of how things work feels like an impossible mission. This is why so many people are scared to start.
I’ll keep working on this, and for today, here is my twenty-third draft trying to distill why the number of examples we use when training neural networks matters so much.

Size does matter
Beyond the title’s cuteness, I’ve been exploring this idea for quite some time now: the number of examples to train a neural network is an essential tool we can use to influence the training process.
In machine learning jargon, we call this the “batch size.” A batch is nothing else than a group of examples packed together in an array-like structure.
Let’s talk about how things work.
How many balls in a basket? — the best, tangentially related image I found.
First, a little bit of context
We can’t talk shop without focusing for a quick second on how the training process works. Here is a rough summary that should be enough for our purposes:
  1. We take a batch of examples from the training dataset.
  2. We run that batch through the model to compute a result.
  3. We find how far away that result is from where it needs to be.
  4. We adjust the model’s parameters by a specific amount.
  5. We repeat the process for as many iterations as required.
The number of examples we are using to create that batch is the first decision we are making. It’s a critical choice that will impact how the process works.
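To make those five steps concrete, here is a minimal sketch in Python using NumPy and a made-up one-parameter linear model. The dataset, learning rate, and batch size of 32 are all invented for the example:

```python
import numpy as np

# Toy dataset: learn y = 2x + 1 with a tiny linear model.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2 * X + 1

w, b = 0.0, 0.0  # the model's parameters
learning_rate = 0.1

for _ in range(500):                                    # 5. repeat as required
    batch = rng.choice(len(X), size=32, replace=False)  # 1. take a batch of examples
    x_batch, y_batch = X[batch], y[batch]
    predictions = w * x_batch + b                       # 2. run the batch through the model
    error = predictions - y_batch                       # 3. how far away is the result?
    grad_w = 2 * np.mean(error * x_batch)               # gradient of the mean squared error
    grad_b = 2 * np.mean(error)
    w -= learning_rate * grad_w                         # 4. adjust the parameters
    b -= learning_rate * grad_b
```

After a few hundred iterations, `w` and `b` should land very close to the 2 and 1 we used to generate the data.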
We have three possible options to pick from:
  1. We can use the entire training dataset to create one single, long-ass batch 🙈.
  2. We can go to the other extreme and use a single example at a time.
  3. We can fall somewhere in the middle and use a small group of examples in every batch.
Let’s think through each one of these options.
Using the whole dataset at once
Machine learning practitioners love to come up with names for everything; hence they decided to call this process “Batch Gradient Descent.” Gradient Descent because that’s the optimization algorithm’s name, and Batch because we’ll be using the entire dataset 🤷‍♂️. Yeah, I know it doesn’t make sense, but let’s roll with it.
If we create a batch with every example from our dataset, run it through the process, and update the model only once at the end, we’ll save a lot of processing time. That’s great but, on the other hand, it may be hard to fit that many examples in memory at the same time, so this won’t work for large datasets.
The most interesting aspect of using the entire dataset to compute each update is that we smooth out all of the noise in the data, producing small, stable adjustments. This sounds boring, but it’s predictable. Some problems benefit from that stability, but the lack of noise may also prevent the algorithm from escaping a suboptimal solution.
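A sketch of what that looks like, using the same kind of toy linear model (the data and numbers are invented for illustration), where every update touches the full dataset:

```python
import numpy as np

# Batch Gradient Descent: every update is computed from the entire dataset.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 1))
y = 3 * X + 0.5 + rng.normal(0, 0.1, size=(1000, 1))  # slightly noisy labels

w, b, lr = 0.0, 0.0, 0.5
for epoch in range(200):
    error = (w * X + b) - y           # one pass over all 1,000 examples
    w -= lr * 2 * np.mean(error * X)  # a single, smooth update per pass
    b -= lr * 2 * np.mean(error)
```

Because each gradient averages over every example, the updates barely wobble; the trade-off is holding all 1,000 rows (or millions, in a real dataset) in memory for every single step.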
Using a single example
This one is called “Stochastic Gradient Descent” (usually referred to as SGD, because it takes some time to write the whole thing and acronyms always make us look smarter).
In this case, we adjust the model’s parameters for every single example in our dataset. Here, we don’t have to process the whole thing at once, so we won’t have memory constraints, and we’ll get immediate feedback about how training is going.
Updating the parameters for every example will cause the adjustments to have a lot of noise. This is good for specific problems because it keeps them from getting stuck (the values jump around enough to escape any trap), but it also produces an ugly, noisy signal while the model is training.
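Here is the same toy setup updated one example at a time (again, everything is invented for illustration):

```python
import numpy as np

# Stochastic Gradient Descent: one parameter update per individual example.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 2 * X - 1

w, b, lr = 0.0, 0.0, 0.05
losses = []
for epoch in range(10):
    for i in rng.permutation(len(X)):         # visit the examples in random order
        error = (w * X[i] + b - y[i]).item()  # a single example drives each update
        w -= lr * 2 * error * X[i].item()
        b -= lr * 2 * error
    losses.append(float(np.mean(((w * X + b) - y) ** 2)))
```

Each update reacts to a single example, so consecutive adjustments can point in different directions. That jumpiness is exactly the noise described above, even though the loss still trends downward overall.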
Somewhere in the middle
Splitting the difference is usually a good strategy, and that’s what “Mini-Batch Gradient Descent” does: It takes a few examples from the dataset to compute the updates to the model.
This is a great compromise that gives us the advantages of both previous methods while avoiding their problems. In practice, Mini-Batch Gradient Descent is the one commonly used. Still, we usually refer to it as “Stochastic Gradient Descent” because we really want to make things as confusing as possible. When you hear somebody say “SGD,” keep in mind that they are probably using a batch with more than one example 🤦🏻‍♂️.
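The mini-batch version of the same toy loop looks like this (the model, data, and numbers are invented for illustration):

```python
import numpy as np

# Mini-Batch Gradient Descent: each update uses a small slice of the dataset.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(256, 1))
y = -1.5 * X + 2

w, b, lr, batch_size = 0.0, 0.0, 0.1, 32
for epoch in range(100):
    order = rng.permutation(len(X))            # reshuffle the dataset every epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]  # the next mini-batch of 32 examples
        error = (w * X[idx] + b) - y[idx]
        w -= lr * 2 * np.mean(error * X[idx])
        b -= lr * 2 * np.mean(error)
```

Only 32 examples need to fit in memory at a time, and averaging over them keeps each update far less jumpy than updating per example.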
The final question we need to answer is how many examples we should include in a batch. There’s been a lot of research into this, and the empirical evidence suggests that smaller batches tend to perform better.
To make it even more concrete, quoting a good paper exploring this idea: “(…) 32 is a good default value.”
Let’s wrap this up
Alright, let’s summarize this really quick with some practical advice.
The number of examples that you use during every iteration of your training process is essential. A good practice is to always start with 32 unless you have a good reason to go with a different size.
After you get a model that works, feel free to experiment with different batch sizes. Usually, I don’t deviate too much from the default value and rarely go with anything other than 16, 32, 64, or 128.
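If you want to experiment, the comparison can be as simple as rerunning the same training with each candidate size and looking at the final loss. Here’s a sketch with a toy model (the data, model, and hyperparameters are invented; in practice, you’d rerun your real training job instead):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(512, 1))
y = 2 * X + 1 + rng.normal(0, 0.1, size=(512, 1))

def final_loss(batch_size, epochs=50, lr=0.1):
    """Train a tiny linear model with the given batch size; return its MSE."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        order = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            error = (w * X[idx] + b) - y[idx]
            w -= lr * 2 * np.mean(error * X[idx])
            b -= lr * 2 * np.mean(error)
    return float(np.mean(((w * X + b) - y) ** 2))

for batch_size in (16, 32, 64, 128):
    print(batch_size, final_loss(batch_size))
```

On a problem this simple, every size converges to roughly the same loss; on a real model, the differences between sizes are where the interesting experimentation happens.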
And, of course, a toast 🍸 to those who work hard to make us feel welcome with their use of names and acronyms!
Hell of a week!
The Dodgers ⚾️ keep winning, and Real Madrid ⚽️ is in the semifinals of the UEFA Champions League. Sports-wise, I’m really happy!
I also launched a small initiative to keep building my Twitter audience, and I’ve gotten more support than I could have imagined!
I will leave you with the most important lesson I learned during the whole week: don’t rate your ideas. Try them.
Every week, one story that tries really hard not to be boring while teaching you something new about machine learning.

Underfitted is for people looking for a bit less theory and a bit more practicality. There's enough mathematical complexity out there already, and you won't find any here.

Come for a journey as I navigate a space that's becoming more popular than Apollo 13, Area 51, and the lousy sequel of Star Wars combined.

Powered by Revue