Choosing is always hard
By Santiago • Issue #19 • View online
I’m already on vacation.
New York City is charming, impressive, and sometimes even mind-blowing.
I chose to come here for a different view and some inspiration. In that light, this issue is about choosing.

Considerations when choosing a model
In a Kaggle competition, all we usually care about is how good our models' results are.
This is not enough in real-life situations.
I put together a list of 7 things to keep in mind when choosing the right machine learning model for our problem.
Considerations when choosing a machine learning model
Choosing the right answer
You’ve probably heard of ensemble models.
When multiple models work together to produce a single answer, it is not always obvious how to select the best one from all the candidates.
This thread covers an interesting approach that should spark ideas on how you could think about this problem.
Do you know what's better than a machine learning model?

Two models.

More than one model working together to solve a problem is called "an ensemble." A simple way to build one is to have each model vote for an answer.

But there's a problem with this approach: ↓
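The thread's punchline isn't included in this excerpt, but the basic voting setup it starts from is easy to sketch. Here's a minimal, hypothetical example of majority voting over a list of per-model predictions (not code from the thread itself):

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label that the most models agreed on.

    `predictions` holds one predicted label per model,
    e.g. ["cat", "dog", "cat"] for a 3-model ensemble.
    """
    counts = Counter(predictions)
    label, _ = counts.most_common(1)[0]
    return label

print(majority_vote(["cat", "dog", "cat"]))  # prints "cat"
```

Note that with an even number of models, two labels can tie, and `most_common` breaks the tie arbitrarily. Weaknesses like this are exactly why picking the final answer from an ensemble deserves more thought than a raw vote.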
Every week, I’ll teach you something new about machine learning.

Underfitted is for people looking for a bit less theory and a bit more practicality. There's enough mathematical complexity out there already, and you won't find any here.

Come along for the journey as I navigate a space that's becoming more popular than Apollo 13, Area 51, and the lousy Star Wars sequel combined.
