When training a model, it’s common for your training loss to drop below your validation loss.
This makes sense: the model learns from the training data, so it tends to overfit it, fitting the examples it has seen better than the ones it hasn’t.
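To make the usual case concrete, here’s a minimal sketch with synthetic data (the dataset and model are hypothetical, chosen only for illustration): an over-flexible polynomial fit ends up with a lower loss on the points it was fit to than on held-out points.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Synthetic 1-D regression data (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(0, 0.3, 40)

# Simple train/validation split.
x_train, y_train = x[:30], y[:30]
x_val, y_val = x[30:], y[30:]

# A high-degree polynomial has enough capacity to overfit 30 noisy points.
coefs = P.polyfit(x_train, y_train, deg=15)

def mse(xs, ys):
    preds = P.polyval(xs, coefs)
    return float(np.mean((preds - ys) ** 2))

train_loss = mse(x_train, y_train)
val_loss = mse(x_val, y_val)
print(f"train: {train_loss:.4f}  val: {val_loss:.4f}")
```

Running this, the training MSE comes out lower than the validation MSE: the model has memorized some of the noise in the data it was fit on.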
But sometimes the opposite happens: your model performs better on the validation set than on the training set. This is unexpected, but not necessarily a problem.
Here I cover why this might happen and how to fix it when it is a problem.