The snake that bites its tail
By Santiago • Issue #7
Today, I want to borrow your attention to show you three examples of how machine learning can be problematic.
You have probably heard the concerns. Maybe you have even read about those who go as far as suggesting we stop investing in the field altogether!
I’m not one for extremes, but understanding the problems is a great first step, so let’s do this thing!

The snake that bites its tail
I’m the first one excited about the potential for Artificial Intelligence, and specifically Machine Learning, to change the world we live in.
Look around, and you’ll see how things are changing at a breakneck pace! Every single day, we are using machine learning to power more and more of our lives.
But as much as I love all of this progress, it doesn’t come for free. Implementing machine learning comes with immense challenges that have the potential to reshape our society in unintended ways.
Understanding the source of these problems is the first step towards finding systematic solutions that make machine learning safe and valuable to everyone. I don’t have the answers, but today, I’ll give you three quick examples of the snake that bites its own tail.
Let’s talk about positive feedback loops.
An infinite loop that never ends and doesn't lead anywhere safe.
Policing crime
Think about a city. Any city.
You probably know a neighborhood where police cars are more common than traffic lights. Usually, impoverished areas of the city. Usually, areas your mom taught you not to visit.
The police are there because there’s a lot of crime. They aren’t patrolling my street; they are laser-focused on those neighborhoods.
Let’s now build a machine learning model that uses all this crime information to predict where the next arrests will happen. Not surprisingly, heavily patrolled neighborhoods report more crimes. This influences the model’s predictions, which redirect more police there, which leads to more arrests, which makes the model double down on the same areas.
An infinite loop that we can’t get out of.
Crime may increase elsewhere, but it doesn’t matter. No police, no arrests. Our model is entirely biased towards specific neighborhoods, and it’s incapable of breaking the cycle that it helped create.
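To make the loop concrete, here is a minimal simulation sketch. Every number in it is made up: two neighborhoods with identical real crime, patrols that start slightly skewed toward one of them, and a “model” that simply sends most of next week’s patrols wherever it saw the most arrests.

```python
import numpy as np

# A toy version of the policing feedback loop. All numbers are invented:
# two neighborhoods with the SAME underlying crime rate, but patrols
# start slightly skewed toward neighborhood 0.
rng = np.random.default_rng(0)

true_crime_rate = np.array([0.5, 0.5])  # identical real crime in both areas
patrol_share = np.array([0.55, 0.45])   # initial patrol allocation

for week in range(8):
    # Arrests only happen where police actually patrol.
    arrests = rng.poisson(true_crime_rate * patrol_share * 100)

    # The "model" predicts next week's crime from this week's arrests
    # and sends most patrols to the area it ranks highest.
    hot_spot = arrests.argmax()
    patrol_share = np.where(np.arange(2) == hot_spot, 0.8, 0.2)

    print(f"week {week}: arrests={arrests}, patrols={patrol_share}")

# After a week or two the loop locks in: the heavily patrolled area reports
# the most arrests, so it keeps the patrols, so it keeps reporting the most
# arrests -- even though real crime never differed between the two areas.
```

Run it a few times with different seeds and you’ll see the same thing: the data never measures crime, only arrests, and arrests follow the patrols.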
Marketing products
Our company started selling ten new products.
We create a quick marketing campaign to promote all of them. Even better, we build a machine learning model that uses the sales results to predict which products will perform best and to distribute the marketing budget.
After a week, three of the products rise to the top. The model starts allocating more budget to advertise them, which causes even more sales of those three products. As the other seven receive less funding, it’s hard for them to get any traction.
It’s sort of a self-fulfilling prophecy. A biased model stands in the way of the unlucky seven products. Their fate has been sealed.
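Here is the same dynamic as a toy sketch. Again, these numbers are invented: ten products with the exact same real appeal, and a “model” that concentrates most of next week’s budget on whichever three sold best last week.

```python
import numpy as np

# A toy version of the marketing feedback loop. All numbers are invented:
# ten products with the SAME conversion rate, so no product is actually
# better than any other.
rng = np.random.default_rng(7)

n_products = 10
appeal = np.full(n_products, 0.02)                 # equal conversion rate
budget_share = np.full(n_products, 1 / n_products)
weekly_impressions = 10_000

for week in range(6):
    sales = rng.poisson(appeal * budget_share * weekly_impressions)

    # The "model" sends 70% of next week's budget to the top three sellers
    # and splits the remaining 30% among the other seven.
    top3 = np.argsort(sales)[-3:]
    budget_share = np.full(n_products, 0.30 / 7)
    budget_share[top3] = 0.70 / 3

    print(f"week {week}: top three = {sorted(top3.tolist())}, "
          f"their sales = {sales[top3].sum()} out of {sales.sum()}")

# The three products that got lucky in week one now receive most of the
# budget, so they keep outselling the rest. The model is measuring its own
# budget allocation, not which products customers actually prefer.
```

Swap the equal `appeal` values for slightly different ones and you’ll see the loop can even bury a genuinely better product if it has an unlucky first week.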
Hiring the best of the best
Everyone wants to hire the best talent out there. How about building a model that predicts which people are more likely to be successful?
Unfortunately, technology is a male-dominated field. Just by being the larger population, men will likely dominate the list of successful employees. And that’s before counting any biases we may introduce when defining the criteria for considering somebody “successful.”
Little by little, the model absorbs the status quo and recommends more of the same. What do you think will happen?
Positive feedback loops
Unfortunately, I didn’t make up any of these stories. They are real examples of a phenomenon called “positive feedback loops.” Funnily enough, none of them seems to offer anything positive at all, but you already know about our track record of naming things.
We are making progress, but there’s still a lot of work to do in the machine learning community to solve these problems. We need to keep improving our understanding of bias and fairness in our models, and specifically how to avoid or mitigate their consequences.
There’s still a long way to go, but everything starts with you, your process, and how serious you are about putting these issues front and center in everything you do.
What else?
Do you have any other examples that show a positive feedback loop? Maybe something that happened at work, or a story that your cousin told you over dinner?
I’d love to hear about your experience with these issues. If you have one, please, reply to this email and let me know. I may organize a public discussion to go over some of these! Let me know if you are up for it.
