Machine learning is transforming the world. However, not all machine learning projects succeed. Through our years of experience in this field, we’ve identified several common reasons machine learning projects fail. Understanding these problems—and why they occur—will help you better assess the viability of your next machine learning project. Most importantly, you’ll be prepared to align the expectations of your team with actual business outcomes.
- Bad Data
- Picking the Wrong Goal: Explanation vs. Prediction
- Confusing Correlation for Causation
- Optimization without Exploration
- Unanticipated Data Bias
- Not Defining “Done”
- Picking the Wrong Time Window
1. Bad Data
Good data—that is, data which is clean, available, and relevant—is essential to machine learning.
Clean data is complete (e.g., no missing dates), correct (audited for accuracy, and fixed/estimated where necessary), and consistent (has the same data format for all the times/events being considered). Be sure to implement both manual and automatic internal processes and tools to test that the data is clean now and into the future.
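As a minimal sketch of what such an automated check might look like, the function below audits a list of event records for the three properties above: completeness (no missing values or date gaps), correctness (dates parse), and consistency (one date format throughout). The record shape and field names are made up for illustration.

```python
from datetime import datetime, timedelta

def audit_records(records, date_format="%Y-%m-%d"):
    """Flag common cleanliness problems; returns an empty list when clean."""
    issues = []
    dates = []
    for i, rec in enumerate(records):
        # Complete: every field has a value
        if any(v is None for v in rec.values()):
            issues.append(f"record {i}: missing value")
        # Correct and consistent: the date parses in the expected format
        try:
            dates.append(datetime.strptime(rec["date"], date_format).date())
        except (ValueError, KeyError):
            issues.append(f"record {i}: bad or inconsistent date {rec.get('date')!r}")
    # Complete: no gaps in the daily date range
    if dates:
        day, last, seen = min(dates), max(dates), set(dates)
        while day <= last:
            if day not in seen:
                issues.append(f"gap: no record for {day.isoformat()}")
            day += timedelta(days=1)
    return issues

sample = [
    {"date": "2024-01-01", "sales": 10},
    {"date": "2024-01-02", "sales": None},  # missing value
    {"date": "01/04/2024", "sales": 9},     # inconsistent format
]
print(audit_records(sample))
```

A check like this can run in a scheduled pipeline so that data stays clean "now and into the future," not just at project kickoff.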
Available data is readily accessible for both ad-hoc exploration and large-scale model training. Data for exploration should be available in one place, such as a data lake, data warehouse, or similar data platform. Furthermore, data scientists should be able to query it freely without worrying about interfering with live systems. Building or implementing a specially designed analytics database (see OLAP) is generally the best way to accomplish this.
Most importantly, the data your team uses for machine learning must be relevant. For example, if you are trying to use machine learning to understand your customers, you need data points about each customer beyond just their name and email address. Similarly, if the goal is to predict some sort of outcome, your model training data should include many varied examples of such an outcome.
If your data doesn’t meet the above requirements, your machine learning project will fail. Make sure you fully understand your data before scoping a machine learning project.
2. Picking the Wrong Goal: Explanation vs. Prediction
There is a fundamental tradeoff in machine learning models. You can prioritize either explaining the world to people or predicting outcomes accurately. The goal you choose will deeply impact the implementation process.
If the business goal of a machine learning project is to provide real-world insight, or if there are strict regulatory requirements related to how decisions are made, then your model must provide a strong explanation of what it is doing. In these cases, explanation is more important than making highly accurate predictions. Consequently, both your data transformations and your choice of model must be kept simple.
If your goal is to provide the best predictions, you’ll need to use a different set of tools. For example, you can perform sophisticated transformations on the data you feed into the model in order to emphasize important differences over unimportant ones. You can also use complex models such as neural networks, which operate like “black boxes.”
It is vital that you make a choice early on as to whether explanation or prediction is the goal of a particular project. A misunderstood goal can force your team to start a project over from scratch. It’s worth noting that ongoing research is being conducted on ways to get better explanations out of a predictive model, and get better predictions out of explanatory models.
3. Confusing Correlation for Causation
Correlation is not causation. It is vital to keep this fundamental mantra from Statistics 101 in mind when you apply the outputs from machine learning. Machine learning finds correlations in data, but not direct causal relationships.
For example, a model may find that people who view the FAQ page on your website are more likely to purchase your product. However, that doesn’t mean you should immediately launch a massive marketing campaign with the goal of driving everyone directly to your FAQ page. In this example, it’s more likely that a common cause, such as interest in your product, is driving both behaviors. It’s also possible the information in the FAQs is not being adequately presented earlier in the customer journey, in which case making the link to the FAQ page more prominent is worth testing.
When making a strategic business decision based on the explanations derived from machine learning, conduct an experiment to establish causation. In the above example, you could give a fraction of your customers a different experience (say a more prominent link to the FAQ page) to see how it affects their purchasing behavior. In this case, the power of machine learning reveals hidden and unexpected correlations in the mountains of data you have available. It also inspires experiments, which lead to new and better understanding.
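One simple way to evaluate such an experiment is a two-proportion z-test on conversion rates between the control group and the variant. The sketch below uses made-up visitor and conversion counts for the hypothetical FAQ-link experiment; it is an illustration of the statistics, not a full experimentation framework.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: control vs. prominent-FAQ-link variant.
# 5,000 visitors per arm; 200 vs. 260 purchases.
z = two_proportion_z(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
print(round(z, 2))  # |z| > 1.96 means significant at the 5% level
```

If the variant’s lift is significant, you have evidence that the FAQ link causally affects purchasing, rather than merely correlating with it.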
4. Optimization Without Exploration
When you deploy the output of machine learning, it is important to build in the ability to continually validate and improve the model. Without that, your machine learning project won’t adapt to real-world changes, which effectively leads to “blind deployment.”
When you build a model that automatically decides upon an action, such as a product recommendation engine, it is important to understand the value of not simply using the “best” model for your entire audience. Some portion of the audience should be shown a different set of recommended products so that you can explore other recommendation methods. Without such a process, even the best model will eventually start making the wrong decisions. That’s because it will have stopped learning.
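A common way to reserve that exploration slice is an epsilon-greedy policy: serve the model’s best pick most of the time, but show a random alternative with small probability epsilon. The catalog and item names below are invented for illustration.

```python
import random

def recommend(best_item, catalog, epsilon=0.1, rng=random):
    """Epsilon-greedy serving: exploit the model's top pick most of the
    time, but explore a random catalog item with probability epsilon."""
    if rng.random() < epsilon:
        return rng.choice(catalog)  # explore: gather fresh feedback
    return best_item                # exploit: serve the current best

# Simulate 10,000 page views with a fixed seed for reproducibility.
rng = random.Random(0)
catalog = ["shoes", "hat", "scarf", "gloves"]
served = [recommend("shoes", catalog, epsilon=0.1, rng=rng)
          for _ in range(10_000)]
explore_rate = sum(item != "shoes" for item in served) / len(served)
# About epsilon * 3/4 in expectation, since exploration can also
# happen to pick the best item.
print(f"non-best items served: {explore_rate:.1%}")
```

The feedback collected on those exploratory impressions is what keeps the model learning instead of freezing on yesterday’s best answer.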
For models that provide explanations, retain enough variation in the data to continually validate those explanations and generate new insights. For example, if a prior machine learning project has found that there is an optimal way to market to your audience, a portion of the marketing budget should be set aside to try new strategies in order to ensure that the current strategy is still effective.
Optimizing machine learning results without continued exploration can stop the learning process and lead to failure down the line.
5. Unanticipated Data Bias
At its core, what a machine learning model does is find structure and patterns in data in an automated fashion. That automation is incredibly powerful, but it can also lead to unanticipated risks.
For example, an early defense image recognition project aimed at distinguishing between friendly and enemy tanks showed a lot of promise in the lab. But, it ended up failing in the field. The reason was that all the images of friendly tanks the team used to train the model depicted the tanks in the daytime. So, the model ended up detecting daytime vs. nighttime rather than friendly vs. enemy tanks. Because it was trained automatically, the model did not know that this was the wrong distinction to make.
A recent Amazon initiative to use machine learning for hiring failed because the team trained the model on biased data. The model tried to filter resumes to find the best candidates to hire. But, because it was trained on historical hiring data with a gender bias, it ended up filtering out female candidates.
To a machine learning algorithm, reality is represented completely by the data it consumes. It is vitally important to ensure that your data reflects the right reality as closely as possible, otherwise the entire project may fail to have any real-world use.
6. Not Defining “Done”
Data science and machine learning are never really “done.” But, in order to remain practical, every project must have a finish line. It’s always possible to improve a model’s predictions or to have more confidence in the explanations it gives. However, this requires additional human and computing resources with a real business cost. It is important to clearly define guidelines so you know what constitutes too many or too few resources for a particular project.
There are hundreds of machine learning algorithms out there, and experts are generating more each day. There are even more implementations of those algorithms and ways of using them. If you don’t allow enough time to explore a few different models adequately, you may be missing out on the one that works best for your particular data and business case. On the other hand, you can’t iterate forever to try to improve a model by an extra fraction of a percent. Furthermore, data may sometimes actually represent random noise with no basis for making any predictions. For these reasons, it is important to place a limit on how many resources to throw at one project, as those resources could also be used elsewhere.
At Retina, we apply lessons from agile software development to solve this challenge. We hit “done” several times before reaching “done for now.” The idea is to start simple, limiting the initial resources for a project. Then, we add complexity iteratively: each time the model produces results, we decide whether or not to continue based on how those results look. This ensures we can achieve quick wins and avoid chasing phantom goals.
7. Picking the Wrong Time Window
Machine learning models use data to understand the world. However, because the world is always changing, you must properly account for the rate of its changes.
If you train your model over too short a window, it will not know how to handle infrequent events. For example, it may miss the effects of the Christmas holiday season on buying behavior.
However, it’s also possible to train a model on too long of a window. In such a case, it may try to understand the way the world was, not how it will be. For example, you might end up inadvertently modeling behavior from the Great Recession of 10 years ago.
Choosing an appropriate time window that captures the right level of change for your business is ultimately a judgment call. It should be made jointly by the machine learning team and those who are using the model’s outputs. Don’t wait until the end of the machine learning project to think about the window of time your model should consider.
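The holiday example above can be made concrete with a toy experiment. Below, hypothetical monthly sales are flat except for a December spike, and a naive seasonal forecaster is trained on windows of different lengths; the numbers and forecaster are invented purely to illustrate the effect of window size.

```python
from statistics import mean

# Hypothetical monthly sales: flat 100 units, spiking to 180 each December.
sales = [180 if m % 12 == 11 else 100 for m in range(48)]  # 4 years of data
target = 47  # index of the December we want to predict

def seasonal_forecast(window_start, target):
    """Average of same-calendar-month observations in the training window;
    falls back to the window mean if that month was never seen."""
    history = [(m % 12, sales[m]) for m in range(window_start, target)]
    same_month = [v for cal, v in history if cal == target % 12]
    return mean(same_month) if same_month else mean(v for _, v in history)

print(seasonal_forecast(target - 3, target))   # 3-month window: predicts 100
print(seasonal_forecast(target - 24, target))  # 2-year window: predicts 180
```

The three-month window has never seen a December, so it badly underestimates the holiday spike; a window spanning at least one full seasonal cycle captures it. The same logic applies in reverse to windows so long they include regimes, like a decade-old recession, that no longer describe your business.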
At Retina, we obsess over increasing customer lifetime value (CLV) and reducing customer acquisition cost (CAC) through data science.
As the cost of customer acquisition skyrockets, it is important to focus on CLV at the customer level. Companies that compute their CLV-to-CAC ratio incorrectly fail in an increasingly competitive environment (e.g., Blue Apron, Wayfair, Chef’d). Companies that get it right are achieving sustainable profit and growth (e.g., the FAANG companies, Dollar Shave Club, Ring).
The Retina Platform computes predictive CLV and the early customer behavioral drivers of CLV at the customer level. Furthermore, it uses next-gen machine learning algorithms built on 30 years of academic research. Within a matter of days, Retina automatically builds audiences (Facebook, Google, LinkedIn, Snap) for your marketers. Use these audiences to acquire new high-CLV customers and retain your existing high-value customers.
Interested in what we do? Go to https://staging-retinaai.kinsta.com/story for a more detailed presentation about what we do, how we work, and how we can help your business grow to the next level.
If you’d like to see a demo of our solutions, go to https://staging-retinaai.kinsta.com/schedule-demo/.