My first machine learning algorithm was a K-nearest-neighbors (KNN) model. It makes sense for beginners — intuitive, easy to understand, and you can even implement it without using dedicated packages.
Because it makes sense for beginners, it also makes a lot of sense when explaining it to anyone unfamiliar with machine learning. I can’t put into words how much easier it is to get a room full of skeptical people on board with the KNN approach than with a black box random forest.
KNN is an unsung hero among modeling approaches and serves as an excellent benchmark before moving on to more complex algorithms. For many use cases, you may find that the extra time and cost of those algorithms simply aren’t worth it.
To get your modeling inspiration going, here are three example applications of KNN where real-world results may be far better than you’d expect.
I work in marketing, and much of my work involves identifying marketing channels that will improve campaign performance and/or scale a campaign to reach more people. At a high level, this is known as marketing (or media) mix modeling (MMM).
The goal of any kind of modeling with MMM is to understand the effectiveness of each marketing input, both in isolation and in combination with others, and then optimize the marketing mix for maximum impact.
The most basic approach is predicting the impact of different marketing strategies based on historical data. A KNN model would consider each marketing strategy as a point in a multi-dimensional space, where the dimensions could be various marketing inputs such as advertising spend, promotional activities, pricing strategy, and so on.
When a new marketing strategy is proposed or an existing strategy needs optimizing, the model can predict the strategy’s results by looking at the ‘k’ most similar historical strategies, i.e., the ‘k’ nearest neighbors in the multi-dimensional space.
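As a minimal sketch of this idea, here is a KNN regressor built on a tiny, made-up set of historical strategies. The feature columns (advertising spend, promotional activity, price point) and the outcome values are hypothetical placeholders, not real campaign data; note that the inputs are scaled first so that dollar-denominated spend doesn’t dominate the distance calculation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical historical data: each row is a past strategy described by
# advertising spend ($), number of promotional activities, and price point ($).
X_hist = np.array([
    [10_000, 2, 49.0],
    [25_000, 5, 45.0],
    [40_000, 3, 52.0],
    [15_000, 4, 47.5],
    [30_000, 1, 55.0],
    [50_000, 6, 44.0],
])
# Observed outcome for each strategy (e.g., revenue lift) — illustrative values.
y_hist = np.array([1.2, 3.1, 2.4, 2.0, 1.5, 4.2])

# Standardize features so each dimension contributes comparably to distance.
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_hist)

# k=3: predict by averaging the outcomes of the 3 most similar past strategies.
model = KNeighborsRegressor(n_neighbors=3)
model.fit(X_scaled, y_hist)

# A proposed new strategy: $28k spend, 4 promotions, $46 price point.
proposed = scaler.transform([[28_000, 4, 46.0]])
predicted_lift = model.predict(proposed)
print(predicted_lift)
```

Because the prediction is an average of the nearest neighbors’ outcomes, it will always fall within the range of results you’ve actually observed — a useful sanity property when presenting forecasts to a skeptical room.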