We’d never recommend changing robust, well-performing workflows just for the sake of change; “if it ain’t broke, don’t fix it” is a folksy idiom for a reason: it’s very often the correct approach.
Still, there’s a sizeable gap between “very often” and “always,” and our most frustrating days at work typically arrive when our time-tested methods perform poorly or fail to produce the expected outcomes. This is where expanding our knowledge base really pays off: instead of getting stuck in the mental equivalent of a spinning wheel of death, we try something different, tinker with our process, and (sooner or later) move forward with a new solution.
In the spirit of embracing fresh perspectives, we’ve put together a lineup of excellent recent posts that offer an original spin on common machine learning workflows. They cover procedures like drift detection and model training and tasks ranging from image segmentation to named-entity recognition. Make room in your toolkit—you’ll want to add these!
Before diving in, a quick update: if you’re looking for ways to keep up with our best recent articles beyond the Variable, we just launched several Medium lists to help you discover more great reads.