It’s become increasingly difficult to have thoughtful conversations about AI’s changing (and rapidly growing) footprint in our daily lives—an unexpected side effect, perhaps, of the technology crossing over from research and industry communities to every corner of social media. There’s certainly a lot of hype, as well as no shortage of doom-and-gloom prophecies and an endless parade of new tools and apps. But what does it all mean?
We’ve shared some excellent hands-on resources with our readers over the past few months to help you get acquainted with practical topics like prompt engineering and voice-to-text automation. Today, we invite you to take a step (or two) back to explore some of the bigger themes our authors have written about as they grappled with AI’s shifting role in data science and machine learning workflows. Let’s dive in.
- Many of the most exciting innovations of the past few years owe their success to foundational open-source projects. Clemens Mewald believes that era is coming to an end: “Although there seems to be a Robin Hood-esque movement around open source AI, the data is pointing in a different direction.”
- Even the most ardent ChatGPT fan would concede that the chatbot comes with serious limitations, from its tendency to hallucinate to its inability to provide real-time (or even moderately fresh) information. Mary Newhauser recently surveyed the expansive landscape of ChatGPT plugins—apps that add new functionality to the tool and connect it to other data sources—and reported back on their benefits and risks.
- For many ML practitioners, the arrival of ChatGPT marked a watershed moment that prompted a serious rethinking of their projects, practices, and business models. Janna Lipenkova outlines four emerging trends in this post-ChatGPT ecosystem and reflects on how they will affect future AI development.