Why scale, personalisation, unclear provenance, and diffusion of AI-generated content require us to act now
“Why do you think responsible Generative AI (GenAI) is important and urgent?” Policymakers, researchers, journalists, and concerned citizens alike are posing this question today. Rapid progress in GenAI has captured the public imagination, but it has also raised pressing ethical questions. Models like ChatGPT, Bard, and Stable Diffusion showcase the creative potential of the technology — but in the wrong hands, these same capabilities could fuel disinformation and manipulation at unprecedented scale. Unlike previous technologies, GenAI enables the creation of highly personalised, context-specific synthetic media that is difficult to distinguish from authentic content. This poses novel societal risks and complex governance challenges.
In this blog post I will dive into four aspects — Scale & Speed, Personalisation, Provenance, and Diffusion — that distinguish this new age of GenAI from previous times, and answer the question “Why now?”: why this is the right moment to examine the ethical and responsible use of AI. Potential solutions will be explored in a subsequent article.
Responsible GenAI is not just a hypothetical concern for tech experts. It is an issue that affects all of us as citizens navigating an increasingly complex information ecosystem. How can we maintain trust and connection in a world where our eyes and ears can be deceived? If anyone can produce compelling yet completely fabricated realities, how does society arrive at shared truths? Unchecked, the misuse of GenAI threatens foundational values like honesty, empathy, and human dignity. But if we act collectively and quickly to implement ethical AI design, we can instead realise generative technology’s immense potential for creativity, connection, and social good. By speaking up and spreading awareness, we can steer the trajectory of AI in a more aligned direction.