The year 2023 marked the rise of Artificial Intelligence, specifically Generative AI. The technology itself isn't new; numerous tech companies have developed and launched desktop, web, and mobile apps with "generative" components. Perhaps you recall having fun with the SimSimi chatbot, or you have used the background remover feature in Snapseed.
It was the launch of GPT-4 in March 2023 that triggered the surge of public (non-technical) interest in the technology. The ChatGPT user interface made the technology easily accessible, and GPT-4's high-quality content generation took the world by storm as people started to see its practical potential.
This democratization of Generative AI — its rapid adoption and use by the general public — comes with legal and ethical risks and implications. Regulation-wise, this is still uncharted territory, but it is a good idea to be aware of the technology's potential implications.
Generative AI, combined with human creativity in how it is used, can (knowingly or unknowingly) become a source of legal risk. As of July 2023, Generative AI regulation remains a relatively open and debated space, but the risks are real regardless.
1. Copyright and Ownership
Quoting Wikipedia, copyright is a type of intellectual property that gives its owner the exclusive right to copy, distribute, adapt, display, and perform a creative work, usually for a limited time. With Generative AI, the "owner" aspect of copyright becomes unclear, as multiple parties are involved in the creation of the content, including:
- The person who wrote the prompt
- The company/organization that built the AI model
- The artists whose works were used in training the model
- The AI model that generated the art/content
Each party plays a significant role in the creation of the content and without…