Experts broadly agree that ChatGPT, Bing, Bard, and Artificial Intelligence (AI) based systems like them have the potential to change not only internet search but the internet as a whole, and the world economy with it.
AI is already seeing use in the workplace. For instance, employees might use AI to power chatbots that answer frequently asked questions, or to create image backgrounds for a presentation at an upcoming meeting. Some organizations even have their employees use AI systems to analyze and improve their information systems’ code. Under the umbrella of the aptly named “generative AI,” these systems can be used to create expressions of ideas and content.
The music industry is also grappling with the impact of generative AI: fans of artists like Drake and The Weeknd have used it to create original songs in those artists’ famous voices. Some of these songs were uploaded to popular streaming services until the largest music label in the world, Universal Music Group, had them taken down, citing intellectual property concerns.
How do these systems work? As with a standard search engine, you can ask ChatGPT a question, like what the weather will be like in a city to which you will be travelling. ChatGPT and tools like it expand on that basic premise by handling more sophisticated requests and generating answers to unique demands. Writing social media posts, creating a form for customers to use, or making a presentation with sophisticated graphics are all within the abilities of these new systems.
At a high level, these systems often work by feeding vast amounts of data and information, frequently including art and writing made by creators around the world, into a model that learns patterns from it all and can then produce something “new” as its output.
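To make the “feed data in, get something new out” idea concrete, here is a minimal sketch using a toy bigram (Markov-chain) text generator. This is an illustrative simplification, not how any commercial generative AI actually works: the toy model “trains” on input text and then emits output that recombines fragments of what it has seen, which is the nub of the copying-versus-learning debate.

```python
import random

def train(corpus_words):
    """Build a bigram table: each word maps to the words observed after it."""
    table = {}
    for a, b in zip(corpus_words, corpus_words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Produce 'new' text by repeatedly sampling a learned next word."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        followers = table.get(word)
        if not followers:
            break  # no observed continuation; stop early
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

# "Training data": in real systems this would be billions of documents.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
model = train(corpus)
print(generate(model, "the", 8))
```

Note that every adjacent word pair in the output was observed somewhere in the training text: the generator cannot produce a transition it never saw, which is why critics argue the output is a recombination of the inputs rather than something wholly original.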
Naturally, this has led artists and organizations whose business models rely on their own intellectual property to question the legal consequences of such technology.
More systems appear to be arriving every day, with platforms like Midjourney (a text-to-image generation service) and Cohere (a large language model platform for text generation) disrupting the world of content creation and ownership. The central concern for copyright owners is whether their works have been infringed by being fed into these systems.
Is That Legal?
Interested parties are trying to determine the ramifications when copyrighted content is fed into generative AI platforms, asking questions like “is the training tracked?” and “should a royalty be paid to the content creators?” So far, neither the platforms nor the law has definitively answered these questions.
For the time being, AI systems are a mix of open- and closed-source code, so there is a shroud of secrecy around exactly how each works and what data it has been trained on. Most platforms claim their systems create works that are new and unique: in other words, works that should be considered original.
If a generative AI system uses protected works as input data to produce “new” content, are the rights of the original creators being infringed? Defenders argue that a generative AI platform is no different from a human who is inspired by old ideas and produces new ones, much as some creators pay homage to themes, characters, and plot points from earlier works in their own creations.
The legal argument that creators’ rights are being infringed stems from the idea that the original works are copied in an unauthorized way when they are fed into the AI’s learning model. On this view, the content creators are the muses bringing creativity to the machines, so the machines’ outputs necessarily infringe the rights of others. Moreover, humans do not need to make a literal “copy” to pay homage to or draw inspiration from other works, whereas AI systems technically do make copies as they learn.