Some artists have started taking legal action over the alleged theft of billions of copyrighted images used to train AI art generators, which can reproduce artists' unique styles without compensation or consent.
A group of artists represented by the law firm Joseph Saveri has filed a US federal class action lawsuit in San Francisco against AI art companies Stability AI, Midjourney and DeviantArt for alleged violations of the Digital Millennium Copyright Act, infringement of the right of publicity and unlawful competition.
The plaintiffs, Sarah Andersen, Kelly McKernan, and Karla Ortiz, seek "to end this blatant and enormous infringement of their rights before their professions are wiped out by a computer program powered entirely by their hard work," according to the official text of the complaint filed with the court.
Tools like Stability AI's Stable Diffusion, Midjourney, or DeviantArt's DreamUp generator allow users to type in prompts to create artwork that resembles the style of living artists. Since the mainstream advent of AI image synthesis last year, AI-generated artworks have been highly controversial among artists, sparking protests and culture wars on social media.

A notable absence from the list of companies named in the complaint is OpenAI, creator of the DALL-E image synthesis model, which in April 2022 arguably started the ball rolling in mainstream generative AI art. Unlike Stability AI, OpenAI has not publicly disclosed the exact contents of its training dataset, and it has commercially licensed some of its training data from companies like Shutterstock.
Despite the Stable Diffusion controversy, the legality of how AI image generators work has not yet been tested in court, although the law firm Joseph Saveri is no stranger to legal action against generative AI. In November 2022, the same firm filed a lawsuit against GitHub over its Copilot AI programming tool for alleged copyright infringement.
Weak arguments, ethical violations

Alex Champandard, an AI analyst who has advocated for artists' rights without dismissing AI technology outright, criticized the new lawsuit in several Twitter threads, writing, "I don't trust the lawyers who filed this complaint, based on the content + how it's written. The case could do more harm than good." Still, Champandard believes the lawsuit could be damaging to potential defendants: "Anything the companies say to defend themselves will be used against them."
Champandard noted that the complaint contains several statements that may misrepresent how AI image synthesis technology works. For example, the fourth paragraph of Section I states: "When Stable Diffusion is used to generate images from prompts from its users, Stable Diffusion uses the training images to generate what appear to be new images through a mathematical software process. These 'new' images are entirely based on the training images and are derivative works of the particular images that Stable Diffusion relies on when assembling a given output. Ultimately, it's just a complex collage tool."
In another section attempting to describe how latent diffusion image synthesis works, the plaintiffs erroneously compare the trained AI model to "having a directory of billions of JPEG image files on their computer" and claim that "a trained diffusion model can make a copy of one of its training images."
During the training process, Stable Diffusion drew from a large library of millions of scraped images. Using this data, its neural network statistically "learned" what certain image styles look like without storing exact copies of the images it saw. In rare cases of images heavily over-represented in the dataset (such as the Mona Lisa), a kind of "overfitting" can occur, allowing Stable Diffusion to output a close reproduction of the original image.
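The "statistical learning" described above can be illustrated with a toy sketch of the denoising objective that diffusion models are trained on. Everything here is hypothetical and heavily simplified (the 4x4 "image," the single-step noise schedule, and the perfect-predictor stand-in for a real neural network); it is not code from Stable Diffusion or any product named in the suit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one training image: a 4x4 grayscale patch.
image = rng.uniform(0.0, 1.0, size=(4, 4))

def add_noise(x, t, noise):
    # Forward diffusion step: blend the clean image with Gaussian noise.
    # t in (0, 1) controls how much noise is mixed in.
    return np.sqrt(1.0 - t) * x + np.sqrt(t) * noise

# During training, the network only ever sees (noisy image, t) pairs and is
# scored on how well it predicts the noise that was added; it is never asked
# to store or reproduce the source image directly.
t = 0.5
noise = rng.normal(size=image.shape)
noisy = add_noise(image, t, noise)

# A real denoiser is a large neural network; this "perfect" predictor is just
# a placeholder showing what the training target is.
predicted_noise = noise  # stands in for model(noisy, t)
loss = float(np.mean((predicted_noise - noise) ** 2))  # 0.0 for a perfect guess
```

Because the model is rewarded only for predicting noise across millions of such examples, it ends up encoding statistical regularities of styles rather than a directory of stored files; overfitting to an over-duplicated image is the exception, not the mechanism.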
Ultimately, when properly trained, latent diffusion models generate novel images rather than collaging or duplicating existing works, a technical reality that potentially undermines the plaintiffs' copyright infringement argument. That said, whether images created by AI generators count as "derivative works" is an open question with no clear legal precedent, to our knowledge.
Some of the complaint's other claims, such as unlawful competition (by using a machine to replicate an artist's style) and violation of the right of publicity (by allowing people to request artwork "in the style" of existing artists without permission), are less technical and may have legs in court.
Despite its problems, the lawsuit comes after a wave of anger over the lack of consent from artists who feel threatened by AI art generators. By their own admission, the tech companies behind AI image synthesis have amassed intellectual property to train their models without artists' consent. They are already on trial in the court of public opinion, even if they are eventually found to comply with established case law regarding the scraping of public data from the Internet.
"Companies that build large models that rely on proprietary data can get away with doing it privately," Champandard tweeted, "but doing it openly *and* legally is very difficult, or impossible."
Should the lawsuit go to trial, the courts will have to sort out the differences between ethical violations and alleged violations of the law. The plaintiffs hope to prove that the AI companies benefit commercially and profit richly from the use of copyrighted images; they have asked for substantial damages and a permanent injunction to stop the allegedly infringing companies from further violations.
When asked for comment, Stability AI CEO Emad Mostaque replied that the company had not received any information about the lawsuit as of press time.