Copyright in AI-Generated Images – The State of Play in 2025

TrafficWatchdog team

17.07.2025


In recent years, generative artificial intelligence has stormed into the world of art and design – tools like DALL-E, Midjourney, and Stable Diffusion allow users to create an image in seconds based on a text prompt. But who owns the copyright to such a generated work? This question already stirred debates in 2022, and today – in 2025 – it has taken on even greater significance in the face of AI’s growing popularity. Legal and ethical discussions are ongoing worldwide, and lawmakers and courts are beginning to form the first answers. In this article, we examine the current landscape: whether AI-generated images are protected by copyright, who (if anyone) can claim rights to them, what the legal situation looks like in Poland, the European Union, and other countries, and what new rulings and controversies have emerged in recent years.

Are AI-generated images protected by copyright?

Both Polish and EU copyright law are based on the principle that only a human being can be considered the author of a work. The definition of a “work” in the Polish copyright act requires it to be the result of creative activity of an individual character – which legal doctrine and case law interpret as a work created by a human. As a result, content fully generated by an AI algorithm does not meet the requirement of being the result of human activity, and therefore is not protected by copyright. Put simply: if an image was created solely by artificial intelligence (without any human creative input), then no copyright arises in that image – such a "work" has no legally recognized author. These kinds of creations are treated de facto as public domain materials, which anyone may copy or modify without the “creator’s” consent (since there is, legally speaking, no creator).

U.S. copyright law currently takes a similar approach. The U.S. Copyright Office has repeatedly emphasized in its decisions and guidelines that only original human authorship is eligible for protection – the work must reflect personal intellectual contribution by a human, even if AI tools were used in the process. For example, in the high-profile 2023 case concerning the comic book Zarya of the Dawn, the Office denied copyright registration for images generated by Midjourney, even though the author had provided over 600 detailed prompts (text instructions regarding characters, style, colors, etc.). The Office concluded that even such an extensive list of directions does not make the human the author of the graphic, as the final vision came from the algorithm. However, the Office also stated that if the work includes substantial human creative input – for example, if the author later extensively modifies the graphic or combines it with original work – it may be eligible for copyright protection. A similar view is reflected in other jurisdictions. A landmark ruling was issued in the Czech Republic: in 2024, the Prague Municipal Court dismissed a lawsuit by a photographer who accused a local law firm of copying an image he had generated using DALL-E. The court held that an AI-created image is not a work within the meaning of copyright law when there is insufficient human creative input, and that the textual prompt is merely an unprotected idea. This was the first such ruling in Europe and likely sets the tone for future interpretations – works entirely generated by algorithms are not eligible for copyright protection.

In summary, the current legal framework (in Poland, the EU, and the U.S., for example) can be summed up as follows: images created entirely by AI are not protected by copyright. There is no recognized author, no exclusive economic rights – anyone may use them freely. However, this does not imply complete lawlessness: if AI generates an image that closely resembles an existing, protected work, using that output may still infringe the original creator’s copyright (as it would amount to unauthorized distribution or adaptation). In such a case, the infringement concerns the pre-existing work in the training data, not “rights to the output” as such (since no such rights exist). The issue remains complex and lacks clear legal frameworks – as the U.S. Copyright Office noted, the world has yet to develop clear rules governing copyrights in AI-generated content. However, we can explore who might claim rights to generated images and how lawmakers are attempting to address the legal gap.

Who can claim rights to AI-generated works?

Since traditional copyright law requires a human creator, the question arises: what about the user who provided the AI with the idea? Or the developer of the AI model itself? Or the creators of the images used to train the AI? Let’s examine the potential claimants one by one:

The user (prompt author) – in many cases, the person entering the prompt (text instruction to the AI) is the closest to a creative role. However, if their contribution was limited to typing a brief command and the entire composition and details were “imagined” by the neural network, then – as discussed above – that person will not be recognized as the author of the work. Under current law, they did not personally create an original intellectual work but merely activated an algorithm. Therefore, neither the user nor the AI tool is considered the author of the generated image. An exception may apply when the user plays a more extensive role: for instance, providing very detailed instructions, selecting one version from many outputs, and then manually refining it to give it a final character. If it can be shown that the final result reflects human creative decisions (with AI serving as a tool), then there is an argument for recognizing the user as the author of an AI-assisted derivative work. However, this is a legal gray area – each case will be assessed individually to determine whether the human input was sufficiently creative. As of now, Polish and EU practice leans toward a strict approach similar to the U.S. – it will be difficult to claim authorship unless the human contribution is truly significant. By contrast, in China, two recent court rulings have found that AI-generated images can be protected if the human played a key creative role. In a widely discussed case from November 2023, the Beijing Internet Court ruled that the creator of an image titled "Spring Wind Brings Tenderness", generated with AI, owned the rights – because they directed the work and made creative choices (while the defendant, who published the image without permission and removed the watermark, violated those rights).

The AI model developer (programmers, platform provider) – intuitively, one might think that the company that created and “trained” the algorithm to generate images could claim some rights to the output. But copyright law does not recognize such a link. Only natural persons can be authors, and building a tool does not make its creators co-authors of every image generated by users. The company providing the generator (e.g., OpenAI, Stability AI) holds rights to the software and model itself (which may be protected as computer code, trade secrets, or know-how), but not to each image created by users. In practice, many companies even assign or grant rights to the output to the users via their terms of service. For example, OpenAI explicitly states in its usage terms that users own the content they generate – and that they are responsible for ensuring that their outputs do not infringe the law. Similarly, Midjourney allows paying users to treat generated images as their own and use them freely. Such clauses mainly aim to reassure users (providing commercial usability confidence) and to shift legal risk away from the provider, placing it on the user instead. However, it’s worth emphasizing: if an image doesn’t qualify as a protected work at all, then strictly speaking, there are no copyrights the company can own or assign. In such cases, we’re dealing with a license or contractual permission to use the generated content under the platform’s rules. The company may impose certain restrictions (such as prohibiting infringing, pornographic, or abusive content) and enforce them via its terms of service – but not based on copyright in the image itself, as none exists.

Authors of training materials (creators of “input works”) – generative AI models are trained on massive datasets: photos, artworks, paintings, illustrations – often scraped from the Internet without artists’ permission. This naturally provokes concern: is AI “stealing” artists’ work to create new ones? From a copyright perspective, as long as the AI-generated image merely draws inspiration from the style or themes present in the training data, but does not directly copy protected elements from a specific work, there is no infringement of any original creator’s rights. The authors of images used to train models do not automatically become co-authors of new images, since the new images are not considered derivative works of their originals (they are either authorless creations or new works – if created with sufficient human input). Of course, if an AI output turns out to be a near-identical copy of an image from the training set (which may occur due to overfitting), then the original author could assert their copyright – but again, this would concern a specific existing work, not “rights to style or inspiration.” Artistic style as such is not protected by copyright (the law protects specific forms of expression, not ideas or conventions). Thus, for example, AI-generated images “in the style of Van Gogh” do not infringe copyright in Van Gogh’s works – though they may raise ethical concerns if they closely mimic the artist’s unique manner. At present, therefore, authors of training data have no legal grounds to claim rights to every new image generated by AI, unless they can prove direct copying of their specific work. Nevertheless, as we’ll discuss later, many artists are calling for new regulations that would give them more control or a share of profits from the use of their work in AI training.

The AI itself – finally, there’s the futuristic idea: could an AI be recognized as an author on par with a human? So far, no jurisdiction in the world grants legal personhood to AI systems for copyright purposes. In the high-profile case of Thaler vs. Perlmutter in the U.S., a federal court in 2023 ruled unequivocally that “copyright law protects only human authorship,” upholding the denial of registration for an image listing only an algorithm as the creator (without human input). The EU likewise adheres to a human-centric model, reflected in the 2020 European Parliament resolution affirming that the originality requirement is linked to the human person, and that there are no plans to grant AI legal personhood in matters of intellectual property. Proposals to establish “electronic persons” for AI have so far been rejected – as such a move could undermine the human-based legal protection and accountability system.

In conclusion, if an image is entirely the product of an algorithm, then no entity (neither the user, nor the algorithm’s developer, nor the AI itself) acquires copyright ownership. Such content may therefore be used relatively freely. However, in practice, this does not mean unlimited freedom – other rules apply (e.g., platform terms of use, liability for infringement of third-party rights, etc.).

New Rulings and High-Profile Cases (2022–2025)

Although the relevant legislation is still in its infancy, recent years have brought several important court decisions and regulatory rulings that are beginning to shape the legal framework around AI and copyright. Below is a summary of the most noteworthy cases across various jurisdictions:

United States – Denial of AI Image Registration and the “Bedrock Requirement”: The previously mentioned Thaler case (regarding an image generated by the so-called Creativity Machine) ended in August 2023 with a ruling by Judge Beryl Howell, who upheld the decision by the U.S. Copyright Office to deny registration. In her reasoning, the judge called human authorship a “fundamental requirement of copyright law”, clearly stating that a work created entirely by AI is not eligible for protection. In another high-profile matter, Kris Kashtanova – author of the Zarya of the Dawn comic discussed above – was granted copyright for the text and panel layout, but not for the images themselves, which were deemed unprotectable because they were AI-generated. The Copyright Office advised Kashtanova to disclose, during registration, which parts of the work were generated by AI – a foreshadowing of new official guidelines. Indeed, in March 2023, the U.S. Copyright Office released official “Guidance on Works Containing AI-Generated Material”, stating that creators using AI may only claim rights to their own human contributions, and must specify which elements originated from a person. Finally, in late January 2025, the Office went a step further by publishing a report summarizing its current stance: “images entirely generated by artificial intelligence remain ineligible for copyright protection” – however, works co-created by humans (e.g., where AI was used as an editing tool) may be protected insofar as the human contribution is evident. The Office emphasized the importance of distinguishing AI used as a creative aid from AI acting as a substitute for human creativity. The latter scenario (AI replacing the creator) is not protected, because in that case the machine is the actual author, not a human.

Europe – First National Rulings: In the European Union, court decisions directly concerning AI-generated works are only beginning to emerge. The 2024 ruling of the Prague court – the first of its kind in Europe – clearly stated that an image from DALL-E was not protected due to the lack of a human author. This judgment could serve as a persuasive precedent for courts in other EU countries. In Poland, there have been no similar cases so far, but it is worth noting that, for example, the French intellectual property office in 2023 refused to register an AI-generated graphic as a design, citing the lack of personal creative input from a designer. In the UK, no major AI artwork lawsuit has yet occurred, although British law still contains a unique provision allowing protection for computer-generated works. An interesting incident took place in Spain, where a photographer submitted a Midjourney-generated image to a journalism contest (without disclosing it) and won an award. Once it was revealed that the image was AI-generated, it sparked a debate about the ethics and protection of such “fake” photographs – although the matter did not proceed to court. Sooner or later, the Court of Justice of the EU will likely have to rule on a case concerning AI-generated works, which would establish a uniform interpretation across the EU.

China – Recognition of Rights and Early Disputes: China has adopted a different approach from Western countries, reflecting both different legal frameworks and a policy of supporting emerging technologies. The aforementioned November 2023 decision of the Beijing Internet Court regarding the image “Spring Wind Brings Tenderness” was a significant signal – the court granted copyright protection to the AI image and ruled in favor of the prompt creator, whose work had been misused. The judgment emphasized that the human had made a substantial contribution to the final piece, and that AI was merely a tool. In March 2025, another Chinese court (in Changshu) again held that AI-generated images can be protected, provided they meet originality requirements – meaning that China appears to accept the concept of “co-authorship” between AI and humans. Interestingly, there have also been contrary cases: in 2022, one Chinese ruling denied protection to an AI-generated work where there was insufficient human input – illustrating that even in China, human contribution remains key. Nonetheless, China clearly aims to protect AI works as derivative of human creativity. Simultaneously, the country is attempting to regulate the generative AI market – in 2023, it implemented rules requiring licensing of AI models and filtering of illegal (including pirated) content. In February 2024, the Guangzhou Internet Court ruled in the first case concerning copyright infringement by a GenAI provider: it held an image generator operator liable for copyright infringement and unfair competition because its model allowed users to generate unauthorized images of the famous Japanese character Ultraman, and the operator had failed to prevent this misuse of third-party IP. This ruling is likely the first instance where the provider of an AI model was found liable for user-generated content that infringed copyright. It may have global repercussions – raising the question of whether providers like OpenAI or Stability AI should be responsible for how users employ their tools.

First Court Decision on Training AI with Third-Party Content – Thomson Reuters v. ROSS (USA): In early 2025, a landmark ruling was issued concerning the use of protected content to train AI. On February 11, 2025, the U.S. federal district court in Delaware ruled in Thomson Reuters v. ROSS Intelligence that ROSS’s use of unauthorized copies of legal headnotes from the Westlaw database to train its legal AI did not fall under fair use. The judge concluded that ROSS had created a competing product (an AI-powered legal search engine) using Thomson Reuters’ copyrighted material, which harmed the original market and was not transformative – since the ROSS AI essentially used Westlaw’s content in a similar way to Westlaw itself. As a result, the court ruled in favor of Thomson Reuters, finding that training AI on that data constituted copyright infringement and rejecting the fair use defense. Although the case involved a specific context (legal databases and a non-generative AI returning existing results), it marks the first clear ruling that AI training on third-party content can infringe copyright. This decision will likely influence ongoing lawsuits (plaintiffs in music and text model cases are already citing it). It opens a new chapter in the debate – until now, many U.S. AI companies had argued that large-scale data copying for training purposes was fair use, but this court signaled boundaries to that exception.

Artist Class Actions and the Getty Images Lawsuit: In 2023, several high-profile lawsuits were launched against generative AI companies. In the U.S., a group of artists (including Sarah Andersen, Kelly McKernan, Karla Ortiz, and Polish illustrator Grzegorz Rutkowski, whose style was heavily emulated by Stable Diffusion) filed a class action lawsuit against Stability AI, Midjourney, and DeviantArt. They accused the companies of copyright and publicity rights violations through training models on millions of images scraped from the internet without consent. The lawsuit alleges that Stable Diffusion effectively stores compressed copies of artworks and can reproduce them, and that generating images "in the style of" an artist results in unauthorized derivative works. The case is ongoing – the companies filed motions to dismiss, arguing that training constitutes fair use and that artistic style is not protected. In parallel, Getty Images (a stock photo agency) sued Stability AI in the UK, and later also in the U.S., claiming that the Stable Diffusion training dataset included over 12 million copyright-protected Getty photos, and that some generated images even included artifacts like the Getty watermark – which Getty cited as evidence of unauthorized copying. That case is also ongoing. Its outcome could be pivotal: if the court rules there was infringement, it could force widespread changes in AI model training practices (e.g., requiring licenses or compensation for creators whose works were used). If, however, training is deemed legal, tech companies will be emboldened – though creators will likely push for legislative intervention.

As we can see, the period from 2023 to early 2025 has been full of precedent-setting decisions. A global consensus seems to be forming around the lack of copyright for purely AI-generated works, while a growing legal dispute revolves around whether training AI on third-party content constitutes legal inspiration or unlawful exploitation. Many of these cases are still pending – their outcomes will undoubtedly influence legal practice both in the U.S. and, indirectly, across Europe.

“Theft” of Images by AI – Controversies and Legal Interpretations

One of the most emotionally charged issues for creators is the use of other people’s works in the training and generation processes of AI-generated images. Many artists accuse AI models of “style theft” and the mass appropriation of their works without consent. Legally, the key question is: does training AI on publicly available images infringe on the copyrights of the original creators? The answer is far from straightforward and varies significantly across jurisdictions.

In the European Union, the situation is partially regulated through the text and data mining (TDM) exceptions introduced by the 2019 Digital Single Market (DSM) Directive. In principle, if a creator has not opted out (e.g., by placing a notice on their website prohibiting data mining), and the data was legally accessible (e.g., publicly available online), then copying such data for the purpose of pattern analysis by AI is permitted. The EU deliberately introduced this exception to facilitate the development of AI without forcing the licensing of every individual work. Creators can object, of course – hence the opt-out mechanism. In practice, however, during the peak of model training (2021–2022), very few creators were even aware of the need to register such objections. Massive image datasets (such as LAION-5B, which underpins Stable Diffusion) were compiled from internet scrapes, largely without creators’ knowledge. This was done in a legal grey zone, taking advantage of the leeway granted by data mining rights. Now that creators are more aware, many are beginning to exercise their opt-out rights – for example, ZAiKS has preemptively opted out all works by its members from TDM. It is likely that future generations of models will need to respect such reservations – indeed, Stability AI has announced that it will honor “noAI” tags and lists of excluded works in future training processes (although earlier models have already been trained on unreserved data). Another legal layer in the EU involves related rights to databases – large collections of images, such as stock photo libraries, are protected as databases, and mass harvesting of their content may infringe on the producer’s rights, even if individual images fall into the public domain. So far, this issue has not been addressed by courts in the AI context, but it adds to the complexity.
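
There is not yet one standardized, legally binding way to express such an opt-out; in practice, sites layer several machine-readable signals. The sketch below shows two commonly used ones – a robots.txt file addressing publicly documented AI-training crawlers, and (described after it) a page-level meta tag. Note that robots.txt is advisory: it asks well-behaved crawlers to stay away, while the TDM reservation itself is a legal declaration.

```
# robots.txt – asking known AI-training crawlers not to fetch this site.
User-agent: GPTBot        # OpenAI's web crawler used to gather training data
Disallow: /

User-agent: CCBot         # Common Crawl, whose archives are widely used for training
Disallow: /
```

Some platforms additionally emit a page-level tag such as `<meta name="robots" content="noai, noimageai">` (a convention popularized by DeviantArt), and the W3C TDM Reservation Protocol proposes signaling the reservation via an HTTP header. None of these has been tested in court as a valid DSM-Directive opt-out, which is part of the uncertainty the article describes.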

In the United States, there is no direct equivalent to the EU’s TDM exceptions. The legality of scraping content for training hinges instead on the fair use doctrine. AI companies argue that machine learning constitutes a transformative use of the works – likening it to how a human learns to paint by studying masterpieces. The original image is not published or made available anywhere – it only influences the neural network’s parameters. From this perspective, the process doesn’t constitute commercial substitution of the original but is rather a qualitatively new, scientific use. Until recently, many commentators supported this view. However, the Thomson Reuters v. ROSS decision showed that courts are not always convinced by the “transformative” argument – particularly when the AI’s output closely resembles the original content (in that case, AI-generated text effectively reproduced excerpts of court case summaries). With generative models like Stable Diffusion, a fair use defense may be easier since the output is not a direct copy of any training image but rather a new combination of learned features. Still, challenges remain – if a model generates something that strongly resembles a training image (e.g., a very specific photographic composition), a court may classify it as a derivative work requiring the original author’s permission. In ongoing class-action lawsuits against Stability AI and others, plaintiffs argue that the model stores image templates and can replicate them – if proven, this would imply the presence of copies within the AI’s memory, going beyond abstract “style learning.” These lawsuits are precedent-setting and will determine where the legal line lies between legitimate inspiration and infringement. The outcomes could reshape the AI ecosystem: a ruling against AI developers might compel the creation of large-scale licensing systems for training content, potentially managed by collective rights organizations (somewhat akin to how radio stations pay for music – but scaled up to include thousands of visual works for algorithmic training). Proposals for such frameworks are already surfacing.
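
How would anyone show that a generated image "strongly resembles" a training image? One common technique in memorization research is perceptual hashing: reduce each image to a short fingerprint and compare fingerprints, so that near-duplicates land close together. The following is a self-contained toy sketch (not any litigant's actual methodology) of a minimal "difference hash" over grayscale images represented as 2D lists of 0–255 pixel values.

```python
# Toy perceptual-hash (dHash) sketch: near-duplicate images get near-identical
# hashes, so a small Hamming distance hints at possible memorization.

def dhash(pixels, hash_size=8):
    """Downscale by naive nearest-neighbor sampling to a (hash_size+1) x
    hash_size grid, then record whether each cell is brighter than its
    right-hand neighbor. Returns a tuple of hash_size*hash_size bits."""
    h, w = len(pixels), len(pixels[0])
    small = [
        [pixels[r * h // hash_size][c * w // (hash_size + 1)]
         for c in range(hash_size + 1)]
        for r in range(hash_size)
    ]
    return tuple(
        1 if small[r][c] > small[r][c + 1] else 0
        for r in range(hash_size)
        for c in range(hash_size)
    )

def hamming(a, b):
    """Number of bit positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

# Synthetic "images": a gradient, the same gradient slightly brightened
# (a near-duplicate), and a reversed gradient (a genuinely different image).
img_a = [[(x * 4 + y) % 256 for x in range(64)] for y in range(64)]
img_b = [[min(255, v + 2) for v in row] for row in img_a]
img_c = [[(255 - x * 4) % 256 for x in range(64)] for y in range(64)]

d_ab = hamming(dhash(img_a), dhash(img_b))  # near-duplicate: small distance
d_ac = hamming(dhash(img_a), dhash(img_c))  # different image: large distance
print(d_ab, d_ac)
```

Real analyses use more robust hashes (and learned embeddings), but the principle is the same: a distance threshold separates "new combination of learned features" from "copy of a specific training image" – which is exactly the line the lawsuits are arguing over.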

Another frequently asked question in the debate over AI “theft” is: does generating an image “in the style” of a particular artist violate their rights? As noted, artistic style is not copyrightable – legally, a creator cannot prohibit others from making similar-looking works, whether manually or using AI. However, they can protect specific characters or fictional universes they’ve created (e.g., generating new adventures of comic book heroes without the author's consent would infringe on the narrative rights). This leaves artists like Greg Rutkowski, whose styles have become widely replicated by AI, in a difficult position – their stylistic approach is “borrowed” without breaching the law, yet they feel exploited and bypassed. In response, some art platforms have banned AI-generated content (e.g., ArtStation initially attempted to block such images following artist protests). Others, like Adobe Stock and Shutterstock, have taken a different approach: they allow AI-generated content for sale, provided it is properly labeled and does not infringe on others’ rights. Adobe even boasts that its Firefly model was trained exclusively on licensed or public domain images, avoiding theft allegations. All this shows that the market is attempting to set standards on its own in the absence of clear legal guidance.

From a legal perspective, another key issue is liability for infringement when using AI. Since a generated image has no formal rights holder, who is responsible if it turns out to be infringing (e.g., the AI reproduces someone’s copyrighted work)? At present, the user is primarily liable. They input the prompt and use the output in their work – so if the result infringes on someone’s rights (e.g., is an illegal derivative work), it is the user who bears the consequences. AI developers – for now – disclaim liability in their terms of service. As noted by Natalia Basałaj, proving fault on the part of the model’s creator in court would be very difficult, if only because there are currently no tools to trace which specific training data led to a given output, or who provided those data. However, the global trend (e.g., the Guangzhou case in China) may move toward greater accountability for platforms, especially if they knowingly enable rights violations. Future regulations (such as the EU’s AI Act, with its administrative penalties) may pressure AI companies to implement stronger filters and safeguards to prevent infringing content generation. Already, tools like DALL-E and Midjourney are blocking obvious piracy-related prompts (e.g., attempts to generate Mickey Mouse will fail due to trademark protections). But as the technology evolves, subtler cases will need to be addressed.
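
The simplest form of such a safeguard is an input-side blocklist on prompts. The sketch below is purely illustrative – the blocklist terms are hypothetical examples, and commercial services use far more sophisticated, proprietary systems (trained classifiers, output-side image checks) – but it shows the basic mechanism in a few lines.

```python
# Illustrative input-side prompt filter. BLOCKED_TERMS is a hypothetical
# blocklist of protected characters a service might refuse to depict.
import re

BLOCKED_TERMS = ["mickey mouse", "ultraman"]

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts containing a blocked term as a whole word/phrase,
    ignoring letter case and extra whitespace."""
    normalized = " ".join(prompt.lower().split())
    return not any(
        re.search(r"\b" + re.escape(term) + r"\b", normalized)
        for term in BLOCKED_TERMS
    )

print(is_prompt_allowed("a watercolor landscape at dawn"))     # True
print(is_prompt_allowed("Mickey   Mouse surfing a big wave"))  # False
```

The obvious weakness – and the reason the article says subtler cases remain – is that a blocklist only catches explicit names; a prompt describing the character without naming it sails straight through, which is precisely what output-side checks try to address.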

In conclusion, accusations of AI “stealing” images are morally understandable, yet legally – at least for now – partially permissible under current exceptions and loopholes. Creators feel wronged because their work powers commercial models from which they gain nothing. On the other hand, AI developers argue that training is not the same as publication – it’s a form of analysis that the law should encourage (like digital libraries or search engines). This clash of interests is likely to be one of the key drivers of upcoming legislative changes. We can expect the emergence of licensing systems, buyouts for training data, and possibly compensation schemes for creators (similar to reprographic levies on blank media).

Summary

As of 2025, do we have a clear answer to who owns the rights to an image generated by AI? The most honest answer is: it depends – primarily on how much human input is involved in the creation. If the image is almost entirely generated by an algorithm, current law holds that it does not qualify as a protected work. Such an output falls outside copyright frameworks – no one is its author or copyright holder. This grants freedom of use (no monopoly restrictions), but it also means that the user who generated it cannot stop others from copying it. However, if a human has made a discernible creative contribution – using the AI merely as a tool, like a brush or design program – then the human qualifies as the author, with full copyright protection. The boundary between these scenarios is thin and will likely be clarified by future case law.

Poland and the EU are clearly aligned in favor of preserving the primacy of human creativity – rejecting the idea of AI as an author and focusing on regulating the use of copyrighted works in AI training (through transparency and opt-outs). The U.S. also emphasizes human authorship (as confirmed by court rulings), though the debate there centers on the scope of fair use in training models. The UK is at a crossroads between old and new approaches, with a unique provision granting automatic protection to “computer-generated works” – although this is rarely used in practice and likely to evolve toward EU standards. China is following its own path, being relatively open to recognizing AI-created works (at least those with human input) while also enforcing stricter controls against infringement. Other Asian countries, like Japan and South Korea, are also examining the issue – Japan, for instance, has adopted very broad data analysis exemptions, effectively legalizing AI training on all accessible content. This facilitates tech growth but raises concerns among local creative industries.

On the horizon are potential legislative changes. The EU may clarify the legal status of AI-generated works (e.g., through Commission recommendations or updates to copyright law). Already, codes of good practice for AI are being drafted, and patent and copyright offices globally are publishing guidelines for creators using AI – including how to register such works without infringing the law (in the U.S., disclosure of AI involvement is required). We may even see the emergence of a new category of works: “AI-assisted”, with protection based on the level or quality of human contribution.

For now, however, creators and companies must operate within the current legal framework. AI users should be aware that purely AI-generated content grants them no exclusivity – a competitor may legally reuse a similar image. It’s worth considering adding personal creative input (even via post-processing) to obtain protectable results. Original creators, meanwhile, should make full use of available protection tools (opt-outs, anti-scraping measures) and stay informed – compensation mechanisms for AI usage of their works may soon emerge. AI companies should proactively implement safeguards (filters, data transparency) to avoid provoking harsh regulatory backlash.

The year 2025 has already seen major precedents and early regulatory steps, but many questions remain unresolved. One thing is certain: artificial intelligence has upended traditional notions of “creativity”, challenging a copyright system designed in the analog era. Lawmakers face a difficult balancing act: protecting creators’ rights and encouraging innovation, while also supporting AI development as a driver of progress. We are witnessing ongoing debates, court cases, and likely legislative revisions. The story of AI-generated content and copyright is still being written – and 2025 is only the next chapter, one in which the guiding principle seems to be: humans remain at the center of creativity, with AI as a powerful tool requiring new rules and responsibilities. The coming years will likely bring clearer regulations – hopefully benefiting both creators and AI innovators.
