Copyright law was written for humans. It assumes a person sat down, made creative choices, and produced something original. AI does none of those things — and the courts, the Copyright Office, and Congress are all now wrestling with what that means.
There are three separate copyright questions in the AI era, and they are often conflated:
- Can AI own copyright? (No — clearly settled.)
- Did AI companies infringe copyright when they trained their models? (Actively litigated — no final answer yet.)
- Do you own copyright in AI-assisted work? (Depends on how you used it — the answer is nuanced.)
Each one matters. Let's go through them.
AI Cannot Own Copyright
This one is settled. Under US law, copyright can only be held by a human being (or a company owned by humans, which legally stands in for them). A machine cannot be an author.
The clearest test of this came in Thaler v. Perlmutter (2023). Stephen Thaler built an AI system he called the "Creativity Machine" and tried to register the artwork it produced with the US Copyright Office — listing the AI as the sole author and himself as the owner. The Copyright Office rejected the application. Thaler sued.
Federal Judge Beryl Howell upheld the rejection, writing that "human authorship is a bedrock requirement of copyright." The ruling quoted a line of cases going back to the 1800s — including a Supreme Court case from 1884, Burrow-Giles Lithographic Co. v. Sarony, which established that copyright protects the "intellectual conception" of the author. A machine has no conception, intellectual or otherwise.
The Copyright Office has also published formal guidance on this (2023 and 2024), landing in the same place: works generated entirely by AI are not copyrightable. No human creativity, no protection.
What this means in practice: If someone generates an image, a story, or a piece of code using AI and contributes no meaningful creative choices of their own, that output sits in the public domain the moment it's created. Anyone can use it, copy it, sell it. There is nothing to enforce.
Did AI Companies Break the Law When They Trained Their Models?
This is where things get genuinely unsettled — and consequential.
Training a large language model or image generator requires feeding it enormous quantities of existing content: books, articles, photographs, artwork, source code. Much of that content is copyrighted. The companies that built these models did not, in most cases, license that content. Their legal argument is that training on publicly available data falls under fair use — a doctrine in copyright law that allows limited use of protected material without permission, under certain conditions.
Several major lawsuits are testing that theory.
The New York Times v. Microsoft and OpenAI (filed December 2023) is the highest-profile case. The Times alleges that OpenAI and Microsoft trained GPT-4 and Copilot on millions of Times articles without permission or payment. The complaint includes striking examples of ChatGPT reproducing Times articles nearly verbatim — which undercuts the "we're just learning from patterns" defense. The Times is seeking billions in damages and, potentially, destruction of the models trained on its content.
Getty Images v. Stability AI (filed 2023, US and UK) takes a similar position in the visual world. Getty alleges that Stability AI scraped millions of Getty photographs — including watermarks — to train Stable Diffusion. The UK case is further along; a UK High Court ruling in 2024 confirmed the case could proceed. The US case is ongoing.
Andersen v. Stability AI, Midjourney, and DeviantArt (filed 2023) was brought by a group of visual artists who argue that image generators were trained on their work without consent, creating tools that can generate images "in the style of" specific artists — effectively producing a competing product using the artists' own creative output as raw material. The case has had mixed early rulings but is continuing.
The fair use question at the center of these cases turns on four factors courts weigh together:
- Purpose and character of the use (is it transformative? commercial?)
- Nature of the original work
- Amount used
- Effect on the market for the original
AI companies argue training is transformative — it creates something new rather than substituting for the original. Plaintiffs argue the opposite: the outputs directly compete with their work, and the effect on their market is real and harmful. No court has fully resolved this yet. The outcomes will set the rules for the entire industry.
Do You Own What You Make With AI?
If you use AI as a tool — not the other way around — you may well have copyright in the result. But you have to think carefully about how you used it.
The Copyright Office's guidance draws a line around human creative control. If a human makes meaningful creative choices — selecting, arranging, editing, or directing AI output — those choices can be protected. What the AI contributed on its own cannot.
The clearest example of this in action is Zarya of the Dawn (2023). Kris Kashtanova created a comic book in which the images were generated by Midjourney, while the text and the selection and arrangement of the images were Kashtanova's own. The Copyright Office initially registered the whole thing, then reconsidered. Its final ruling: the text, story, and arrangement of images are protected by Kashtanova's copyright. The individual AI-generated images are not. The human creative choices are covered; the machine's output is not.
The practical upshot:
- If you write a detailed brief, direct an AI to follow specific instructions, and heavily edit the result, you probably have a copyright claim on the edited, human-shaped final product.
- If you type "write me a blog post about dogs" and publish the first thing that comes back, you almost certainly do not.
- The more of yourself you put in — the more the output reflects your specific choices and voice — the stronger your position.
This matters beyond just pride of ownership. Copyright is what lets you prevent others from copying your work, license it for money, and sue if someone steals it. Without it, your work is free for anyone to take.
What This Means for Creators Now
A few practical realities worth holding onto:
Your existing work has real value — and is at risk. If you are a writer, photographer, artist, or musician, your back catalog may already have been used to train AI models without your knowledge or consent. The legal question of whether that was permissible is unresolved. Some platforms (Adobe, Shutterstock) have set up licensing programs that compensate creators for training use. Others have not.
Opt-out options exist, but are patchwork. OpenAI, Google, and others now offer ways to opt out of having your web content used for future training. Many websites do this through robots.txt, the long-standing crawler-exclusion standard, by blocking the AI companies' published crawler user-agents. These protections are voluntary and inconsistent — there is no legal mandate requiring companies to honor them.
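As a concrete illustration, here is a minimal robots.txt that uses the crawler user-agents the major AI companies have documented — GPTBot is OpenAI's training crawler, and Google-Extended is the token Google reads to decide whether a site's content may be used for AI training (blocking it does not affect Google Search indexing). Whether a given crawler actually respects these directives is, as noted above, entirely voluntary.

```
# Block OpenAI's training crawler from the entire site
User-agent: GPTBot
Disallow: /

# Opt out of Google AI training (search indexing is unaffected)
User-agent: Google-Extended
Disallow: /

# All other crawlers may access the site normally
User-agent: *
Allow: /
```

The file goes at the root of the site (e.g. example.com/robots.txt). Each AI provider publishes its own user-agent string, so a site owner has to track and list them individually — which is exactly why the current opt-out regime is described as patchwork.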
"Style" is not protected — but expression is. Copyright has never protected artistic styles, only specific creative expression. AI generating work "in the style of" an artist is not automatically infringement. This is a gap in the law that frustrates many creators, and some jurisdictions are beginning to look at whether that needs to change.
The EU AI Act includes transparency requirements. The EU's AI Act, which began applying in 2024 and 2025, requires providers of general-purpose AI models to publish summaries of the training data they use. This doesn't give creators a legal remedy, but it creates at least some accountability about what went into these systems.
The legal picture will keep changing. Several of the cases mentioned here will likely reach appeals courts or even the Supreme Court within the next few years. What's clear now: if your work ends up in the output of an AI system, or if you want to claim ownership of something you made with AI help, understanding these distinctions isn't optional.