Mickey vs. Midjourney:
The Copyright Clash That Could Change Everything
September 10, 2025
The rapid adoption of generative AI has triggered a wave of litigation that is forcing courts, regulators, and industry to confront fundamental questions about the relationship between intellectual property and emerging technologies. The recent lawsuits against Midjourney, a leading AI image generator, are not isolated disputes. They represent a turning point in how copyright law will be applied to artificial intelligence—and how companies must manage risk in a landscape that blends innovation with legal uncertainty.
In June 2025, Disney and Universal filed suit against Midjourney, alleging that the company scraped copyrighted works—including characters central to billion-dollar franchises—to train its AI models and then monetized the results. The plaintiffs argue that Midjourney’s reported $300 million in revenue for 2024 was built on misappropriated creative assets.
Meanwhile, in a class action brought by artists in 2023, courts have allowed copyright and trademark claims to proceed on the theory that diffusion models may embed “compressed copies” of copyrighted works. These rulings are particularly significant because they suggest that judges are willing to treat the technical underpinnings of AI models as more than abstract processes, potentially viewing the models themselves as repositories of protected expression.
Midjourney has mounted a strong defense, asserting that its practices are protected under the doctrine of fair use and that its models transform data into new expression rather than reproducing the original works. It has also argued that disputes should be resolved through takedown requests under the Digital Millennium Copyright Act (DMCA), not through sweeping claims of systemic infringement. The tension between these positions underscores the unsettled state of copyright law when applied to AI, and it places courts in the difficult position of determining whether doctrines developed in the analog and early digital age can be stretched to cover modern machine learning.
While these lawsuits unfold, regulators are signaling that reliance on litigation alone will not be sufficient. The U.S. Copyright Office has emphasized that purely AI-generated works cannot qualify for copyright protection without substantial human authorship, and it continues to study whether disclosure requirements should apply to training data. The Federal Trade Commission (FTC) has begun scrutinizing claims about AI outputs for potential consumer deception, warning that misuse of copyrighted content could implicate unfair trade practices.
Congress has also begun floating proposals to require transparency in training data and to establish licensing frameworks. Across the Atlantic, the European Union already moved ahead with its AI Act, finalized in 2024, which obliges providers of general-purpose AI models to disclose whether copyrighted material was used in training. Companies deploying AI across borders will need to anticipate and comply with these obligations, even if U.S. courts take a more permissive approach to fair use.
For rights holders such as Disney and Universal, these cases are about maintaining control over the value of their intellectual property in an era when AI could otherwise dilute or displace it. For AI platforms, the litigation tests whether business models built on scraped internet data can survive. And for companies adopting generative AI in everyday operations, the risks extend beyond the courtroom: questions about ownership of outputs, reputational damage from infringing uses, and shifting compliance obligations could disrupt workflows and increase costs.
The path forward requires preparation rather than passivity. Companies should examine their use of AI tools now, understand how they are trained and how outputs are deployed, and develop clear internal policies for responsible use. Agreements with AI vendors should address ownership, licensing, and indemnification in a way that balances risk fairly. Organizations should also remain alert to changes in case law and regulatory standards, both in the United States and abroad, as these developments will inform how liability and compliance obligations evolve. Forward-looking businesses are already exploring alternatives to scraped-data models, whether through licensed platforms or proprietary approaches, to avoid becoming dependent on tools whose legal foundations may erode.
The Midjourney lawsuits are not simply about one platform or one set of plaintiffs. They are shaping the contours of how courts, regulators, and industries will treat intellectual property in the age of artificial intelligence. The outcomes will influence whether training data must be licensed, how outputs are owned and enforced, and how businesses of every size can responsibly integrate AI into their operations.
For companies, the challenge is to innovate without overstepping into legal and reputational risk. That requires more than watching cases from the sidelines. It means assessing today’s practices, tightening contracts and compliance, and preparing for tomorrow’s licensing and disclosure frameworks. Businesses that act now will be better positioned not only to avoid disruption but to lead in a marketplace where trust, transparency, and intellectual property protection are becoming competitive advantages.
At Daly Law & Strategy, we partner with clients to bridge the gap between technological opportunity and legal responsibility. Our combination of intellectual property expertise, business insight, and scientific training allows us to see around corners, helping clients safeguard their assets while embracing the potential of AI. The companies that thrive in this new era will be those that take proactive steps now—and we stand ready to guide you through that process.