There are currently two legal proceedings between Getty Images and Stability AI: one in the United States, Getty Images (US), Inc. v. Stability AI, Ltd. (3:25-cv-06891), before the District Court for the Northern District of California, and another in the UK, Getty Images v Stability AI ([2025] EWHC 2863 (Ch)), which we wrote about in early 2025. At the heart of the disputes is Stability AI’s use of unlicensed imagery from Getty to train its diffusion AI models. Getty Images sued Stability AI, alleging that Stable Diffusion had been trained on millions of Getty visual assets scraped from Getty websites and thereby infringed numerous IP rights owned by Getty.
In the latter proceedings, Getty Images v Stability AI ([2025] EWHC 2863 (Ch)), the UK High Court has now delivered its judgment. Getty Images advanced a broad case (primary and secondary copyright infringement, database rights, trade mark infringement and passing off) arising from the use of large scraped image datasets to train Stable Diffusion. By trial, Getty had narrowed its case considerably.
What was not decided in the U.K. judgment
Given that the training and development of the AI model did not take place in the U.K., Getty dropped its primary copyright infringement claim in the U.K., leaving this aspect currently only before the U.S. court. While dropping this claim appears to be a sound strategic legal decision, it is unfortunate not to have more solid U.K. case law on this matter.
What was the U.K. judgment about?
After dropping those claims, Getty focused on trade mark infringement and passing off arising from synthetic images bearing Getty watermarks, in alleged violation of Section 10 of the U.K. Trade Marks Act 1994 (TMA), and on secondary copyright infringement. It claimed that Stable Diffusion itself is an ‘article’ which constitutes an ‘infringing copy’ under Section 27 of the U.K. Copyright, Designs and Patents Act 1988 (CDPA), and that importing the AI model into the U.K. therefore amounted to a violation of Sections 22 and 23 of the CDPA, which outlaw such importation, possession and distribution.
The models at issue included v1.x (the early open-source checkpoints made available via CompVis/Hugging Face), v2.x and the later SDXL/v1.6 variants. The LAION datasets were central to how these models were trained. Stability conceded that some Getty images were in LAION subsets and that the models can produce synthetic “watermark-like” outputs, but it argued that liability does not follow because the models learn statistical patterns rather than store image files, outputs are stochastic and largely user-generated, and the relevant training and hosting largely occurred outside the UK.
The Judge’s Technical and Factual Findings
The Court relied on expert evidence about how diffusion models work, in particular that models learn statistical patterns from training data (including the distribution of features and the correlations between them) and do not store literal copies of training images in the model weights.
Inference (generation of images) does not require the training dataset. While it is possible for over-fitting/memorisation to produce almost identical reproductions in narrow circumstances, the expert evidence was that, for large modern models trained on billions of examples, the model stores patterns rather than images and that outputs are produced by sampling from a learned distribution. The stochastic nature of outputs means watermarks may appear in distorted, partial or recognisable forms and liability depends on how clear and recognisable the synthetic watermark is in practice.
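To make this point concrete, the following is a minimal illustrative sketch of inference with an open-source v1.x-style checkpoint via the Hugging Face diffusers library; the model identifier, prompt and seed are our own illustrative assumptions and are not taken from the judgment. The sketch simply shows that only the learned weights are downloaded, the training dataset is never consulted at generation time, and changing the seed changes the sampled output.

```python
# Minimal sketch (illustrative assumptions): generating an image with a
# Stable Diffusion v1.x-style checkpoint. Only the model weights are
# downloaded; the LAION training images themselves are never needed
# (or fetched) at inference time.
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint identifier; the weights are a few GB of learned
# parameters, orders of magnitude smaller than the billions of training images.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model ID, not from the case
    torch_dtype=torch.float16,
).to("cuda")

# Sampling is stochastic: the same prompt with a different seed yields a
# different image, drawn from the learned distribution rather than retrieved
# from any stored training file.
generator = torch.Generator("cuda").manual_seed(42)
image = pipe("a Japanese temple garden", generator=generator).images[0]
image.save("output.png")
```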
The Trade Mark Infringement Claims
The Court performed the usual trade mark assessment of likelihood of confusion, considering the average consumer and the context of use. The key practical conclusions were:
- Under s.10(1) TMA (use of an identical sign in relation to identical goods/services): Getty succeeded only in respect of ISTOCK watermarks generated by v1.x when accessed via DreamStudio/the Developer Platform, on the specific example images relied upon. There was no evidence that the Getty Images word/logo marks were reproduced in a sufficiently clear form across the real-world sample to succeed under s.10(1) TMA.
- Under s.10(2) TMA (use of an identical or similar sign for identical or similar goods/services, giving rise to a likelihood of confusion): Getty succeeded in respect of iStock watermarks produced by v1.x (as above) and in respect of a Getty Images watermark produced by v2.x (the “First Japanese Temple Garden Image” example). Again, the findings were fact-specific and limited to those example outputs.
- Under s.10(3) TMA (unfair advantage taken of, or detriment caused to, the distinctive character or repute of the marks): Getty failed. The judge concluded there was no evidence that Stability intended to benefit from Getty’s reputation, nor that the appearance of the signs on outputs transferred goodwill to Stability; users would generally discard watermarked outputs, and filters had been employed to reduce watermark generation from v2.x onwards.
- Passing off: The judge declined to address this fully, noting that Getty’s passing off case largely tracked the trade mark claims and that no further submissions were invited on the point.
Practical note: The trade mark successes are tightly circumscribed. However, they demonstrate that synthetic outputs which closely reproduce a distinctive watermark or logo can infringe trade mark rights even if produced by a generative model. That creates a practical compliance risk for model hosts and service providers where easily recognisable marks are reproducible.
The Secondary Copyright Infringement Claim
As noted above, Getty’s case was that Stable Diffusion, being an ‘article’, constituted an ‘infringing copy’ for the purposes of s.27(3) CDPA, and that importing and distributing the model in the UK would therefore amount to secondary copyright infringement under ss.22–23 CDPA. Stability in response advanced two key construction points: first, that an ‘article’ should be confined to tangible objects; and secondly, that the model could not be an ‘infringing copy’ because it did not store copies of the works and was trained outside the UK.
The High Court clarified that:
- The CDPA should be interpreted in line with the “always speaking” principle: statutes may cover modern/intangible technologies unless the statutory language or context requires a frozen interpretation. The Court accepted that an ‘article’ can include intangible objects in principle.
- On the facts, the model itself was not an ‘infringing copy’. The evidence (expert and factual) showed that Stable Diffusion did not store Getty’s images in the model and that its weights encode statistical information rather than image files. Accordingly, the judge accepted that the model learns patterns rather than storing or reproducing copies, and so it is not an ‘infringing copy’ as pleaded.
- Jurisdiction matters. The Court observed that the training took place outside the UK (Getty had abandoned the UK training allegation, as noted above), which further weakened any UK secondary-infringement case.
The judge dismissed the secondary infringement claim.
Why this matters: The decision narrows the immediate risk that distributing a model will automatically be treated as distributing infringing copies where the model does not contain literal reproductions.
It is not an all-purpose immunity: the Court left open (and emphasised) key factual scenarios in which different findings could be reached, e.g. if a model does literally memorise and reproduce substantial parts of copyrighted works, if training or making occurred in the UK, or if a different statutory construction were established on fuller argument.
The Licensing Issues
Getty further relied on numerous contributor and exclusive licence agreements and sought to litigate the ‘licensing issue’, i.e. the claim that certain agreements were exclusive licences under s.92 CDPA, which in turn would have had consequences for concurrent rights of action. The Court sampled agreements for trial and addressed questions such as whether electronic “I agree” or “I accept” clicks constituted signatures under s.92 CDPA. The judge accepted that electronic acceptance after 2012 could amount to a signature, but stressed that each sample agreement must be assessed on its own facts, and in some pre-2012 cases there was insufficient evidence of such a signature.
Takeaway for rights-holders and platforms: Ensure that licence terms are unambiguous on exclusivity and sub-licensing, that electronic acceptance processes create clear evidence of assent (such as time-stamped records and robust UX to avoid any ‘deemed’ acceptance gaps), and that the allocation of rights for any machine learning or AI model training is always explicit.
What was the U.K. judgment not about?
Joanna Smith J declined to make wide factual findings about the scale of Getty content used, for example as to the number of Getty images in LAION subsets, and was reluctant to extrapolate from experiments to the general prevalence of watermarks in live outputs. The Court also recognised limitations in the evidence on certain points, such as missing witnesses, training location and complexity, and therefore made narrow, fact-specific decisions rather than broad doctrinal statements.
The U.K. judgment is therefore a significant datapoint, but not an across-the-board licence to build or train large models irrespective of IP rights.
Areas likely to generate future litigation include:
- Memorisation/over-fitting reproducing copyrighted works verbatim
- Territorially sensitive training/activity
- Derivative uses (e.g. datasets combining copyrighted works where training produces very close approximations), and
- Contractual/licensing claims that have clearer documentary foundations.
What Happened in the U.S. Proceedings in the Meantime?
From publicly available U.S. court documents, the parties appear to be proceeding with alternative dispute resolution in the U.S.
Conclusion
The legality of using data to train AI models remains highly dependent on minute details, facts and jurisdiction. For example, EU law provides certain exceptions allowing text and data mining, subject to rights reservations, under Articles 3 and 4 of the 2019 Copyright in the Digital Single Market (CDSM) Directive, and the U.S. provides certain permissions under its ‘fair use’ doctrine. There is no one-size-fits-all approach, and both rights owners and AI model developers should seek legal advice before proceeding.


