AI Video, Copyright, and the Turning Point No One Wanted to Talk About


19 February 2026

Paul Francis


For years, artificial intelligence has been quietly absorbing the creative world.

Illustrators watched as models produced images in their style. Writers saw language models trained on books they never licensed. Voice actors heard digital replicas of their tone and cadence. Photographers discovered fragments of their work embedded in datasets they never consented to join.


Close-up of a person in a red and black spider-themed suit against a dark background, showing a spider emblem on the chest.
Photo by Hector Reyes on Unsplash

The arguments were loud, emotional and often messy. Creators warned that their intellectual property was being harvested without permission. AI companies insisted that training data fell within legal grey areas. Lawsuits were filed. Statements were issued. Panels were held.


But systemic change moved slowly.


Then Spider-Man appeared.


Not in a cinema release or on a Disney+ platform, but inside a viral AI-generated video created using ByteDance’s Seedance 2.0. Within days of its release, social feeds were filled with highly realistic clips showing Marvel and Star Wars characters in scenarios that looked convincingly cinematic. Lightsabers clashed. Superheroes fought across recognisable cityscapes.


And this time, the response was immediate.


Disney sent a cease-and-desist letter accusing ByteDance of effectively conducting a “virtual smash-and-grab” of its intellectual property. Other studios followed. Industry bodies demanded the platform halt what they described as infringing activity. Even the Japanese government opened an investigation after AI-generated anime characters began circulating online.


ByteDance quickly pledged to strengthen safeguards.


The speed of that reaction stands in sharp contrast to the drawn-out battles fought by independent creatives over the last several years. And that contrast raises a difficult but necessary question: why does meaningful pressure seem to materialise only when billion-dollar franchises are involved?



The Uneven Battlefield of Copyright and AI

The legal tension around generative AI has always centred on training data. Most AI systems are built on enormous datasets scraped from publicly available material. Whether that constitutes fair use or copyright infringement remains one of the most contested questions in modern technology law.


When the alleged victims were individual artists or mid-tier studios, the debate felt theoretical. There were court filings and opinion pieces, but not immediate operational shifts from the tech giants.


Now the optics are different.


Seedance is not accused of vaguely echoing an artistic style. It is accused of generating recognisable characters owned by one of the most powerful entertainment companies in the world. Spider-Man is not an aesthetic. He is a legally fortified intellectual property asset supported by decades of licensing agreements, contractual protections and global brand enforcement.


That changes the power dynamic instantly.


Where independent creators struggled to compel transparency around training datasets, Disney commands it. Where freelance illustrators waited months for platform responses, multinational studios can demand immediate action.


The issue itself has not changed. The scale of the stakeholder has.


What This Means for AI Video

AI video is still in its infancy compared to image generation, but the implications of this dispute could accelerate its regulation dramatically.


If platforms are found to be generating content too closely resembling copyrighted franchises, expect tighter content controls. Prompt filtering will become more aggressive. Character names will be blocked. Visual similarity detection tools may be deployed to prevent outputs that mirror protected designs.
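To make the idea of aggressive prompt filtering concrete, here is a minimal sketch of the crudest version of such a control: a blocklist of protected character names checked against incoming prompts. The blocklist contents and function name are invented for illustration; real platforms would combine this with trained classifiers and output-side visual similarity detection rather than simple string matching.

```python
import re

# Illustrative blocklist -- the entries here are examples, not any
# platform's actual policy list.
BLOCKED_CHARACTERS = {"spider-man", "darth vader", "iron man"}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts that name a protected character (naive substring match)."""
    normalized = re.sub(r"\s+", " ", prompt.lower())
    return not any(name in normalized for name in BLOCKED_CHARACTERS)
```

Even this toy version shows why filtering tends to escalate: users quickly route around name matching with descriptions ("a red-and-blue web-slinging hero"), which pushes platforms toward the visual similarity detection mentioned above.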


In short, the open playground phase of AI video may end sooner than expected.


There is also another path emerging: licensing.


Disney’s existing billion-dollar partnership with OpenAI signals a model where AI tools are not eliminated but contained within approved ecosystems. Rather than preventing AI from generating Marvel characters altogether, studios may instead seek to monetise that capability under strict agreements.


That would create a bifurcated future for AI video: corporate-approved generative systems operating inside licensing frameworks on one side, and heavily restricted public tools on the other.


Independent creators could once again find themselves navigating a more tightly controlled environment shaped by corporate negotiation rather than broad creative consensus.


The Transparency Question

One of the most significant unknowns in this entire situation is training data.

ByteDance has not disclosed what Seedance was trained on. That silence is not unusual in the industry. Most generative AI companies treat training datasets as proprietary assets.

But as legal pressure increases, so too does the demand for transparency. If studios begin demanding to know whether their content was scraped, regulators may soon follow.


For years, artists have asked for opt-in systems, compensation structures and dataset audits. If this moment forces platforms to adopt more transparent practices, it may indirectly validate those earlier demands.


It would be a bitter irony if the turning point for creator protection comes only once global media conglomerates feel threatened.


A Defining Moment for AI and Creativity

There is something symbolic about this dispute.


AI innovation has been framed as disruptive, democratising and unstoppable. Copyright law, by contrast, is territorial, slow-moving and rooted in decades-old legal frameworks. For a time, it appeared that generative AI might simply outpace enforcement.


But intellectual property remains one of the strongest legal shields in modern commerce. When AI tools move from stylistic imitation to recognisable franchise replication, the shield activates quickly.


This is not necessarily an anti-AI moment. It may instead be a recalibration.


The creative economy depends on ownership, licensing and consent. AI systems that ignore those principles are unlikely to survive prolonged legal scrutiny. The question is whether reform will apply evenly across the creative landscape or remain reactive to whoever has the loudest legal voice.


If the Seedance dispute leads to clearer boundaries, transparent datasets and fairer licensing models for all creators, it could mark a maturation phase for AI video.


If it simply results in selective enforcement that protects corporate assets while leaving independent creators in grey areas, the imbalance will persist.


For now, one thing is certain.


AI video has crossed from experimental novelty into serious legal territory.


And it took a superhero to force the conversation into the open.


After the Machines: Can Creative Work Survive the AI Age?

Paul Francis

Jul 30, 2025

It started with a row of birthday cards.


While shopping at a local Tesco, I spotted a display full of birthday cards that didn’t look quite right. At first glance, they seemed like any other range of quirky illustrations and sentimental messages, but something was off. The characters had odd expressions, the hands and proportions weren’t quite human, and there was that unmistakable uncanny quality that comes from AI-generated art.


Greeting cards on display feature animals, kids, and humorous themes. Categories include "Almost Funny," "Get Well," and "Thank You."

I work in the creative industry and regularly use tools like Leonardo AI. I recognised the signs immediately. Every single one of those cards had been made by a machine.


It was a quietly shocking moment. Not because AI art exists (we have all seen it by now), but because it has gone mainstream, tucked into a supermarket aisle where once there had been work by real illustrators and designers. The thought struck hard: this is already happening, and it is happening faster than people realise.


But as creative work becomes cheaper to generate, a bigger question emerges: when most people have lost their jobs to AI, who will still have the money to buy what these companies are selling?


The Jobs at Risk

Freelance illustrators designing cards and similar products might typically earn between £30 and £250 per piece, depending on the client and usage. Over the course of a year, a dedicated freelancer might bring in between £25,000 and £35,000, though that varies with commissions and demand.


It’s not a high-income job, but it supports a wide network of creative professionals, from recent graduates to long-time freelancers. These are the very people now being undercut by companies using generative AI tools to produce hundreds of designs in hours.


AI-generated content is already appearing in online marketplaces, book covers, and even music videos. It’s a quiet revolution, and not one that has left much time for retraining or regulation.


Surreal cityscape with geometric buildings, pastel colors, floating spheres, and sketched figures. The mood is dreamlike and tranquil.

If Jobs Go, What Happens Next?

The reality is simple: if creative workers lose their incomes, their ability to participate in the economy vanishes with it.


One widely discussed solution is Universal Basic Income (UBI). The concept involves giving every citizen a regular, unconditional payment to cover essential living costs. Trials in Finland, Canada and the United States have shown promising results. People were able to focus on long-term goals, retrain, or pursue creative work without the pressure of living month to month.


However, critics argue UBI could be expensive to sustain and difficult to fund without significant changes to taxation. Even so, in a world where AI threatens jobs across multiple industries, such support systems may soon become a necessity.


New Creative Roles With AI in the Loop

Some companies are working towards new hybrid roles. Instead of replacing creative professionals, they aim to involve them in the AI process.


Examples include:

  • AI Prompt Artists, who specialise in writing detailed inputs to guide AI tools.

  • Creative Curators, who review AI-generated work and refine it for production.

  • AI Trainers, often artists themselves, who help improve how generative models understand style and composition.


While these roles are still emerging, they offer a glimpse into a future where creativity doesn’t disappear, but shifts into new forms.


Protecting the Artists Who Came First

There’s growing pressure on governments and platforms to protect the rights of original artists. Most AI tools are trained on vast datasets scraped from the internet, often without consent.


Several lawsuits are already underway, challenging the legality of this training data. In response, the EU’s AI Act and similar legislation in the UK may soon require greater transparency, and even give artists the option to opt out of training datasets.


Some creatives are also calling for a royalties system. Just as musicians earn money when their songs are streamed, visual artists could receive micropayments when their style or content is used in an AI-generated image.
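The streaming analogy can be made concrete with a small sketch of how such a pool might be divided. Everything here is hypothetical: the pool size, the pro-rata split rule, and the artist names are invented for illustration, and any real scheme would need to solve the much harder problem of attributing an AI output to specific artists in the first place.

```python
def split_royalties(pool_pence: int, usage_counts: dict[str, int]) -> dict[str, int]:
    """Divide a royalty pool (in pence) in proportion to each artist's usage count.

    Uses integer division, so a few pence of remainder may be left undistributed,
    as in many real pro-rata payout systems.
    """
    total = sum(usage_counts.values())
    return {artist: pool_pence * count // total
            for artist, count in usage_counts.items()}

# Hypothetical month: a £10 pool, one artist's work used three times as often.
payouts = split_royalties(1000, {"artist_a": 3, "artist_b": 1})
```

With those invented numbers, artist_a receives 750 pence and artist_b 250, mirroring how streaming services apportion a revenue pool by play count.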


Consumer Power and the "Human Made" Movement

A growing number of consumers are beginning to notice when something is made by AI. In response, some companies are experimenting with "Human Made" labels, signalling when a product or design is created without AI tools.


This shift could give consumers the power to support real artists directly. Subscription platforms like Patreon and Ko-fi already allow for fan-driven support, and ethical marketplaces are beginning to highlight human creators.


But the movement needs wider awareness to have a lasting impact.


The Bigger Picture

No technology arrives in isolation. AI isn’t just changing how we work; it’s changing how we value work.


If companies produce goods without human labour while eliminating the spending power of the very people they replaced, they risk breaking the cycle that keeps economies turning.


The Tesco card display was a small moment, but it points to a much larger shift. As a creative, it made me question where things are heading, and what it might take to ensure there’s still room for real human talent in the world ahead.


The machines are here. What happens next is up to us.

