As many artists and creators know, copyright enforcement is time-consuming and exhausting. One must seek out infringements, file notices, check for compliance and follow up. Each step drains time, energy and other resources.
That is why many creators pay for services that automate as much of the process as possible or even handle it for them. But that desire to automate has always butted up against a stark reality: filing false copyright notices can carry serious legal consequences.
Under the Digital Millennium Copyright Act, anyone who knowingly files a false DMCA notice “shall be liable for any damages, including costs and attorneys’ fees, incurred by the alleged infringer… or by a service provider.”
In what is perhaps the poster child for these cases, the voting machine company Diebold was ordered in 2004 to pay $125,000 to a group of college students who had published leaked emails from the company.
However, that case is now twenty years old. Much has changed in this space, and the DMCA’s protections against false notices have been significantly weakened.
That now collides with another reality of the modern web: the growing use and popularity of AI systems.
So, what happens when an AI sends a false DMCA notice? The answer isn’t that simple.
A Long History of Automation
Disclosure: Through my consulting firm, CopyByte, I provide DMCA takedown and DMCA agent services. However, both are human-driven and only use automated tools to assist in detection. No AI tools are used.
To be clear, automated (or largely automated) DMCA takedowns have been around for a long time. When organizations file millions of URLs with Google daily, it’s safe to assume that not every URL was vetted by humans.
However, these systems are relatively “dumb” in nature. They seek out title or content matches, usually focusing only on pirate sites and services already known to be infringing, and they target a narrow kind of infringement.
But, despite that, mistakes happen. Such efforts have snagged legitimate content platforms, news coverage of the topic and, in one particularly embarrassing case, information about a NASA mission that shared its name with a model.
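To see why these systems misfire, consider a minimal sketch of a naive title-matching filter, written in Python. The titles and logic here are invented purely for illustration; no vendor’s actual system is this simple or publicly documented.

```python
# A deliberately naive title-matching filter, similar in spirit to the
# "dumb" systems described above. All names here are hypothetical.

PROTECTED_TITLES = ["Odyssey", "Starfall"]  # invented catalog of protected titles

def flag_for_takedown(page_title: str) -> bool:
    """Flag any page whose title contains a protected title (case-insensitive)."""
    lowered = page_title.lower()
    return any(title.lower() in lowered for title in PROTECTED_TITLES)

# A pirate listing gets flagged, as intended...
print(flag_for_takedown("Odyssey (2024) Full Movie Free Download"))      # True

# ...but so does unrelated coverage that merely shares the name.
print(flag_for_takedown("Space agency's Odyssey probe sends new data"))  # True
```

A match on a shared name is all it takes. Without human review or additional context, a system like this cannot tell a pirate listing from a news story.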
Given the number of takedowns filed, these mistakes are still very rare. They make the news precisely because they are so unusual. They are usually attributed to a combination of bad data and lackluster human oversight. Fortunately, they rarely do any significant harm and are fairly quickly remedied.
But if these mistakes happen when automating relatively straightforward copyright claims, what happens when we try to automate more complex cases? We may soon find out.
No Humans Allowed
In September, Wes Davis at The Verge wrote an article about a company named Tracer. According to its website, the company uses an AI assistant named Flora, which it describes as “an AI-powered assistant designed to simplify and automate the vital task of protecting brands and consumers online.”
Tracer drew The Verge’s attention by filing notices on behalf of Nintendo targeting AI-generated images of Mario and other Nintendo characters. The company says the system works by detecting potential issues, gathering evidence on them and making recommendations for action.
Flora and, more broadly, Tracer are still “human-in-the-loop AI,” meaning that humans approve all actions.
However, as we’ve seen elsewhere, it’s only a matter of time before humans seek to remove themselves from the equation entirely. Humans will take a backseat, whether because the volume makes their involvement impractical or simply in pursuit of maximum efficiency.
But these aren’t simple cases of anti-piracy enforcement. They raise a slew of complicated copyright, trademark and other legal questions. Even well-intentioned humans struggle with them.
So, what happens when an AI system files a false DMCA notice? Probably not much.
Bad Faith or No Faith
Regardless of how automated a takedown notice is, a human has to sign it. Theoretically, if the notice is false, it doesn’t matter whether or not it was automated. The person whose name is on it is the one liable.
But, functionally, that is not likely. The problem is simple: Over the past 20 years, it has become increasingly difficult to hold anyone, human or robot, accountable for bad DMCA notices.
Under the law, someone suing over a false DMCA notice has to show that the filer “knowingly materially misrepresented” something in the notice. That is difficult because courts apply a subjective standard: innocent mistakes are not actionable.
In that respect, AI may give humans a level of plausible deniability. If they can say that they set up the system as best they could and did not intend to file a false notice, it’s going to be difficult to prevail.
The “Dancing Baby” case (Lenz v. Universal) from 2015 exacerbated this. Though the appeals court found that filers needed to factor in fair use, it opened the door to automated takedowns as long as the algorithm considers fair use in some way.
So, as long as an AI system has some way of considering fair use, it’s likely acceptable. But even if it isn’t, the damages are likely not worth a lawsuit anyway. Instead, this is most likely an issue for the Copyright Claims Board, provided that the filer is based in the United States.
However, even that may be moot. Today, on many platforms, most copyright takedowns don’t go through the DMCA process at all. They go through automated content-matching tools.
For example, 99% of all copyright actions on YouTube are through its Content ID system. Once again, these systems are fairly “dumb” and seek matches against a predetermined library.
However, given Google’s heavy investment in AI, it’s likely only a matter of time before Content ID makes more complicated decisions.
When these systems are wrong, users have virtually no recourse. YouTube’s terms of service protect the company, no matter how bad the system’s judgment is.
Bottom Line
In the end, it’s most likely that AI is already sending DMCA notices on behalf of creators without human intervention. But, as AI makes more and more complicated decisions, it’s going to make more and more mistakes.
Right now, the only recourse for someone falsely targeted is to file a counternotice. However, that is a complex process and carries a degree of legal risk.
Ultimately, regardless of how effective it is, there is a huge push to use AI to replace human effort. While much of the focus has been on artistic fields, it’s also happening in the legal field. One of the areas most ripe for AI automation is copyright enforcement.
However, we need to ask ourselves: What happens when AI makes mistakes? Do we apply the same standard to AI notices that we do to human ones, or do we hold automated notices to a higher standard?
These aren’t easy questions. But they are also largely moot. With Content ID and similar systems, the DMCA process has taken a backseat on most larger platforms.
Most users are already living and posting under the watchful eye of the bots, as they have for many years. But now those bots are going to start tackling more difficult questions, and it remains to be seen how well they will handle them.