In-house intellectual property counsel who spend their days harvesting inventions from engineers and choosing which innovations to patent have been tasked with new duties surrounding generative AI. More and more IP professionals are crafting generative AI policies to guide their engineers in using AI tools responsibly during innovation, writes IAM Deputy Editor Angela Morris
Gunnar Heinisch, managing counsel of Toyota Motor North America Inc, tells IAM that traditionally his role centred on managing the carmaker’s connected vehicle patent portfolio, working with engineers on brainstorming and innovation workshops to generate intellectual property, and deciding which innovations to patent. But recently his job duties expanded to cover artificial intelligence policy too.
“We are trying to get our arms around how we use AI responsibly within the company. That has been a big focus lately – working with our business partners to make sure we are doing it in a way that helps Toyota, is responsible, and we are always thinking of what provides the best experience for customers,” Heinisch comments.
John Tsai, head of patents, IP litigation, open source and standards at Stripe, the payments company, says he now spends more and more of his day on AI questions because of an “explosive growth in how we use generative AI in all our services”. Tsai said during IPBC Global in San Francisco in June: “I am getting a lot of requests and novel use cases presented and we compare them and advise the teams to help us stay clear of any potential issues.”
Tsai works with Stripe’s commercial team to review the terms of service of generative AI tools that engineers are using. Leadership deliberated over how to negotiate with enterprise-level generative AI companies, forming a common strategy to follow for consistency in future deals. He notes that he is concerned that engineers do not understand what they are signing up for when they experiment with a new AI tool.
“There is a lot of educating our engineers and business folks about the risks,” Tsai says. “You have to do it in a thoughtful way because if you just say no to everything, they are going to stop coming to you. You have to think about how to categorise the sets of use cases into different buckets of risk levels, give some operating guidelines and lanes to operate in, because otherwise there are too many barriers and they will just go around them.”
But how should IP counsel find the perfect balance between allowing engineers to use AI tools to innovate, and protecting the company’s intellectual property? There are a lot of considerations to keep in mind. First, let’s explore the IP risks that any generative AI policy should guard against.
Risks to patents, trade secrets, copyright
Michael Borella and Joshua Rich, partners at McDonnell Boehnen Hulbert & Berghoff, have helped companies write generative AI policies for their employees. Borella says large companies were the first to approach them to craft policies for employees using generative AI image generators. Then interest in generative AI policies exploded in 2023 with ChatGPT’s release to the public, Rich adds.
They break down how the use of generative AI tools might compromise IP rights:
- Patents – An engineer’s innovation may be patentable, but if they conduct research and ask ChatGPT many questions, the prompts might disclose the ideas and concepts that the company wants to patent. Borella says: “We do not know what OpenAI is doing in the cloud with this information. We can presume they are keeping a log of these prompts and responses so they can do quality assurance. … You have a third party now with access to your confidential data which you want to use to generate a patent application and you have disclosed it so now the patentability of that invention is in question.” Borella adds that the US Patent and Trademark Office’s policy on AI inventorship states that a human inventor must have made a substantial contribution to an innovation for it to be patentable.
- Trade secrets – Rich states that one big risk to a company is that an employee will disclose critical trade secrets and highly confidential information to an AI tool without comprehending that they have exposed the information to a third party. US trade secret law requires taking reasonable steps to maintain confidentiality, which may be undermined if someone discloses the information to ChatGPT. Plus, Rich points out that the companies running generative AI tools – Google and Microsoft (an OpenAI investor) and others – are involved in many types of technologies. Rich wonders whether they give greater scrutiny to the queries that competitors’ employees input into their generative AI tools. Borella adds: “It could in theory – and we have seen in The New York Times case – end up coming out verbatim for one of your competitors when they put in a prompt.”
- Copyrights – If an employee uses a generative AI tool to turn concepts into prose, Rich says there is a question of who owns the copyright in the work. No one wants to share or lose their copyright to the AI company, he says. Borella adds that images created with generative AI image generators are not copyrightable. The US Copyright Office’s position is that generative AI images are not works of human authorship, although there is a grey area if a person uses a sufficiently detailed prompt.
Guiding principles for AI-use policy
The World Intellectual Property Organization published a guide, “Generative AI: Navigating Intellectual Property”, which addresses the IP questions arising as businesses begin using the technology to generate content. The paper sets out guiding principles and a checklist for companies that need to write generative AI policies for employees.
Keeping secrets
It suggests that companies must institute safeguards to prevent employees from inadvertently giving away trade secrets and confidential information when they are training or prompting AI tools. To mitigate the risks, companies should examine the settings on generative AI tools, consider using tools that operate on a private cloud and ask AI tool providers to create protections for confidential information. It is a good idea to allow only authorised staff to use a generative AI tool that handles confidential information, and to provide training for all staff about the risks of leaking confidential information, says the WIPO paper.
Emilie Lavirotte, senior advisor of intellectual property at beauty company Sephora, mentioned at IPBC Global that her concern with employees using AI tools is that the prompts they input may then be used as training data. It is possible for the AI tool to share that information with the world. Lavirotte noted: “I would love doing a survey of my internal clients, asking them if they are aware of this. … Training teams is key to make sure that they are not sharing important information or even copyrighted material.”
Borella says that employees must get training that is similar to the email security training that is prevalent in many companies, which has taught people not to click links from unknown senders. “That is the kind of thinking you need to instil in individuals who are using these tools. Say, ‘Hey, every time you type in something, think about what you are disclosing and whether that is something the company really wants someone else to know’,” Borella says.
The WIPO publication notes that there are ongoing copyright lawsuits alleging that AI companies scraped and used copyrighted works to train their AI tools and that the tools in turn infringe the works in their outputs. The guide suggested that companies should protect themselves by using tools that respect IP rights, seeking indemnity agreements and vetting datasets by verifying IP ownership. Staff policies and training should focus on guiding employees to minimise the risk of using infringing outputs. It is good practice to check for infringement before using an output.
Open-source issues
For software companies whose developers use AI tools, it is critical to examine open-source requirements. If the AI tool has lifted open-source code and an employee then uses it in a company project, that could impose the same open-source licence on the company’s product. To mitigate this risk, the WIPO paper recommends that companies only use AI tools from providers that use licensed training data, or that are willing to offer indemnities against open-source infringements. When a firm is producing a project for which it is vital to avoid open-source obligations, the company should consider prohibiting staff from using AI tools on that project.
Borella mentions that some companies have entered contracts with one of the major generative AI companies and they limit their engineers to using those AI tools. Rich adds: “If your engineers are going to be using it for data analysis you are going to want the strongest possible security – completely sandbox any product that you are using.”
Kyle Barlow-Williams, intellectual property counsel at Aurora, the self-driving truck company, stated at IPBC Global in San Francisco in June that generative AI models come with conditional open-source licences carrying custom terms and conditions. His view is that this creates legal uncertainty and legal risk because, unlike established open-source licences, the custom terms have not yet been litigated and are not fully understood. Aurora’s engineers and other employees want to use third-party generative AI tools, which has created work for the firm’s legal team and affected Barlow-Williams’ day-to-day work.
Some AI tools’ terms of service say the user owns the content they put into the tool and the content that the tool generates for them. But other terms state that the AI company owns everything and the user receives a licence back to it. Plus, two companies can receive different versions of the terms of service depending on which package they buy. Some say this issue is a minefield that requires in-house counsel to stay on top of the current terms of service for every AI tool that their company’s employees are using.
The WIPO paper noted that AI-generated content is so new that it is not yet clear whether it can be protected by IP rights. WIPO suggests that if IP ownership is critical to a company’s business model, it is wise to be very careful. Some AI tools have terms and conditions stating that the AI company owns the IP in their outputs. A company might negotiate to ensure it can own the copyright in outputs when an employee uses an AI tool. Even when someone uses AI, the human should apply his or her own creativity to edit the output, and the person’s role should be documented.
Write a policy people will follow
The critical consideration when writing a generative AI policy is “to make sure that people are going to follow it”, says Rich. IP counsel should talk with the people whom the policy will affect. Engineers and scientists must understand the implications of using the AI tool. The policy should find the balance between allowing them to do their jobs effectively and protecting IP.
“It is one of those really difficult things where you do not want to say to your software engineers to never use an LLM,” Borella says. “There may actually be many advantages to having them use an LLM. But you have to get them to understand there are certain things you should never use an LLM for unless we have an absolutely airtight agreement with the provider.”