An intelligent approach to AI and intellectual property

In just a few short years, artificial intelligence (“AI”) has gone from science fiction to a commercial (and legal) reality. Whilst the widespread use of driverless cars and the rise of robot butlers may still be some way off, the truth is that, in the tech sector, AI has gone beyond a buzzword and is now a key element of many businesses’ product offerings. This means that suppliers and consumers alike are already having to grapple with the unique commercial and legal issues it raises.

The term “AI” is a bit of a moveable feast. A purist would argue that true AI requires a combination of four things: machine processing; machine learning; machine perception and machine control (i.e. a powerful computer, which can teach itself, absorb information from the world around it and move within that world to aid its task). However, the term is more often used to refer to a system that has just the first two elements — a computer program that can analyse and process large quantities of data, extract and learn patterns from that data, and then use what it has learnt to provide “intelligent” responses to specific requests.
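
By way of illustration only (this sketch is not part of the original article), the “learn from data, then answer requests” pattern described above can be shown in a few lines of Python, assuming the scikit-learn library and some hypothetical invoice data: a model is trained on historic examples, learns the patterns in them, and is then asked to respond to a new request.

```python
# Minimal illustration of "learn patterns from data, then answer requests".
# Hypothetical example data: past invoices labelled as disputed (1) or paid (0).
from sklearn.linear_model import LogisticRegression

# Training data: [invoice value in £k, days overdue] with known outcomes.
X_train = [[5, 10], [50, 90], [2, 3], [80, 120], [10, 30], [1, 0]]
y_train = [0, 1, 0, 1, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)       # the system "learns" patterns from the data

# A new request: is this invoice likely to be disputed?
print(model.predict([[60, 75]]))  # an "intelligent" response based on what was learnt
```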

To a certain extent, therefore, recognising that AI is often just applied software helps one to understand how it can be monetised in business. However, the way AI systems are created and used does give rise to issues that “traditional” software licensing is ill-equipped to address.

Ownership

One of the principal problems posed by AI in business is the difficulty surrounding ownership, both of the AI system itself and of its outputs.

A common approach to AI creation is a collaborative one, whereby an AI developer partners with an organisation that has the large quantities of relevant data needed to train the system. Whilst the developer brings the expertise, the system will go nowhere without that data. Generally speaking, the developer will “own” the resultant system, but will grant its data provider a right to use it in return. However, is the data provider happy for the developer to share the system with competitors? Will the data it has provided form part of the outputs of the AI? Does the data provider own all of the data it is providing? All of these issues need to be considered carefully at the outset of a project and ideally agreed in writing.

Another, increasingly common, scenario is where an AI developer offers a “finished” AI system as a commercial product and the customer uses its own data to train the system specifically for its business’s needs. Whilst this does not generally give rise to questions over the ownership of the system as a whole, issues can still arise over the specifically tailored AI. Is it a “new” system? Who has liability for the outputs if they infringe someone else’s rights? And what happens if the customer wants to move to a different AI provider: can it extract its data and/or the specifically “trained” element of the system? Again, there is no “right” answer, but all of these issues should be addressed as early as possible.

Liability

The other potential pitfall in implementing an AI solution is liability. An AI system lacks legal personality, so it cannot itself incur liability, and it also brings with it the so-called “black box” problem. Unlike code written by a person, some AI systems store what they have learnt in a form that cannot easily be read by humans or reverse engineered, so it can be impossible to discover why a system made a particular decision or produced a particular output. In such cases, liability is likely to fall upon the person or entity who controls or directs the actions of the AI. However, as explained above, that is often unclear where one party has created the AI and another has decided what data to feed into it or what questions to ask it.
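
As a rough, hypothetical illustration of the “black box” point (again assuming Python and scikit-learn, and not drawn from the article itself), a trained neural network holds what it has learnt as matrices of numbers rather than human-readable rules, so inspecting them does not reveal why any individual output was produced:

```python
# Rough illustration of the "black box" point: a trained neural network's
# knowledge is stored as numeric weight matrices, not human-readable rules.
from sklearn.neural_network import MLPClassifier

# Same hypothetical invoice data as above.
X_train = [[5, 10], [50, 90], [2, 3], [80, 120], [10, 30], [1, 0]]
y_train = [0, 1, 0, 1, 1, 0]

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# The "reasoning" behind any single prediction is spread across these numbers;
# printing them does not explain why a particular output was produced.
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer} weight matrix shape: {weights.shape}")
```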

Businesses should ensure that there are contractual indemnities in place covering any acts of the AI that infringe a third party’s copyright. Similarly, it is important to be aware of where the data used in the AI system comes from, to avoid infringing third parties’ IP rights or misusing confidential information.

AI and intellectual property: the future

Driven by the continual advancement of big data and ever better computer processing capabilities, the use of AI is set to continue its exponential growth. This will undoubtedly be positive for business, with recent research by Accenture suggesting that AI will boost productivity in developed economies by up to 40% by 2035. The Chancellor also promised a £1 billion investment in last year’s Budget in the hope that the UK will become a world leader in AI technology. However, businesses should also be mindful of the need for an accompanying increase in awareness of the legal and ethical issues AI presents.

Written by Tom Lingard, partner at Stevens & Bolton LLP
