Phillip Kelly: Business use of AI – key considerations


With the business use of artificial intelligence (AI) on the rise, there are key legal and contractual risks that businesses using, or supplying, AI need to consider, writes Phillip Kelly.

As with most contracts for the sale of products, any contract for the supply or provision of AI is likely to contain supplier- or developer-favoured allocations of risk.

Businesses supplying AI will try to protect themselves from potential liability by including a provision excluding liability for defective AI. The effectiveness of such clauses has not yet been tested in this context, and it will eventually fall to the courts to assess whether they are reasonable.

The absence of case law makes it difficult to predict how a court would strike this balance, and this is a significant area of risk both for suppliers looking to rely on an exclusion clause and for purchasers hoping to overcome the clause to recover their loss and damage.

Another likely avenue for a potential claimant would be to argue that the AI is defective in terms of its quality or fitness for purpose, and so fails to meet the requirements of the Sale of Goods Act 1979 (SGA 1979) or the Consumer Rights Act 2015 (CRA 2015).

These arguments are not without their own difficulties. Crucially, claims under both the SGA 1979 and CRA 2015 would depend on AI being classified as a “good”, which is contentious. For instance, in the context of computer software, the English courts have ruled that intangible computer software does not constitute a “good” for the purposes of the SGA 1979. It is therefore unclear whether the code underpinning an AI system would be treated any differently.

In cases where there are barriers to relying on contractual liability, a potential claimant will usually look to the law of delict to try to bridge the gap, and negligence is often viewed as an opportunity to impose liability on a party outside the reach of the contract.

The law of negligence is rooted in the foreseeability of loss and in proving a chain of causation between the loss and the party being sued. While each case will be determined on its facts, the features of AI pose challenges in establishing a claim under this branch of law. A claimant may face significant problems in establishing foreseeability, or a causal nexus between the conduct of the supplier and an outcome caused by aspects of the AI developed through machine learning after the AI was initially programmed.

This uncertainty will no doubt benefit suppliers and developers in the short term, but there is a significant risk that the courts will look to adapt the principles of negligence to fit the new paradigms created by AI. The existing product liability regime is also likely to come into play, although difficulties in pinpointing exactly where in the supply chain a defect arose will be problematic where the complaint stems from a feature of autonomous machine learning.

We expect that any reform measures will most likely include: introducing a strict liability regime to cover situations where remoteness or causation might otherwise prove a barrier to recovery against a supplier or developer; an adapted duty of care, for example obligations on a supplier of AI systems to monitor and maintain those systems to control for unexpected outcomes arising from machine learning; express allocation of liability to manufacturers for damage caused by defects in their products (even where those defects resulted from changes to the AI product after it left the manufacturer's control); joint and several liability between manufacturers, developers, suppliers and retailers; and a reversal of the burden of proof, requiring the manufacturer or supplier to prove that the AI product was not the cause of the harm.

The UK has already taken steps to address some of the uncertainties around AI by introducing the Automated and Electric Vehicles Act 2018, which attributes liability to the insurer where damage has been caused by an automated vehicle driving itself.

We can expect that the law of delict will continue to be shaped by legislative and regulatory reform, not to mention the more immediate prospect of developments in the courts as they apply existing common law principles to novel cases on a day-to-day basis.


Phillip Kelly is a partner at DLA Piper
