Steph Innes: AI kills the influence-star? Using AI in influencer marketing
When asked about the impact that artificial intelligence (AI) is having on influencer marketing, my first thought was to speak to an interested party. Seeing as I don’t know any influencers personally, I decided to ask an AI chatbot, writes Steph Innes.
“Write a short article on the legal issues relating to the use of AI in influencer marketing… please,” I ventured, in the hope that the output would be improved by my manners.
Firstly, the response noted that AI has emerged as a “transformative force in the dynamic world of influencer marketing” – fair enough, and so far, so good. My own reading had told me that the content creator economy is estimated to be worth US$21 billion.
What followed was a summary of various areas of law – consumer protection, intellectual property, data protection and discrimination. All very relevant and correct – but rather than restate that here, what are the key takeaways?
Practically speaking, AI is disrupting the influencer space through the creation of hyper-realistic “virtual influencers”. Examples include Lil Miquela, “who” was active for two years before disclosing that “she” was in fact not a human at all. The associated Instagram account now has over 2.6 million followers. Similarly, analysis of an Instagram ad for H&M featuring virtual influencer Kuri found that it reached 11 times more people than a traditional ad, with a 91 per cent decrease in the associated costs. Greater exposure at a lower cost of course sounds good, but will the lack of transparency around AI use in this space continue, and what are the legal implications?
There are varying uses for AI in influencer marketing – from dynamic ad targeting using AI to ensure that ads reach the most relevant audiences through to content creation itself, supported or directed to varying degrees (or not at all) by humans. The legal issues that arise vary accordingly.
The UK’s Advertising Standards Authority has to date declined to support the introduction of AI-specific rules. The UK’s CAP Code is media-neutral, focussing on the impact of the ad, rather than the manner of its creation or delivery. So what does this mean for the use of AI in this space?
Whilst some stakeholders are calling for advertiser commitments on ethics and transparency, others are calling for legislation to require the watermarking of AI-generated influencer content to highlight the use of AI i.e. to make it clear that what you are seeing is not a real person. In some territories, steps like this have been taken – in India, rules require clear disclosure of the virtual nature of an influencer.
In the UK, even in the absence of AI-specific rules, there are of course both legal and commercial benefits in ensuring that advertising is transparent and not misleading. Advertisers will remain responsible for their ads, and won’t be able to hide behind cries of “the AI did it and I don’t understand how it works…”. Key risks for advertisers using AI in the virtual influencer space are:
- Misleading advertising will fall foul of the CAP Code, even though the Code hasn’t been altered to specifically refer to AI.
- AI can perpetuate bias and discrimination – AI trained on a limited data set might associate certain characteristics with a particular group of people, and therefore generate content which is discriminatory.
- Content generated by AI can create IP infringement risks if it features third-party trademarks or copyright works.
- AI-generated ads can also “infringe” image rights where images of individuals are used – not an IP right in itself, but rather one based on a combination of copyright, privacy, trademarks, confidentiality, data protection and defamation.
Advertisers will be responsible, irrespective of the extent of use of AI or the extent of their understanding of how it works. Falling foul of existing rules could cause advertisers reputational damage, amongst other sanctions. Moving forward, with the anticipated passing of the Digital Markets, Competition and Consumers Bill, the landscape will shift as the UK’s Competition and Markets Authority gets more enforcement powers.
So, in essence, whilst there’s currently no express obligation to watermark AI-generated content in the UK, advertisers should consider how they nevertheless ensure their ads are not misleading – which may, in effect, take some advertisers to the same place. Consideration should also be given to how well the AI products used in this space are understood, and therefore how potential risks such as IP infringement and discrimination can be mitigated.
Steph Innes is a partner at Dentons