Chris Elwell-Sutton: Scotland’s AI strategy could give it the edge
Scotland could turn its vision for ethical regulatory principles into a competitive advantage, writes Chris Elwell-Sutton.
According to the International Monetary Fund’s latest report, AI could boost the UK’s uninspiring productivity by up to 1.5 per cent annually. Analysts estimate the sector could add £520 billion to the economy by 2030, realising the government’s aim of creating a “tech superpower”.
Foundation models are developing quickly, often surpassing human performance and at times exhibiting a disturbing capacity for deception. The potential human costs include discrimination, unemployment, privacy and IP violations, and (depending on who you listen to) extinction-level catastrophe. These risks were deemed so serious that tech leaders called last year for a six-month moratorium on AI development.
Last November’s Global AI Safety Summit reflected these concerns, albeit through a business-friendly lens; 28 world leaders signed the Bletchley Declaration, committing to international cooperation and a “pro-innovation and proportionate regulatory approach”.
So are harmonised global AI standards around the corner? Not quite. Despite agreement on big ideas, the question of regulation has driven significant divergence between the EU and UK.
With enforcement expected in early 2026, the EU AI Act is guided by a risk-rating system, banning certain AI uses outright, including subliminal manipulation and social credit scoring. Others are subject to strict compliance criteria, while a light-touch approach applies to “low risk” uses.
The UK is adopting an agile approach (set out in a 2023 white paper and the subsequent response in February) anchored in globally recognised governance principles, including safety, transparency, fairness and accountability. With no new law in place, regulators give effect to the principles through sector-specific guidance.
Challenges have been raised over clarity, enforcement and protections for individuals, but progress is being made. The Information Commissioner’s Office released guidance for businesses scrambling to square AI with GDPR, while the new Digital Regulation Cooperation Forum is streamlining regulators’ efforts.
How long this non-statutory position will last is unclear. A governance-heavy private member’s bill presented by Lord Holmes revealed what future legislation might look like, while recent reports teased a sooner-than-expected cross-sector regime.
Scotland’s AI framework might also be affected by politics, given the unravelling of the SNP/Green coalition. For now, Scotland has a clear vision, having been first to release a National AI Strategy in 2021, focusing on trustworthy, ethical and inclusive AI. Direct legislation on AI sits outside Holyrood’s devolved powers, but the Scottish government has other tools at its disposal.
The job of implementing the AI strategy sits primarily with the Scottish AI Alliance, a partnership between the Scottish Government and The Data Lab, Scotland’s innovation centre for data and AI. A feature of the emerging Scottish AI landscape is the AI Register, delivering on transparency and accountability principles by requiring Scottish public sector bodies to publish details of their AI deployments.
Steph Wright, head of the Scottish AI Alliance, said: “This strategy ensures that AI technologies serve as a common good for the people of Scotland. A business environment that minimises harm is inherently more attractive and socially sustainable.”
Can human-centricity set Scotland’s AI sector up for success? Nothing is certain, but similar questions were asked when Tim Cook controversially placed consumer privacy rights at the centre of Apple’s philosophy – ultimately viewed as a commercial masterstroke.
At a time when boards and investors increasingly see ESG and human rights as business enablers, now may be the right time for Scotland to turn its principles into a competitive advantage.
Chris Elwell-Sutton is a partner at TLT LLP. This article first appeared in The Scotsman.