Wednesday, March 12, 2025

Our data, our decisions, our AI future: why we need an AI Regulation Bill


There were many consequences of the extraordinary timing of last July's General Election. One was that my AI Regulation Bill, which had made its way through all stages in the House of Lords and was just about to go to the Commons, was stopped in its tracks. Almost a year later, a new government and another Parliament have provided the opportunity to reintroduce my AI Bill, as I did last week.

If the need for artificial intelligence (AI) regulation was pressing in November 2023, when I first introduced my Bill, that need is now well past urgent and, it seems, even further from fruition.

How the sands have shifted, both domestically and internationally. A UK government, keen on AI regulation while in opposition, slated an AI Bill in the King's Speech last summer. Now, some eight months later, there is still no sign of a Bill, and there appears to be an increasing reluctance to do anything much until it has been squared with the US.

Making the case for regulation

At the Paris AI Action Summit earlier this year, a declaration for inclusive and sustainable AI was signed by international participants, although both the UK and the US declined to put their pens to that paper.

Further, the AI Safety Institute has been renamed the AI Security Institute, signalling a definite shift towards cyber security rather than a broader focus on "safety" that would include mitigating the risks associated with the societal impacts of AI models.

All of this makes the case – the more than urgent case – for UK AI regulation. It seems we still have to slay that falsehood which recurs with tedious inevitability – that you can have innovation or regulation but you can’t have both. This is a false dichotomy. The choice is not between innovation or regulation. The challenge is to design right-sized regulation – a challenge that has become much more pronounced in the digital age.


Every learning from history tells us that right-sized regulation is good for citizen, consumer, creative, innovator, and investor. We all know bad regulation exists – sure, there is some of that around – but that is bad regulation; it in no sense tells us that regulation of itself is bad.

Take the UK approach to open banking as an illustration, since replicated by over 60 jurisdictions around the world. A determined, thought-through regulatory intervention created in the UK – good for the consumer, good for the innovator and investor.

We know how to get right-sized regulation, well, right. Nowhere is this more important than with AI, a suite of technologies with such potentially positive, transforming opportunities – economic, social, psychological. All potentially positive if we regulate it right.

A regulatory approach

My attempt to design a flexible, principles-based, outcomes-focused and inputs-understood regulatory approach for AI is set out in the provisions of the Bill.

First, an AI Authority. Don't think of a huge, bureaucratic, burdensome behemoth – not a bit of it. We need an agile, right-touch, horizontally focused, small "r" regulator, intended to range across all existing regulators to assess their capacity and competency to address the opportunities and challenges AI affords – and, crucially, through this to identify the gaps where no regulator or regulatory cover exists, recruitment being one obvious example.

The AI Authority would stand as the champion and custodian of the principles set out for voluntary consideration in the previous government's white paper – those principles now put into statute through this Bill.

The Bill would also establish AI responsible officers: any business which develops, deploys or uses AI must have a designated AI officer. The AI responsible officer would have to ensure the safe, ethical, unbiased and non-discriminatory use of AI by the business, and to ensure, so far as is reasonably practicable, that the data used by that business in any AI technology is unbiased.

Again, this need not be bureaucratic or burdensome. Proportionality prevails, and we already have a well-established and well-understood path for reporting, through additions to the provisions set out in the Companies Act.

With no current AI-specific regulation, it is us, as consumers, creatives and citizens who find ourselves exposed to the technologies. Clear, effective labelling, as provided for in the Bill, would hugely help. 

It holds that any person supplying a product or service involving AI must give customers clear and unambiguous health warnings, labelling and opportunities to give or withhold informed consent in advance. Technologies already exist to enable such labelling.

Similarly, the Bill supports our creatives through intellectual property and copyright protection. No AI business should be able simply to gobble up others' property without consent and, rightly, remuneration.

Public engagement

The most important provisions in the Bill are those around the question of public engagement. The Bill requires the government to “implement a programme for meaningful, long-term public engagement”. It is only through such engagement that we are likely to be able to move forward together, cognisant of the risks and mitigations, rationally optimistic as to the opportunities. 

When the Warnock inquiry was established to do just this as IVF was being developed in the 1980s, we had the luxury of time. The inquiry was set up in 1982 and the Human Fertilisation and Embryology Act came into force in 1991.

Technologies, not least AI, are developing so rapidly that we have to act faster. The technologies themselves offer some of the solution, enabling real-time, ongoing public engagement in a manner not possible even a few years ago. If we don't address this, the likely outcome is that many will fail to avail themselves of the advantages while simultaneously being saddled with the downsides – sharp at best; at the extreme, existential.

To conclude, we need regulation – cross-sector AI regulation for citizen, consumer, creative, innovator and investor. We must make this a reality and bring to life, for all our lives, that uniting truth – our data, our decisions, our AI futures.


