
The Future of AI Regulation

Let's take a look at the legislative landscape around AI and examine whether current laws will be sufficient for the algorithmic age.

Disruptive technologies tend to arrive in a blizzard of related developments and innovations, far ahead of any regulations that may eventually govern them, initially striking fear and foreboding into governments and publics alike.

It was so for the printing press[1], for the industrialization of drug production in the 19th century[2], and, more recently, for GPU-powered cryptocurrency, which has stormed ahead of regulatory oversight, challenging governmental authority and traditional economic models[3].

And now, after more than fifty years of false starts (as described in our machine learning overview), the current boom in artificial intelligence has gained enough credibility and market traction to pose a similar challenge to lawmakers. It threatens historic systems of production and consumption, and it is embedding itself into a society that is struggling to understand its workings, and that lacks laws modern enough to address the possible significance and reach of an emerging 'algorithmic age'.

In this article we'll examine some of the approaches and solutions that various governments are taking to develop meaningful legislation, along with the central issues that are driving public and industry pressure for increased regulation and oversight of AI software development.


The Nascent State of AI Legislation

Nearly all democratic governments are currently on the back foot regarding the regulation of AI, since the technologies under discussion are proliferating either from the private sector, where regulatory interference has long fallen out of favor[4], or from China, which has not relinquished state control of its national technological base in the way that the West has[5] (see below).

A resolution of this tension is necessary, partly because after-the-fact regulation tends to be driven by public demand for state action in the wake of damagingly controversial incidents, but also because a lack of legal clarity can inhibit long-term investment[6][7].

A Global Wave of Ethical Treatises On AI

Notwithstanding that commercial interests may prevail over ethical consensus, we can perhaps discern the trends of future machine learning regulation from the 100+ ethical guideline documents that have emerged from academia, governments, and government-affiliated think-tanks over the last five years[8].

Most of these guidelines originate in the West: roughly a quarter were proposed by the USA, nearly 17% by the UK, and at least 19 by institutions of the European Union.

[Figure: Distribution of issuers of ethical AI guidelines by number of documents released]

The core themes that recur across the ethical reports and roadmaps studied are, in descending order of prevalence:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom and Autonomy
  • Trust
  • Sustainability
  • Dignity
  • (Social) Solidarity

Fears That AI Regulation Will Impede Innovation

In general, the governmental guidelines and working papers that have emerged so far express high levels of concern that premature regulation may stifle innovation in the machine learning and automation sector.

In 2020, a White House draft memorandum on guidance for AI regulation concluded that 'Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth'[9].

Likewise, the UK AI Council's roadmap report[10] typifies the UK's longstanding enthusiasm to gain ground in the AI sector, while maintaining a very circumspect approach to the development of new legislation around AI.

In the press, the UK's departure from the European Union has been seized upon as an opportunity to abandon the EU's greater commitment to AI regulation in favor of the more laissez-faire policies of the US[11], an approach that has been criticized as irresponsible[12].

Pressure from China

This fractured landscape of ethical and legislative timidity might seem surprising, since, except for China, the major AI powers all signed an OECD accord for a global governance framework in 2019[13].

In reality, the long-term stability of China's political administration, together with its leadership in AI venture capital investment[14], its deeply state-driven economy, and its avowed determination to lead the world in AI development by 2030[15], is instilling a kind of 'guilty envy' in competing democratic nations, which increasingly regard oversight and regulation as a significant competitive disadvantage[16].