Striking a Balance: The Quest for Optimal AI Control
Over the past year, AI has been the hot topic for all of us in the tech industry and beyond. We’ve seen everything from speculative documentaries and dystopian predictions to buzzword-filled marketing campaigns from companies showing off their skills at harnessing the power of artificial intelligence.
To quote the director of the MIT Initiative on the Digital Economy, Erik Brynjolfsson, “This is a moment of choice and opportunity. It could be the best 10 years ahead of us that we have ever had in human history or one of the worst, because we have more power than we have ever had before”.
It’s worth taking this quote to heart and, in doing so, stepping back from the here and now to think about the major technological changes of the past couple of hundred years, since the mass industrialisation of the first world.
It is useful to think of technological change as having come in four waves since the 19th century, brought about by a sequence of “general purpose technologies” (ever wondered where that famous acronym came from?).
GPTs are best described by economists as “changes that transform both household life and the ways in which firms conduct business”, and the four most important GPTs of the last two centuries have been the steam engine, electric power, information technology (IT), and now artificial intelligence (AI).
These changes also come with a set of three rules which govern, in a sense, their pace, prerequisites, and problems. Firstly, the rate of adoption has been increasing: technological innovation itself may be proceeding at much the same rate as before, but the time between invention and implementation has shrunk significantly in recent decades.
Secondly, leapfrogging is impossible. For a country to take advantage of new technologies, it must first catch up on the previous gen, to use a 21st-century phrase, before it can move on to the next. For example, widespread electrification was a prerequisite for the IT revolution.
Finally, automation is labour-share reducing, not labour-displacing. This means that large-scale automation has not, as many feared, directly led to mass unemployment, but there is evidence to suggest it has squeezed wages as businesses shift their focus from labour to capital.
Taking all of this into account, AI is certainly the next wave of technological change in society.
So, who has the advantage, and how can we ensure that Mr Brynjolfsson’s concern, that a mishandled AI revolution could give us a disastrous next decade, doesn’t come true?
It was none other than Vladimir Putin who said in 2017 that the country that wins the AI race will rule the world. Russia, thankfully, is now a minor player in this 21st-century space race, with the majority of the power and investment centred in China, the USA, and the European Union.
China and the US are well in front, but the EU is catching up and has a lot going for it as a future superpower on this front, in the same way the other two are.
Before continuing: it may seem strange to compare two countries with a union of 27; however, economic output and population are what matter in this instance, and the EU sits between the US and China on both counts.
This Cold War imagery isn’t supposed to cast AI in the same light as nuclear weapons; far from it. Technology in general, and AI in particular, isn’t inherently good or bad; it is neutral, and it’s the actions of the user that determine its effects on society. It therefore stands to reason that we should enforce some sort of accountability.
So how can we look to control it when it’s already so decentralised?
As with most things of this magnitude, there are three recognised approaches. The first is total private control, meaning financial gain becomes the main focus of AI, more than likely at the expense of the public good. The second is total public control, meaning an increased likelihood of innovation stagnating and of politically fuelled conflict, but also total accountability.
Finally, there is probably the most likely option: a mix of the two, which would require strict regulation across borders, with near-total regulatory alignment, in order to be effective.
The issue with keeping AI solely in private hands is that accountability would be incredibly difficult to enforce, given that private companies are not democratically elected and have no particular loyalty to the public.
The other issue, of course, is that AI would be used for financial gain rather than societal good, leading to potentially disastrous situations entirely outside the public’s control.
On the other hand, total control by a governing body would have the usual implications for innovation and adoption, lengthening the gap between invention and implementation mentioned earlier.
However, it could be argued that this problem would be circumvented by the very accountability such control provides: if the governing body’s electoral future depended on continuous and fruitful innovation, it would be forced to allow freer development.
A stronger argument against total public control is that technology can be copied and used by others at any time, making it impossible to legislate for.
The problem with this argument is that it misrepresents what governing bodies do: they exist to minimise risk, not to eradicate it completely, because that is largely impossible. The same goes for any issue they must draft legislation for.
Illegal streaming is a good example. Governments weren’t able to act in time to stop the film and music industries haemorrhaging money, so the private sector (Spotify and Netflix, for example) built a more convenient, and nearly as cheap, legal alternative to suppress the illegal market and capitalise on it. This arguably saved the modern music and film industries.
Which brings us to the final eventuality and, as previously mentioned, the most likely outcome: a mix of private and public sector involvement, with strong regulation to ensure that the public good is protected.
The main issue with this approach is that, to be effective, it would likely require regulatory alignment across all the major players in this field: the US, China, and the EU.
In today’s political climate, however, this seems unlikely. The best we can hope for is that at least the US and the EU regulate their AI industries fairly and effectively, ensuring both the maximum benefit for the public and the best possible avenues for private-sector innovation.
What is certain, however, is that how we go about controlling this latest wave of technological change will determine our fate over the next decade or so.