The Dangers and Opportunities of Artificial Un-Intelligence
Updated: May 11
This article was first published in CEOWORLD
There is, perhaps, no greater misnomer than Artificial ‘Intelligence’.
Yet, there is little doubt that the development of machine learning systems will have implications for all of us. And many of those implications are political.
When I published my book exploring the political implications of business activities, some were dismissive, continuing to believe in the fantasy that business and politics are separate and discrete activities. A recent article in the Financial Times stated: “We should all by now recognise that the challenges thrown up by AI are political in nature”.
I decided to try an experiment. I asked ChatGPT to generate a short political speech on the dangers of unlimited immigration. You can find the bot’s output here – a convincing and well-argued position. I knew full well that most readers on the site would not agree with the premise of the piece – which is why I chose it. In fact, the site chose to label the piece as an example of ‘AI bigotry.’
A friend responded by saying that she had changed one word in the instructions – from asking for the ‘dangers’ of unlimited immigration to asking for the ‘benefits’ of unlimited immigration. She described the response she got as ‘beautiful’ and ‘inspiring’ – which presumably means that it reinforced her own political views.
The first thing to note is that this experiment shows that these systems are anything but ‘intelligent.’ They are dumb machines that collect all the stuff that’s on the internet and regurgitate it plausibly but uncritically and unintelligently. A truly intelligent system would have responded to both our questions with a single sentence: “That’s the wrong question because nobody is arguing for ‘unlimited’ immigration.”
What these systems do show is how easy it is for people of any political persuasion to generate, in a matter of seconds, coherent and emotionally impactful rhetoric in support of their own particular political position. Used this way, they are likely a route to further political polarization, if not chaos. We no longer need to interact with anyone – not even in our own dysfunctional social media bubbles – to generate arguments in support of our own political views. It becomes ever easier to keep diving down our own rabbit holes, convinced of the righteousness of our own belief systems and the utter evil of any alternative perspectives.
Needless to say, this is not helpful.
But there is another way we can use these systems if we are so minded. We could use them to generate arguments that are diametrically opposed to our own belief system. Those who support unfettered and fast development of AI systems could ask “Why is AI a destructive technology?” Those who believe that deregulation is the answer to everything could ask “What are the dangers of deregulation – say in the banking system?” Those who support widespread union activity could ask “Why is unionisation a destructive force?” And so forth.
On seeing the responses, we can first of all evaluate our immediate, visceral reaction to them. If our feelings are of disgust and immediate dismissal, then we, too, are bigots with closed minds, unwilling to engage with alternative viewpoints.
If we can get past our initial visceral reactions, we can then engage with the output. Which parts are actually reasonable positions that reasonable people can reasonably find some merit in? Which holes are being punched in my own belief system around these issues – a belief system that I hold dear? Is it possible that I don’t have it quite right; that I should treat alternative viewpoints with respect and engage with them rather than dismiss them out of hand? Given that such alternative viewpoints exist in our societies, what might be a reasonable way forward that treats all conflicting views with respect rather than knee-jerk dismissal?
That approach might go some way towards getting us beyond the ever-increasing political polarisation; the easy outrage, anger and outright dismissal of anyone who sees the world differently from us.
Sadly, I am not optimistic that most of us will use such generative machine learning systems in this way.