The end of history and the last bot

philwoodford
Jun 12, 2023
Does AI leave society on permanent repeat? (Image: Pixabay)

The end of the 1980s might have been the ‘end of history’.

Political scientist Francis Fukuyama wrote a paper which subsequently became a celebrated book. His argument was essentially that liberal democracy had triumphed in an evolutionary battle over communism and that a new era of rather mundane Realpolitik was set to open up.

In the light of all the extraordinary events that have unfolded since — from the terror campaigns of Islamist fundamentalists and the continuing growth of authoritarianism in Russia and China, right through to the gradual erosion of democracy in the US and parts of western Europe — the Fukuyama thesis has understandably been tested to its limits and much derided by critics.

In 2023, however, perhaps we are glimpsing a very different end to history through the explosion we’re seeing in artificial intelligence.

Don’t worry. I’m not one of the doom-mongers predicting that the end of the world is nigh or… errr… A-nigh. I mean, we may end up falling prey to some malign superintelligence and biting the dust, but I suspect the percentage chance is small-ish and it should still be a way off. Fingers crossed and all that.

No, I’m speculating that we are potentially facing another type of Big Crunch. You see glimpses of it in articles tackling artificial intelligence from a variety of different angles.

Tristan Cross writes here in The Guardian of how he retrained to be a web coder and now sees this work threatened by AI. He talks of capital becoming detached from labour — a big idea in anyone’s book — and the fact that everything from now on will be produced by the all-powerful tech. And then there’s a very interesting passage in which he refers to ‘humanity’s collective recorded cultural output’ being ‘concluded, bottled and corked at this specific point in history’ (my emphasis).

This exact issue has been troubling me for a little while now.

Working as a freelance writer and copywriting trainer, I’ve obviously been involved in quite a few discussions about the impact of generative AI in recent months. The large language models, such as #ChatGPT, work through prediction. They are fed an enormous corpus and then make judgments about the most likely and plausible responses to particular prompts. If asked A, B or C, the most likely first word in response would be X, most likely followed by Y, then Z, and so on.
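If that sounds abstract, here’s a deliberately crude sketch in Python of the same idea. It ‘learns’ from a tiny corpus by counting which word follows which, then generates text by always picking the most common continuation. Everything in it, from the corpus to the greedy one-word look-back, is an illustrative assumption of mine; real LLMs are neural networks trained on tokens and vastly more data, but the core move is the same.

```python
from collections import Counter, defaultdict

# A toy next-word predictor built from raw word-pair counts.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# Generate by repeatedly appending the most probable next word.
word = "the"
sentence = [word]
for _ in range(5):
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))  # prints: the cat sat on the cat
```

Note where the greedy prediction ends up: back at ‘the cat’. The model can only ever recombine what it has already seen, which is the point the rest of this piece worries away at.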

Inevitably, their understanding of the world today is built on the world as it was yesterday. The free-to-use version of ChatGPT is always at pains to point out that it knows little of events beyond its cut-off point for training data in 2021. (I’m always puzzled by what the ‘little’ that it does know actually is, or how it comes to know it. But that’s by the by.)

And an obvious response to all this is to say, well, the training data gets updated. Or — as with GPT-4 — we give LLMs access to the web and ask people to cough up some dosh to access this supercharged platform. In this alternative universe, the AI has source material and input that is contemporaneous to the prompts it receives.

But what if (and this is, I think, what Tristan is suggesting) the new material is increasingly of non-human origin?

For example, I want ChatGPT to create marketing copy in 2023. It draws on everything it has seen in terms of structure, approach and style and turns out something that is broadly plausible, albeit unoriginal. It is effectively a pastiche of how marketing copy should read. And how marketing copy should read is something that has hitherto been based on human interpretation. Yes, data and test results have played a big role in shaping technique, but so have subjectivity and artistic interpretation.

Now let’s imagine we’re in 2027 or 2028 and that, in the intervening period, more and more marketing content has been created by AI platforms. The training data is expanded, but with each passing year, an increasing percentage of the new content has been produced by AI. Increasingly, we start to see a pastiche of the original pastiche.
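Tristan’s ‘bottled and corked’ worry can be put into toy-model form too. Researchers studying this feedback loop call a related effect ‘model collapse’. In the sketch below, again an illustrative assumption of mine rather than anyone’s real pipeline, each generation fits the simplest possible ‘model’ (a mean and a spread) to the previous generation’s output, then generates fresh data and keeps only the most ‘plausible’ middle of it, much as a predictive model favours its likeliest continuations.

```python
import random
import statistics

# Each generation: fit a trivial "model" (mean and spread) to the current
# data, sample a new dataset from it, and keep only the middle 80% of the
# samples, discarding the unusual tails. All numbers are arbitrary.

random.seed(0)
data = [random.gauss(0, 1) for _ in range(1000)]  # the "human" originals

for generation in range(1, 11):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    print(f"generation {generation:2d}: spread = {sigma:.3f}")
    samples = sorted(random.gauss(mu, sigma) for _ in range(1000))
    data = samples[100:900]  # keep only the most "plausible" 80%
```

Run it and the measured spread shrivels towards zero within a handful of generations: nothing the model fails to reproduce ever makes it back into the training data, so the range of what it can produce narrows with every cycle. Swap ‘spread of a number’ for ‘variety of marketing copy’ and you have the pastiche-of-a-pastiche problem.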

Of course, human beings can experiment with new approaches to the content. Some organisations and brands will opt for authenticity and declare themselves #GPT-free. But the chances of this kind of human endeavour competing successfully against the likely algorithmic tsunami seem pretty slim, don’t they?

Marketing schmarketing, I guess. The issue described above troubles me as someone who works in the field, but it probably wouldn’t rank highly on the general public’s list of concerns. But what if other things start getting pickled and stuck permanently in the past?

If you ask DALL·E, the text-to-image generative AI, to create a picture of a CEO, the pictures that come back are overwhelmingly of white men. This is hardly surprising because, in the past, CEOs were pretty much all white men. And even though there may have been some improvement in recent years, the raw training data for the model is the bank of images that has existed to date. The sexism and racial bias that have been such a feature of the past are preserved in the present and amplified for the future via AI.

Here, there may be a little more hope that human intervention can shift the balance. Some brands have been prompting AI to create models who don’t exist in order to counterbalance the lack of diversity in their marketing and advertising campaigns. Cynics might argue, though, that they would be doing more of a service to humanity by actually employing models from more diverse backgrounds and paying them a fee.

In the world of recruitment, certain types of people have been historically more likely to apply for particular jobs and to be appointed. Tech roles have been predominantly male, for instance, whereas caring professions have seen a much higher representation of women. What if the algorithmic targeting of advertisements to prospective applicants continues to reflect the world as it was, rather than as it is or as we’d like it to be? It’s a live issue, as this CNN report reveals.

Of course, we’re scratching the surface here. Artistic and cultural output is particularly vulnerable to the ‘frozen time’ phenomenon. As AI-produced video content and music become more and more common, the plot lines and aesthetics are inevitably reflective of the creative ideas and inspirations of generations of artists, performers and directors. This not only has huge implications in terms of intellectual property, but it also creates a self-perpetuating, self-referential cultural bubble.

Imagine you’re an ambitious actor. You might still, in theory, be able to influence the direction of thespianism by giving the AI new training data with which to work. But this presupposes that you’ll be asked to act at all. Industry insiders are worried that acting jobs will soon disappear, while Hollywood scriptwriters have, in part, been prompted to strike because of generative AI.

Let’s say AI had — rather implausibly — taken off in the 1990s rather than the 2020s. To what extent would our musical, movie-going and cultural tastes today be determined by the ‘cut-off’ point back then? How far would our society be stuck with the prejudices and Weltanschauung of that particular era? The Spice Girls might never disappear, but instead be reincarnated endlessly in different guises. Like a CD that was permanently stuck.

Related articles

Beware the bot whisperers

Who knows the future of the knowledge worker?
