Who knows the future of the knowledge worker?

philwoodford
5 min read · Jun 7, 2023
A photo-realistic doctor ponders (Source: DALL·E)

When my wife’s dad was a kid, growing up in a village on the outskirts of Sunderland in the North-East of England, the boys in his class were treated to a pretty unique experience. They went en masse to the local mine where they were destined to work, joined the colliers in the lift and headed underground from the pit head. It was supposed to be a trip that cemented their relationship to the hard manual labour that lay ahead for the rest of their days.

If any of the lads had qualms about the dust, sweat and grime, there was always the option of the deafening shipyards a few miles away.

My father-in-law took one look at the claustrophobic working conditions in the wartime mine and vowed to make his escape. And that’s exactly what he did.

In the 1950s, he headed south and eventually studied to become an architect with a local authority in London. He became a ‘knowledge worker’, abandoning his blue collar in favour of a white one. Or, as Robert Kelly labelled it in the 80s, a gold collar, because of the decent salary and pension that went with it.

Since then, we’ve lived in an era where so-called ‘knowledge workers’ have been highly prized. Their ability to solve problems and create economic, technological and social value was, in many ways, seen as the ultimate in career aspiration for most people. The old Clause IV of the British Labour Party’s constitution demanded that people received ‘the full fruits of their industry’ whether working ‘by hand or by brain’. With each passing decade, it seemed there was less of the hand and more of the brain in developed economies.

Now, human knowledge appears to be threatened in the 2020s by an explosion in artificial intelligence.

Of course, it’s already been happening by stealth for a number of years, with algorithms doing a whole host of jobs that humans might once have done. The buying and selling of advertising space. Transactions in financial markets. Planning and logistics in supply chains. What’s shifted the dial is the emergence of impressive generative AI, which allows us to create imagery from text and — perhaps more critically for knowledge work — plausible text from simple conversational prompts.

So AI is now in the hands of everybody from their phone or desktop. Social media managers, video editors, copywriters and journalists are under threat. So are translators, administrators, customer service representatives and telemarketers.

There are broadly three stages of response from these knowledge workers to the Damoclean cyber sword that hovers so ominously close to their necks. We’ve seen them all in the short period since the killer version of ChatGPT emerged late last year.

Stage one is, of course, denial. This technology could never replace me.

This is the ultimate in baloney. People have already been replaced.

Stage two is the belief that knowledge workers can reinvent themselves as interlocutors with the AI, perhaps interpreting and repackaging its wisdom. But as I’ve written before, beware the bot whisperers. There are plenty of people who’ll tell you that only they can prompt the oracle. Hmmm. They use their decades of clandestine prompting practice to run training courses and offer consultancy.

The reality is that school kids can use these platforms with very basic conversational English. And the bots themselves are already suggesting refinements for wayward human prompts.

And, of course, we’ve already seen the perils of knowledge workers relying on the tech to help them do their own jobs more efficiently. A lawyer in New York filed a submission to a court which cited plausible precedents from non-existent legal cases. (He seemed unaware that ChatGPT is prone to what the Silicon Valley tech bros describe as ‘hallucinations’.)

Stage three is when reality hits. It dawns on the knowledge worker that they can only survive by offering something better than the AI. And this is when many understandably start to panic.

There will be some fields in which this is more plausible than others, no doubt.

Arguably, I can still write something more original about AI than AI can write about itself. The problem is that the bot is quicker and cheaper than me. Some businesses will therefore inevitably opt for bland rehashed tosh that’s available in 30 seconds and eschew a writer inclined to send them an invoice.

What about the professions? That’s a really interesting area.

We may be reluctant to cede medical diagnoses to AI and demand that someone qualified looks at the algorithm’s read-out before prescribing powerful medications or amputating limbs. The problem is that the knowledge possessed by the latest generations of artificial intelligence will quickly start to outstrip that of a symposium of surgeons.

What do we do when an AI’s analysis of a blood test or scan is more likely to be correct than that of a doctor? The bot can, after all, see patterns and clues that humans often cannot. How do we feel when its recommended treatment for a condition is based on a synthesis of every clinical trial that has ever been conducted and every peer-reviewed paper ever published on the subject?

At this point, a doctor is in danger of being a knowledge worker whose knowledge is redundant.

Professions such as medicine, accountancy and law have historically relied on people learning huge volumes of facts by rote and then interpreting them intelligently in real-world situations.

A lawyer sees the relevance of one court case to another, because she has studied a large number of principles and precedents. She knows where to look for source material she can quote in correspondence or to a judge in court.

A doctor examines a troubled knee bone and realises immediately it’s connected to the thigh bone. He draws on what he’s learnt at medical school and in clinical practice to make those connections, offer diagnoses and suggest treatments.

There seem to be only two things standing in the way of AI taking over this kind of work completely.

The first obstacle is technological. The hallucinations have to stop, and the likes of Google, Microsoft and other technology giants need to find a way of weaning the models off the magic mushrooms. (There is apparently a debate in the tech community as to how far this is possible and the extent to which hallucination sits at the very core of the whole generative AI model.)

The second obstacle is human. How far do we want the most important of decisions to be taken by machines? If we’re going to make a stand, it had better be soon. Very soon.

philwoodford

Writer, trainer and lecturer. Co-host of weekly news review show on Colourful Radio.