The Automation of Work

What the automation of work entails – for work and for people.

Automation is about taking what humans are needed to do and having machines do it instead.

What drives it? Sometimes nothing more than simple economics – bringing down costs. That’s how it’s usually seen, and no wonder it’s derided as anti-human: replacing human beings, depriving them of work, for your own gain. That motive is often there, but I think it drives adopting automation, not creating it.

There’s often another impulse behind creating it, one that’s undoubtedly pro-human. The belief that a human should only have to do what a human is needed to do, unless he wants to. Something tedious and repetitive is not what you want to have to spend time on. Automation is freedom from tedium.

Types of Work

So much for what drives it; what happens once it’s there?

To think about that, I’ll split work into four quadrants along the dimensions of meaningfulness and usefulness. It’s a gross simplification, and obviously personal rather than universal – what one person considers useful or meaningful needn’t be what another does – but it’ll suffice.

What do I mean by the words useful and meaningful? That deserves an entire essay of its own, but I’ll leave it at this: useful meaning it makes someone’s life better in some way; meaningful meaning that I, the one doing it, think that what I do matters.

Useful but not meaningful – probably routine things like clearing people’s dues, handling their complaints, handling sewage, making deliveries, working the checkout counter, and the like. Basically, someone needs to do it, but not many people would want to. Things that are low paid but essential; yet do them for a little while and you might begin to question your existence. And if you ask kids what they want to be when they grow up, you definitely won’t find these in the responses.

Meaningful but not useful – things like performance reviews and recommendation letters. These don’t really do anything – not the way products like food, clothes, or gadgets, or services like cleaners, delivery agents, or mechanics serve needs (at best, they can lead to something else that might do something useful). But they mean something. A recommendation letter, for instance, means something because one person expects that another person sat down to write something genuine about a third person – something the first can base his decision on. Setting their time on fire is a signal – that this was worth the time it took to write. In fact, what I’m doing right now – writing this – is probably in this bucket too.

Neither useful nor meaningful – formalities, things done just because you’re supposed to do them, like sitting in the office till late for the sake of being seen doing it, or calling a meeting because someone’s said there needs to be a meeting, and so on. Or things you do that you know won’t do anything – like making high-level strategy presentations telling other people, in very vague terms, how they should be doing their job (which you’ve never done yourself), and leaving it at that. The funny thing about this segment, now that I think about it, is that it has two sub-segments, one of which is very highly paid and the other very low-skilled.

Useful and meaningful – I’m not sure how to classify this. What I do know is that it’s not an inbuilt feature of any particular work or profession. Which is to say it’s unlikely to hold across all or even the majority of any field (though I’m sure it’s present more in some than in others). I can imagine some doctors who feel their work means something and helps someone, but I can imagine many more who don’t. I can imagine some lawyers who feel so too, but many more who don’t. Just as I can think of some writers who genuinely change the way you think about something, but many more who churn out what they know sells. The same probably holds for artists and for people working in companies, governments, or non-profits – some would feel that way, but I can’t imagine all or even most of them doing so. So I guess it’s as much a function of the person’s makeup and particular context as it is of their work.

Consequences of Automation

Automation affects all of these in different ways.

The useful but not meaningful bucket is the first to be impacted. Not surprisingly, because it’s stuff that needs to be done, but usually not willingly. Drones to handle complaints, robots to clean drains, self-driving vehicles, and so on.

The meaningful but not useful chunk is impacted too. As Ethan Mollick points out in ‘Setting time on fire’, AI can do the job not just faster but better than most people. Which is to say “that you may actually be hurting people by not writing a letter of recommendation by AI, especially if you are not a particularly strong writer”. And so it is for college applications, performance reviews, presentations, and the like – unless you really like doing it, you’ll find it hard not to let AI get the job done better with less effort.

But it means that this bucket diminishes – often what gave much of this work meaning was the signal, the time set on fire. Now that that’s gone, the meaning likely goes too. I can imagine (unfortunately not simply imagine, but actually recall) trying to pretend (and others playing along) that a pointless presentation or document I’m working on matters, even though I know that at best someone might just skim it in a few seconds. But now that I can generate it in a few seconds, it becomes much harder to maintain the charade – it’s no longer just useless but also meaningless. And so this bucket collapses into the third bucket – not useful and not meaningful.

Or does it? I was going to say so; in fact I had already written this paragraph: the two buckets – ‘not useful and not meaningful’ and ‘meaningful but not useful’ – become one, because the latter dies. What does its dying mean? Perhaps that it will become impossible to do meaningful work that isn’t also useful.

But now I think this won’t be entirely true. It will be true where effectiveness is closely and positively correlated with value – effectiveness meaning the outcome you produce, value meaning how the end user feels. This is usually the case where emotions and feelings don’t enter the picture. Take churning out documents and presentations – your boss, if he cares at all, cares only about the final product, not about how you felt about it or how you did it. It holds here because effectiveness translates to value, and efficiency (how much output you get for a given input) in turn correlates with effectiveness, and thus with value. Tech that multiplies efficiency magnifies effectiveness and hence value – which means that anything that increases efficiency is good.

Suppose though I want to gift someone something – I could just have my AI assistant find out from theirs what they’d most want and give them that. But would the happiness of the recipient depend on receiving the optimal product, or on something more nebulous – something I’m aware I’m highly unqualified to talk about – like the feeling in the gift-giver and the thought behind the gift? Or think of dating applications where the AI assistants simply filter each other’s profiles and set up optimal matches at optimal venues. In short, picture any scenario where the most ‘effective’ product or service or outcome needn’t be the one that makes the user feel the best. Maybe here it’s more useful to do something less useful but more meaningful.

The neither useful nor meaningful work hopefully takes a dent too. Writing tedious reports simply to fulfil formalities – reports you know for sure no one will ever read (where you don’t know for sure, you can impart some meaningfulness, however small) – now no one needs to actually write them either. Perhaps it’ll even be possible to offload ceremonial work to AI assistants soon – sitting in the office till late, attending events where you simply register your presence. Perhaps, but I’m not very sanguine about it – in my opinion the human capacity and love for bullshit is too strong for any innovation to trim.

The last is useful and meaningful work – perhaps here you get the greatest boost of all. A person’s output increases, and much of the tedium and grunt work can be delegated away. They retain only those aspects where their skill is truly needed, increasing their throughput.

There are other possibilities though. You can imagine different relations with technology. ‘Leveraging’ implies a superior-inferior relationship between the human and the tech. It’s what I do to write this on a laptop – I leverage the technology, but I can’t dream of it replacing me and writing in my place. ‘Collaborating’ is a partnership of near equals – someone who wants to write an application to a college or for a job might collaborate with ChatGPT and refine and prune its output. ‘Overseeing’ is a different relationship – an inferior-superior one. The word ‘oversee’ makes it sound like you, the human, are in charge, but it’s really more like ‘observing’ – the human (the one operating the machine, at least, if not the one who designed it) is the subordinate here. The machine is fully competent – at any rate, far more than you or any human can hope to be. You simply observe it, and occasionally try to chip in with some contribution, more to feel you’re doing something than to actually add any value to the process.

As tech improves, I imagine processes move from leveraging to collaborating to observing. The particular stage depends on the kind of work as much as on the technology. For millennia, writing involved only leveraging – paper, ink, whatever technology was available. Quite rapidly, we’ve moved from leveraging to collaborating and even observing. Of course, the stage varies with the context. I can imagine an LLM writing performance reviews and recommendation letters entirely by itself, with me simply observing. But I can’t see it (as of this date) writing this piece for me.

What it comes to is that, maybe, if automation gets good enough to outperform most humans at meaningful and useful tasks, you retain the usefulness but lose the meaning. Take a doctor performing surgery, a judge pronouncing verdicts, or a lawyer arguing a case. Reduced to simply observing (overseeing) AI, they might question their contribution.

Humanity benefits as overall usefulness probably shoots up by several multiples – more surgeries performed, more cases disposed of. And we do progress – “Civilization advances by extending the number of operations we can perform without thinking about them”. Now you don’t even need to think about performing surgery, the way you don’t need to think about infrared radiation and LEDs when using a remote control.

I wonder though, what happens to the individuals who used to perform these tasks? Are they happier?

Bottom of the Pyramid

When you change work and how it’s done, you inevitably affect the people who do it. It sounds noble, the thought of freeing people from drudgery, leaving them to do only what they’re needed for. And it is how civilization advances – only when some people are freed from wondering what they’ll eat or who’ll protect them, because others will produce food and fight, can they do other things.

And it is ennobling, but not for everyone. I imagine it is highly so for those who drive the innovation. What they’re doing is highly useful and highly meaningful, at least in their eyes.

And for another segment – those who can access the innovation, grasp how to use it (not necessarily understand it, the way you can use Wi-Fi and airplanes without knowing how they work), and respond to it – things might go either way. Offloading meaningless and useless work to artificially intelligent assistants could be satisfying. Losing the meaning in your own work probably won’t be. But there are many ways people can play it out – from saving yourself for those aspects of work you find worthwhile to simply finding value in something outside of work.

But for a not insignificant chunk of people, this ennobling is threatening. Those who perform either the useful but not meaningful work, or the not useful and not meaningful work. The former genuinely face a danger of being replaced by technology – their work is essential, so it has to be done anyhow, and tech might do it better and cheaper. The danger to the latter isn’t so grave, though they make far more noise – the possibilities for the creation of meaningless work are infinite.

The response is usually the old Captain Swing at work, the fear of replacement. But that’s just the outward form it takes. It goes deeper, and the trite suggestions of ‘upskilling’ and ‘retraining’, thrown around loosely, don’t acknowledge that it takes more than simply teaching an old dog new tricks.

Because what one person would call drudgery, something to long to be freed from, is for another a security to cling to. Sometimes a person doesn’t want to be ‘freed’ from drudgery, even if that seems incomprehensible to someone else. And perhaps they’re not wrong either. Perhaps it’s even an honest recognition of the fact that one can’t do anything better, or at any rate, doesn’t want to.

I remember telling someone what a waste of a human being it was that a person had to stand and mark on a sheet whoever entered the cafeteria to eat, so that they could be billed – how easy it would be to replace him with a technological solution. And being told in turn that that person wouldn’t want to be freed; in all likelihood he was extraordinarily grateful to be able to solve the problem of making a living by doing something requiring virtually no skill whatsoever. The fact that it was unskilled and monotonous wasn’t a drawback but a bonus – he probably couldn’t do the job if it weren’t. Whether the person or the circumstances are to blame is another question, but in similar scenarios in other contexts since, I’ve found this analysis to be accurate.

The cause of the difference in these two attitudes is usually simple – the base of Maslow’s hierarchy, physiological needs. Worrying about tedium is presumably a luxury, something people who have the time and resources to read and write this kind of stuff can afford. Those faced with a hand-to-mouth existence can’t afford to indulge in it – the question of tedium is secondary to the question of securing needs. And those of us in the first camp can’t hope to understand the latter – perhaps intellectually, as I’ve tried to do here, but not truly.

And maybe, many people don’t want to go beyond Maslow’s base either – in Kafka’s words, “So long as you have food in your mouth you have solved all questions for the time being”. Nor is there any reason they need to ascend Maslow’s pyramid if they don’t want to.

Automation – of which AI is a major part, though not the whole – will probably continue to transform work dramatically, as it already has. I’m sure it’ll create awesome new capabilities, as it’s already doing. Which gives a fillip to doing meaningful, useful work: it lowers the bar to producing such work and makes producing it easier, which means more people can do it, and they can do more of it.

The amount of meaningless work some of us do individually with our time (meaningless work done directly by a person) might, hopefully, now shrink. But the amount of meaningless work done as a whole (total meaningless work produced) will probably increase manifold, given the sheer efficiency and ease with which it can be produced (without a human having to do it). As will the meaninglessness of individual pieces of work (essays like this? You can just generate them).

I doubt it’ll eliminate useless work though, even if it multiplies the efficiency with which it’s done. The human capacity to produce pointless work is unbounded. It might well multiply its magnitude instead, as did computers and smartphones and the internet and most innovations. Keynes foretold that around this time (2030) people would work only 15 hours a week; instead, they work significantly more than they did in his time. Useless work is, after all, a cultural creation – the demand comes from people, who give it economic value and thus ensure supply. Technology only helps meet the demand humans create.