Can a skill ever become ‘obsolete’? To put it specifically, can technology make a skill ‘pointless’ to put effort into learning?
The example everywhere today is obviously AI. Is it pointless to learn to code? Are writing, reading, and maybe even art – poetry, photography, filming, singing – unnecessary if technology can do it for you?
It’s not a new question, of course; prior examples abound. Did printing make writing, or calligraphy, pointless? Did the advent of cars make horse-riding needless? Will self-driving cars in turn do the same to driving? The same principle applies to crafts like woodworking or weaving.
A specific example makes things concrete, and because it’s contemporary, AI works well, although a general argument ought to hold across domains.
In the first place, what is ‘redundant’? Bypassing a discussion on semantics, something is redundant if it is neither a means nor an end. To put it simply, I might code or write for fun (for the sake of it) or for utility (as a means to do something). If I don’t like it, and if I don’t need to do it, then I probably won’t – and hence it’s redundant, at least for me.
By that logic, it’s easy to argue that virtually nothing will become universally redundant, that is, of no value to anyone in the world. The earth is vast, and somewhere, I’m sure, you’ll always find someone who’ll want to write or stitch or sing or draw or ride horses or drive even when there’s no necessity to.
So, regardless of what anyone says will happen in the future, and how particular skills might become obsolete, that’s no reason for me to abandon them if I enjoy them.
But now, putting aside the intrinsic reason to do things, is there a case that certain skills will be made obsolete, and hence one shouldn’t learn them, except for recreation? That’s a harder and more interesting question.
There are two ways to approach it. One is a technical argument – look at the specifics of each case and try to predict the future. Can self-driving cars get so good that they can handle chaotic traffic conditions and rugged terrain? Can autonomous robots develop the nuance, grip, and speed to handle complex tasks and replace human workers? Can LLMs, or AI in general, come up with original ideas, and write and produce content as good as or better than a human could?
The technical approach is one I’ll sidestep entirely. For one, completely lacking domain expertise, I have no locus standi whatsoever to comment on the nitty-gritty of any of those examples, particularly when far better-versed people hold widely varying opinions. Technology and the future are notoriously difficult to get right, and even experts miss the mark so badly that, decades later, it’s amusing to look back and wonder how they could have got it so wrong.
And then, even if I were one hundred percent certain my assessments were correct, at best I’d be able to make the argument for one particular example. No, writing won’t be obsolete, because LLMs can only predict content based on what they’re trained on, not come up with new material. Or, driving will be obsolete, because it’s only a question of a machine’s latency beating a human’s response time to a stimulus. Whatever the logic, it applies to that specific domain only, and tells us nothing about others. And after all, if you are going to talk through your hat and make far-fetched arguments, you might as well be ambitious and try to encompass everything.
So I’d rather take a more general approach to the question of whether skills do become obsolete (other than for intrinsic pleasure). There are some rather obvious edge cases – unlikely or rare exceptions – where skills do matter, which are worth getting out of the way first. Technology doesn’t penetrate the entire world, so pockets remain where there aren’t phones, laptops, and the like – hence, you never know when you might require an ‘obsolete’ skill. And then, nothing is error-proof – AI-generated code often comes with bugs, AI-generated reports have errors, autopilot systems might fail or misfire – and so, someone with real skill and knowledge is valuable in those situations.
While true, these are edge cases, exceptions. It hardly seems worth spending tremendous effort and time honing a skill just for the eventuality that maybe, someday, somewhere, it might come in handy. The cost-benefit doesn’t tally; most would probably prefer to chuck it and take their chances.
So now the question is – putting aside recreation and exceptional circumstances, is it still worth putting in the effort to learn skills that technology might be able to do for you instead?
The boring answer to that is that it depends – but on what? Firstly, on what I want to do with the skill, and how good I need to be at it. It’s common to hear that ‘you don’t need to know how to code’, but who is ‘you’? A project manager or even a data analyst perhaps could get by without it, but a developer – and especially a good one? Similarly, an F1 driver would probably need to learn to drive, even if, in a world of autonomous vehicles, most people didn’t.
Which is to say, if a skill is a core competency, my bread and butter, something in which I need to have a high level of performance or output, then it’ll probably never be obsolete or a waste of my time to learn. I could get by with AI-generated images and videos whenever I need them, but a filmmaker or artist hoping to compete in the market or hold people’s flickering attention mightn’t. I could get by with using AI to produce essays, but an author hoping to write a bestseller probably couldn’t. And I could get by using AI to write code, but someone at big tech pushing a change affecting a billion people probably couldn’t. None of which is to say that they can’t leverage a tool to enhance their output, but relying entirely on it is another matter.
Why can’t you rely entirely on a tool? One reason is obviously the potential for mistakes, especially if the product is going to reach millions of people. But the main reason I think is simply differentiation, or quality. You might or might not believe that AI can – or ever will – produce blockbuster movies or books, or build fantastic apps. If you believe it won’t, then it’s easy to argue that there’ll always be an avenue for human skill in these fields.
But even if you concede the possibility that it might, still, I believe, human skill will not be redundant, for the simple reason of differentiation. Imagine a world where with a single click you can write great books, or build stunning apps or websites, or produce awesome movies, or create amazing trading algorithms.
Does it follow that everyone is now a bestselling author or movie producer or millionaire entrepreneur or a hotshot trader? I think not. What follows is either a raising of the average level of the products, or some sort of personalization. Personalization would mean that you no longer need a market to consume content; everyone just produces their own movies or books by instructing the AI. Whether you’re in the mood for an action flick or a detective novel, you just ask, and it’s delivered to you, custom-made to your tastes.
While possible, I find this hard to imagine (even if technologically feasible) – a lot of the value of products is social, discussing and sharing views about them (especially in the case of content like movies and books). More likely is that the average level of quality in the market rises, similar to how the mobile phones or laptops available today are much better than those of two decades ago. Improved technology raises the baseline level of all products, but it doesn’t make every product equal, let alone perfect.
If a single click produces amazing books or movies, then everyone can and will produce them, and the world drowns in them. But it hardly works that way, with every product commanding an equal market share. If perfection were achieved, then it might, for there are no degrees of perfection. If AI produced books or movies or products that were perfect, that simply couldn’t be improved, then perhaps all would be equal. But in the absence of perfection, there’s a lesser and a greater, and each competes for and appeals to different markets, whether on cost or culture or features. Within each niche, though, there’s always a Pareto principle or power law at work, with a small number winning the lion’s share.
So, in this market of elevated quality, what distinguishes an incredible (incredible even by our new standards) product from an average (again, relative to the new standards) product? Either the producer’s AI model or the producer’s human touches. But if it were the AI, then everyone would presumably use that better model, nullifying the difference – so it can’t be. Something, then, separates the leaders from the rest, something that can’t be replicated by everyone.
That something obviously depends on humans, for the very reason that some can do it and others can’t – whatever you might call it: skill, talent, luck, genetics. In short, you might have the most stupendous technology that can write books or create songs or build applications for you, but that won’t negate talent and effort. Regardless of claims that there’s no ‘point’ in learning to code or to write well, or in delving into any specific domain or pursuit (no need for lawyers, no need for teachers, no need for therapists – AI will do everything), effort and dedication do pay off, and for the willing, there’ll always be an avenue to do something with themselves.
It’s even clearer if you take a zero-sum competitive game, like trading. An AI that comes up with the best profitable trading strategy (and already, many are promising to sell them) would spread like wildfire, promising to make everyone rich. Even ignoring the glaring question of why a company that could earn through trading with its AI would sell subscriptions to that AI instead of just using it itself, the idea is still inconsistent. If everyone adopts the same strategy, that strategy no longer works, for you could do better by pre-empting it – not to mention that, akin to everyone in a theater rushing out through the narrow doors after a shout of ‘Fire!’, only a few will make it, and most will get hurt.
The idea that a black-box technology – something you don’t need to understand, but just use effortlessly to get what you want – will erode and level out differences among humans, as well as propel anyone and everyone to the highest level, is, I think, driven by a combination of laziness and well-meaning wishful thinking (other than, of course, the monetary incentives of those selling the black boxes). Laziness, because it’s appealing to imagine being an expert or making incredible products or content without having to put in any effort. And well-meaning, because you can achieve not just paradise, where everyone is happy, but the communist paradise, where everyone is also equal. Solve not just poverty but also inequality in one go.
Black boxes are always suspect. One needn’t shun them – you can use an LLM or Wi-Fi without knowing how they work – but the ones that offer advice, like an LLM, ask you to follow them on blind faith, like a God. Sometimes, you can argue, God makes sense, and is right – but in a crowded marketplace where everyone and everything claims divinity and asks you to follow them on faith, the very idea of God is suspect.
Even with the best of technology, then, I don’t think human effort or skill will ever be redundant, and it would be very dull if it weren’t so, with everyone the same at everything.