Management, Peter Drucker famously said, is a liberal art. It may be informed by data and enhanced by technology. But it remains an art, not a science, and one practised by people, not machines. Do you want to be managed by an algorithm or a “platform”? Me neither.
But, you may object, this is sentimental and out of date. Computer code already influences our lives in all sorts of unseen ways, nudging us into purchasing decisions and co-ordinating our customer experiences. Apps make things happen. They connect passengers with available taxi drivers, or instruct delivery drivers and couriers to bring goods to our offices or our homes. Sophisticated and productive factories and “fulfillment centres” operate with precious few human hands in sight. And those human beings who do remain will be monitored and “supervised”, in the first instance, by technology, not people.
Does the human manager face obsolescence, then, to be replaced by shiny and unflinching machinery? Hardly. Computers process data at miraculous speeds, and are only getting quicker. But do they exercise judgment? Can they actually think? Or do they just do what we, the humans, tell them to do?
Writing for the McKinsey Quarterly in 1967, Drucker made his view clear. “We are beginning to realise that the computer makes no decisions; it only carries out orders. It’s a total moron, and therein lies its strength,” he wrote.
Again, technologists may quibble. Fifty years on, “artificial intelligence” and “machine learning” are held up as a powerful challenge to Drucker’s robust dismissal of supposedly moronic technology. Didn’t IBM’s Deep Blue defeat the apparently invincible Garry Kasparov at chess? And if that wasn’t good enough, how about Google’s AlphaGo, which beat the world Go champion only last year? That was simply not supposed to happen – Go being an infinitely subtle and varied game that requires the human touch and human insight to play it properly.
The march of the machines is formidable, irresistible, and broadly to be welcomed. The story of the past three hundred years, in compressed and over-simplified form, is one of technological innovation making old ways of doing things redundant while achieving greater efficiency, generating greater profits, making people better off and creating a need for new jobs and new kinds of work. (I warned you it was an over-simplified version.) Techno-pessimism is unhelpful and too gloomy by half. The basic fact of what is often called “digital transformation” or the “fourth industrial revolution” is that new jobs will be created even while others are destroyed.
What is not being destroyed is human life: human beings and their needs. This means that human-oriented goods and services will still be needed, delivered by other living, breathing human beings. And while work is still being carried out by people, human managers will be needed, too. Drones and robots cannot do it all.
Consider the growing need for health and social care, demanded by citizens living longer and fuller lives. Robots may have a role to play here, perhaps to supplement and support the work done by people. But could a machine ever truly care for a person in the way that a well-trained and well-managed living employee can? (That verb: to care. If we are using it properly, it implies the presence of a living thing, not a robot. Robots may or may not be morons, but they surely do not care.)
Fifty years ago Drucker saw a paradoxical benefit in the arrival of computers. “It forces us to think, to set the criteria. The stupider the tool, the brighter the master has to be – and this is the dumbest tool we have ever had,” he said.
Computers are dumb no more. They have phenomenal capacity, processing power, and speed. They can learn. They can get better at what they do. They also don’t get tired, don’t complain, and require neither food, holiday, nor payment. They may be some people’s idea of the ideal employee.
But we need to keep the claims of the technologists in perspective. What is grandly labelled “artificial intelligence” may not always be quite as clever as all that. We should not presume that the machine will always come up with the best answer. Coders are human, after all. And as the writer Margaret Heffernan has observed, “Artificial intelligence is unlikely to be the solution to genuine stupidity.”
Management does not just mean caring; it means paying attention. Gadgets can monitor and measure how many steps we have taken or how quickly we have completed a task. But machines cannot supervise us in the way that a human manager can. Homo sapiens is not redundant yet.
In that same McKinsey article Drucker also observed: “We must learn to make knowledge productive.” This remains the fundamental human challenge facing human managers.
This article first appeared on LinkedIn.