Those who like to make projections about the future tend to be those with the deepest knowledge of fledgling technology. In the current context, this means projecting how Artificial Intelligence, Big Data – the systematic gathering of intelligence on our habits and preferences as employees and customers – and 3D Printing are set to transform many industries and lifestyles. Such futurists like to project a geek’s paradise of an information-rich, automated world of smart cars, smart buildings and bespoke manufacturing.
But has futurism been here before? In the 1950s and 1960s in western Europe, there were projections of what the world would look like by the Year 2000 (remember that?), and plans based on such projections were not always effective. The new inventions of the day, principally in the realm of transport, dominated thinking. So we had architects’ drawings of super-highways, urban roads, space-rockets and smaller flying vehicles.
By the early 21st Century, however, it was clear that the dominance of technology in such planning had caused harm. Communities had been torn apart by urban motorways. In some cities exhaust fumes posed a direct threat to human health, to say nothing of carbon emissions and climate change. Many cities are now reversing those errors: introducing pedestrianised zones, creating more green spaces, banning cars – trying to let communities breathe again. Such initiatives not only improve health and the environment, but also tend to boost economic development and reduce crime.
Are we making the same mistake again in the 21st Century? Are too many plans based around what technology can do, rather than what people need?
A truly radical idea to help the future work for us is for research on technology, and on the human community, to be much more closely coordinated. This should be based on the unifying concept that technology ought to serve people, while observing timeless ethical principles: first, do no harm; never use people as a means to an end; and so on.
An examination of the evidence base for effective human organizations and other communities points clearly to the need for a step-change improvement in our leadership and management. As I have set out in my book The Management Shift, this can be summarized as a shift:
- From a controlling mindset to an empowering one,
- From setting rules to establishing principles,
- From issuing instructions to creating teams,
- From overseeing transactions to building alliances,
- From a focus on short-term profits to serving all stakeholders.
Collectively, business schools and the wider management community have not fully modernized the business model, as I wrote in an earlier blog. There is still the cultural bias of referring to people as ‘resources’ and to people management as ‘the soft stuff’, as though organizations do not comprise people. The rich evidence base on maximizing employee engagement and collective intelligence is applied patchily, at best. The most dynamic employers create meaningful careers for staff, reward them well, and pay attention to collective morale and engagement. Not only do such employers provide a more human working environment, they also tend to perform better – including on financial measures.
Slow implementation of such research findings opens up the risk that technology will be badly deployed. Everyday experience often bears this out. For example, customer service automation frequently presents the frustrated consumer with a short and inappropriate list of options.
We often have intelligent IT, but unintelligent organizational design. The separate ‘silos’ of the conventional corporation mean that technologists, the personnel function, customer service managers and the marketing department are kept at arm’s length from each other – or even further apart. If website programmers are not communicating well with product designers and marketing staff, ideally co-creating technology, services and products together, and if morale in the workplace is poor, the customer experience is unlikely to be thrilling.
Meanwhile, an ever-present fear is that ‘Big Data’ will rapidly become ‘Big Brother’ – snooping on us as workers and consumers and compromising our privacy, seeking to trick the unwary customer, rather than engage them in a lasting relationship. The surest defence against this is to maintain a relentless and principled commitment to nurturing the human community, and honouring timeless ethical principles.
A lesson from corporate scandals – LIBOR-rigging, horsemeat passed off as beef, illegal phone-tapping – is the primacy of ethical conduct. It is a glaring omission that management has never developed its own ethical code, similar to doctors’ Hippocratic Oath. It may be impractical to have everyone with managerial responsibility formally registered, but a move towards a more formal statement of intent to avoid fraud, deception, cruelty, exploitation and so on would be a liberating initiative – especially as we see how such enlightened practice actually helps the business perform better.
If technology is misused, through snooping on staff and customers, there is a risk of a backlash against all technology, and a rise in people opting out of social media altogether.
What would be even more revolutionary than Big Data, etc. would be a Copernican shift in management thinking, in the spirit of Peter Drucker and other enlightened thinkers. This would replace the current misanthropic obsession with company structure and data, with a philosophy based on an understanding of real human communities, and how businesses can profit by serving them.
This philosophy will not only get the best out of people, it will get the best out of technology. Along with the urban motorways of the 20th Century, we need to ditch the cold, hierarchical corporation of silos, separation and obedience.
About the author:
Vlatka Hlupic is Professor of Business and Management at Westminster University and CEO of the Drucker Society London. She has advised major international organisations and is a management consultant.