Finding something that machines can’t do
by Lesley Crane


One of the pervasive and persuasive myths associated with burgeoning technology in the workplace is that it will create thousands of new and liberating jobs: the truth is more like a wholesale stripping of human employment (see Liviu Nedelsecu). Another preaches that technologies will afford workers more time: more time to think, to reflect, to be creative, to learn and innovate, to work from home. Recent visits to several private and public sector organizations told a different story. I was left with a strong impression of people running hard to keep up – grateful for the business, but perhaps at a loss as to how to fit more hours into a day. This is arguably indicative of a workplace culture in which it is technology that defines the tasks people do. It takes little stretch of the imagination to envisage technology coming to define roles and responsibilities, even values. Nowhere is this sign of things to come more easily seen than in the practice of organizational knowledge management.


The modern practice of knowledge management (KM) emerged in the early 1990s, in the resource drought left by the business process re-engineering debacle. The influential theory of the knowledge-creating firm promoted by Ikujiro Nonaka and his colleagues offered three ideas which, although not new, were nonetheless embraced with enthusiasm – perhaps because they came labelled with an attractive glittery badge called ‘knowledge’.


First, there was the idea that firms are not just input-output information-processing factories. Instead, they should be seen as generators of knowledge and information, whose success relies on efficient and effective interaction with the environment in which the firm operates.


Linked to this, the second idea promoted knowledge as the firm’s most important and valuable asset, which in turn elevated knowledge creation and sharing to the top of corporate agendas.


The third idea, and probably the most influential aspect of Nonaka’s theory, is its definition of knowledge as having two constituents – tacit and explicit. It is this last idea which has largely driven modern KM, arguably into a brick wall. Tacit knowledge – difficult to articulate, influential on action, and the most valuable form of knowledge – must, on this account, be converted into explicit knowledge in order to leverage its potential. That is a simplistic rendering of the theory, but it is the version most often taken at face value. The upshot is a global industry and practice dedicated to harnessing (tacit) knowledge, largely through the application of technology. By all accounts, little of it has actually worked.


The typical KM practice within an organization is centred on some kind of monolithic database into which workers are expected to record their everyday experiences, share their professional profiles, and communicate with whomever. Sophisticated systems might even record and analyse people’s technology-use behaviours, generating data for predictive analytics (a practice which has every promise of evolving into a third myth). The focus, then, is on motivating and incentivising people to engage with the technology. Whilst this is a satisfactory organizational aim in the generic sense, it promotes a de-humanized view of knowledge.


Like mono-directional, generalist approaches to organizational learning, this approach to the management of knowledge misses the critical point about human behaviour: most knowledge is created and shared, and most learning takes place, in discourse – in social interaction (see Nancy Dixon).


But there is more to it than this, and here lies perhaps one of the greatest differentiators between intelligent machines and humans – perhaps greater even than creativity and irrationality: tacit knowing. Paul Duguid, in his essay The Art of Knowing, reasons that tacit knowing, or ‘knowing how’, is what makes explicit knowing – ‘knowing that’ – actionable. You cannot have one without the other. Further, decades of research in cognitive psychology yield a view of tacit knowledge as that which the agent abstracts automatically and unconsciously from the environment, and which influences action. Connect this to the idea that around 95% of what we do in any given day is done automatically, without conscious control (see Bargh et al.). Conventional approaches to organizational knowledge management and learning ignore all of this.


Here is the nub. To a greater or lesser extent, people are continuously learning and sharing knowledge in social interaction. The ways in which we do this, and their consequences, are influenced and shaped by the contextual particulars in which we interact. This is, in essence, how we make sense – mostly automatically – of the environment in which we exist. Even when we read the words of another, the writer has no control over how we interpret and make sense of what is written. Writer and reader interact in much the same way as two people talking. This is the stuff of innovation and new knowledge, the oil of decision-making and problem-solving.


To pay no attention to the influence and function of tacit knowing in the modern, technologically driven environment is to give primacy to technology. So what happens when, as many predict, machines are developed which learn and think for themselves – IBM’s Watson, for instance? Promoted as leading to ‘human intellectual advantages’ by enabling new human-computer partnerships, Watson is still a highly sophisticated computer designed to learn and think. If the organizational practice of knowledge management, and other practices like it, continues to emphasise the use of technologies in, for instance, human knowledge sharing, then it is conceivable that the human element may one day be deemed no longer necessary. Computers are, after all, better at sharing ‘de-humanized’ knowledge than humans are.


Tacit knowing is a common enough ability in humans, but one that technology cannot replace. As Paul Zak recently reminded us, Peter Drucker saw work as a social enterprise. Drucker also envisaged social innovation as being of far greater importance and impact than any technology. Draw your own conclusions.


Reference: Bargh, J., & Chartrand, T. (1999). The unbearable automaticity of being. American Psychologist, 54(7), 462–479.


About the author:

Dr. Lesley Crane is an independent consultant specializing in effective human communications in organizational knowledge, learning and leadership. Her forthcoming book, Knowledge and Discourse Matters, published by J Wiley & Sons, elaborates on the themes and ideas touched on here.
