The Human Imperative – Extended Abstract – Invitation to Comment

by Julia Kirby and Richard Straub

As exponentially advancing digital technology transforms so much of work and the world, questions inevitably arise about the place of the human being. Some warn of a diminishing role for the human, such as in decision-making, starting perhaps with the simple tasks now performed by chatbots, but soon enough in more creative problem-solving. It is easy to imagine technology’s superhuman powers – its advanced algorithms, deep learning networks, and other AI strengths, its blockchain dynamics, big data processing, and so forth – taking us inexorably toward the Singularity that Ray Kurzweil envisions.

The questions are all the more urgent given the rate of change around us. It’s true that the future is always uncertain – remember the old quip, “prediction is difficult, especially about the future” – but today we are faced with heightened, and seemingly increasing, turbulence. Many look to technology to provide the confidence needed to navigate through uncertainty.

Yet there is a countercurrent emerging that calls for reasserting the human role. The experience of cities and nations responding to the COVID-19 crisis has emboldened these voices, as it highlights the human creativity and judgment essential not only to balance competing social and ethical priorities but to accomplish scientific breakthroughs and overcome logistical challenges. The same countercurrent is rising inside organizations where “data-driven” decision-making so often falls short of the sound judgment that, however tainted by cognitive biases, combines science with common sense.

As economic, fiscal, cultural, and political crises escalate in the wake of the pandemic, the tension between technocratic and humanistic forces is reaching a breaking point. The former see a time of upheaval as an opportune moment to effect a large-scale “reset” of a system currently flawed in many ways. The latter reject any such revolutionary redesign as inimical to human nature, which craves, as Peter Drucker put it, a balance between “change and continuity.” Which is the best way forward, and how can we ensure that it prevails?

Leading thinkers at our 2021 Forum will grapple with important questions including but not limited to the following:

  • Must there be a human imperative at the core of organizations? How would we define it? What threatens it most today? How could good management serve it better?
  • Forced to make decisions under highly dynamic conditions, should organizations rely more heavily on data and analytics? What are the risks of moving away from human judgment?
  • Do we need better ways of discovering truth and thinking through the complex issues of our time? What insights should we take from philosophy, psychology, and other realms to prepare our minds for the age of AI?
  • What should we hope for—and fear—in the aftermath of a year of remote working? Will less in-person contact become the norm? How might human beings as social animals and community builders respond?
  • What lessons can we take from the Covid-19 crisis about the clashing perspectives of scientific experts, policymakers, business leaders, and ordinary citizens and workers—and how they should be prioritized or integrated to best serve the needs of humanity? 
  • As in every time of upheaval, some today say we should not let “a serious crisis go to waste.” But is seizing the chance to enact sweeping change a humane impulse? What can we learn from the history of sudden revolutions, whether political, cultural, or organizational?
  • Central to the human condition is the ability to learn from evidence and experience—both our own and others’. How is it, then, that human organizations prove so resistant to collective learning? How do we stop making the same mistakes?
  • What changes to management education would better equip managers with the knowledge and competences they need today? Are there useful models to be found in how other professions are mastered?
  • Peter Drucker insisted that to be a change leader, an organization must also “establish continuity internally and externally.” But that human-friendly balance he advised means nothing to a computer. Is it still valid as a principle for management?
  • So much of recent human achievement has resulted from growing capabilities in administration and leadership that the past hundred years have been called “the management century.” How can we extend that run and make even greater progress in the future?

This article is one in the “shape the debate” series relating to the 13th Global Peter Drucker Forum, under the theme “The Human Imperative” on November 10 + 17 (digital) and 18 + 19 (in person), 2021.


  1. There is a difference between valid objectives, which emerge from human understanding of a situation, and the action required within that situation. External demands, social regulations, and cultural expectations shape the need. The leadership challenge involves forecasting dynamic need and planning production to match it. Today’s valid objective will change tomorrow. Today’s production systems and products or services that are ‘decided’ via AI are only as good as the logic built into the AI yesterday. AI might be good enough, but in my view human judgement is always necessary even for simple decisions around ‘customer satisfaction’ and ‘discretionary activities’. When the computer says no today, a human must revise the code to imagine and resolve the new scenario the AI was not designed for yesterday. Examples of flawed AI, such as the flash crash in the stock market, indicate that human judgements are inevitably required.
    Overall the themes proposed are valid. Maybe add something on legitimacy to act. (Differences in politics, wealth, and ethics may be relevant subjects).
    For some history around the ‘forecasting’ problem, see Knight, F. H., ‘Risk, Uncertainty and Profit’ (1921; reissued 1933), and Coase, R. H., ‘The Nature of the Firm’ (1937), Economica.
    And for some history on the political vs. legal debates, see Commons, J. R., ‘Institutional Economics: Its Place in Political Economy’ (1934; republished 1959, University of Wisconsin Press).

  2. Reasserting the human role – if not now, when? The pandemic gives us a unique opportunity to question the digital networks that have grown by leaps and bounds. Leadership and management – in both the public and private sectors – are challenged to lead the discussion and act consistently. The Global Peter Drucker Forum is a unique platform, capable of setting standards. The virtualisation of the Forum has increased its importance. A fine and paradoxical example of how digital communication technologies give us the chance to get the human dimension back – or, as quoted above, to balance “change and continuity.”

  3. As we face the economic, fiscal, cultural and political challenges hinted at in the conference abstract, I believe our main concern must not be reduced to the “tension between the technocratic and humanistic forces” in search of answers and solutions.

    Much more fundamental questions are at stake and once answered, they will richly inform the technocratic vs. humanistic debate.

    The first question is about management as a profession. If we agree with Drucker that management is the most important social function, the one that lies at the heart of a society’s ability to build functioning institutions (in business, government and the social sector) in service of the common good, then why haven’t we established management as a true profession? To quote from Gordon & Howell (Higher Education for Business, 1959): a) where is the “systematic body of knowledge of substantial intellectual content”? b) where are the “standards of professional conduct, which take precedence over the goal of personal gain”? and c) where is “the enforcement of minimum standards and competence”?

    In his great book “From Higher Aims to Hired Hands: The Social Transformation of American Business Schools and the Unfulfilled Promise of Management as a Profession”, Rakesh Khurana masterfully shows us what went wrong with the first project to professionalize management. It ended with management largely understood as a technical bag of tricks, and with managers absolved of any meaningful moral obligations. Interestingly, that first project started at a time of great societal and economic change at the end of the 19th century and was seen as instrumental in dealing with the corresponding challenges.

    I would argue we are in a similar situation today. If we are to deal successfully with the aforementioned challenges, we need truly professional management. I also believe we are in a much better position to succeed with a renewed effort at professionalizing management – if only because we have learned so much from the first, failed attempt. So what holds us back?

    The second question is one of worldview. If we talk about the “human imperative” it matters greatly what we believe it means to be human. To say it bluntly: if we humans are nothing but cosmic accidents (as the majority view seems to be these days), then we might just as well stop here. In that case, nothing matters anyway. There is no meaning in our existence, there cannot be any standards for truth. Morality, justice and social good are whatever I think they are. Human nature is neither good nor bad, it just is. True is what the majority thinks is true and that can change tomorrow.

    I reject this worldview and so did Drucker. He believed in objective truth and held to a deeply Christian concept of human nature. I quote Joseph Maciariello from a talk he gave on “Peter Drucker’s Theology of Work”: “Peter Drucker, for much of his life, tried to bridge the work of the executive with the norm, or what ought to be. And as best I know of the Kingdom of God at work, that is what his norm was.” In that same talk, Joseph Maciariello quotes Drucker like this: “I started out teaching religion, I’m only too aware that human beings perversely insist on behaving like human beings. This means pettiness and greed, vanity and lust for power and, yes, evil. After all, my first book was on the rise of Nazism”.

    It is clear why Drucker declared management to be an inherently moral activity, and rightfully so. This means that unless we are prepared to have an honest debate about worldview and human nature, we cannot have a meaningful conversation about the human imperative in management. More importantly, we cannot hope to solve the world’s ills – with or without professional management. Are we prepared to have such a debate?

  4. I look very much forward to discussing these burning issues at the Forum! We should, however, not forget that technology also supports us in many ways (automating burdensome manual tasks, enabling better-informed decisions, etc.). A key issue is perhaps rather how we use technology and for what purposes. Perhaps the problem is not technology as such but rather its “technocratic” use – use that does not take the human aspect into account.

  5. You ask: Must there be a human imperative at the core of organizations? How would we define it? What threatens it most today? How could good management serve it better?

    We imbibe from Peter Drucker that respect for humans needs to be at the core of every organization. What threatens it the most today is explained in Orwell’s preface to Animal Farm: patriotism.
    In one of his books (I think, Post-Capitalist Society), Drucker explains the difference between patriotism and citizenship. He says something like: patriotism is the willingness to die for one’s country. Citizenship is the willingness to live for one’s country.
    Perhaps, the human imperative we need to define today is employee citizenship? I think, management could serve it better by reading Drucker’s books and the works of the many thinkers he cites: Hamilton, Kierkegaard, Shakespeare, and Orwell.

  6. Julia/Richard – a lot to unpack here. I’d suggest that the global pandemic has accelerated our adoption of advanced tech in every facet of our lives, so its pervasiveness is inevitable. Today, humans use tech to augment their work. My computer still needs me to initiate a task, an app, or a device. I envision a not-so-distant future of humans augmenting the tech’s work.

    Let me focus on my area of expertise and work over the past two decades: strategic relationships. At the moment, other than the highly transactional IoT, tech has no relationship with another tech. The interpersonal interactions are between individuals, not logos, buildings, or devices. Sociologists tell us that an average individual can proactively nurture 100–150 relationships at any point in time. Complex analysis of our thousands of contacts and hundreds of deeper relationships is required to understand which ones to nurture, and why.

    What if, every morning I got up, an Iron Man-like Jarvis continuously, proactively, intentionally, strategically – and thus quantifiably – recommended the prioritized list of relationships I should touch base with, add value to, and engage or influence to move my key strategic priorities forward? Imagine how much more productive, efficient, and impactful we would be with me, the human, augmenting the tech. The tech isn’t in a viable position to ascertain my strategic priorities or make sound judgments on why I should spend more time with Julia than Richard. But it absolutely can do the heavy lifting of the in-depth analysis for me.

    I hope my comments have contributed to the discussions. I’m fascinated by this topic and look forward to reading insights from others.
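The “Jarvis” scenario in the comment above could be sketched, in the simplest possible form, as a scoring pass over one’s contacts. Everything in this sketch – the names, the fields, the weights – is a hypothetical illustration, not a description of any real system; a serious implementation would need far richer signals than topic overlap and time since last contact.

```python
# Hypothetical sketch of relationship prioritization.
# All data, field names, and weights are invented for illustration.

def prioritize_contacts(contacts, priorities, limit=5):
    """Rank contacts by overlap with current strategic priorities,
    with a boost for relationships that have gone untended."""
    def score(contact):
        overlap = len(set(contact["topics"]) & set(priorities))
        staleness = contact["days_since_contact"] / 30  # favor neglected ties
        return overlap + staleness
    return sorted(contacts, key=score, reverse=True)[:limit]

contacts = [
    {"name": "Julia",   "topics": ["forum", "publishing"], "days_since_contact": 10},
    {"name": "Richard", "topics": ["forum"],               "days_since_contact": 3},
    {"name": "Alex",    "topics": ["sales"],               "days_since_contact": 30},
]

ranked = prioritize_contacts(contacts, priorities=["forum", "publishing"], limit=2)
print([c["name"] for c in ranked])  # → ['Julia', 'Richard']
```

Note that the human still supplies the `priorities` list – the judgment the comment argues the tech cannot make – while the machine only does the mechanical ranking.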
