We need to have hard conversations on the value of AI
by Mark Esposito, Terence Tse and Josh Entsminger


These months have proven emblematic of the dangers of a hyperconnected world. Coronavirus cases continue to grow, and grow fast, and asymmetries are rising around the world at a pace we may not have imagined when 2020 started. Yet the digital nature of our hyperconnected world may also hold some of the critical solutions needed to scale novel approaches to the problems associated with the pandemic. The issue is not the virus alone: it is just as much about the reactions to the virus, such as information on the resources we need to allocate, or an understanding of the wider consequences for businesses trying to respond.


AI solutions

Among these digital systems, few are being heralded more than AI-powered solutions. Despite its apparent novelty, AI is anything but new. The marked increase in experimentation during the pandemic, and the ensuing interest from governments and corporations alike, represents a new global conversation on AI.

Novel uses of AI are quickly spreading across international media: rapid assessment of patient scans at scale for improved Covid-19 detection, improved accuracy in global case tracking and prediction, wide-scale collection and review of online articles for awareness and assessment, and advanced chemical analysis to assist vaccine development. Want some examples? They range from BlueDot's predictive awareness and Alibaba's AI diagnostics, to transportation with Hong Kong Mass Transit's autonomous robotic cleaners, to the promise of health-care AI in Boston Children's Hospital's HealthMap program, all of which demonstrate effective uses of machine learning. Also noteworthy are DeepMind's AlphaFold, the Centers for Disease Control and Prevention's assessment bot, and Facebook's social network safety moderation. The icing on the cake comes from applications with inherent ethical norms, such as BenevolentAI's drug screening program.


As overwhelming as this list of applications is, it demonstrates a broader public hope for, and commercial awareness of, the growing potential of AI as a fundamental piece of the modern technology landscape.

Reality before experiments are scaled

But a dose of reality is needed as the demand for experimentation grows into a demand for scaling. Not all problems demand AI solutions, not all existing AI solutions are up to the task of highly uncertain problems, and not all organizations are advanced enough to deploy and leverage such solutions effectively without creating second-order effects. While solutions at scale are needed, and new practices and means are in place to experiment, we need to be sure that organizations looking to put these experiments into play have a thorough understanding of what the “job to be done” really is. As with most transformations, such agendas are often less about the technology than about the culture, work, and mental models that must change so that new productivity, opportunities, and social advancement are actually achieved and made sustainable.

AI concerns

This concern extends to the question of how national and municipal governments look to leverage these emerging technologies to improve the speed, scale, and sophistication of responses to high-impact, low-probability events like large-scale systemic shocks. Whether it is governments seeking strategic investment in AI competency or firms seeking proven AI applications, similar concerns apply. For a more mature conversation, we need to move from what we want AI to do towards a deeper, more honest conversation on what we need from AI in order to respond to crises without creating a fundamental vacuum of rights.

We need to go further: despite the innovativeness of the cases mentioned, broader strategies are needed for engaging with foresight the principle- and value-driven challenges brought on by AI. This will include creating the means for effective conversations on, amongst other things, whether to sacrifice privacy to ensure health-care capacity, whether data ownership should be private or publicly managed, and whether the potential inequality arising from some applications outweighs the benefits.

What is the value of AI? As states look to AI to reshape their post-pandemic response, we need to have hard conversations on what the value of AI really is. All of this begins with a real appreciation of what AI can and cannot do when subjected to the demands of operational improvement at scale. These conversations need to happen together, and now, to build better frameworks of use. Otherwise the huge potential of these technologies will do little for the betterment of society when we need it most.

About the Authors:
Mark Esposito is co-founder and CLO of Nexus FrontierTech, a professor, bestselling author, and advisor to national governments.
Terence Tse is professor of finance at ESCP Business School and co-founder of Nexus FrontierTech. 
Josh Entsminger is a doctoral student in innovation and public policy at the UCL Institute for Innovation and Public Purpose.

This article is one in the “shape the debate” series relating to the fully digital 12th Global Peter Drucker Forum, under the theme “Leadership Everywhere” on October 28, 29 & 30, 2020.
#DruckerForum
