The debate about ChatGPT has swept across the world like a storm, with hundreds of millions of users overwhelming OpenAI’s servers. Many people are both terrified and wowed by the new artificial intelligence (AI) tools, which seem to shake the very foundations of sectors long thought to be the last to be automated: education, art and design, and many others that rely on human creativity, critical thinking, and communication.
Before we get too caught up in the hype, it’s important to examine why these new AI tools are making headlines. The answer is simple: they provide helpful, human-like responses to our questions.
The AI genie is already out of the bottle
Let’s begin by providing some context on Artificial Intelligence (AI). AI is a broad field of computer science that encompasses a variety of techniques and technologies for creating algorithms that can perform tasks that would typically require human intelligence, such as perception, reasoning, and decision-making. Google’s CEO has even gone so far as to say that AI is more profound to humanity than fire.
AI technology is already permeating almost all industrial sectors. Manufacturing companies are using machine learning to improve their processes and optimize supply chain operations. Retailers are using AI technology to improve operations and enhance customer experiences through better inventory management, demand forecasting and much more. Big consulting firms offer all kinds of AI-labeled services with the promise of improved decision making, increased operational efficiency, enhanced customer insights, and the holy grail of competitive advantage. The use of AI-tools in business, management, and organization is having a significant impact on the way leaders think and behave. At the same time, we must also be aware of the potential perils of AI technology and consider the ethical implications of its use.
Over the last few years, experts have provided deeper and more balanced perspectives on the promise and potential perils of AI technology. Notable AI experts such as Ray Kurzweil, Nick Bostrom, and Max Tegmark have all argued that the development of AI has the potential to bring significant benefits for humanity – yet they also warn of potentially serious negative consequences, including the loss of control over the AI created. When AI moves from narrow intelligence (today’s AI tools) to general intelligence (human-level), and then surpasses it, self-improving algorithms could outsmart their human creators, leading to unintended and unimaginable consequences.
Generative AI is a subfield of AI that focuses on creating models and algorithms that can generate new data or content, such as text, images, or music. Generative AI models are trained on large amounts of data and can learn the underlying patterns and structure of the data. Once trained, these models can be used to create new, previously unseen data that is similar to the training data, but not identical. However, they are not immune to the ethical concerns that come with the use of AI, of which bias and privacy are just two.
If AI was the new thing, generative AI is the “new new” thing.
Large Language Models
It’s human nature to think that AI tools think and know, especially when they talk back to us. However, this anthropomorphism is misplaced. The latest AI tools are just advanced statistical algorithms that estimate the probability of each word following the words that came before it. ChatGPT and similar tools do not think, know, or feel. However, who knows what the nth version of these tools will do.
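The idea of predicting the next word from the words before it can be illustrated with a toy bigram model. This is a deliberately simplified sketch with a made-up training sentence, far cruder than the transformer networks behind ChatGPT, but the underlying intuition is the same: counting how often words follow other words.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    """Return the most probable follower of `word` and its probability."""
    followers = counts[word]
    total = sum(followers.values())
    best, n = followers.most_common(1)[0]
    return best, n / total

# A tiny, invented corpus just for illustration
corpus = "the cat sat on the mat and the cat chased the dog"
model = train_bigram(corpus)
print(most_likely_next(model, "the"))  # 'cat' follows 'the' half the time
```

Real language models operate on the same predict-the-next-token principle, but with billions of learned parameters instead of simple counts, which is what lets them capture context far beyond the immediately preceding word.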
The 20th century philosopher Ludwig Wittgenstein coined the term “language games” to describe how we use the same word but often mean different things. Human language is fluid and intertwined with our collective behavior, such that the meaning of words evolves. The term “strategy,” for example, means something different to a military commander than to a board of directors, and nothing at all to a four-year-old. Our language evolves.
ChatGPT exemplifies highly advanced “large language models” (LLM) that are capable of completing a wide range of natural language processing tasks, such as language translation, summarization, question answering, and text generation. It utilizes deep learning techniques, specifically transformer-based neural networks, and has been trained using a vast amount of internet-based textual data, resulting in hundreds of billions of parameters. As a result, GPT is considered one of the most powerful language models currently available.
The advancement of sophisticated algorithms in conjunction with increased computational power has enabled LLMs such as ChatGPT to mimic human language to a remarkable extent, providing almost human-like responses to our prompts. The possibilities to create value are endless and include assistance with writing and editing reports, summarizing data, automating customer service interactions, extracting insights and patterns from large amounts of unstructured text, and more.
Dreaming of electric sheep?
It’s worth noting that we have not yet reached the level of general, or strong, AI, but ChatGPT-3 provides a glimpse into what’s to come. The next version of this model is rumored to increase the number of parameters from 175 billion to mind-bending trillions, and will also be trained on an unprecedented quantity of interactions with the current version. In the future, the scope of generative AI is set to expand beyond text to include images, voice, music, video, and beyond. It remains to be seen how these developments will unfold. As science fiction writer Philip K. Dick once asked, we may begin to see AI dream of electric sheep.
Recent scientific articles listing a tool as a co-author have sparked outrage among publishers and authors. Publishers are now scrambling to develop or revise their policies. While there is no universal agreement on the issue, it is likely that the inclusion of algorithms as authors will not be accepted, as they do not meet the criteria for authorship, including the ability to take responsibility for the content and integrity of scientific papers. On this note, as an author, I would like to express my gratitude to ChatGPT-3 for its valuable inputs based on my prompts. I would also like to thank Simon Caulkin for adding his final, and human touch to the text.
About the author:
Johan Roos is Chief Academic Officer at Hult International Business School and Senior Advisor at the Peter Drucker Society Europe
Stokel-Walker, C., “ChatGPT listed as author on research papers: many scientists disapprove,” Nature (online ahead of print). doi: 10.1038/d41586-023-00107-z.