{"id":4734,"date":"2024-08-30T08:25:07","date_gmt":"2024-08-30T06:25:07","guid":{"rendered":"https:\/\/www.druckerforum.org\/blog\/?p=4734"},"modified":"2024-08-30T08:25:09","modified_gmt":"2024-08-30T06:25:09","slug":"knowledge-in-the-age-of-aiby-david-weinberger","status":"publish","type":"post","link":"https:\/\/www.druckerforum.org\/blog\/knowledge-in-the-age-of-aiby-david-weinberger\/","title":{"rendered":"Knowledge in the Age of AI<br>by David Weinberger"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"1024\" height=\"538\" src=\"https:\/\/www.druckerforum.org\/blog\/wp-content\/uploads\/Weinberger_D_1200x630px-1-1024x538.jpg\" alt=\"\" class=\"wp-image-4737\" srcset=\"https:\/\/www.druckerforum.org\/blog\/wp-content\/uploads\/Weinberger_D_1200x630px-1-1024x538.jpg 1024w, https:\/\/www.druckerforum.org\/blog\/wp-content\/uploads\/Weinberger_D_1200x630px-1-300x158.jpg 300w, https:\/\/www.druckerforum.org\/blog\/wp-content\/uploads\/Weinberger_D_1200x630px-1-768x403.jpg 768w, https:\/\/www.druckerforum.org\/blog\/wp-content\/uploads\/Weinberger_D_1200x630px-1-1536x806.jpg 1536w, https:\/\/www.druckerforum.org\/blog\/wp-content\/uploads\/Weinberger_D_1200x630px-1.jpg 1600w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>2,400 years ago, Socrates argued that the \u201cjustified true belief\u201d&nbsp;(JTB) theory of knowledge that is still popular today was not adequate. He agreed that knowledge was a type of belief, and that it had to be a true belief if it were to count as knowledge. But if you\u2019re just guessing and your guess happens to be correct, that can\u2019t count as knowledge. Rather, you have to have a good set of reasons \u2014 a justification \u2014 for that belief.&nbsp;<\/p>\n\n\n\n<p>But, for Socrates, that\u2019s still not enough for a belief to count as knowledge. You can\u2019t just be reciting some justification from memory. 
You have to <em>understand<\/em> it.&nbsp;<\/p>\n\n\n\n<p>I personally would add one more letter to this: \u201cF\u201d for framework. Every single thing we know is part of a larger system of knowledge. If you know the water is boiling in your tea kettle because you hear its whistle, then you also know that water is a liquid, flames heat things, things can transmit heat to other things, and so on, until your entire knowledge framework has been drawn in.<\/p>\n\n\n\n<p>So, guess what two things the knowledge that comes from machine learning (ML) \u2014 what we generally mean by \u201cAI\u201d these days \u2014 doesn\u2019t have: understandability or a framework from which its statements of knowledge spring.<\/p>\n\n\n\n<p>We might want to say that therefore ML doesn\u2019t produce knowledge. But I think it\u2019s going to go the other way as AI becomes more and more integral to our lives. AI is likely to change our idea of what it means to know something.<\/p>\n\n\n\n<p><strong>Inexplicable knowledge<\/strong><\/p>\n\n\n\n<p>Sometime soon you&#8217;ll go in for a health exam and your doctor will tell you something like this: Everything looks good, except you have a 75% chance of having a heart attack within the next five years. You\u2019ll respond that that\u2019s nuts given your vital signs, diet, exercise routine, genetics \u2026 The doctor will agree but add that the prediction came from an AI diagnostic system that has proven itself to be reliable, even though no one can figure out how it comes to its conclusions. 
Initially you\u2019ll be skeptical because you want to understand how it came up with that diagnosis, by which you\u2019ll mean you want to understand how it fits into your framework of what causes heart attacks.<\/p>\n\n\n\n<p>You\u2019re unlikely to get that understanding, and that\u2019s more or less on purpose.<\/p>\n\n\n\n<p>With traditional computing, a developer would write a program that captures what we know about the causes of heart attacks: cholesterol levels and blood pressure, how they correlate for reasons that our framework explains, and so on.<\/p>\n\n\n\n<p>But we don\u2019t program machine learning models that way. In fact, we don\u2019t program them at all. We enable them to program themselves by letting them discover patterns in the tons of data we\u2019ve given them. Those patterns may be so complex that we simply can\u2019t understand them, but as long as they help increase the system\u2019s accuracy, who cares?<\/p>\n\n\n\n<p>Actually, lots of people care, because the inexplicability of these systems means that they can hide pernicious biases. That\u2019s one important reason there\u2019s so much research going on to make \u201cblack box\u201d AI more understandable.<\/p>\n\n\n\n<p>But the tendency of AI to train itself into inexplicability for the sake of accuracy may be giving us a different idea about how knowledge works, for there must be something about these wildly complex interrelationships of data that captures an essential truth about the world.&nbsp;<\/p>\n\n\n\n<p>Perhaps it\u2019s this:<\/p>\n\n\n\n<p>Our frameworks have been composed of generalizations that oversimplify a world made of particulars in complex interrelationships. That ML works reveals the limits of generalizations and the power of the particulars that compose the world. 
It doesn\u2019t take away from the truth of those hard-won generalizations \u2014 Newton\u2019s Laws, the rules and hints for diagnosing a biopsy \u2014 to say that they fail at predicting highly particularized events: Will there be a traffic snarl? Are you going to develop allergies late in life? Will you like the new Tom Cruise comedy? This is where traditional knowledge stops, and AI\u2019s facility with particulars steps in.&nbsp;<\/p>\n\n\n\n<p>Recognizing the weaknesses of generalized frameworks is much easier when we have machines that bring us more accurate knowledge by listening to particulars. But it also transforms some of our most basic beliefs and approaches.<\/p>\n\n\n\n<p>Michele Zanini and I recently wrote a brief <a href=\"https:\/\/hbr.org\/2024\/07\/ai-has-a-revolutionary-ability-to-parse-details-what-does-that-mean-for-business\">post<\/a> for <em>Harvard Business Review<\/em> about what this sort of change in worldview might mean for business, from strategy to supply chain management. For example, two faculty members at the Center for Strategic Leadership at the U.S. Army War College have <a href=\"https:\/\/media.defense.gov\/2023\/Oct\/02\/2003312488\/-1\/-1\/0\/CSL%20ISSUE%20PAPER%20-%20VOL%202-23.PDF\">suggested<\/a> that AI could fluidly assign leadership roles based on the specific details of a threatening situation and the particular capabilities and strengths of the people in the team. This would alter the idea of leadership itself: Not a personality trait but a fit between the specifics of character, a team, and a situation.&nbsp;<\/p>\n\n\n\n<p>AI\u2019s effect on our idea of knowledge could well be broader than that. 
We\u2019ll still look for justified true beliefs, but perhaps we\u2019ll stop seeing what happens as the result of rational, knowable frameworks that serenely govern the universe. Perhaps we will see our own inevitable fallibility as a consequence of living in a world that is more hidden and more mysterious than we thought. We can see this wildness now because AI lets us thrive in such a world.&nbsp;<\/p>\n\n\n\n<p>Such a vision seems to me not only to be true, but to be liberating, humbling, and joyous, and thus a truth we would do well to embrace, even if it took inscrutable machines to teach it to us.<\/p>\n\n\n\n<p><strong>About the author:<\/strong><\/p>\n\n\n\n<p><em><strong>David Weinberger, Ph.D.<\/strong>, <\/em>writes about technology&#8217;s effect on our ideas. He is a long-time affiliate of the Harvard Berkman Klein Center.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>2,400 years ago, Socrates argued that the \u201cjustified true belief\u201d\u00a0(JTB) theory of knowledge that is still popular today was not adequate. He agreed that knowledge was a type of belief, and that it had to be a true belief if it were to count as knowledge. But if you\u2019re just guessing and your guess happens to be correct, that can\u2019t count as knowledge. 
Rather, you have to have a good set of reasons \u2014 a justification \u2014 for that belief.\u00a0<a href=\"https:\/\/www.druckerforum.org\/blog\/?p=4734\">[\u2026]<\/a><\/p>\n","protected":false},"author":3,"featured_media":4738,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_seopress_robots_primary_cat":"none","_seopress_titles_title":"","_seopress_titles_desc":"","_seopress_robots_index":""},"categories":[347],"tags":[348,359],"_links":{"self":[{"href":"https:\/\/www.druckerforum.org\/blog\/wp-json\/wp\/v2\/posts\/4734"}],"collection":[{"href":"https:\/\/www.druckerforum.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.druckerforum.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.druckerforum.org\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.druckerforum.org\/blog\/wp-json\/wp\/v2\/comments?post=4734"}],"version-history":[{"count":2,"href":"https:\/\/www.druckerforum.org\/blog\/wp-json\/wp\/v2\/posts\/4734\/revisions"}],"predecessor-version":[{"id":4740,"href":"https:\/\/www.druckerforum.org\/blog\/wp-json\/wp\/v2\/posts\/4734\/revisions\/4740"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.druckerforum.org\/blog\/wp-json\/wp\/v2\/media\/4738"}],"wp:attachment":[{"href":"https:\/\/www.druckerforum.org\/blog\/wp-json\/wp\/v2\/media?parent=4734"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.druckerforum.org\/blog\/wp-json\/wp\/v2\/categories?post=4734"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.druckerforum.org\/blog\/wp-json\/wp\/v2\/tags?post=4734"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}