“Show Me the Money” vs. “Show Me the Source”: The Gen AI Disconnect Leaders Must Bridge to SHOW Progress
By Jayshree Seth


In Jerry Maguire’s memorable words, leaders are demanding that generative AI show them “the money.” According to PwC’s 2026 Global CEO Survey of more than 4,400 CEOs, 56% report seeing no significant financial benefit from AI to date. The mandate is clear: prove the return, and prove it quickly.

Meanwhile, a senior researcher is reviewing an AI-generated literature summary that confidently cites a paper that does not exist. Her response is not frustration but calibration. She is not asking for the money. She is asking for “the source.”

As generative AI moves into technical functions like R&D, it exposes a disconnect inside organizations. Leaders need to understand it urgently, because misreading it puts at risk both the returns they are chasing and the ingenuity they are relying upon.

Two Groups. Two Rational Questions. One Technology.

When researcher adoption of generative AI lags behind executive enthusiasm, the instinct is to reach for change management explanations. People resist new tools. Scientists are conservative. More training is needed. These explanations are not wrong. But they are incomplete.

They miss something fundamental: the two groups are asking different questions of the same technology, and both are rational.

Leaders evaluate generative AI through a business outcomes lens. Did efficiency improve? Did time-to-insight shorten? Did costs come down? These are legitimate questions, and generative AI can begin to answer them, particularly in domains involving knowledge synthesis, ideation, and workflow acceleration.

Researchers evaluate generative AI through an epistemic lens. Is this output verifiable? Can I trace it to a primary source? Is the citation real? Is the mechanism consistent with evidence? These are also legitimate questions, and they are what scientific rigor demands. Accepting compelling but unverifiable output is not efficiency. It is risk.

The two groups are not disagreeing about generative AI. They are operating from different professional obligations and standards of evidence.

What the Research Confirms

This caution is not anecdotal. A 2025 study presented at the ACM CHI Conference examined generative AI adoption at a U.S. National Laboratory. Its headline finding: the most significant barrier to adoption was lack of reliability, including hallucinations and poor source citation. Forty-four percent of scientist respondents raised this explicitly. Put simply: “I need to be able to tie an LLM output back to an authoritative source.” This is the scientific method in action.

The hallucination problem is particularly acute in R&D because it strikes where accuracy matters most. A 2025 study reviewing experimental research on generative AI found that when AI is applied beyond its capabilities, it can harm performance by introducing errors and lowering output quality. In R&D, this does not just fail to help. It creates additional work for experts who must detect and correct errors.

More troublingly, AI outputs rarely signal their own unreliability. For a researcher whose experimental direction may depend on prior evidence, a confident wrong answer is not a minor inconvenience. It can mean months of wasted effort.

Where Generative AI Helps — and Where It Does Not

For leaders setting expectations, the key insight is simple: AI’s value in R&D is not uniform. It varies by task.

Generative AI can excel at large-scale knowledge synthesis, hypothesis generation, cross-domain analogy, and content drafting. These are areas of real productivity gain, and the business case is sound.

It is weaker in exacting work. As probability-based language models, these systems struggle where accuracy and traceability are critical.

Generative AI is strong at breadth and brittle at precision. Leaders who ignore this distinction, and attribute hesitation to resistance, will achieve neither adoption nor returns.

Leaders Need to SHOW Up Differently

Bridging this disconnect requires leaders to show up differently, not with mandates for adoption, but with better judgment about where and how generative AI is deployed.

Four shifts matter:

Skepticism is a signal, not a setback.
When researchers push back, they are identifying known failure modes. Overriding this skepticism removes a critical quality control mechanism.

Honor the workflow, not just the tool.
Before measuring ROI, ask what kind of thinking the task requires. Generative AI supports synthesis and ideation well. It struggles with verification and traceability. Deployment strategies that reflect this outperform blanket adoption.

Own the timeline, not the hype.
Productivity gains in synthesis and drafting can appear quickly. Returns from precision work require domain-specific tools, strong data infrastructure, and trust built over time. Deloitte’s 2025 survey found most organizations achieve satisfactory AI ROI in two to four years, far longer than typical expectations. Expecting both on the same timeline invites disappointment.

Win with ingenuity, not just efficiency.
The most significant returns in R&D will not come from automation alone. They will come from leaders who amplify human ingenuity, using AI to extend capability rather than shortcut standards. As the 2026 Drucker Forum theme suggests, the next era of innovation will belong to organizations that combine human ingenuity with machine intelligence.

The Disconnect Is Also a Bridge

“Show Me the Money” and “Show Me the Source” may seem opposed. They are not. They are complementary signals pointing to the same truth: generative AI in technical work such as R&D is a powerful but specific tool.

Leaders who understand why researchers are cautious, and recognize that caution as professional competence rather than resistance, will make better decisions. Researchers who recognize that leaders face real pressure to deliver returns, and that AI does create value in specific domains, will engage more constructively.

Organizations that bridge this gap will not just capture better AI returns – they will build the kind of human-machine partnership that next-generation innovation demands.

You can get to the money. But first, you have to respect the source.

About the author:

Jayshree Seth is a Corporate Scientist at 3M and holds 80 patents for a variety of innovations. In 2025 she was named to the Thinkers50 Radar and is the author of The Heart of Science trilogy published by Society of Women Engineers. Jayshree was inducted into the National Academy of Engineering in 2026. She is currently leading R&D use cases for generative AI at 3M.
