
“People will remember three people in Poland’s recent history,” Roman Przybylski, the CEO of Grupa Kety, told me at the Christmas gathering of the European Economic Congress (EEC) in Warsaw: John Paul II, Lech Wałęsa, and Leszek Balcerowicz.
Two spiritual and political leaders and one economist: a trinity that reveals the moral weight economic transformation can demand.
Professor Balcerowicz, a living legend among economists and policymakers, was the architect of the radical reforms that pulled Poland out of economic freefall: hyperinflation, empty shelves, and a paralyzed command-and-control system. His critics spoke of “shock without therapy,” and the human cost was undeniably high. Yet the reforms set Poland on a path to becoming one of Europe’s fastest-growing economies.
What struck me was that Balcerowicz did more than remove constraints. He shifted something deeper. Citizens moved from being subjects of the state, passive recipients of plans, to becoming market actors, responsible agents in their own economic lives. That demanded judgment, risk-taking, and moral courage. The real transformation was not price liberalization; it was a new relationship to one’s own agency. Painful as it was, that shift worked.

Roman Przybylski, Leszek Balcerowicz, and Johan Roos at EEC in Warsaw, 4 December 2025.
Today, a different kind of transformation is under way inside organizations rather than across whole economies. As AI systems colonize more coordination and decision making, professionals face a similar choice. Do they become subjects of the system, polite approvers of machine recommendations, operators of processes they did not author toward purposes they did not choose? Or do they remain authors and stewards, using AI as a powerful tool while retaining responsibility for what their organizations actually stand for?
This is not primarily a technology question. It is a question of citizenship, and therefore of leadership. At the EEC event, discussions focused on hard assets: public procurement to strengthen Polish firms, defense investment building local capacity, reforming state-owned industries for strategic sovereignty. A Wall Street analyst described trillions flowing into AI infrastructure: data centers, compute, and cloud capacity reshaping global markets. Important priorities, all of them. Yet something was conspicuously absent from those conversations: leadership. Poland will debate how many billions to invest in algorithms and infrastructure. My question was simpler: How much will it invest in the leaders who must decide what those algorithms should serve?
Most leadership development still trains people as sophisticated operators—better prompts, faster dashboards. Useful, but not leadership. That creates subjects of the system, not stewards of it.
If AI is to serve as augmentation rather than quiet automation of leadership, three mindset shifts are essential.
First, interrogate; do not simply obey.
Leaders need decision processes in which algorithmic output is always questioned, never accepted as an oracle. When a model proposes a course of action, someone must ask: What is the “because” here? On what assumptions does this rest? Where might this system be blind? Critical thinking erodes when fluent outputs seduce us into mistaking confidence for logic; rebuilding it requires visible structures for tracing reasoning behind recommendations. Board meetings and executive committees should model this discipline. Interrogating algorithms is not Luddism; it is loyalty to reasoning itself.
Second, protect real presence.
One meeting a month, lock the devices away. Run a 90-minute leadership dialogue with no screens, just human beings, real disagreement, and shared sensemaking. Authentic communication depends on embodied presence: voice, gesture, eye contact, and the subtle synchronization of attention. If every important conversation is mediated, that presence disappears; technology can help us prepare, but it cannot replace the trust that emerges when people meet without digital intermediaries.
Third, reward judgment over metrics.
Promotion systems and career paths still overwhelmingly reward those who optimize metrics: hit the target, shave the cost, improve the KPI. Yet the leaders we most need are those who know when to ignore the dashboards. They are willing to say, “The data points to X, but Y is right, because of what we know about our stakeholders, our history, our purpose, our obligations.” Without this capacity for context-sensitive judgment, leadership collapses into technically efficient irresponsibility.
Poland has an advantage here that many countries do not. It carries a living memory of what it means to transform a broken system with moral clarity. That experience is not just economic history; it is a reservoir of leadership knowledge.
Poland’s future in the AI era will depend less on the sophistication of the algorithms it deploys, or on how much capital it attracts, than on the kind of leaders it cultivates. The global investment wave will reward countries and companies that pair world-class infrastructure with world-class judgment. The country will, and should, debate industrial reform, energy transition, and defense spending. But it should devote equal energy to a more uncomfortable question: Will we allow AI to turn professionals back into subjects, or will we use it to deepen a hard-won culture of citizenship and responsibility? If Balcerowicz’s generation proved that economic subjects could become market citizens, this generation must prove that algorithmic subjects can become professional citizens. That may be the most important transformation yet, and the one for which Poland is uniquely prepared.
About the author:
Johan Roos is executive director of the Vienna Center for Management Innovation (VCMI) and presidential advisor at Hult International Business School. In Spring 2026, Routledge will publish Johan’s book Human Magic: Leading with Wisdom in an Era of Algorithms.
