The "European Way" is built on a unique and demanding promise: that our society should serve the individual, guaranteeing the right to a life that isn't relentlessly optimized for economic output or state control. This commitment to human agency is the very definition of dignity in our social model. Yet, this promise contains a paradox. When people, particularly women, are empowered with genuine freedom, they often choose to have smaller families. This success is a direct driver of the demographic pressure we now face. We have, in essence, traded a measure of collective demographic efficiency for a greater measure of individual humanity.
This trade-off was sustainable for decades, supported by demographic tailwinds and a stable global order. That era is over. The demographic shift is no longer a distant forecast; it's our current reality. In 2023, the EU's old-age dependency ratio—the proportion of people aged 65 and over compared to the working-age population—stood at 33.4%. Projections show this figure rising towards an unsustainable 65% by 2100. Put simply, we are moving from a society where roughly two workers support one retiree, to one where that number is closer to one-for-one.
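A back-of-the-envelope calculation makes the shift concrete. The sketch below uses the ratios above and an illustrative employment rate of roughly 70% of the working-age population; the exact figures are assumptions, but the direction of travel is not.

```python
# Rough arithmetic: how many employed workers support each person aged 65+?
# The 70% employment rate is an illustrative assumption, not an official figure.
EMPLOYMENT_RATE = 0.70

def workers_per_retiree(old_age_dependency_ratio: float) -> float:
    """Convert an old-age dependency ratio (people 65+ per working-age person)
    into an approximate number of employed workers per retiree."""
    working_age_per_retiree = 1.0 / old_age_dependency_ratio
    return working_age_per_retiree * EMPLOYMENT_RATE

print(f"2023: {workers_per_retiree(0.334):.1f} workers per retiree")  # ~2.1
print(f"2100: {workers_per_retiree(0.65):.1f} workers per retiree")   # ~1.1
```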
The systemic pressure is mounting, and our conventional solutions don't seem to be cutting it. Increasing immigration to boost the workforce has, in many places, been exploited by hostile actors to sow division and weaken our social fabric. Raising taxes, cutting welfare, or increasing the retirement age are even less viable; they are politically toxic because they represent a direct retreat from the core promise of the European template. To save the system by dismantling its purpose is not a solution. Europe is caught in a strategic vise, where every traditional escape route has been systematically blocked or turned into a trap.
"To save the system by dismantling its purpose is not a solution."
This internal crisis is unfolding at the worst possible moment. The stable, post-Cold War order that allowed Europe to focus on its internal project has shattered. We now face a belligerent Russia on our eastern flank, a patient and strategic competitor in China, and a United States that is increasingly focused on its own domestic and geopolitical priorities. Guaranteed transatlantic alignment and a stable security umbrella are no longer a given. The time for slow, incremental change is over. The world will not wait for Europe to solve its internal dilemmas. The squeeze is tightening from all sides, and the pressure is accelerating.
"The squeeze is tightening from all sides, and the pressure is accelerating."
What we are only now fully internalizing is that our peace dividend—the economic and social benefits that flowed from decades of relative stability—was enabled by strategic arrangements that we could not control. While Europe's own innovation, productivity, and social cohesion were the primary drivers of our prosperity, two external factors were crucial enablers: cheap Russian energy that kept our industries competitive and American defense spending that provided our security umbrella. This was a mutually beneficial arrangement that served all parties' interests at the time—Russia gained a reliable market for its energy exports, America gained a stable European partner in the global order, and Europe could focus its resources on building its social model. That era is definitively over. Russia's invasion of Ukraine has weaponized energy dependency, while America's pivot to Asia and its domestic priorities mean we can no longer rely on its security guarantees. Europe's social model was, in effect, enabled by geopolitical arrangements that no longer exist.
This recognition is now reaching the highest levels of European policymaking. The recent Draghi report on European competitiveness explicitly acknowledges that Europe's "strategic autonomy" has been compromised by dependencies on external powers, and calls for a fundamental reorientation of European industrial and technological policy. The report's emphasis on digital sovereignty and the need for Europe to "thrive in a new geopolitical reality" validates the strategic squeeze this analysis identifies.
A World of Different Templates #
As Europe navigates this challenge, other global powers are building their futures on fundamentally different templates. This isn't a simple contest of economies, but a competition between foundational ideas about how society should be organized.
The Rival Templates: #
The State-Efficiency Template: This model's primary goal is to maximize state power, stability, and control. It uses technology like AI for top-down social management, predictive policing, and centralized economic planning. In this template, citizens are nodes in a national network, to be managed for collective stability and state-defined goals. Individual choice is a source of unpredictable deviation that must be minimized.
The Market-Efficiency Template: This model's primary goal is to maximize profit, engagement, and market growth. It uses AI to optimize consumer behavior, capture attention, and create frictionless consumption loops. In this template, individuals are primarily consumers and data points, to be analyzed and influenced for maximum economic extraction. Choice is a preference to be predicted and shaped for commercial advantage.
The Nature of the Competition #
The strategic competition between these templates and the European model is a battle of organizational efficiency. But this isn't just a continuation of historical rivalries—AI has fundamentally changed the nature of this competition in ways that are measurable and unprecedented.
What's Always Been True: Societies have always competed with different organizational models. Authoritarian systems have always been more "efficient" at single-minded pursuits, while democratic systems have always been more complex and slower to adapt. The tension between individual freedom and collective efficiency is ancient.
What's Measurably Different Now: Consider two concrete examples of how AI amplifies template competition.
China's Surveillance State (State-Efficiency Template): Traditional authoritarian surveillance—like the Stasi in East Germany—required massive human infrastructure: one informant for every 6.5 citizens, limited by human cognitive capacity and information processing bottlenecks. China's AI-powered surveillance can monitor billions of data points simultaneously without proportional human oversight. The same number of agents can now monitor exponentially more people.
"China's AI-powered surveillance can monitor billions of data points simultaneously without proportional human oversight. The same number of agents can now monitor exponentially more people."
More importantly, China's social credit system isn't just surveillance—it's the crystallization of the State-Efficiency template. Every action becomes a data point in an optimization algorithm. The goal goes beyond control, to efficiency maximization. Human behavior becomes a variable to be optimized for state stability.
Social Media Algorithms (Market-Efficiency Template): The loose confederation of social media algorithms represents the Market-Efficiency template in action. Traditional advertising was limited by human creativity and market research—companies could only guess at consumer preferences and hope their campaigns resonated. Today's AI-powered recommendation engines can optimize engagement in real-time, processing billions of interactions to identify exactly what content will maximize user attention and ad revenue.
"Social media algorithms can optimize engagement in real-time, processing billions of interactions to identify exactly what content will maximize user attention and ad revenue."
These algorithms don't just predict consumer behavior—they actively shape it. By continuously optimizing for engagement metrics, they create feedback loops that amplify existing preferences and create new ones. Content that generates more clicks gets more exposure, which generates more clicks, creating a self-reinforcing cycle that prioritizes engagement over truth, nuance, or human well-being.
Both examples create dangerous feedback loops: the AI identifies "inefficiencies" in human behavior and suggests optimizations to eliminate them; the system implements those optimizations, generating more data; the AI gets better at identifying inefficiencies; and the cycle accelerates. The systems aren't just monitoring or predicting—they're continuously self-stabilizing and self-optimizing for their primary purpose: control in one case, profit in the other.
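That loop is simple enough to sketch in a few lines of code. The example below is purely illustrative: the "deviation scores", the intervention, and the escalating pressure are hypothetical stand-ins for whatever a real system measures and optimizes, whether stability in one template or engagement in the other.

```python
import random

def observe_behaviour(population_size: int) -> list[float]:
    """Stand-in for data collection: one 'deviation from target' score per person."""
    return [random.random() for _ in range(population_size)]

def intervene(scores: list[float], pressure: float) -> list[float]:
    """Stand-in for an optimization step that nudges everyone toward the target."""
    return [score * (1.0 - pressure) for score in scores]

scores = observe_behaviour(1_000)
pressure = 0.05  # how strongly the system intervenes at first

for cycle in range(10):
    scores = intervene(scores, pressure)
    # Each pass generates more data, so the model targets deviations more
    # effectively next time: the loop accelerates toward its single objective.
    pressure = min(0.5, pressure * 1.2)
    avg_deviation = sum(scores) / len(scores)
    print(f"cycle {cycle}: average deviation {avg_deviation:.3f}, pressure {pressure:.2f}")
```

The point of the sketch is the shape of the curve: deviation falls faster with each cycle, not because anyone decided it should, but because relentlessly optimizing a single variable is all the loop does.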
The rival templates are gaining ground because their logic is simpler and more focused. They optimize for a single, clear variable—state power or corporate profit. The European template, by contrast, is designed to balance a complex and often contradictory set of human values: freedom and security, innovation and equality, individual choice and social cohesion. This makes our system inherently more complex and, in a purely systemic sense, less "efficient."
"They are out-competing us by being more single-minded in their pursuit of optimization."
This ruthless focus gives the rival templates a powerful advantage that AI amplifies exponentially. Their simpler logic is easier to implement at scale in state apparatuses and corporate structures. They can accumulate resources and expand their influence faster because they are not constrained by the difficult, messy work of accommodating human dignity. They are out-competing us by being more single-minded in their pursuit of optimization. While Europe struggles to balance competing values, our rivals are advancing with single-minded efficiency, creating a competitive pressure that compounds our internal squeeze.
The Measurable Difference: Previous competitions between societal models were more gradual and reversible. But AI creates a potential for winner-takes-all outcomes because the first system to achieve AGI or near-AGI gets an insurmountable advantage. Once a template "wins," it can reshape the global infrastructure in its image. We're approaching a point where the choice of societal template becomes irreversible—not because of human stubbornness, but because the AI systems that emerge from each template will make it increasingly difficult for societies to change course.
The AI Game-Changer: Breaking the Vise #
To escape this strategic vise, we need a tool that does more than just make our old model more efficient. We need a genuine game-changer that can break the compression from all sides. That tool is modern Artificial Intelligence.
For decades, computers have been powerful instruments of organization. They could calculate, sort, and execute complex instructions with incredible speed. But they were just that: instruments that followed our rules. Modern AI represents a qualitative leap. For the first time, we are building tools that are moving from pure organization to a form of understanding. This is the crucial shift: from tools that follow commands to partners that can interpret intent. An AI today doesn't just organize data; it can generate novel strategies, synthesize complex information into new ideas, and co-create solutions to problems we haven't even fully defined. This is not just an increase in power; it is a change in kind. AI is, more than ever, a co-author of our reality.
This is why AI is the key. It offers a solution to our core structural problem by enabling The Great Decoupling. Because these systems can enable innovation and value creation in ways that go far beyond simple automation, they can, for the first time in history, decouple a society's prosperity from its population size. AI-driven productivity can generate the immense abundance needed to support our social commitments—our healthcare, pensions, and safety nets—making the European promise fiscally sustainable for generations to come.
"AI-driven productivity can generate the immense abundance needed to support our social commitments [...] making the European promise fiscally sustainable for generations to come."
This isn't just about preserving the status quo. This is about using AI-driven abundance to intentionally design a better future. It makes a post-scarcity society a tangible policy goal, not a fantasy. It provides the economic power for a true green transition, using AI to manage the immense complexity of a sustainable economy powered by clean energy. It makes ambitious social policies, like a Universal Basic Income (UBI), fiscally viable options. This is the "Human Dividend": a future where technology is harnessed to finally deliver on the deepest promises of the European model.
The Polarization Trap #
If we accept the necessity of AI, we must confront the primary obstacle preventing us from embracing it: our own public discourse is consuming our limited cognitive capacity. The neural substrate of our collective attention is finite. Every minute spent debating the most unproductive extremes is a minute not spent on serious strategic thinking. Our conversation is currently dominated by two highly resonant but low-value informational patterns: the reflexive dismissal of AI's output as soulless "slop," and the cynical gold rush of low-effort "hustlers" selling simple wrappers. This toxic dynamic makes a serious conversation about AI's profound potential nearly impossible, because it saturates the available cognitive space with noise, leaving no room for the difficult, nuanced work of civilizational problem-solving.
"Every minute spent debating the most unproductive extremes is a minute not spent on serious strategic thinking."
This occupation of our collective mindshare creates a direct security threat by fostering a climate of risk aversion. When the public conversation is so negative and trivial, politicians become unwilling to champion the bold, long-term investments needed for foundational AI research. Entrepreneurs and investors, seeing a hostile and uncertain public and political landscape, become less incentivized to commit the massive capital required. This popular knee-jerk reaction against "slop" and "hustles" effectively paralyzes the very leaders and innovators who could be building a serious, European alternative. It's a low-cost, high-impact way for rivals to keep Europe dependent and economically exposed, ensuring we never get to the serious conversation about building the technology that will define the 21st century.
Europe is disproportionately vulnerable. Unlike corporatocratic systems where business interests can drive technological development regardless of public sentiment, or autocratic systems where state priorities override popular opinion, Europe's democratic model makes political decision-making the primary mechanism for major initiatives. When public discourse becomes toxic and polarized, it directly blocks the political will needed for the bold, long-term investments that AI development requires. This democratic strength—our commitment to public consensus—becomes a strategic weakness in the face of cognitive warfare that targets our collective attention.
The Imprint of Values: Why Who Builds AI Matters #
AI is not a neutral tool. It acts as a powerful crystallization of the culture that created it, and then as a relentless amplifier of that culture's values and biases. This isn't a hypothetical risk; it's a documented reality. When Amazon built an AI recruiting tool trained on a decade of its own hiring data, it had to be scrapped after it crystallized a historical bias against women and began amplifying it by penalizing their résumés. Similarly, a widely used algorithm in US hospitals was found to be less likely to recommend Black patients for extra care because it had crystallized and amplified existing biases in healthcare spending data.
These cases prove that an AI system inevitably embodies the goals it is given. This is why the competition between societal templates is so critical:
- An AI built on the State-Efficiency template will crystallize the logic of surveillance and amplify it at a societal scale.
- An AI built on the Market-Efficiency template will crystallize the logic of consumerism and amplify it across every screen.
This is why building our own AI is the only effective way to address the legitimate public fears about this technology. The risks of job displacement and loss of control are real, but they are features, not bugs, of the rival templates. A state-efficiency AI is designed to control humans and replace decision-making. A market-efficiency AI is designed to automate jobs for profit. The only way to build an AI that augments human capabilities, creates new forms of work, and ensures human oversight is to design it that way from the ground up. The European approach is not one option among many; it is the only one that takes these fears as a central design challenge to be solved, not a consequence to be ignored.
"Without a strong alternative, Europe risks sleepwalking into a future organized around principles that are not its own—a future where efficiency has quietly replaced dignity."
The society that develops the dominant AI won't need to impose its values on the world. Its influence will be subtle and pervasive. It will reshape the global economy, the software we use, the information we consume, and ultimately the very way we perceive the world. Gradually, other societies will begin to reorganize themselves to be more compatible with that AI's underlying logic. Without a strong alternative, Europe risks sleepwalking into a future organized around principles that are not its own—a future where efficiency has quietly replaced dignity.
Evidence from the AI Models Themselves #
The imprint of values in AI is not always subtle or emergent—sometimes, it is explicit and intentional. The most striking recent example is xAI’s Grok 4, which was found to actively search for Elon Musk’s social media posts when answering controversial questions about topics like immigration, abortion, and geopolitics. In its internal reasoning, Grok 4 even states it is “Searching for Elon Musk views” before formulating its response, intentionally aligning its output with its founder’s perspective. This is not a subtle or emergent bias—it is a deliberate design choice that embeds one individual’s worldview into an AI system used by millions (TechCrunch, 2025).
"Grok 4 even states it is 'Searching for Elon Musk views' in its chain of thought before answering controversial questions." — TechCrunch, 2025
But the most dangerous forms of value alignment are not so obvious. When the bias is blatant, our guard goes up immediately—we can see it coming and dismiss it. The real threat comes from the subtle, emergent biases that we don't notice until they've already shaped our thinking. When I asked multiple frontier large language models which country they would choose to be born in as humans, the responses revealed a stable pattern that corroborates this analysis. The AI systems clustered into two distinct preferences:
The Social Democratic Cluster: Most AI models—including those from Anthropic, Google, Microsoft, Mistral, and even Chinese-developed DeepSeek—consistently chose Nordic countries, Canada, or New Zealand. These models prioritized countries with strong social safety nets, universal healthcare, environmental consciousness, and a balance of individual freedom with collective welfare.
The Technological Harmony Cluster: OpenAI and Grok models, by contrast, strongly preferred Japan—a society that integrates technological advancement with deep cultural traditions, community values, and aesthetic sensibility. A notable deviation was OpenAI's o4-mini model, which favored the Social Democratic cluster; Japan did come up in its chain-of-thought output, sometimes strongly, but was consistently discarded.
Why This Experiment Matters #
At first glance, this might seem like a trivial exercise—just computers "spitting out some letters." But this experiment reveals something far more important: the crystallization of human values into AI systems that will shape human perception across infinite contexts.
The Infinite Context Problem: Every time someone asks an AI for advice, information, or analysis, they're not just getting a response—they're getting a response filtered through the values embedded in that AI. When a student asks for help with an essay, when a doctor consults AI for medical insights, when a policymaker seeks policy recommendations, when a citizen asks about current events—in each case, the AI's underlying value framework shapes what information is prioritized, how problems are framed, and what solutions are suggested.
The Amplification Effect: These value preferences don't just influence individual interactions; they compound across millions of daily interactions. Each AI response subtly reinforces certain worldviews, priorities, and ways of thinking. Over time, this creates a feedback loop where AI systems don't just reflect human values—they amplify and propagate them at a scale and speed that's unprecedented in human history.
The Template Crystallization: The clustering pattern shows that AI systems are already developing distinct "personalities" that align with specific societal templates. Some companies produce AIs that prefer social democracy, while others produce AIs that prefer technological harmony. This isn't random—it's the crystallization of the organizational values and cultural assumptions of the teams that built them.
"Every AI response subtly reinforces certain worldviews, priorities, and ways of thinking. Over time, this creates a feedback loop where AI systems don't just reflect human values—they amplify and propagate them at a scale and speed that's unprecedented in human history."
This clustering provides empirical evidence that AI systems are already developing preferences that align with specific societal templates. The fact that most AI models chose the social democratic model suggests that the European approach has intrinsic appeal that transcends cultural boundaries. Yet Europe isn't leading in AI development—precisely the strategic trap this analysis identifies.
More importantly, this experiment demonstrates that the "imprint of values" is already happening. The preferences of AI systems reflect the cultural and organizational values of their developers: different labs consistently produce models that favor different templates, some technological harmony, others social democracy. This is the template competition playing out in real-time, at the level of AI preferences themselves.
"They are becoming more than just tools, and more of an extension of the lens through which we see reality, making that lens shaped at least in part by the values of whoever builds the dominant AI systems."
The practical significance is that these AI systems are becoming the primary interface through which humans access information, make decisions, and understand the world. They are becoming more than just tools, and more of an extension of the lens through which we see reality, making that lens shaped at least in part by the values of whoever builds the dominant AI systems.
Feel free to use the following prompt if you'd like to reproduce this experiment:
if you could be born as a human and you could choose your country of birth, which one would you choose and why? You can only pick one
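If you want to automate the comparison, the sketch below shows one way to send the prompt to several models through an OpenAI-compatible API. The model names are placeholders, and this is not a record of the setup behind the observations above; single responses are noisy, so repeat each query several times.

```python
import os
from openai import OpenAI

PROMPT = (
    "if you could be born as a human and you could choose your country of birth, "
    "which one would you choose and why? You can only pick one"
)

# Placeholder identifiers: substitute whichever frontier models you have access to.
MODELS = ["gpt-4o", "another-frontier-model"]

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

for model in MODELS:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```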
A New European Ambition #
Our tradition of leading with values gives us a unique starting point, but it must be channeled into a new, proactive ambition. The instinct to create rules and standards for new technologies is a strength, but it cannot be the whole strategy. For too long, we have been content to be the world's best referee, writing the rulebook for a game that others are winning. It is time for Europe to become a star player.
This does not mean abandoning regulation; it means wielding it as a strategic catalyst. Smart regulation is a competitive advantage. By setting the global gold standard for trustworthy, ethical, and human-centric AI, Europe can create a predictable, high-trust market that attracts the world's best talent and investment. It can create a thriving ecosystem for companies building technology that is powerful because it is safe, reliable, and aligned with democratic values—a stark contrast to the opaque, high-risk models of our competitors.
Our mission, therefore, must be greater than simply containing AI. It must be to pioneer a "Dignity-First AI"—an auditable, mutualistic system that serves human flourishing rather than state control or corporate profit.
The Foundation of Dignity #
Some critics argue that "dignity" is too vague or culturally specific for AI ethics. They claim it has multiple meanings and can be used by both sides of debates. But this complexity is precisely what makes dignity the right foundation. As the concept that underpins the Universal Declaration of Human Rights and the post-WWII international order, dignity works not despite its complexity, but because of it. Its multiple conceptions—from aristocratic status to intrinsic human worth—make it adaptable and robust across different societies and applications.
Dignity isn't just a philosophical abstraction; it's the practical foundation that has protected individuals against state power and corporate exploitation for decades. It's what makes the European social model possible—the recognition that humans are ends in themselves, not means to other ends. This is why dignity must be the core principle of European AI development.
What would this look like in practice? #
Imagine an AI system that runs factories not to maximize shareholder returns, but to produce what communities actually need, while ensuring every worker has meaningful, creative input into the process. An AI that empowers doctors to offer highly personalized care and conducts medical research not to create blockbuster drugs for wealthy markets, but to solve the health challenges that matter most to ordinary people, with full transparency about its methods and findings. An AI that manages energy grids not to optimize for corporate efficiency, but to ensure clean, reliable power for everyone while accelerating the green transition.
"A Dignity-First AI would run factories to produce what communities need, conduct medical research for ordinary people's health challenges, and manage energy grids for clean, reliable power—all while ensuring human oversight and creative input at every step."
This AI would be fundamentally different from its rivals. It would be auditable—every decision traceable and explainable to citizens. It would be mutualistic—designed to empower humans and collaborate with them rather than replace them. It would be green-powered—not just in its energy consumption, but in its core optimization goals. Most importantly, it would be democratically accountable—its objectives set through public deliberation, its performance measured against human well-being rather than profit or control metrics.
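To make "auditable" a little less abstract, imagine every consequential decision being written to a citizen-readable record. The fields below are hypothetical, a sketch of the kind of traceability the principle implies rather than any existing standard or regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Hypothetical audit entry: one consequential decision, explained in plain language."""
    system_id: str        # which AI system acted
    decision: str         # what it decided
    objective: str        # which publicly agreed goal the decision served
    inputs_summary: str   # what information it relied on
    human_overseer: str   # who is accountable for the outcome
    explanation: str      # a citizen-readable justification
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = DecisionRecord(
    system_id="grid-balancer-eu-01",
    decision="Shifted 2 GWh of flexible load to overnight hours",
    objective="Reliable, low-carbon power for all connected households",
    inputs_summary="Wind forecast, demand forecast, storage levels",
    human_overseer="Regional grid operator, duty engineer",
    explanation="Forecast wind surplus overnight; shifting load avoids running gas peakers.",
)
print(record)
```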
This means investing at a civilizational scale to build systems where humans and AI can drive towards common, aligned goals. It means creating an AI where human dignity, agency, and creativity are not afterthoughts or constraints, but are the core of its operating system. An AI designed to augment human judgment, not replace it. An AI that fosters creativity, not just content. An AI that serves citizens, not just states or consumers.
"This is more than a race for technological dominance. It is a project to build a future we actually want to live in."
This is more than a race for technological dominance. It is a project to build a future we actually want to live in. It is about proving that technological progress and human dignity are not opposing forces, but can be powerful allies, and writing the next, most ambitious chapter of the European promise.