Subsidiarity in the Age of AI:
A Catholic Framework for AI Governance
by CAPP-USA
Catholic Social Teaching offers a practical framework for AI governance based on the principle of subsidiarity: keeping decisions about artificial intelligence as close as possible to the people they affect, so that technology serves the human person, the family, and the local community.

Artificial intelligence laws and governance must begin with subsidiarity.
WHY AI GOVERNANCE MATTERS RIGHT NOW
A small number of technology firms already control the most powerful AI models, the largest datasets, and the infrastructure required to train them. At the same time, governments around the world are proposing sweeping regulatory frameworks that could centralize decision-making even further.
The risk is not only technological disruption.
It is the emergence of what some analysts describe as a “digital Leviathan” — a concentration of algorithmic power capable of shaping markets, information, and even public decision-making.
Catholic Social Teaching offers a way to think about this challenge that is rarely discussed in technology policy debates: subsidiarity.
KEY TAKEAWAY
Catholic Social Teaching provides a framework for AI governance grounded in subsidiarity: decisions should be made at the lowest competent level—local, national, and international—so that artificial intelligence remains accountable to the human person and the common good.
What Is Subsidiarity in Catholic Social Teaching, and Why Does It Matter for AI?
Subsidiarity—articulated most clearly by Pope Pius XI in Quadragesimo Anno (1931)—holds that functions which can be performed effectively by lower-level communities (families, local businesses, towns) should not be seized by higher-level ones.
Rooted in human dignity and ordered toward the common good, subsidiarity promotes responsibility, participation, and creativity while preventing the dangerous over-centralization of power.
In the context of artificial intelligence laws—encompassing regulation, ethical standards, innovation, and deployment—this principle offers a framework for balancing global coordination with local responsibility, ensuring that AI strengthens rather than erodes human agency. Without it, the digital Leviathan fills the vacuum.
Catholic Social Teaching approaches artificial intelligence primarily through its core moral principles: human dignity, solidarity, and subsidiarity, all ordered toward the common good.
The Church does not reject technological innovation but insists that new technologies must serve the human person rather than dominate or replace human responsibility.
Artificial intelligence governance is not primarily a technical problem. It is a question of power — who holds it, and who is protected from it.
Artificial Intelligence and Catholic Social Teaching
Recent Vatican guidance—including Pope Francis’s reflections on “algor-ethics” and the 2025 document Antiqua et Nova—emphasizes that AI governance must remain accountable to society at multiple levels, ensuring that technological power does not become detached from human dignity and moral responsibility.
In his 2020 message to the Pontifical Academy for Life, Pope Francis emphasized that human dignity, solidarity, and subsidiarity should guide what he called “algor-ethics”—the ethical design of algorithms that promote the common good rather than concentrate technological power in the hands of a few.
He insisted that “[t]he dignity of the person, justice, subsidiarity and solidarity” must guide the development of technologies that influence human decision-making.
This warning reflects a broader concern within Catholic Social Teaching about what Pope Francis has called the “technocratic paradigm”—a mindset that treats the world as a problem to be solved through technical control rather than a reality to be lived with moral responsibility.
The Vatican’s Guidance: Antiqua et Nova
The Vatican’s 2025 AI ethics document Antiqua et Nova applies Catholic Social Teaching directly to artificial intelligence.
“The responsibility for managing this wisely pertains to every level of society, guided by the principle of subsidiarity and other principles of Catholic Social Teaching.”
The implication is clear. The ethical governance of AI cannot be delegated solely to governments, corporations, or international bodies. Responsibility must instead be shared across all levels of society, including:
- individuals
- families
- civil society
- universities and research institutions
- businesses
- governments
- international organizations
Subsidiarity ensures that each level contributes according to its proper competence.
Subsidiarity’s critics argue it can license inaction — that local actors, lacking technical expertise and resources, will simply defer upward anyway, recreating the centralization it sought to prevent.
This is a real risk. The principle only functions when lower-level actors are genuinely equipped to exercise responsibility.
AI governance under subsidiarity therefore requires capacity-building — funding local ethics boards, training municipal officials, supporting civil society auditors — not merely decentralizing authority onto institutions that cannot yet bear it.
A Subsidiarity-Based Framework for AI Governance
Subsidiarity suggests a three-level framework for AI governance.
The Subsidiarity Test
A practical way to apply this principle is what might be called a “subsidiarity test” for AI governance: At what level can this decision be made responsibly while remaining closest to the people affected by it?
If a local institution can govern the technology responsibly, higher authorities should support rather than replace it. Only when problems exceed that level should authority move upward.
This test helps policymakers distinguish between issues that genuinely require centralized coordination and those that should remain under the authority of communities, institutions, and nations closest to the consequences.
- Local communities – best positioned to identify harmful algorithmic bias in schools, hiring systems and medical technologies.
- National governments – establish legal safeguards that protect human dignity, labor (especially amid automation-driven disruption), and privacy, while ensuring fair competition.
- International cooperation – necessary when confronting risks that transcend borders, such as autonomous weapons, cross-border data exploitation, or systemic technological concentration.
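As a thought experiment, the tiered test above can be sketched as a simple decision procedure. The level names and the two criteria used here are illustrative assumptions chosen for this sketch, not an official rubric from any Church or policy document:

```python
# Illustrative sketch of the "subsidiarity test" as a decision procedure.
# The criteria (border-crossing harms, local capacity) are assumptions
# drawn loosely from the three-level framework described above.

def subsidiarity_test(crosses_borders: bool, exceeds_local_capacity: bool) -> str:
    """Return the lowest level at which a decision can be made responsibly,
    moving upward only when a lower level genuinely lacks competence."""
    if crosses_borders:
        return "international"  # e.g. autonomous weapons, cross-border data flows
    if exceeds_local_capacity:
        return "national"       # e.g. labor protections, fair-competition rules
    return "local"              # e.g. a school auditing its own tutoring AI

# Example: a hospital reviewing a diagnostic AI used only on its own patients
print(subsidiarity_test(crosses_borders=False, exceeds_local_capacity=False))  # local
```

The point of the sketch is the ordering: higher levels are reached only by falling through the lower ones, mirroring the principle that authority moves upward only when a problem genuinely exceeds local competence.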
Critics rightly ask: Can subsidiarity survive contact with a global platform? Meta’s algorithmic feed doesn’t pause at a border.
The answer lies in distinguishing between standards and implementation. Subsidiarity doesn’t require every community to write its own privacy code from scratch — it requires that communities have meaningful input into how universal standards are applied locally, and real recourse when they aren’t.
The EU’s GDPR offers a partial model: baseline rules set at the necessary level with enforcement kept close to the affected person.
Properly understood, subsidiarity does not weaken governance—it orders it, ensuring that responsibility is distributed according to competence rather than detached from the people it affects, thereby protecting human dignity.
In this way, subsidiarity safeguards society from two equal dangers: technological anarchy and a technological Leviathan.
APPLYING SUBSIDIARITY TO AI GOVERNANCE
This layered model challenges the assumption that AI must be governed primarily through centralized national and global regulation.
Instead, responsibilities should be distributed according to competence rather than detached from the people they affect, thereby protecting human dignity.
ETHICAL OVERSIGHT
At the local level, schools, hospitals, and professional associations are best positioned to conduct ethics audits tailored to their communities — a hospital ethics board reviewing diagnostic AI knows its patient population in ways a federal regulator never will.
National authorities step in where local capacity fails or where harms cross borders, as with discriminatory hiring algorithms operating across jurisdictions.
INNOVATION AND ECONOMIC IMPACT
Subsidiarity encourages distributed innovation over monopolies.
A farmer cooperative developing a crop-monitoring tool, a regional hospital building a patient-triage assistant, and a small-town library creating a multilingual search service are all expressions of local knowledge that no distant engineer possesses.
National policy has a role here, but it is a supporting one: protecting workers displaced by automation, preventing monopolies from crowding out regional innovators, and ensuring that the economic gains from AI don’t simply migrate upward to shareholders and away from the communities where the technology actually operates.
GLOBAL AND NATIONAL REGULATION
Some AI risks genuinely exceed what any local or national actor can manage alone.
Autonomous weapons systems, cross-border surveillance networks, and the environmental costs of massive data centers are not problems a city, state or even a mid-sized nation can resolve unilaterally.
Here, international coordination is not a betrayal of subsidiarity — it is subsidiarity functioning correctly, with higher authority engaging precisely where lower levels lack competence.
The critical discipline is restraint: global governance bodies must resist expanding their remit beyond what truly requires their intervention, leaving everything else to the levels closest to the people affected.
A Real-World Example: Switzerland
Switzerland provides a practical laboratory for subsidiarity in action. (See: Jovan Kurbalija’s Diplo writings).
While Switzerland is not explicitly applying Catholic Social Teaching, its governance structure illustrates a principle the Church articulated nearly a century ago: complex societies function best when responsibility is distributed rather than centralized. In an age of global digital platforms, this insight may be more relevant than ever.
By decentralizing many AI initiatives across cantons and communities, it fosters a “Trinity” of governance: innovation, local empowerment, and coordinated oversight.
COMMUNITY AI LABS
Libraries and local innovation hubs reimagined as AI knowledge centers support grassroots projects in agriculture, tourism, and more—serving the “soil and the street” while avoiding top-down pitfalls and enabling alignment on big risks.
CITIZEN-LED PROJECTS
National innovation grants often prioritize grassroots participation, allowing citizens, universities, and small research teams to address ethical concerns such as privacy, transparency, and responsible data use.
In this way, the subsidium—support from higher authority—empowers rather than replaces local initiative.
INTER-CANTON COORDINATION CIRCLES
Cantons collaborate horizontally—sharing best practices and ethical audits—before seeking federal intervention. No centralized “AI czar.”
This treats communities as active architects, harmonizing liberty with the common good.
WHAT THIS MODEL DEMONSTRATES
It reflects a Catholic insight: human creativity flourishes when responsibility is shared across society’s living fabric.
The Swiss model reflects the essence of subsidiarity: problems solved at the lowest competent level, with higher authority providing support when necessary. As global AI summits in 2025–2026 increasingly default toward top-down frameworks, Switzerland’s experience reminds us that decentralization remains viable—and arguably more resilient. It treats local communities not as passive consumers of AI, but as active architects of their own future.
It shows that it is possible to harmonize individual liberty with the common good without creating a centralized digital Leviathan.
The results are uneven but instructive. Switzerland consistently ranks among the top nations in AI readiness indexes, yet canton-level AI labs remain underfunded compared to federal initiatives.
This is precisely where subsidiarity’s companion principle — solidarity — must activate: higher authorities providing genuine subsidium when local capacity falls short, not waiting for failure.
As recent global AI governance talks (Paris 2025, Seoul 2026) lean toward centralized accords, subsidiarity reminds us that true resilience may lie in distributed, human-scale accountability.
Switzerland demonstrates that the Leviathan is not inevitable.
The Bottom Line: A Catholic Framework for AI Governance
Artificial intelligence will shape the future of work, communication, and governance.
The question is: Will its power concentrate in distant institutions—or remain accountable to the communities it serves?
Catholic Social Teaching answers clearly: Through subsidiarity. AI governance should empower families, communities, institutions, and nations before defaulting to centralized control. Properly ordered, artificial intelligence can strengthen human creativity and cooperation rather than weaken them.
Subsidiarity is never a license for isolation; guided by solidarity, it ensures the larger community supports the smaller one, providing the resources needed to lead AI initiatives.
The task for policymakers is not simply to regulate AI. It is to ensure that this powerful technology remains at the service of the human person and the common good.
Here is a practical starting point: before any AI system is deployed in a school, hospital, or local government service, require a community impact assessment conducted by that institution — not a federal agency, not the vendor.
Build the habit of local accountability first. Scale up only when the question genuinely exceeds local competence. That is subsidiarity in practice.
The alternative — waiting for Brussels, Washington, or Silicon Valley to get it right — is not neutrality. It is surrender.
Subsidiarity offers a third way — one where technology remains accountable to the human person and ordered toward the common good.
CAPP-USA (Centesimus Annus Pro Pontifice, Inc.) is the United States affiliate of Fondazione Centesimus Annus Pro Pontifice, the Vatican-based pontifical foundation established by Pope St. John Paul II in 1993 to promote Catholic Social Teaching in fidelity to the Magisterium of the Catholic Church. CAPP-USA is a 501(c)(3) nonprofit organization.