Generative AI and Human Dignity
“Face and Voice Are Sacred: Protecting What Makes Us Human”
by CAPP-USA

Mia Zelu is one of many virtual influencers created with generative AI. The Instagram account has 245K followers as of this writing.
Generative AI technologies now enable the creation of synthetic humans indistinguishable from real persons, raising unprecedented questions about identity, consent, and human dignity. We face urgent ethical and social risks, highlighted by recent AI apps that generate synthetic bodies, faces, and voices, and by pressures on actors, especially young ones, to permanently surrender their likeness.
How should public institutions and those committed to Catholic Social Teaching (CST) respond to these emerging violations of human dignity and justice?
From the CST standpoint, the central question is not “How do we regulate AI?” but rather:
How do we ensure that digital systems remain ordered to the human person, to authentic work, to truth, and to the common good?
Grounded in CST and Pope Leo XIV’s 2026 Message for the World Day of Social Communications, which declares that “faces and voices are sacred”—unique expressions of the imago Dei and foundational to human encounter—this article proposes practical, policy-level responses that public institutions and faith-driven leaders can credibly advance.
Why Catholic Social Teaching Offers a Distinctive Framework
Unlike approaches that frame AI governance primarily through privacy law, intellectual property, or market competition, CST begins with the non-negotiable dignity of the human person.
Protecting human likeness in the age of AI is not a technical problem requiring only technical solutions. It is a civilizational question about what we believe human beings are—and what kinds of social structures honor that truth.
This shift in starting point leads to different questions: not merely “Who owns this data?” but rather “Does this technology serve authentic human flourishing?”
The following eight proposals translate this person-centered vision into concrete institutional responses—recognizing that technology is never neutral, but is always either ordered toward or against the common good.
1. Treat Face and Voice As Extensions of the Person—Not Digital Property
Following the teaching of Pope Leo XIV, public institutions should treat biometric likeness (face, voice, body, gestures) as person-linked goods, not as ordinary, commodifiable data.
Policy responses
- Establish a distinct legal category for human likeness and identity data
- Prohibit default or blanket licensing of one’s likeness
- Require explicit, revocable, informed consent for any synthetic reuse
This upholds CST’s core anthropological principle that the human person is never a mere means to an end. (Rerum Novarum; Centesimus Annus)
2. Enshrine a “Right Not to Be Digitally Replicated”
CST consistently emphasizes that dignity must be protected not only socially, but structurally.
Pope Leo XIV warns that AI simulation risks appropriating faces and voices, fabricating parallel realities, and eroding trust in human communication. (Message for the World Day of Social Communications)
Concrete proposal
- Establish a civil right:
- to refuse digital replication,
- to withdraw consent at any time, and
- to demand deletion of trained likeness models (model erasure)
This affirms CST’s understanding of personal freedom as a moral good rooted in the person—not market dynamics.
3. Protect Workers, Especially Vulnerable Young Talent, From Coercive AI Contracts
Actors, particularly emerging and young performers, face pressures to surrender likeness rights perpetually or post-mortem. This represents a classic CST labor exploitation concern, not merely an intellectual property issue.
Public institutions should:
- Prohibit perpetual or post-mortem digital likeness contracts,
- Require time-limited, purpose-specific agreements with fair, per-reuse compensation.
Note: California’s AB 2602 (effective 2025), which voids non-consensual or vague digital replica provisions in performer contracts, and AB 1836, establishing post-mortem protections, provide workable models that could be extended through international agreements.
This flows directly from the CST principle of the dignity of work and the priority of labor (the person) over capital. (Rerum Novarum)
4. Regulate Generative AI Under a “Human Dignity Impact Assessment”
CST judges technology by its impact on human flourishing and integral human development. (Sollicitudo Rei Socialis; Caritas in Veritate)
A practical tool:
- Require AI systems that manipulate identity (voice, face, body) to undergo a human dignity impact assessment, similar to environmental or financial risk reviews.
Assessment criteria could include:
- Does this technology replace human presence?
- Does it undermine trust, consent, or personal agency?
- Does it create structural pressure on vulnerable groups (e.g., young performers or marginalized communities)?
Such an approach mirrors CST’s call for Integral Human Development (Pope St. John Paul II, 43) and could include independent oversight bodies, potentially including ethicists informed by CST and connected to Vatican initiatives. (e.g., Rome Call for AI Ethics)
5. Require Unmistakable Disclosure of Synthetic Persons
Truth is non-negotiable in CST. Pope Leo XIV explicitly warns against deception made possible through deepfakes and digital simulation.
Institutions should mandate:
- Clear visual and audible labeling of AI-generated faces and voices,
- Universal watermarking and provenance tracking for synthetic media.
This safeguards the common good and sustains social trust—preserving what might be called the moral ecology of communication, the shared foundation that makes truthful human encounter possible.
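To make the watermarking and provenance mandate concrete, here is a minimal sketch of how a provenance manifest for synthetic media could be attached and verified. This is illustrative only, not the C2PA standard or any deployed system: the `ISSUER_KEY`, function names, and the use of a shared-key HMAC are assumptions for the sketch; production provenance systems use asymmetric cryptographic signatures issued by verifiable authorities.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key for this sketch; real provenance systems
# (e.g., C2PA-style tooling) use asymmetric signatures, not a shared key.
ISSUER_KEY = b"demo-issuer-key"

def make_manifest(media_bytes: bytes, generator: str) -> dict:
    """Attach a provenance manifest declaring the media as AI-generated."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    claim = {"content_sha256": digest, "generator": generator, "synthetic": True}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the media matches the manifest and the manifest is untampered."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(manifest.get("signature", ""), expected)
            and claim["content_sha256"] == hashlib.sha256(media_bytes).hexdigest())

media = b"...synthetic portrait bytes..."
manifest = make_manifest(media, generator="example-image-model")
print(verify_manifest(media, manifest))         # True: provenance intact
print(verify_manifest(media + b"x", manifest))  # False: content was altered
```

The design point is that the label travels with the content and any alteration, whether of the media or of the declared provenance, is detectable, which is what gives a disclosure mandate teeth.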
6. Treat Non-consensual Deepfake Abuse As A Violation of Personal Dignity
CST evaluates harm holistically—beyond economic or reputational damage. Pope Leo XIV highlights deepfakes as violations of privacy, intimacy, and human dignity, including fraud, cyberbullying, and sexualized content. (Message for the World Day of Social Communications)
Public policy should:
- Recognize non-consensual deepfakes as direct assaults on personal integrity, and
- Enable stronger civil remedies, expedited platform takedowns, and proportionate penalties.
7. Promote International Coordination Grounded in Human Dignity
CST’s principle of solidarity requires preventing a “race to the bottom” in biometric and generative AI standards.
Public authorities should support:
- Cross-border standards addressing likeness misuse,
- Shared enforcement and common legal definitions, and
- Cooperation with Vatican initiatives (e.g., Dicastery for Communication collaborations).
8. Mobilize Catholic Institutions and Civil Society For Public Formation and Positive Guidance
While the previous seven proposals address public policy and legal frameworks, lasting cultural change requires formation and witness. The Church’s distinctive role is cultural and educational, fostering integral human development. (Caritas in Veritate)
Realistic actions include:
- Coordinated teaching on digital dignity—explaining why face and voice are sacred extensions of the person, not neutral data.
- Formation programs for Catholic educators, employers, media leaders, and policymakers.
- Integration into diocesan school curricula and university ethics courses.
- Partnerships with Pontifical Academies and Dicasteries.
- Encouragement of voluntary industry codes among Catholic business networks (aligned with FCAPP’s mission).
- Balanced and positive promotion of AI that enhances (rather than replaces) human creativity and authentic work.
Moving Forward: From Principle to Practice
The question before us is not whether AI will continue to develop. It is whether it will develop in ways that recognize faces and voices as sacred—or treat them as raw material for extraction.
Our response will shape not only markets and regulations, but the moral imagination of the next generation. Will they inherit a world that honors the sacred or one that treats it as ‘tradeable’?
To summarize: for FCAPP members and those committed to CST, the path forward involves three levels of engagement:
Immediate actions:
- Support legislation modeled on California’s AB 2602/1836 and other global initiatives.
- Advocate for human-dignity impact assessments in emerging AI regulation, including the EU AI Act and U.S. state initiatives.
- Demand robust transparency and disclosure standards for synthetic media.

Institutional formation:
- Integrate digital dignity education into Catholic universities, diocesan programs, and professional networks.
- Partner with existing Vatican initiatives. (Rome Call for AI Ethics; Dicastery for Communication)

Cultural witness:
- Model alternative practices in Catholic institutions, including media organizations, healthcare systems, and educational bodies, that demonstrate how technology can enhance rather than displace human presence and creativity.
CAPP-USA (Centesimus Annus Pro Pontifice, Inc.) is the United States affiliate of the Vatican-based pontifical foundation Fondazione Centesimus Annus Pro Pontifice, established by Pope St. John Paul II in 1993 to promote Catholic Social Teaching in fidelity to the Magisterium of the Catholic Church. CAPP-USA is a 501(c)(3) nonprofit organization.





