De Florio, C., Gomarasca, P., From Machine Learning to Humane Learning. Responsible AI adoption in Catholic Higher Education, «EDUCA», 2025; (11): 129-160. [doi:10.82251/tq7w-x561] [https://hdl.handle.net/10807/327616]
From Machine Learning to Humane Learning. Responsible AI adoption in Catholic Higher Education
De Florio, Ciro; Gomarasca, Paolo
2025
Abstract
This paper outlines a practical path for adopting AI responsibly in Catholic higher education. It starts from a simple claim: AI can support learning and research only if human judgment remains in charge. The paper reviews current policies (e.g., disclosure of AI use, human oversight, academic integrity) and aligns them with UNESCO guidance and Pope Francis’s Global Compact on Education, which calls for an educational pact centred on human dignity, participation, social friendship, and care for our common home. Large language models are treated as cognitive artefacts that reshape, but must not replace, reasoning and authorship. The authors propose three complementary strategies: (0) Confinement – short-term limits to protect assessment and integrity; (1) Remodulation – redesign of goals, assessments, and literacies for an AI-rich environment; (2) Cooperation – responsible partnerships in teaching and research that preserve the right to publish and public scrutiny. Concrete actions include AI literacy, transparent disclosure, governance for sensitive data and dual-use risks, environmental accountability, integrity education, and support for at-risk scholars. Ultimately, Catholic higher education is the necessary workshop for this elaboration, the place where innovation is disciplined by human judgment, oriented to truth, and accountable to the common good envisioned by the Global Compact on Education.