Artificial intelligence and the question of moral personhood: A philosophical enquiry

Authors

Khagayi, S., Odoyo, C., Jepchirchir, A., Muhanga, C., & Ocholi, H.

DOI:

https://doi.org/10.51867/AQSSR.3.2.21

Keywords:

Artificial Intelligence, AI Ethics, Derivative Moral Agency, Moral Personhood, Moral Agency, Responsibility Gaps

Abstract

The rapid integration of artificial intelligence (AI) into high-stakes domains such as healthcare, finance, and public administration has intensified debates on moral agency, moral personhood, and accountability. As AI systems increasingly shape human outcomes, concerns arise regarding whether they qualify as moral agents or should bear responsibility for their actions. This study adopted a systematic literature review design grounded in qualitative philosophical inquiry to critically examine the moral status of AI and its implications for ethical governance. The study was guided by John Searle's critique of strong AI and Floridi and Sanders' theory of derivative moral agency, which provide a conceptual basis for evaluating AI's capacity for moral understanding, intentionality, and accountability. The target population comprised scholarly literature in AI ethics, moral philosophy, and philosophy of technology, with the accessible population including peer-reviewed journal articles, books, and conference proceedings published between 2019 and 2026. A criterion-based sampling technique was employed to select relevant studies based on predefined inclusion criteria such as topical relevance, scholarly rigor, and conceptual contribution. Data collection involved systematic identification, screening, and extraction of key arguments from selected sources. A critical normative and thematic analysis was conducted to evaluate competing perspectives on AI moral personhood. The findings reveal that contemporary AI systems lack genuine moral understanding, autonomous intentionality, and accountability, thus failing to meet the criteria for moral personhood. The study further establishes that attributing moral status to AI risks creating responsibility gaps and weakening human accountability. It concludes that AI should be understood through derivative moral agency, where responsibility remains with human actors and institutions. The study recommends strengthening regulatory frameworks, institutional oversight, and ethical training to ensure responsible AI governance.

References

Coeckelbergh, M. (2010). Robot rights? Toward a social-relational justification of moral consideration. Ethics and Information Technology, 12(3), 209-221. https://doi.org/10.1007/s10676-010-9235-5

Coeckelbergh, M. (2021). Narrative responsibility and artificial intelligence: How AI challenges human responsibility and sense-making. AI & Society. https://doi.org/10.1007/s00146-021-01375-x

Coeckelbergh, M. (2022). The political philosophy of AI: An introduction. Polity Press.

Danaher, J., & Nyholm, S. (2021). Automation, work and the achievement gap. AI and Ethics, 1, 227-237. https://doi.org/10.1007/s43681-020-00028-x

Danaher, J. (2022). Tragic choices and the virtue of techno-responsibility gaps. Philosophy & Technology, 35(2), 1-26. https://doi.org/10.1007/s13347-022-00519-1

Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349-379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d

Floridi, L., Cowls, J., King, T. C., & Taddeo, M. (2018). Ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707. https://doi.org/10.1007/s11023-018-9482-5

Gunkel, D. J. (2018). Robot rights. MIT Press. https://doi.org/10.7551/mitpress/11444.001.0001

Gunkel, D. J. (2023). Person, thing, robot: A moral and legal ontology for the 21st century and beyond. MIT Press. https://doi.org/10.7551/mitpress/14983.001.0001

Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501-507. https://doi.org/10.1038/s42256-019-0114-4

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1-21. https://doi.org/10.1177/2053951716679679

Nyholm, S. (2018). Attributing agency to automated systems: Reflections on human-robot collaborations and responsibility-loci. Science and Engineering Ethics, 24(4), 1201-1219. https://doi.org/10.1007/s11948-017-9943-x

Nyholm, S. (2023). This is technology ethics: An introduction. Wiley-Blackwell.

Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence. Philosophy & Technology, 34(4), 1057-1084. https://doi.org/10.1007/s13347-021-00450-x

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-457. https://doi.org/10.1017/S0140525X00005756

Tigard, D. W. (2021). Artificial moral responsibility: How we can and cannot hold machines responsible. Cambridge Quarterly of Healthcare Ethics, 30(3), 435-447. https://doi.org/10.1017/S0963180120000985

Published

2026-04-18

How to Cite

Khagayi, S., Odoyo, C., Jepchirchir, A., Muhanga, C., & Ocholi, H. (2026). Artificial intelligence and the question of moral personhood: A philosophical enquiry. African Quarterly Social Science Review, 3(2), 228-232. https://doi.org/10.51867/AQSSR.3.2.21
