Can AI be moral? The quest for moral machines and what it means for human morality

Yuxin Liu, CTMF PhD Fellow

Can machines be taught to understand right from wrong? Should we strive to build AI that can navigate complex ethical dilemmas, or is morality an inherently human trait that technology can never truly grasp? These are some of the key questions at the heart of machine ethics – a field dedicated to equipping AI with moral reasoning in the pursuit of artificial moral agents (known colloquially as moral AI).

One emerging version of a limited moral AI is Artificial Moral Advisors (AMAs): AI systems designed to provide moral guidance based on embedded ethical frameworks deemed acceptable by programmers, ethicists, or the general public. Unlike fully autonomous moral agents, AMAs function as advisory tools, assisting humans in navigating complex moral dilemmas. The idea behind AMAs is that they could help us make better moral choices and even cultivate better moral motivations and characters – a process philosophers call AI moral enhancement.

Large language models like ChatGPT can be regarded as a form of AMA, as they can offer moral advice when prompted. For example, AskDelphi, an online chatbot from the Allen Institute’s Delphi experiment, was designed as a research prototype and a step towards achieving machine ethics. It provided simple "yes" or "no" answers to users' moral dilemmas. The chatbot quickly went viral upon its release, attracting both curiosity and criticism. Scholars and the public raised concerns about the Western-centric viewpoints of its training dataset, its biased, offensive, or absurd judgments, and its inability to adapt to shifting moral norms.

Many have pointed out complications with AMAs providing prescriptive moral advice. For example, humans may become overly reliant on AMAs for moral judgement and decision making, passively deferring to AI recommendations without critically engaging in moral reflection themselves. Over time, this reliance could erode our own moral reasoning capabilities. To address this, some scholars propose an alternative, dialogic approach: an AMA that, rather than telling people what to do based on pre-designed principles, would engage its users in back-and-forth deliberative exchanges to facilitate moral reasoning.

An AI that can be seen as a small step in this direction is the Peter Singer AI (PSai), a chatbot trained on the writings of renowned philosopher Peter Singer to discuss topics such as global poverty, veganism, and animal rights. Whilst PSai can serve as a valuable entry point for users interested in exploring Singer’s moral philosophy, closer engagement reveals that the bot lacks sophistication. At best, its attempts at prompting reflection are limited to a somewhat generic follow-up question at the end of every answer. PSai remains a hollow caricature: it is not yet capable of the moral sensitivity needed to pick up on nuances in a user’s words that could spark deeper deliberation, nor does it help users discover contradictions or other gaps in their own reasoning.

Where does this leave us? One thing most would agree on is that humans often lack full access to the comprehensive information, knowledge, or expertise required to reach the best possible moral choice. Without ascribing sentience or moral agency to AMAs, we could still treat them as valuable informational or educational tools that provide us with relevant and necessary insights to make informed decisions. For example, an AMA could calculate the carbon emissions of different transportation modes and routes when planning a trip, helping you reach an environmentally responsible option, though you would still have to make the final decision yourself. Ultimately, no matter how advanced an AMA might be, the fundamental drive to act morally must still come from within us; we retain the final moral judgement and the responsibility for our own actions.
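
To make that informational role concrete, here is a minimal sketch (in Python) of the kind of comparison such a tool might perform. The emission factors, trip distance, and helper function are illustrative assumptions invented for this example, not real data or any existing system.

```python
# Purely illustrative sketch of an AMA used as an informational tool: it estimates
# and ranks the carbon footprint of hypothetical trip options, then leaves the
# moral choice to the user. All numbers below are made-up placeholders.

TRIP_DISTANCE_KM = 600  # hypothetical one-way trip distance

# Assumed emission factors in kg CO2e per passenger-km (placeholder values)
EMISSION_FACTORS = {
    "short-haul flight": 0.20,
    "petrol car (solo driver)": 0.17,
    "intercity train": 0.035,
    "coach": 0.03,
}


def compare_trip_emissions(distance_km, factors):
    """Return (mode, estimated kg CO2e) pairs ranked from lowest to highest."""
    estimates = [(mode, distance_km * factor) for mode, factor in factors.items()]
    return sorted(estimates, key=lambda pair: pair[1])


if __name__ == "__main__":
    for mode, kg_co2e in compare_trip_emissions(TRIP_DISTANCE_KM, EMISSION_FACTORS):
        print(f"{mode}: ~{kg_co2e:.0f} kg CO2e")
    # The 'advisor' stops at presenting the estimates; the decision is the user's.
```

Even in this toy version, the advisor only surfaces information; choosing among the options, and caring about the outcome, stays with the human user.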

Dive deeper into this topic by reading Yuxin’s extended blog here!

*For references, please see full blog.


About the contributor:

Headshot of Yuxin Liu, photographed outdoors in front of the Salisbury Crags in Edinburgh.

Yuxin Liu is a PhD Fellow at the Centre for Technomoral Futures. Her current research involves the interdisciplinary study of moral psychology, including thinking and reasoning, cognitive biases and heuristics, moral intuitions, moral decision-making, and human-AI interaction. Her project, ‘Human Moral Judgements Towards Artificial Intelligence Systems’, is co-supervised in the School of Philosophy, Psychology and Language Sciences by faculty in Philosophy and Psychology.
