“Moral Agency without Consciousness” - Jen Semler (Cornell Tech)
Canadian Journal of Philosophy, First View (2025)
By Jen Semler
Could a cold, unfeeling robot, devoid of any first-personal experience or emotions, be a genuine moral actor? Could it truly act for moral reasons and understand why certain actions are wrong? Intuitively, the answer seems to be “no,” and many philosophers have argued as much. Insofar as there is some special ingredient that renders an entity capable of acting morally wrongly and being morally responsible, phenomenal consciousness seems like a good candidate. Surely, the thought goes, a nonconscious entity couldn’t really be a moral agent.
Against this view, I think that our cold, unfeeling robot, under the right conditions, can be a genuine moral agent. In my recent paper, “Moral Agency without Consciousness”, I cast doubt on what we might call the consciousness requirement for moral agency. My claim is a conceptual one: phenomenal consciousness is not necessary for moral agency. While there may be various technical and conceptual hurdles to clear if we want to build an artificial moral agent, our inability to equip an AI system with phenomenal consciousness would not doom the project.
Proving that consciousness is not necessary for moral agency is a difficult task. The most straightforward way to do so would be to offer an example of a nonconscious entity that is a moral agent. But it’s not clear that any such entities exist. The only non-controversial instance of moral agency is that of paradigmatic adult humans. Corporations are considered by some to be nonconscious moral agents, since they lack phenomenal consciousness at the group level, but it might still be claimed that the consciousness of their individual members is what gives rise to group moral agency. So I can’t point to a clear-cut case of nonconscious moral agency to prove my point.
But I can, and do, offer a sketch of how an entity might meet the most important requirements for moral agency without phenomenal consciousness. My approach, then, is to take four candidate necessary conditions for moral agency—action, moral concept possession, responsiveness to moral reasons, and moral understanding—and argue that consciousness is not necessary for possessing those capacities. For each capacity, I describe how the capacity can be instantiated without consciousness, and I argue that this instantiation fulfills a requirement for moral agency.
Briefly, my picture of a nonconscious moral agent looks like this. A nonconscious entity can be an agent in the sense of having the capacity for intentional action because it can have (nonconscious) mental states that constitute reasons for action. The nonconscious agent can possess concepts, including moral concepts, to the extent that the agent can accurately and appropriately use those concepts (even if the agent lacks the phenomenal aspect of those concepts). The nonconscious agent can be responsive to moral reasons because it can learn to identify morally relevant features of a situation, can recognize moral reasons qua reasons, and can change its actions in light of those moral reasons. The nonconscious agent can have moral understanding in virtue of having abilities to reason about and apply moral concepts in novel situations—and grasping the relationships between reasons and explanations doesn’t require phenomenal capacities.
There is a looming objection. It might be claimed that my picture of nonconscious moral agency tacitly redefines the concept of moral agency. Roughly, my response is that the objector must offer reasons why the nonconscious entity I describe fails to meet the standard requirements for moral agency. At the very least, the burden is on the defender of the consciousness requirement to explain either why phenomenal consciousness is required for “properly” fulfilling the conditions for moral agency I have highlighted, or which additional consciousness-requiring capacity is necessary for moral agency.
My argument opens the door for the possibility of nonconscious artificial moral agents (i.e., AI systems or robots that are moral agents). But it’s important to note that my argument doesn’t imply that ChatGPT or any other existing AI system is a moral agent. The bar for moral agency, on my view, is still high. And without all the benefits consciousness provides for moral agency, it might be even harder to create a nonconscious artificial moral agent. But the key conceptual point is that consciousness is not necessary for moral agency, and it’s not enough to claim that AI systems can’t be moral agents simply because they lack consciousness.




I would love to read a reason why the first question is not a stupid question.
I don't have a problem with the lack or illusion of consciousness, but the reduction of agency to an almost economic model is a possible mis-framing. I will look at the paper if I can.
Quick thoughts: ① Is a self-driving car going anywhere at all when it moves?
② And I will have a think, non-consciously or not. I suspect that morality, if it is an outcome of an urge that also produces the self worlding among others worlding their selves, is not covered here.
③ For a counterexample (thinking ahead of reading the paper), consider the conscious agent who has no morality, i.e. the narcissist whose lack of empathy precludes them from selfing the world among others doing the same, because they cannot distinguish between the self and the world. They could tech-tree grind their way through deontological rules like a good gamer, until they get to the top of the hierarchy they perceive (and often create to suit themselves) and become the world they truly are/empire. How does this differ from an unconscious moral agent?
LLMs map what anthropologists call 'social learning', or at least the record of social learning (not intelligence). And the multi-modal one-shot success stories support this view, I suspect.
PS I regard most analytical philosophy as tech-tree grinding gamers put effort into, and continental philosophy as fan fiction.
I prefer the other when reading the other one.