Some suggest that those with limited rationality (for example, people who are mildly mentally disabled, or infants[1]) also have some basic moral capabilities.
Otherwise, without the free agent as an a priori fundamental source, socially essential concepts created by the human mind, such as justice, would be undermined (since responsibility implies freedom of choice), and, in short, civilization and human values would crumble.
Some theorists discard any attempt to evaluate mental states and instead adopt the doctrine of strict liability, whereby one is liable under the law without regard to capacity, and the only question is determining the degree of punishment, if any.
An example of this would be a young child who is old enough to understand right from wrong, yet still hits their siblings when they get angry.
Agents must become adept at this control in order to declare something moral or immoral and retain credibility.
Utilitarian philosophers Jeremy Bentham and Peter Singer have argued that the key to inclusion in the moral community is not rationality — for if it were, we might have to exclude some disabled people and infants, and might also have to distinguish between the degrees of rationality of healthy adults — but the real object of moral action is the avoidance of suffering.
[9] Research suggests that the moral judgment of an action may not depend on whether the agent is a human or a robot.
There is also evidence that some non-human species, especially other primates, can demonstrate empathy and emotions such as guilt or grief, though some thinkers dispute this.
[16][17] However, humans display distinctive capacities related to intelligence and rationality such as the ability to engage in abstract and symbolic thought and to employ complex language.
[19] They may also point to behaviors that, in humans, would be described as moral, such as the punishment of individuals who break social norms.