Moral Decision-making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust and Deception
Abstract
As humans are being progressively pushed further downstream in the decision-making process of
autonomous systems, the need arises to ensure that moral standards, however defined, are adhered to by
these robotic artifacts. While meaningful inroads have been made in this area regarding the use of
ethical lethal military robots, including work by our laboratory, these needs transcend the warfighting
domain and are pervasive, extending to eldercare, robot nannies, and other forms of service and
entertainment robotic platforms.

This paper presents an overview of the spectrum and specter of ethical issues raised by the advent of
these systems, and various technical results obtained to date by our research group, geared towards
managing ethical behavior in autonomous robots in relation to humanity. This includes: (1) the use of
an ethical governor capable of restricting robotic behavior to predefined social norms; (2) an ethical
adaptor that draws upon the moral emotions to allow a system to constructively and proactively
modify its behavior based on the consequences of its actions; (3) the development of models of robotic
trust in humans and its dual, deception, drawing on psychological models of interdependence theory;
and (4) an approach toward the maintenance of dignity in human-robot relationships. Minimal
illustrative sketches of the first three mechanisms follow the abstract.
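
The ethical governor in (1) acts as a run-time filter that vetoes candidate actions violating predefined constraints. The following is a minimal sketch of that filtering idea; the class names, the predicate representation of norms, and the "first permissible candidate" selection rule are illustrative assumptions, not the governor's actual architecture.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Action:
    name: str
    target: str

# A norm is modeled as a predicate returning True when the action is permissible.
Constraint = Callable[[Action], bool]

@dataclass
class EthicalGovernor:
    constraints: List[Constraint] = field(default_factory=list)

    def permit(self, action: Action) -> bool:
        # An action is permitted only if every predefined norm allows it.
        return all(c(action) for c in self.constraints)

    def govern(self, candidates: List[Action]) -> Optional[Action]:
        # Return the first candidate satisfying all constraints;
        # withhold action entirely if none does.
        for action in candidates:
            if self.permit(action):
                return action
        return None

# Example norm: never act against a protected target class.
def no_protected_targets(action: Action) -> bool:
    return action.target != "protected"

governor = EthicalGovernor(constraints=[no_protected_targets])
chosen = governor.govern([Action("engage", "protected"), Action("hold", "clear")])
print(chosen)  # Action(name='hold', target='clear')
```

Note the deliberately conservative default: when no candidate passes, the governor returns no action at all rather than the least bad option.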
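The ethical adaptor in (2) modifies behavior after the fact, based on the consequences of actions. One sketch of this idea, assuming a single scalar guilt value, a hand-picked threshold, and a simple monotonic update rule (all assumptions for illustration, not the adaptor's actual model):

```python
from dataclasses import dataclass

@dataclass
class EthicalAdaptor:
    guilt: float = 0.0       # accumulated moral-emotion value
    threshold: float = 1.0   # level at which behavior is curtailed

    def observe(self, expected_harm: float, observed_harm: float) -> None:
        # Guilt grows only when consequences are worse than anticipated,
        # and never decreases (a monotonic moral-emotion model).
        self.guilt += max(0.0, observed_harm - expected_harm)

    def allowed_response(self) -> str:
        # Once accumulated guilt crosses the threshold, the adaptor
        # proactively constrains the system to less harmful options.
        return "restricted" if self.guilt >= self.threshold else "full"

adaptor = EthicalAdaptor()
adaptor.observe(expected_harm=0.2, observed_harm=0.9)  # worse than predicted
adaptor.observe(expected_harm=0.1, observed_harm=0.6)
print(adaptor.allowed_response())  # restricted (guilt ~1.2 >= 1.0)
```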
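The trust and deception models in (3) draw on interdependence theory, in which a social situation is summarized as an outcome matrix over both agents' joint actions. Below is a minimal sketch of such a matrix with two illustrative measures, dependence and conflict, loosely in the spirit of that framework; the specific formulas and the hide-and-seek payoffs are assumptions for illustration only.

```python
# outcome[(robot_action, human_action)] -> (robot_payoff, human_payoff)

def dependence(outcome, my_actions, other_actions):
    # Average swing in my payoff caused by the other agent's choice while
    # my own action is held fixed: high values mean my outcomes hinge on
    # the partner's behavior.
    swings = []
    for m in my_actions:
        payoffs = [outcome[(m, o)][0] for o in other_actions]
        swings.append(max(payoffs) - min(payoffs))
    return sum(swings) / len(swings)

def conflict(outcome):
    # Conflict: no joint action gives both agents their best payoff at once.
    best_mine = max(v[0] for v in outcome.values())
    best_theirs = max(v[1] for v in outcome.values())
    return (best_mine, best_theirs) not in outcome.values()

# Hide-and-seek style matrix: the hider wins exactly when the seeker
# guesses wrong, so outcomes are interdependent and in full conflict,
# the kind of situation in which deception becomes relevant.
mine = ["hide_left", "hide_right"]
theirs = ["seek_left", "seek_right"]
outcome = {
    ("hide_left", "seek_left"): (0.0, 1.0),
    ("hide_left", "seek_right"): (1.0, 0.0),
    ("hide_right", "seek_left"): (1.0, 0.0),
    ("hide_right", "seek_right"): (0.0, 1.0),
}
print(dependence(outcome, mine, theirs), conflict(outcome))  # 1.0 True
```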
Date
2011
Resource Type
Text
Resource Subtype
Paper