Abstract
Artificial intelligence has been increasing the autonomy of man-made artefacts such as software agents, self-driving vehicles and military drones. This increase in autonomy, together with the ubiquity and impact of such artefacts in our daily lives, has raised many concerns in society. Initiatives such as transparent and ethical AI aim to allay fears of a "free for all" future in which amoral technology (or technology amorally designed) replaces humans, with terrible consequences. We discuss the notion of accountable autonomy and explore this concept within the context of practical reasoning agents. We survey literature from distinct fields such as management, healthcare and policy-making, and we differentiate and relate concepts connected to accountability. We present a list of justified requirements for accountable software agents and discuss research questions stemming from these requirements. We also propose a preliminary formalisation of one core aspect of accountability: responsibility.