Agency Law and Artificial Intelligence

By Tan Cheng-Han and Daniel Seng Kiat-Boon

A feature of modern living is the ubiquity of automated systems or artificial agents. Such agents are implementations of machine learning using neural networks and deep learning. They vary in their sophistication and complexity, but what they have in common is that they supplant and automate processes that would otherwise require human intervention. An artificial agent's choice of action at any given moment depends on (a) its built-in knowledge and (b) the sequence of content its sensors have perceived (the agent's percept sequence). The choice is effected by mapping each percept sequence to a choice of action, by way of different implementations, or combinations of implementations, known as 'models'. While humans, who have desires and preferences of their own, choose actions that produce results desirable from their point of view (or, additionally, that are morally, ethically and legally correct), machines have no desires or preferences of their own. Instead, an artificial agent is programmed to maximize its performance based on these models, and one that does so successfully is said to exhibit 'rationality' or 'intelligence'.
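The mapping from percept sequences to actions described above can be sketched in code. The following is a minimal, purely illustrative table-driven agent in the classical sense: its 'built-in knowledge' is a lookup table, and each action is chosen by looking up the entire percept sequence observed so far. The table entries and percept names here are hypothetical examples, not drawn from the chapter.

```python
# Minimal sketch of a table-driven artificial agent. The agent's
# choice of action depends on (a) its built-in knowledge (the table)
# and (b) its percept sequence (everything it has perceived so far).

def make_table_driven_agent(table):
    """Return an agent function that maps percept sequences to actions."""
    percepts = []  # the agent's percept sequence to date

    def agent(percept):
        percepts.append(percept)
        # Map the entire percept sequence observed so far to an action;
        # fall back to a default when the sequence is not in the table.
        return table.get(tuple(percepts), "no-op")

    return agent

# Hypothetical example: an agent that evaluates a quote and accepts
# an offer only after a subsequent confirmation.
table = {
    ("quote",): "evaluate",
    ("quote", "confirm"): "accept-offer",
}
agent = make_table_driven_agent(table)
print(agent("quote"))    # evaluate
print(agent("confirm"))  # accept-offer
```

This table-driven form is only conceptual: real systems replace the explicit table with a learned model, since enumerating every possible percept sequence is infeasible.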

Because of this, some commentators have advanced the theory that artificial agents should be treated as legal agents, i.e., as equivalent to a person who has been conferred authority to alter the legal position of the agent's principal. We consider this and related matters in our recent chapter 'Agency Law and AI' published in the Cambridge Handbook of Private Law and Artificial Intelligence.

While it is true that artificial agents may be part of a process that leads their users to assume legal liability in contract or tort, we do not believe that artificial agents are true intermediaries in the way that legal agents are. For one, a legal agent must be a person. Until legal personality is conferred on artificial agents in a manner similar to that conferred on corporations, artificial agents do not have the same status as legal agents. Beyond this, agents at law must have sufficient capacity to understand the nature of the transactions they enter into. In the case of human agents, this capacity arises from the individual's cognition, where it is not impaired; with corporations, such awareness and knowledge are attributed through the organs of the company, notably the board of directors. No equivalent cognition is present in artificial agents. Accordingly, in the absence of a true agent whose actions on behalf of the principal bind the principal and a third party, artificial agents merely facilitate direct contracting between two counterparties. Thus, where trades are effected through an automated process, the law regards the parties as using the platform to contract with one another directly, rather than the artificial agent or platform being a separate intermediary that brings the parties together.

This is not a distinction without a difference, as the existence of a separate intermediary raises additional issues of liability and, therefore, complexity. Indeed, one frequently cited justification for treating artificial agents as legal agents is that doing so opens another avenue of redress when artificial agents malfunction: the 'principal' or operator of such automated or software agents may be held vicariously liable in tort for the agent's actions. We question whether this is truly necessary, given that the operators or owners of an artificial agent may be liable in negligence and/or contract should the agent not perform as expected. Fair outcomes can be reached without recourse to the law of agency.

In our view, as things stand, artificial agents are mere instrumentalities of persons or legal entities, and liability for wrongs caused by artificial agents has to be dealt with on this basis. We refer to this as the 'instrumentality principle', and our contribution to the current discourse is to show that many of the issues raised in relation to artificial agents can be resolved by simply treating such 'agents' as instruments.

Keywords: Agents, Personhood, Contract, Torts

AUTHOR INFORMATION

Tan Cheng-Han, SC is the Chief Strategy Officer and Professor of Law at NUS Law.
Email: lawtanch@nus.edu.sg
LinkedIn: http://www.linkedin.com/in/cheng-han-tan-21aaa5267

Daniel Seng Kiat-Boon is Co-Director of the Centre for Technology, Robotics, Artificial Intelligence and the Law and Associate Professor at NUS Law.
Email: danielseng@nus.edu.sg
LinkedIn: https://www.linkedin.com/in/daniel-seng-4424b66/